Dataset fields (string unless noted): id, title, summary, source, authors, categories, comment, journal_ref, primary_category, published, updated, content, references (dict).
2003.11535
Training Binary Neural Networks with Real-to-Binary Convolutions
This paper shows how to train binary networks to within a few percent points ($\sim 3-5 \%$) of the full precision counterpart. We first show how to build a strong baseline, which already achieves state-of-the-art accuracy, by combining recently proposed advances and carefully adjusting the optimization procedure. Secondly, we show that by attempting to minimize the discrepancy between the output of the binary and the corresponding real-valued convolution, additional significant accuracy gains can be obtained. We materialize this idea in two complementary ways: (1) with a loss function, during training, by matching the spatial attention maps computed at the output of the binary and real-valued convolutions, and (2) in a data-driven manner, by using the real-valued activations, available during inference prior to the binarization process, for re-scaling the activations right after the binary convolution. Finally, we show that, when putting all of our improvements together, the proposed model beats the current state of the art by more than 5% top-1 accuracy on ImageNet and reduces the gap to its real-valued counterpart to less than 3% and 5% top-1 accuracy on CIFAR-100 and ImageNet respectively when using a ResNet-18 architecture. Code available at https://github.com/brais-martinez/real2binary.
http://arxiv.org/pdf/2003.11535
Brais Martinez, Jing Yang, Adrian Bulat, Georgios Tzimiropoulos
cs.CV
ICLR 2020
null
cs.CV
20200325
20200325
Published as a conference paper at ICLR 2020

# TRAINING BINARY NEURAL NETWORKS WITH REAL-TO-BINARY CONVOLUTIONS

Brais Martinez1, Jing Yang1,2,*, Adrian Bulat1,* & Georgios Tzimiropoulos1,2
1 Samsung AI Research Center, Cambridge, UK
2 Computer Vision Laboratory, The University of Nottingham, UK
* Denotes equal contribution
{brais.a,adrian.bulat,georgios.t}@samsung.com

# ABSTRACT

This paper shows how to train binary networks to within a few percent points (∼ 3 − 5%) of the full precision counterpart. We first show how to build a strong baseline, which already achieves state-of-the-art accuracy, by combining recently proposed advances and carefully adjusting the optimization procedure. Secondly, we show that by attempting to minimize the discrepancy between the output of the binary and the corresponding real-valued convolution, additional significant accuracy gains can be obtained. We materialize this idea in two complementary ways: (1) with a loss function, during training, by matching the spatial attention maps computed at the output of the binary and real-valued convolutions, and (2) in a data-driven manner, by using the real-valued activations, available during inference prior to the binarization process, for re-scaling the activations right after the binary convolution. Finally, we show that, when putting all of our improvements together, the proposed model beats the current state of the art by more than 5% top-1 accuracy on ImageNet and reduces the gap to its real-valued counterpart to less than 3% and 5% top-1 accuracy on CIFAR-100 and ImageNet respectively when using a ResNet-18 architecture. Code available at https://github.com/brais-martinez/real2binary.

# INTRODUCTION

Following the introduction of the BinaryNeuralNet (BNN) algorithm (Courbariaux et al., 2016), binary neural networks emerged as one of the most promising approaches for obtaining highly efficient neural networks that can be deployed on devices with limited computational resources. Binary convolutions are appealing mainly for two reasons: (a) Model compression: if the weights of the network are stored as bits rather than as 32-bit floats, this implies a reduction of 32× in memory usage. (b) Computational speed-up: computationally intensive floating-point multiply and add operations are replaced by efficient xnor and pop-count operations, which have been shown to provide practical speed-ups of up to 58× on CPU (Rastegari et al., 2016) and, as opposed to general low bit-width operations, are amenable to standard hardware. Despite these appealing properties, binary neural networks have been criticized as binarization typically results in large accuracy drops. Thus, their deployment in practical scenarios is uncommon. For example, on ImageNet classification, there is a ∼ 18% gap in top-1 accuracy between a ResNet-18 and its binary counterpart when binarized with XNOR-Net (Rastegari et al., 2016), which is the method of choice for neural network binarization. But how far are we from training binary neural networks that are powerful enough to become a viable alternative to real-valued networks? Our first contribution in this work is to take stock of recent advances on binary neural networks and train a very strong baseline which already results in state-of-the-art performance. Our second contribution is a method for bridging most of the remaining gap, which boils down to minimizing the discrepancy between the output of the binary and the corresponding real-valued convolution.
This idea is materialized in our work in two complementary ways: Firstly, we use an attention matching strategy so that the real-valued network can more closely guide the binary network during optimization. However, we show that due to the architectural discrepancies between the real and the binary networks, a direct application of teacher-student produces sub-optimal performance. Instead, we propose to use a sequence of teacher-student pairs that progressively bridges the architectural gap. Secondly, we further propose to use the real-valued activations of the binary network, available prior to the binarization preceding convolution, to compute scale factors that are used to re-scale the activations right after the application of the binary convolution. This is in line with recent works which have shown that re-scaling the binary convolution output can result in large performance gains (Rastegari et al., 2016; Bulat & Tzimiropoulos, 2019). However, unlike prior work, we compute the scaling factors in a data-driven manner based on the real-valued activations of each layer prior to binarization, which results in superior performance.

Figure 1: Left: The proposed real-to-binary block. The diagram shows how spatial attention maps computed from a teacher real-valued network are matched with the ones computed from the binary network. Supervision is injected at the end of each binary block. See also section 4.2. Right: The proposed data-driven channel re-scaling approach. The left-hand side branch corresponds to the standard binary convolution module. The right-hand side branch corresponds to the proposed gating function that computes the channel-scaling factors from the output of the batch normalization. The factor r controls the compression ratio on the gating function, and H, W and C indicate the two spatial and the channel dimensions of the activation tensors. See also section 4.3.

Overall, we make the following contributions:

• We construct a very strong baseline by combining some recent insights on training binary networks and by performing a thorough experimentation to find the most well-suited optimization techniques. We show that this baseline already achieves state-of-the-art accuracy on ImageNet, surpassing all previously published works on binary networks.

• We propose a real-to-binary attention matching: this entails that matching spatial attention maps computed at the output of the binary and real-valued convolutions is particularly suited for training binary neural networks (see Fig. 1 left and section 4.2). We also devise an approach in which the architectural gap between real and binary networks is progressively bridged through a sequence of teacher-student pairs.

• We propose a data-driven channel re-scaling: this entails using the real-valued activations of the binary network prior to their binarization to compute the scale factors used to re-scale the activations produced right after the application of the binary convolution. See Fig. 1, right, and section 4.3.
• We show that our combined contributions provide, for the first time, competitive results on two standard datasets, achieving 76.2% top-1 performance on CIFAR-100 and 65.4% top-1 performance on ImageNet when using a ResNet-18 –a gap bellow 3% and 5% respectively compared to their full precision counterparts. 2 Published as a conference paper at ICLR 2020 2 RELATED WORK While being pre-dated by other works on binary networks (Soudry et al., 2014), the BNN algo- rithm (Courbariaux et al., 2016) established how to train networks with binary weights within the familiar back-propagation paradigm. The training method relies on a real-valued copy of the net- work weights which is binarized during the forward pass, but is updated during back-propagation ignoring the binarization step. Unfortunately, BNN resulted in a staggering ∼ 28% gap in top-1 accuracy compared to the full precision ResNet-18 on ImageNet. It is worth noting that binary networks do have a number of floating point operations. In fact, the output of a binary convolution is not binary (values are integers resulting from the count). Also, in accordance to other low bit-width quantization methodologies, the first convolution (a costly 7 × 7 kernel in ResNet), the fully connected layer and the batch normalization layers are all real-valued. In consequence, a line of research has focused on developing methodologies that add a fractional amount of real-valued operations in exchange for significant accuracy gains. For example, the sem- inal work of XNOR-Net (Rastegari et al., 2016) proposed to add a real-valued scaling factor to each output channel of a binary convolution, a technique that has become standard for binary networks. Similarly, Bi-Real Net (Liu et al., 2018) argued that skip connections are fundamental for binary networks and observed that the flow of full precision activations provided by the skip connections is interrupted by the binary downsample convolutions. This degrades the signal and make subsequent skip connections less effective. To alleviate this, they proposed making the downsample layers real valued, obtaining around 3% accuracy increase in exchange for a small increase in computational complexity. Improving the optimization algorithm for binary networks has been another fundamental line of research. Examples include the use of smooth approximations of the gradient, the use of PReLU (Bulat et al., 2019), a two-stage training which binarizes the weights first and then the activations (Bulat et al., 2019) and progressive quantization (Gong et al., 2019; Bulat et al., 2019). The work in (Wang et al., 2019) proposed to learn channel correlations through reinforcement learning to better preserve the sign of a convolution output. A set of regularizers are added to the loss term in (Ding et al., 2019) so as to control the range of values of the activations, and guarantee good gradient flow. Other optimization aspects, such the effect of gradient clipping or batch-norm momentum, were empirically tested in (Alizadeh et al., 2019). In section 4.1, we show how to combine many of the insights provided in these works with standard optimization techniques to obtain a very strong baseline that already achieves state-of-the-art accuracy. While the aforementioned works either maintain the same computational cost, or increase it by a fractional amount, other research has focused instead on relaxing the problem constraints by in- creasing the number of binary operations by a large amount, typically a factor of 2 to 8 times. 
Examples include ABC-Net (Lin et al., 2017), the structure approximation of (Zhuang et al., 2019), the circulant CNN of (Liu et al., 2019), and the binary ensemble of (Zhu et al., 2019). Note that the large increase of binary operations diminishes the efficiency claim that justifies the use of binary networks in the first place. Furthermore, we will show that there is still a lot of margin in order to bridge the accuracy gap prior to resorting to scaling up the network capacity1.

The methodology proposed in this paper has some relations with prior work: our use of attention matching as described in section 4.2 is somewhat related to the feature distillation approach of (Zhuang et al., 2018). However, (Zhuang et al., 2018) tries to match whole feature maps of the to-be-quantized network with the quantized feature maps of a real-valued network that is trained in parallel with the to-be-quantized network. Such an approach is shown to improve training of low-bitwidth quantized models but not binary networks. Notably, our approach based on matching attention maps is much simpler and shown to be effective for the case of binary networks.

Our data-driven channel re-scaling approach, described in section 4.3, is related to the channel re-scaling approach of XNOR-Net, and also that of (Xu & Cheung, 2019; Bulat & Tzimiropoulos, 2019), which propose to learn the scale factors discriminatively through backpropagation. Contrary to (Xu & Cheung, 2019; Bulat & Tzimiropoulos, 2019), our method is data-driven and avoids using fixed scale factors learnt during training. Contrary to XNOR-Net, our method discriminatively learns how to produce the data-driven scale factors so that they are optimal for the task at hand.

1There is also a large body of work focusing on other low-bit quantization strategies, but a review of these techniques goes beyond the scope of this section.

# 3 BACKGROUND

This section reviews the binarization process proposed in (Courbariaux et al., 2016) and its improved version from (Rastegari et al., 2016), which is the method of choice for neural network binarization. We denote by $W \in \mathbb{R}^{o \times c \times k \times k}$ and $A \in \mathbb{R}^{c \times w_{in} \times h_{in}}$ the weights and input features of a CNN layer, where $o$ and $c$ represent the number of output and input channels, $k$ the width and height of the kernel, and $w_{in}$ and $h_{in}$ represent the spatial dimensions of the input features $A$. In (Courbariaux et al., 2016), both weights and activations are binarized using the sign function and then convolution is performed as $A \ast W \approx \mathrm{sign}(A) \circledast \mathrm{sign}(W)$, where $\circledast$ denotes the binary convolution, which can be implemented using bit-wise operations. However, this direct binarization approach introduces a high quantization error that leads to low accuracy. To alleviate this, XNOR-Net (Rastegari et al., 2016) proposes to use real-valued scaling factors to re-scale the output of the binary convolution as

$A \ast W \approx (\mathrm{sign}(A) \circledast \mathrm{sign}(W)) \odot K\alpha, \quad (1)$

where $\odot$ denotes the element-wise multiplication, $\alpha$ and $K$ are the weight and activation scaling factors, respectively, calculated in Rastegari et al. (2016) in an analytic manner. More recently, Bulat & Tzimiropoulos (2019) proposed to fuse $\alpha$ and $K$ into a single factor $\Gamma$ that is learned via backpropagation, resulting in further accuracy gains.

# 4 METHOD

This section firstly introduces our strong baseline. Then, we present two ways to improve the approximation of Eq. 1: Firstly, we use a loss based on matching attention maps computed from the binary and a real-valued network (see section 4.2).
Secondly, we make the scaling factor a function of the real-valued input activations A (see section 4.3). 4.1 BUILDING A STRONG BASELINE Currently, almost all works on binary networks use XNOR-Net and BNN as baselines. In this sec- tion, we show how to construct a strong baseline by incorporating insights and techniques described in recent works as well as standard optimization techniques. We show that our baseline already achieves state-of-the-art accuracy. We believe this is an important contribution towards understand- ing the true impact of proposed methodologies and towards assessing the true gap with real-valued networks. Following prior work in binary networks, we focus on the ResNet-18 architecture and apply the improvements listed below: Block structure: It is well-known that a modified ResNet block must be used to obtain optimal results for binary networks. We found the widely-used setting where the operations are ordered as BatchNorm → Binarization → BinaryConv → Activation to be the best. The skip connection is the last operation of the block (Rastegari et al., 2016). Note that we use the sign function to binarize the activations. However, the BatchNorm layer includes an affine transformation and this ordering of the blocks allows its bias term act as a learnable binarization threshold. Residual learning: We used double skip connections, as proposed in (Liu et al., 2018). Activation: We used PReLU (He et al., 2015) as it is known to facilitate the training of binary networks (Bulat et al., 2019). Scaling factors: We used discriminatively learnt scaling factors via backpropagation as in (Bulat & Tzimiropoulos, 2019). Downsample layers: We used real-valued downsample layers (Liu et al., 2018). We found the large accuracy boost to be consistent across our experiments (around 3 − 4% top-1 improvement on ImageNet). 4 Published as a conference paper at ICLR 2020 We used the following training strategies to train our strong baseline: Initialization: When training binary networks, it is crucial to use a 2-stage optimization strat- egy (Bulat et al., 2019). In particular, we first train a network using binary activations and real-valued weights, and then use the resulting model as initialization to train a network where both weights and activations are binarized. Weight decay: Setting up weight decay carefully is surprisingly important. We use 1e − 5 when training stage 1 (binary activation and real weights network), and set it to 0 on stage 2 (Bethge et al., 2019). Note that weights at stage 2 are either 1 or −1, so applying an L2 regularization term to them does not make sense. Data augmentation: For CIFAR-100 we use the standard random crop, horizontal flip and rotation (±15◦). For ImageNet, we found that random cropping, flipping and colour jitter augmentation worked best. However, colour jitter is disabled for stage 2. Mix-up: We found that mix-up (Zhang et al., 2017) is crucial for CIFAR-100, while it slightly hurts performance for ImageNet – this is due to the higher risk of overfitting on CIFAR-100. Warm-up: We used warm-up for 5 epochs during stage 1 and no warm-up for stage 2. Optimizer: We used Adam (Kingma & Ba, 2014) with a stepwise scheduler. The learning rate is set to 1e − 3 for stage 1, and 2e − 4 for stage 2. For CIFAR-100, we trained for 350 epochs, with steps at epochs 150, 250 and 320. For ImageNet, we train for 75 epochs, with steps at epochs 40, 60 and 70. Batch sizes are 256 for ImageNet and 128 for CIFAR-100. 
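To make the block structure described in Sec. 4.1 concrete (BatchNorm → Sign → BinaryConv → PReLU, with the skip connection applied last and a discriminatively learned channel scale), the following is a minimal PyTorch-style sketch. It is our own simplification under stated assumptions (a clipped straight-through sign estimator, a single 3×3 convolution per block), not the authors' released code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SignSTE(torch.autograd.Function):
    """Sign binarization with a clipped straight-through gradient estimator (assumed variant)."""
    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return torch.sign(x)

    @staticmethod
    def backward(ctx, grad_output):
        x, = ctx.saved_tensors
        # Pass gradients only where |x| <= 1, as is common for binary networks.
        return grad_output * (x.abs() <= 1).float()

class BinaryConv2d(nn.Conv2d):
    """Convolution whose weights are binarized on the fly in the forward pass."""
    def forward(self, x):
        w_bin = SignSTE.apply(self.weight)
        return F.conv2d(x, w_bin, self.bias, self.stride,
                        self.padding, self.dilation, self.groups)

class BinaryBlock(nn.Module):
    """BatchNorm -> Sign -> BinaryConv -> learned channel scale -> PReLU; skip added last."""
    def __init__(self, channels):
        super().__init__()
        self.bn = nn.BatchNorm2d(channels)  # its bias can act as a learnable binarization threshold
        self.conv = BinaryConv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.scale = nn.Parameter(torch.ones(1, channels, 1, 1))  # learned scaling factors
        self.act = nn.PReLU(channels)

    def forward(self, x):
        out = self.bn(x)
        out = SignSTE.apply(out)            # binarize activations
        out = self.conv(out) * self.scale   # binary convolution followed by re-scaling
        out = self.act(out)
        return out + x                      # skip connection is the last operation of the block
```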
4.2 REAL-TO-BINARY ATTENTION MATCHING

We make the reasonable assumption that if a binary network is trained so that the output of each binary convolution more closely matches the output of a real convolution in the corresponding layer of a real-valued network, then significant accuracy gains can be obtained. Notably, a similar assumption was made in (Rastegari et al., 2016) where analytic scale factors were calculated so that the error between binary and real convolutions is minimized. Instead, and inspired by the attention transfer method of (Zagoruyko & Komodakis, 2017), we propose to enforce such a constraint via a loss term at the end of each convolutional block by comparing attention maps calculated from the binary and real-valued activations. Such supervisory signals provide the binary network with much-needed extra guidance. It is also well-known that backpropagation for binary networks is not as effective as for real-valued ones. By introducing such loss terms at the end of each block, gradients do not have to traverse the whole network and suffer a degraded signal. Assuming that attention matching is applied at a set of J transfer points within the network, the total loss can be expressed as:

$L_{att} = \sum_{j=1}^{J} \left\| \frac{Q_S^j}{\|Q_S^j\|_2} - \frac{Q_T^j}{\|Q_T^j\|_2} \right\|, \quad (2)$

where $Q = \sum_{i=1}^{c} |A_i|^2$, $A_i$ is the $i$-th channel of activation map $A$, and $S$ and $T$ denote the binary (student) and real-valued (teacher) networks. Moreover, at the end of the network, we apply a standard logit matching loss (Hinton et al., 2015).

Progressive teacher-student: We observed that teacher and student having as similar architecture as possible is very important in our case. We thus train a sequence of teacher-student pairs that progressively bridges the differences between the real network and the binary network in small increments:

Step 1: the teacher is the real-valued network with the standard ResNet architecture. The student is another real-valued network, but with the same architecture as the binary ResNet-18 (e.g. double skip connection, layer ordering, PReLU activations, etc). Furthermore, a soft binarization (a Tanh function) is applied to the activations instead of the binarization (sign) function. In this way the network is still real-valued, but it behaves more closely to a network with binary activations.

Step 2: The network resulting from the previous step is used as the teacher. A network with binary activations and real-valued weights is used as the student.

Step 3: The network resulting from step 2 is used as the teacher and the network with binary weights and binary activations is the student. In this stage, only logit matching is used.

4.3 DATA-DRIVEN CHANNEL RE-SCALING

While the approach of the previous section provides better guidance for the training of binary networks, the representation power of binary convolutions is still limited, hindering its capacity to approximate the real-valued network. Here we describe how to boost the representation capability of a binary neural network while incurring only a negligible increase in the number of operations.

Previous works have shown the effectiveness of re-scaling binary convolutions with the goal of better approximating real convolutions. XNOR-Net (Rastegari et al., 2016) proposed to compute these scale factors analytically while (Bulat & Tzimiropoulos, 2019; Xu & Cheung, 2019) proposed to learn them discriminatively in an end-to-end manner, showing additional accuracy gains.
For the latter case, during training, the optimization aims to find a set of fixed scaling factors that minimize the average expected loss for the training set. We propose instead to go beyond this and obtain discriminatively-trained input-dependent scaling factors – thus, at test time, these scaling factors will not be fixed but rather inferred from data. Let us first recall what the signal flow is when going through a binary block. The activations entering a binary block are actually real-valued. Batch normalization centers the activations, which are then binarized, losing a large amount of information. Binary convolution, re-scaling and PReLU follow. We propose to use the full-precision activation signal, available prior to the large information loss incurred by the binarization operation, to predict the scaling factors used to re-scale the output of the binary convolution channel-wise. Specifically, we propose to approximate the real convolution as follows:

$A \ast W \approx (\mathrm{sign}(A) \circledast \mathrm{sign}(W)) \odot \alpha \odot G(A; W_G), \quad (3)$

where $W_G$ are the parameters of the gating function $G$. Such a function computes the scale factors used to re-scale the output of the binary convolution, and uses the pre-convolution real-valued activations as input. Fig. 1 shows our implementation of function G. The design is inspired by Hu et al. (2018), but we use the gating function to predict ahead rather than as a self-attention mechanism. An optimal mechanism to modulate the output of the binary convolution clearly should not be the same for all examples as in Bulat & Tzimiropoulos (2019) or Xu & Cheung (2019). Note that in Rastegari et al. (2016) the computation of the scale factors depends on the input activations. However the analytic calculation is sub-optimal with respect to the task at hand. To circumvent the aforementioned problems, our method learns, via backpropagation for the task at hand, to predict the modulating factors using the real-valued input activations. By doing so, more than 1/3 of the remaining gap with the real-valued network is bridged.

4.4 COMPUTATIONAL COST ANALYSIS

Table 1 details the computational cost of the different binary network methodologies. We differentiate between the number of binary and floating point operations, including operations such as skip connections, pooling layers, etc. It shows that our method leaves the number of binary operations constant, and that the number of FLOPs increases by only 1% of the total floating point operation count. This is assuming a factor r of 8, which is the one used in all of our experiments. To put this into perspective, the magnitude is similar to the operation increase incurred by XNOR-Net with respect to its predecessor, BNN. Similarly, the double skip connections proposed in (Liu et al., 2018) add again a comparable amount of operations. Note however that in order to fully exploit the computational efficiency of binary convolutions during inference, a specialized engine such as (Zhang et al., 2019; Yang et al., 2017) is required.

# 5 RESULTS

We present two main sets of experiments. We used ImageNet (Russakovsky et al., 2015) as a benchmark to compare our method against other state-of-the-art approaches in Sec. 5.1. ImageNet is the most widely used dataset to report results on binary networks and, at the same time, allows us to show for the first time that binary networks can perform competitively on a large-scale dataset. We further used CIFAR-100 (Krizhevsky & Hinton, 2009) to conduct ablation studies (Sec. 5.2).
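Before turning to the results, the gating function G of Eq. 3 and Fig. 1 (right) can be sketched as below. This is a minimal, assumed implementation in the spirit of the squeeze-and-excitation design the paper cites (global average pooling, a bottleneck with compression ratio r, and a sigmoid); module and variable names are illustrative, not the authors' code.

```python
import torch
import torch.nn as nn

class GatingFunction(nn.Module):
    """Predicts per-channel scale factors from the real-valued, pre-binarization activations:
    avg-pool -> linear (C -> C/r) -> ReLU -> linear (C/r -> C) -> sigmoid (cf. Eq. 3)."""
    def __init__(self, channels, r=8):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // r),
            nn.ReLU(inplace=True),
            nn.Linear(channels // r, channels),
            nn.Sigmoid(),
        )

    def forward(self, a_real):
        # a_real: real-valued activations before binarization, shape (N, C, H, W)
        n, c, _, _ = a_real.shape
        g = self.pool(a_real).view(n, c)
        g = self.fc(g).view(n, c, 1, 1)
        return g  # broadcast over H and W to re-scale the binary convolution output
```

In a binary block this would be used as, roughly, out = binary_conv(sign(a_real)) * alpha * GatingFunction(C)(a_real), mirroring Eq. 3; the real XTREME-independent detail here is only that the gate reads the activations before they are binarized.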
Method | BOPS | FLOPS
BNN (Courbariaux et al., 2016) | 1.695×10^9 | 1.314×10^8
XNOR-Net (Rastegari et al., 2016) | 1.695×10^9 | 1.333×10^8
Double Skip (Liu et al., 2018) | 1.695×10^9 | 1.351×10^8
Bi-Real (Liu et al., 2018) | 1.676×10^9 | 1.544×10^8
Ours | 1.676×10^9 | 1.564×10^8
Full Precision | 0 | 1.826×10^9

Table 1: Breakdown of floating point and binary operations for variants of binary ResNet-18.

5.1 COMPARISON WITH THE STATE-OF-THE-ART

Table 2 shows a comparison between our method and relevant state-of-the-art methods, including low-bit quantization methods other than binary.

Vs. other binary networks: Our strong baseline already comfortably achieves state-of-the-art results, surpassing the previously best-reported result by about 1% (Wang et al., 2019). Our full method further improves over the state-of-the-art by 5.5% top-1 accuracy. When comparing to binary models that scale the capacity of the network (second set of results on Tab. 2), only (Zhuang et al., 2019) outperforms our method, surpassing it by 0.9% top-1 accuracy - yet, this is achieved using 4 times the number of binary blocks.

Vs. real-valued networks: Our method reduces the performance gap with its real-valued counterpart to ∼ 4% top-1 accuracy, or ∼ 5% if we compare against a real-valued network trained with attention transfer.

Vs. other low-bit quantization: Table 2 also shows a comparison to the state-of-the-art for low-bit quantization methods (first set of results). It can be seen that our method surpasses the performance of all methods, except for TTQ (Zhu et al., 2017), which uses 2-bit weights, full-precision activations and 1.5× the channel width at each layer.

5.2 ABLATION STUDIES

In order to conduct a more detailed ablation study we provide results on CIFAR-100. We thoroughly optimized a ResNet-18 full precision network to serve as the real-valued baseline.

Teacher-Student effectiveness: We trained a real-valued ResNet-18 using ResNet-34 as its teacher, yielding ∼ 1% top-1 accuracy increase. Instead, our progressive teacher-student strategy yields ∼ 5% top-1 accuracy gain, showing that it is a fundamental tool when training binary networks, and that its impact is much larger than for real-valued networks, where the baseline optimization is already healthier.

Performance gap to real-valued: We observe that, for CIFAR-100, we close the gap with real-valued networks to about 2% when comparing with the full-precision ResNet-18, and to about 3% when optimized using teacher supervision. The gap is consistent with that on ImageNet in relative terms: 13% and 10% relative degradation on ImageNet and CIFAR-100 respectively.

Binary vs real downsample: Our proposed method achieves similar performance increase irrespective of whether binary or real-valued downsample layers are used, the improvement being 5.5% and 6.6% top-1 accuracy gain respectively. It is also interesting to note that the results on the ablation study are consistent for all entries in both cases.

Scaling factors and attention matching: It is also noteworthy that the gating module is not effective in the absence of attention matching (see SB+G entries). It seems clear from this result that both are interconnected: the extra supervisory signal is necessary to properly guide the training, while the extra flexibility added through the gating mechanism boosts the capacity of the network to mimic the attention map.
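For reference, the attention-matching term of Eq. 2 (Sec. 4.2) that underlies the attention-transfer ablations above can be sketched as a generic PyTorch loss. This is our own illustrative implementation under the stated definition (Q is the channel-wise sum of squared activations, maps are L2-normalised before comparison), not the authors' released code.

```python
import torch
import torch.nn.functional as F

def attention_map(a):
    """Spatial attention map of Eq. 2: sum of squared channel magnitudes, flattened to (N, H*W)."""
    return a.abs().pow(2).sum(dim=1).flatten(start_dim=1)

def attention_matching_loss(student_feats, teacher_feats, eps=1e-8):
    """Sum over transfer points of the distance between L2-normalised attention maps."""
    loss = 0.0
    for a_s, a_t in zip(student_feats, teacher_feats):
        q_s = F.normalize(attention_map(a_s), dim=1, eps=eps)  # binary (student) block output
        q_t = F.normalize(attention_map(a_t), dim=1, eps=eps)  # real-valued (teacher) block output
        loss = loss + (q_s - q_t).norm(dim=1).mean()
    return loss
```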
Method | Bitwidth (W/A) | Top-1 | Top-5
BWN (Rastegari et al., 2016) | 1/32 | 60.8 | 83.0
TTQ (Zhu et al., 2017) | 2/32 | 66.6 | 87.2
HWGQ (Cai et al., 2017) | 1/2 | 59.6 | 82.2
LQ-Net (Zhang et al., 2018) | 1/2 | 62.6 | 84.3
SYQ (Faraone et al., 2018) | 1/2 | 55.4 | 78.6
DOREFA-Net (Zhou et al., 2016) | 2/2 | 62.6 | 84.4
ABC-Net (Lin et al., 2017) | (1/1)×5 | 65.0 | 85.9
Circulant CNN (Liu et al., 2019) | (1/1)×4 | 61.4 | 82.8
Struct Appr (Zhuang et al., 2019) | (1/1)×4 | 64.2 | 85.6
Struct Appr** (Zhuang et al., 2019) | (1/1)×4 | 66.3 | 86.6
Ensemble (Zhu et al., 2019) | (1/1)×6 | 61.0 | –
BNN (Courbariaux et al., 2016) | 1/1 | 42.2 | 69.2
XNOR-Net (Rastegari et al., 2016) | 1/1 | 51.2 | 73.2
Trained Bin (Xu & Cheung, 2019) | 1/1 | 54.2 | 77.9
Bi-Real Net (Liu et al., 2018)** | 1/1 | 56.4 | 79.5
CI-Net (Wang et al., 2019) | 1/1 | 56.7 | 80.1
XNOR-Net++ (Bulat & Tzimiropoulos, 2019) | 1/1 | 57.1 | 79.9
CI-Net (Wang et al., 2019)** | 1/1 | 59.9 | 84.2
Strong Baseline (ours)** | 1/1 | 60.9 | 83.0
Real-to-Bin (ours)** | 1/1 | 65.4 | 86.2
Real valued | 32/32 | 69.3 | 89.2
Real valued T-S | 32/32 | 70.7 | 90.0

Table 2: Comparison with state-of-the-art methods on ImageNet. ** indicates real-valued downsample. The second column indicates the number of bits used to represent weights and activations. Methods include low-bit quantization (upper section), and methods multiplying the capacity of the network (second section). For the latter case, the second column includes the multiplicative factor of the network capacity used.

Method | Stage 1 (Top-1 / Top-5) | Stage 2 (Top-1 / Top-5)
Strong Baseline | 69.3 / 88.7 | 68.0 / 88.3
SB + Att Trans | 72.2 / 90.3 | 71.1 / 90.1
SB + Att Trans + HKD | 73.1 / 91.2 | 71.9 / 90.9
SB + G | 67.2 / 87.0 | 66.2 / 86.8
SB + Progressive TS | 73.8 / 91.5 | 72.3 / 89.8
Real-to-Bin | 75.0 / 92.2 | 73.5 / 91.6
Strong Baseline** | 72.1 / 89.9 | 69.6 / 89.2
SB + Att Trans** | 74.3 / 91.3 | 72.6 / 91.4
SB + Att Trans + HKD** | 75.4 / 92.2 | 73.9 / 91.2
SB + G** | 72.0 / 89.8 | 70.9 / 89.3
SB + Progressive TS** | 75.7 / 92.1 | 74.6 / 91.8
Real-to-Bin** | 76.5 / 92.8 | 76.2 / 92.7
Full Prec (our impl.) | – | 78.3 / 93.6
Full Prec + TS (our impl.) | – | 79.3 / 94.4

Table 3: Top-1 and Top-5 classification accuracy using ResNet-18 on CIFAR-100. ** indicates real-valued downsample layers. G indicates that the gating function of Sec. 4.3 is used.

# 6 CONCLUSION

In this work we showed how to train binary networks to within a few percent points of their real-valued counterpart, turning binary networks from hopeful research into a compelling alternative to real-valued networks. We did so by training a binary network to not only predict training labels, but also mimic the behaviour of real-valued networks. To this end, we devised a progressive attention matching strategy to drive optimization, and combined it with a gating strategy for scaling the output of binary convolutions, increasing the representation power of the convolutional block. The two strategies combine perfectly to boost the state-of-the-art of binary networks by 5.5% top-1 accuracy on ImageNet, the standard benchmark for binary networks.

# REFERENCES

Milad Alizadeh, Javier Fernández-Marqués, Nicholas D. Lane, and Yarin Gal. An empirical study of binary neural networks' optimisation. In International Conference on Learning Representations, 2019.

Joseph Bethge, Haojin Yang, Marvin Bornstein, and Christoph Meinel. Back to simplicity: How to train accurate BNNs from scratch? arXiv preprint arXiv:1906.08637, 2019.

Adrian Bulat and Georgios Tzimiropoulos. XNOR-Net++: Improved binary neural networks.
In British Machine Vision Conference, 2019. Adrian Bulat, Georgios Tzimiropoulos, Jean Kossaifi, and Maja Pantic. Improved training of binary networks for human pose estimation and image recognition. arXiv preprint arXiv:1904.05868, 2019. Zhaowei Cai, Xiaodong He, Jian Sun, and Nuno Vasconcelos. Deep learning with low precision by half-wave gaussian quantization. In IEEE Conference on Computer Vision and Pattern Recogni- tion, 2017. Matthieu Courbariaux, Itay Hubara, Daniel Soudry, Ran El-Yaniv, and Yoshua Bengio. Binarized neural networks: Training deep neural networks with weights and activations constrained to +1 or-1. arXiv, 2016. Ruizhou Ding, Ting-Wu Chin, Zeye Liu, and Diana Marculescu. Regularizing activation distribu- tion for training binarized deep networks. In IEEE Conference on Computer Vision and Pattern Recognition, 2019. Julian Faraone, Nicholas J. Fraser, Michaela Blott, and Philip H. W. Leong. SYQ: learning symmet- ric quantization for efficient deep neural networks. In IEEE Conference on Computer Vision and Pattern Recognition, 2018. Ruihao Gong, Xianglong Liu, Shenghu Jiang, Tianxiang Li, Peng Hu, Jiazhen Lin, Fengwei Yu, and Junjie Yan. Differentiable soft quantization: Bridging full-precision and low-bit neural networks. arXiv, 2019. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Delving deep into rectifiers: Surpass- ing human-level performance on imagenet classification. In IEEE International Conference on Computer Vision, pp. 1026–1034, 2015. Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531, 2015. Jie Hu, Li Shen, and Gang Sun. Squeeze-and-excitation networks. In IEEE Conference on Computer Vision and Pattern Recognition, 2018. Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014. Alex Krizhevsky and Geoffrey Hinton. Learning multiple layers of features from tiny images. 2009. 9 Published as a conference paper at ICLR 2020 Xiaofan Lin, Cong Zhao, and Wei Pan. Towards accurate binary convolutional neural network. In Advances on Neural Information Processing Systems, 2017. Chunlei Liu, Wenrui Ding, Xin Xia, Baochang Zhang, Jiaxin Gu, Jianzhuang Liu, Rongrong Ji, and David Doermann. Circulant binary convolutional networks: Enhancing the performance of 1-bit In IEEE Conference on Computer Vision and Pattern dcnns with circulant back propagation. Recognition, 2019. Zechun Liu, Baoyuan Wu, Wenhan Luo, Xin Yang, Wei Liu, and Kwang-Ting Cheng. Bi-Real Net: Enhancing the performance of 1-bit CNNs with improved representational capability and advanced training algorithm. In European Conference on Computer Vision, 2018. Mohammad Rastegari, Vicente Ordonez, Joseph Redmon, and Ali Farhadi. Xnor-Net: Imagenet classification using binary convolutional neural networks. In European Conference on Computer Vision, 2016. Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Li Fei- Fei. ImageNet Large Scale Visual Recognition Challenge. International Journal on Computer Vision, 115(3):211–252, 2015. Daniel Soudry, Itay Hubara, and Ron Meir. Expectation backpropagation: Parameter-free train- ing of multilayer neural networks with continuous or discrete weights. In Advances on Neural Information Processing Systems, 2014. Ziwei Wang, Jiwen Lu, Chenxin Tao, Jie Zhou, and Qi Tian. 
Learning channel-wise interactions for binary convolutional neural networks. In IEEE Conference on Computer Vision and Pattern Recognition, 2019. Zhe Xu and Ray C.C. Cheung. Accurate and compact convolutional neural networks with trained binarization. In British Machine Vision Conference, 2019. Haojin Yang, Martin Fritzsche, Christian Bartz, and Christoph Meinel. BMXNet: An open-source binary neural network implementation based on MXNet. In ACM International Conference on Multimedia, 2017. Sergey Zagoruyko and Nikos Komodakis. Paying more attention to attention: Improving the per- formance of convolutional neural networks via attention transfer. In International Conference on Learning Representations, 2017. Dongqing Zhang, Jiaolong Yang, Dongqiangzi Ye, and Gang Hua. LQ-Nets: Learned quantization for highly accurate and compact deep neural networks. In European Conference on Computer Vision, 2018. Hongyi Zhang, Moustapha Cisse, Yann N Dauphin, and David Lopez-Paz. Mixup: Beyond empiri- cal risk minimization. arXiv preprint arXiv:1710.09412, 2017. Jianhao Zhang, Yingwei Pan, Ting Yao, He Zhao, and Tao Mei. dabnn: A super fast inference framework for binary neural networks on ARM devices. In ACM International Conference on Multimedia, 2019. Shuchang Zhou, Yuxin Wu, Zekun Ni, Xinyu Zhou, He Wen, and Yuheng Zou. DoReFa-Net: Training low bitwidth convolutional neural networks with low bitwidth gradients. arXiv, 2016. Chenzhuo Zhu, Song Han, Huizi Mao, and William J Dally. Trained ternary quantization. Interna- tional Conference on Learning Representations, 2017. Shilin Zhu, Xin Dong, and Hao Su. Binary ensemble neural network: More bits per network or more networks per bit? In IEEE Conference on Computer Vision and Pattern Recognition, 2019. Bohan Zhuang, Chunhua Shen, Mingkui Tan, Lingqiao Liu, and Ian D. Reid. Towards effective low-bitwidth convolutional neural networks. In IEEE Conference on Computer Vision and Pattern Recognition, 2018. 10 Published as a conference paper at ICLR 2020 Bohan Zhuang, Chunhua Shen, Mingkui Tan, Lingqiao Liu, and Ian Reid. Structured binary neural networks for accurate image classification and semantic segmentation. In IEEE Conference on Computer Vision and Pattern Recognition, 2019. 11
{ "id": "1710.09412" }
2003.11080
XTREME: A Massively Multilingual Multi-task Benchmark for Evaluating Cross-lingual Generalization
Much recent progress in applications of machine learning models to NLP has been driven by benchmarks that evaluate models across a wide variety of tasks. However, these broad-coverage benchmarks have been mostly limited to English, and despite an increasing interest in multilingual models, a benchmark that enables the comprehensive evaluation of such methods on a diverse range of languages and tasks is still missing. To this end, we introduce the Cross-lingual TRansfer Evaluation of Multilingual Encoders XTREME benchmark, a multi-task benchmark for evaluating the cross-lingual generalization capabilities of multilingual representations across 40 languages and 9 tasks. We demonstrate that while models tested on English reach human performance on many tasks, there is still a sizable gap in the performance of cross-lingually transferred models, particularly on syntactic and sentence retrieval tasks. There is also a wide spread of results across languages. We release the benchmark to encourage research on cross-lingual learning methods that transfer linguistic knowledge across a diverse and representative set of languages and tasks.
http://arxiv.org/pdf/2003.11080
Junjie Hu, Sebastian Ruder, Aditya Siddhant, Graham Neubig, Orhan Firat, Melvin Johnson
cs.CL, cs.LG
In Proceedings of the 37th International Conference on Machine Learning (ICML). July 2020
null
cs.CL
20200324
20200904
# XTREME: A Massively Multilingual Multi-task Benchmark for Evaluating Cross-lingual Generalization

Junjie Hu*1, Sebastian Ruder*2, Aditya Siddhant3, Graham Neubig1, Orhan Firat3, Melvin Johnson3
*Equal contribution. 1Carnegie Mellon University, 2DeepMind, 3Google Research. Correspondence to: Junjie Hu <[email protected]>, Melvin Johnson <[email protected]>.

# Abstract

Much recent progress in applications of machine learning models to NLP has been driven by benchmarks that evaluate models across a wide variety of tasks. However, these broad-coverage benchmarks have been mostly limited to English, and despite an increasing interest in multilingual models, a benchmark that enables the comprehensive evaluation of such methods on a diverse range of languages and tasks is still missing. To this end, we introduce the Cross-lingual TRansfer Evaluation of Multilingual Encoders (XTREME) benchmark, a multi-task benchmark for evaluating the cross-lingual generalization capabilities of multilingual representations across 40 languages and 9 tasks. We demonstrate that while models tested on English reach human performance on many tasks, there is still a sizable gap in the performance of cross-lingually transferred models, particularly on syntactic and sentence retrieval tasks. There is also a wide spread of results across languages. We release the benchmark1 to encourage research on cross-lingual learning methods that transfer linguistic knowledge across a diverse and representative set of languages and tasks.

# 1. Introduction

In natural language processing (NLP), there is a pressing urgency to build systems that serve all of the world's approximately 6,900 languages to overcome language barriers and enable universal information access for the world's citizens (Ruder et al., 2019; Aharoni et al., 2019; Arivazhagan et al., 2019). At the same time, building NLP systems for most of these languages is challenging due to a stark lack of data. Luckily, many languages have similarities in syntax or vocabulary, and multilingual learning approaches that train on multiple languages while leveraging the shared structure of the input space have begun to show promise as ways to alleviate data sparsity. Early work in this direction focused on single tasks, such as grammar induction (Snyder et al., 2009), part-of-speech (POS) tagging (Täckström et al., 2013), parsing (McDonald et al., 2011), and text classification (Klementiev et al., 2012). Over the last few years, there has been a move towards general-purpose multilingual representations that are applicable to many tasks, both on the word level (Mikolov et al., 2013; Faruqui & Dyer, 2014; Artetxe et al., 2017) or the full-sentence level (Devlin et al., 2019; Lample & Conneau, 2019). Despite the fact that such representations are intended to be general-purpose, evaluation of them has often been performed on a very limited and often disparate set of tasks—typically focusing on translation (Glavaš et al., 2019; Lample & Conneau, 2019) and classification (Schwenk & Li, 2018; Conneau et al., 2018b)—and typologically similar languages (Conneau et al., 2018a). To address this problem and incentivize research on truly general-purpose cross-lingual representation and transfer learning, we introduce the Cross-lingual TRansfer Evaluation of Multilingual Encoders (XTREME) benchmark.
XTREME covers 40 typologically diverse languages span- ning 12 language families and includes 9 tasks that require reasoning about different levels of syntax or semantics.2 In addition, we introduce pseudo test sets as diagnostics that cover all 40 languages by automatically translating the English test set of the natural language inference and question-answering dataset to the remaining languages. XTREME focuses on the zero-shot cross-lingual transfer sce- nario, where annotated training data is provided in English but none is provided in the language to which systems must transfer.3 We evaluate a range of state-of-the-art machine 2By typologically diverse, we mean languages that span a wide set of linguistic phenomena such as compounding, inflection, derivation, etc. which occur in many of the world’s languages. 1The benchmark is publicly available at https://sites. research.google/xtreme. The codes used for download- ing data and training baseline models are available at https: //github.com/google-research/xtreme. 3This is done both for efficiency purposes (as it only requires testing, not training, on each language) and practical considerations (as annotated training data is not available for many languages). XTREME: A Benchmark for Evaluating Cross-lingual generalization translation (MT) and multilingual representation-based ap- proaches to performing this transfer. We find that while state- of-the-art models come close to human performance in En- glish on many of the tasks we consider, performance drops significantly when evaluated on other languages. Overall, performance differences are highest for syntactic and sen- tence retrieval tasks. Further, while models do reasonably well in most languages in the Indo-European family, we observe lower performance particularly for Sino-Tibetan, Japonic, Koreanic, and Niger-Congo languages. In sum, our contributions are the following: (i) We release a suite of 9 cross-lingual benchmark tasks covering 40 ty- pologically diverse languages. (ii) We provide an online platform and leaderboard for the evaluation of multilingual models. (iii) We provide a set of strong baselines, which we evaluate across all tasks, and release code to facilitate adop- tion. (iv) We provide an extensive analysis of limitations of state-of-the-art cross-lingual models. On the other hand, cross-lingual approaches have been evalu- ated on a wide range of tasks, including dependency parsing (Schuster et al., 2019), named entity recognition (Rahimi et al., 2019), sentiment analysis (Barnes et al., 2018), natu- ral language inference (Conneau et al., 2018b), document classification (Schwenk & Li, 2018), and question answer- ing (Artetxe et al., 2020; Lewis et al., 2019). Evaluation on a single task is problematic as past work has noted po- tential issues with standard datasets: MLDoc (Schwenk & Li, 2018) can be solved by matching keywords (Artetxe et al., 2020), while MultiNLI, the dataset from which XNLI (Conneau et al., 2018b) was derived, contains superficial cues that can be exploited (Gururangan et al., 2018). Evalu- ation on multiple tasks is thus necessary to fairly compare cross-lingual models. Benchmarks covering multiple tasks like GLUE (Wang et al., 2019b) and SuperGLUE (Wang et al., 2019a) have arguably spurred research in monolin- gual transfer learning. In the cross-lingual setting, such a benchmark not only needs to cover a diverse set of tasks but also languages. XTREME aims to fill this gap. # 2. 
Related Work Cross-lingual representations Early work focused on learning cross-lingual representations using either parallel corpora (Gouws et al., 2015; Luong et al., 2015) or a bilin- gual dictionary to learn a linear transformation (Mikolov et al., 2013; Faruqui & Dyer, 2014). Later approaches re- duced the amount of supervision required using self-training (Artetxe et al., 2017) and unsupervised strategies such as adversarial training (Conneau et al., 2018a), heuristic initial- isation (Artetxe et al., 2018), and optimal transport (Zhang et al., 2017). Building on advances in monolingual trans- fer learning (McCann et al., 2017; Howard & Ruder, 2018; Peters et al., 2018; Devlin et al., 2019), multilingual exten- sions of pretrained encoders have recently been shown to be effective for learning deep cross-lingual representations (Eriguchi et al., 2018; Pires et al., 2019; Wu & Dredze, 2019; Lample & Conneau, 2019; Siddhant et al., 2020). Cross-lingual evaluation One pillar of the evaluation of cross-lingual representations has been translation, either on the word level (bilingual lexicon induction) or on the sen- tence level (machine translation). In most cases, evaluation has been restricted to typologically related languages and similar domains; approaches have been shown to fail in less favorable conditions (Glavaˇs et al., 2019; Vuli´c et al., 2019; Guzm´an et al., 2019). Past work has also reported issues with common datasets for bilingual lexicon induc- tion (Czarnowska et al., 2019; Kementchedjhieva et al., 2019) and a weak correlation with certain downstream tasks (Glavaˇs et al., 2019). Translation, however, only covers one facet of a model’s cross-lingual generalization ability. For instance, it does not capture differences in classification per- formance that are due to cultural differences (Mohammad et al., 2016; Smith et al., 2016). # 3. XTREME # 3.1. Design principles Given XTREME’s goal of providing an accessible benchmark for the evaluation of cross-lingual transfer learning on a diverse and representative set of tasks and languages, we select the tasks and languages that make up the benchmark based on the following principles: Task difficulty Tasks should be sufficiently challenging so that cross-language performance falls short of human performance. Task diversity Tasks should require multilingual models to transfer their meaning representations at different levels, e.g. words, phrases and sentences. For example, while clas- sification tasks require sentence-level transfer of meaning, sequence labeling tasks like part-of-speech (POS) tagging or named entity recognition (NER) test the model’s transfer capabilities at the word level. Training efficiency Tasks should be trainable on a single GPU for less than a day. This is to make the benchmark accessible, in particular to practitioners working with low- resource languages under resource constraints. Multilinguality We prefer tasks that cover as many lan- guages and language families as possible. Sufficient monolingual data Languages should have suf- ficient monolingual data for learning useful pre-trained rep- resentations. Accessibility Each task should be available under a per- missive license that allows the use and redistribution of the XTREME: A Benchmark for Evaluating Cross-lingual generalization Table 1. Characteristics of the datasets in XTREME for the zero-shot transfer setting. For tasks that have training and dev sets in other languages, we only report the English numbers. 
Table 1. We report the number of test examples per target language and the nature of the test sets (whether they are translations of English data or independently annotated). The number in brackets is the size of the intersection with our selected languages. For NER and POS, sizes are in sentences. Struct. pred.: structured prediction. Sent. retrieval: sentence retrieval.

| Category | Corpus | Train | Dev | Test | Test sets | Lang. | Task | Metric | Domain |
| Classification | XNLI | 392,702 | 2,490 | 5,010 | translations | 15 | NLI | Acc. | Misc. |
| Classification | PAWS-X | 49,401 | 2,000 | 2,000 | translations | 7 | Paraphrase | Acc. | Wiki / Quora |
| Struct. pred. | POS | 21,253 | 3,974 | 47–20,436 | ind. annot. | 33 (90) | POS | F1 | Misc. |
| Struct. pred. | NER | 20,000 | 10,000 | 1,000–10,000 | ind. annot. | 40 (176) | NER | F1 | Wikipedia |
| QA | XQuAD | 87,599 | 34,726 | 1,190 | translations | 11 | Span extraction | F1 / EM | Wikipedia |
| QA | MLQA | 87,599 | 34,726 | 4,517–11,590 | translations | 7 | Span extraction | F1 / EM | Wikipedia |
| QA | TyDiQA-GoldP | 3,696 | 634 | 323–2,719 | ind. annot. | 9 | Span extraction | F1 / EM | Wikipedia |
| Retrieval | BUCC | – | – | 1,896–14,330 | – | 5 | Sent. retrieval | F1 | Wiki / news |
| Retrieval | Tatoeba | – | – | 1,000 | – | 33 (122) | Sent. retrieval | Acc. | misc. |

(XQuAD and MLQA share the SQuAD v1.1 training and development data.)

# 3.2. Tasks

XTREME consists of nine tasks that fall into four different categories requiring reasoning on different levels of meaning. We give an overview of all tasks in Table 1, and describe the task details as follows.

XNLI The Cross-lingual Natural Language Inference corpus (Conneau et al., 2018b) asks whether a premise sentence entails, contradicts, or is neutral toward a hypothesis sentence. Crowd-sourced English data is translated to ten other languages by professional translators and used for evaluation, while the MultiNLI (Williams et al., 2018) training data is used for training.

PAWS-X The Cross-lingual Paraphrase Adversaries from Word Scrambling (Yang et al., 2019) dataset requires determining whether two sentences are paraphrases. A subset of the PAWS dev and test sets (Zhang et al., 2019) was translated to six other languages by professional translators and is used for evaluation, while the PAWS training set is used for training.

POS We use POS tagging data from the Universal Dependencies v2.5 (Nivre et al., 2018) treebanks, which cover 90 languages. Each word is assigned one of 17 universal POS tags. We use the English training data for training and evaluate on the test sets of the target languages.

NER For NER, we use the Wikiann (Pan et al., 2017) dataset. Named entities in Wikipedia were automatically annotated with LOC, PER, and ORG tags in IOB2 format using a combination of knowledge base properties, cross-lingual and anchor links, self-training, and data selection. We use the balanced train, dev, and test splits from Rahimi et al. (2019).

XQuAD The Cross-lingual Question Answering Dataset (Artetxe et al., 2020) requires identifying the answer to a question as a span in the corresponding paragraph. A subset of the English SQuAD v1.1 (Rajpurkar et al., 2016) dev set was translated into ten other languages by professional translators and is used for evaluation.

MLQA The Multilingual Question Answering (Lewis et al., 2019) dataset is another cross-lingual question answering dataset similar to XQuAD. The evaluation data for English and six other languages was obtained by automatically mining target language sentences that are parallel to sentences in English from Wikipedia, crowd-sourcing annotations in English, and translating the question and aligning the answer spans in the target languages. For both XQuAD and MLQA, we use the SQuAD v1.1 training data for training and evaluate on the test data of the corresponding task.

TyDiQA-GoldP We use the gold passage version of the Typologically Diverse Question Answering (Clark et al., 2020) dataset, a benchmark for information-seeking question answering, which covers nine languages. The gold passage version is a simplified version of the primary task, which uses only the gold passage as context and excludes unanswerable questions. It is thus similar to XQuAD and MLQA, while being more challenging as questions have been written without seeing the answers, leading to 3× and 2× less lexical overlap compared to XQuAD and MLQA respectively. We use the English training data for training and evaluate on the test sets of the target languages.

BUCC The goal of the second and third shared task of the workshop on Building and Using Parallel Corpora (Zweigenbaum et al., 2017; 2018) is to extract parallel sentences from a comparable corpus between English and four other languages. The dataset provides train and test splits for each language. For simplicity, we evaluate representations on the test sets directly without fine-tuning and calculate similarity using cosine similarity.4

Tatoeba We use the Tatoeba dataset (Artetxe & Schwenk, 2019), which consists of up to 1,000 English-aligned sentence pairs covering 122 languages. We find the nearest neighbour using cosine similarity and calculate error rate (a minimal illustrative sketch of this retrieval setup is given after Section 4.1 below).

# 3.3. Languages

As noted in Section 3.1, we choose our target languages based on availability of monolingual data, and typological diversity. We use the number of articles in Wikipedia as a proxy for the amount of monolingual data available online. In order to strike a balance between language diversity and availability of monolingual data, we select all languages out of the top 100 Wikipedias5 with the most articles as of December 2019.6 We first select all languages that appear in at least three of our benchmark datasets. This leaves us with 19 languages, most of which are Indo-European or major world languages. We then select 21 additional languages that appear in at least one dataset and come from less represented language families. Wherever possible, we choose at least two languages per family.7

In total, XTREME covers the following 40 languages (shown with their ISO 639-1 codes for brevity) belonging to 12 language families and two isolates: af, ar, bg, bn, de, el, en, es, et, eu, fa, fi, fr, he, hi, hu, id, it, ja, jv, ka, kk, ko, ml, mr, ms, my, nl, pt, ru, sw, ta, te, th, tl, tr, ur, vi, yo, and zh. We provide a detailed overview of these languages in terms of their number of Wikipedia articles, linguistic features, and coverage in XTREME in the appendix.

While XTREME covers these languages in the sense that there is gold standard data in at least one task in each language, this does not mean that it covers all aspects of each language that are necessary for transfer. Languages may reveal different characteristics based on the task, domain, and register in which they are used. XTREME thus only serves as a glimpse into a model's true cross-lingual generalization capability.

# 3.4. Pseudo test data for analyses

XTREME covers 40 languages overall. Evaluation across the majority of languages is only possible for a subset of tasks, i.e. POS, NER, and Tatoeba. As additional diagnostics and to enable a broader comparison across languages for a more diverse set of tasks, we automatically translate the English portions of a representative classification and QA task to the remaining languages using an in-house translation system.8 We choose XNLI and XQuAD as both have test sets that are translations of the English data by professional translators.

We first verify that performance on the translated test sets is a good proxy for performance on the gold standard test sets. We report the detailed results in the appendix. For XQuAD, the automatically translated test sets underestimate mBERT's true performance by 3.0 F1 / 0.2 EM points, similar to the 2.6 F1 points reported by Agić & Schluter (2018) when translating the test data to other languages.9 For XNLI, the automatically translated test sets overestimate the true prediction accuracy by 2.4 points. In order to measure the translation quality between the human-translated test data and our pseudo test data, we compute the BLEU score and the chrF score (Popović, 2015), which is suitable for measuring the translation quality of some languages such as Chinese and Russian. For the 14 languages in XNLI, we obtain average scores of 34.2 BLEU and 58.9 chrF on our pseudo test data compared to the reference translations, which correlate with a Pearson's ρ of 0.57 and 0.28 respectively with mBERT performance.

Translating the English data to the remaining languages yields 40-way parallel pseudo test data that we employ for analyses in Section 5.

4 Results can be improved using more sophisticated similarity metrics (Artetxe & Schwenk, 2019).
5 https://meta.wikimedia.org/wiki/List_of_Wikipedias
6 This also has the benefit that they are covered by state-of-the-art methods such as mBERT and XLM.
7 For the Austro-Asiatic, Kartvelian, and Kra-Dai families as well as for isolates, we only obtain one language.
8 Details of our translation system are provided in the appendix.
9 Note that even human translated test sets may underestimate a model's true cross-lingual generalization ability as such translationese has been shown to be less lexically diverse than naturally composed language (Koppel & Ordan).

# 4. Experiments

# 4.1. Training and evaluation setup

XTREME focuses on the evaluation of multilingual representations. We do not place any restriction on the amount or nature of the monolingual data used for pretraining multilingual representations. However, we request authors to be explicit about the data they use for training, in particular any cross-lingual signal. In addition, we suggest authors should not use any additional labelled data in the target task beyond the one that is provided.

For evaluation, we focus on zero-shot cross-lingual transfer with English as the source language as this is the most common setting for the evaluation of multilingual representations and as many tasks only have training data available in English. Although English is not generally the best source language for cross-lingual transfer for all target languages (Lin et al., 2019), this is still the most practically useful setting. A single source language also facilitates evaluation as models only need to be trained once and can be evaluated on all other languages.10

Concretely, pretrained multilingual representations are fine-tuned on English labelled data of an XTREME task. The model is then evaluated on the test data of the task in the target languages.
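To make the setup above concrete, the following is a minimal sketch of zero-shot transfer for a sentence-pair classification task such as XNLI. It is an illustration only, not the authors' released code: it assumes PyTorch and the HuggingFace Transformers library, and the data iterables `xnli_en_train` and `xnli_test` are hypothetical placeholders.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_NAME = "bert-base-multilingual-cased"  # mBERT; "xlm-roberta-large" for XLM-R
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=3)

def encode(premises, hypotheses):
    # Sentence-pair encoding as used for NLI.
    return tokenizer(premises, hypotheses, padding=True, truncation=True, return_tensors="pt")

def finetune_on_english(model, english_batches, lr=2e-5, epochs=2):
    # English-only fine-tuning; no target-language labels are ever seen.
    optimizer = torch.optim.AdamW(model.parameters(), lr=lr)
    model.train()
    for _ in range(epochs):
        for premises, hypotheses, labels in english_batches:
            outputs = model(**encode(premises, hypotheses), labels=torch.tensor(labels))
            outputs.loss.backward()
            optimizer.step()
            optimizer.zero_grad()

@torch.no_grad()
def accuracy(model, batches):
    model.eval()
    correct = total = 0
    for premises, hypotheses, labels in batches:
        preds = model(**encode(premises, hypotheses)).logits.argmax(dim=-1)
        correct += (preds == torch.tensor(labels)).sum().item()
        total += len(labels)
    return correct / total

# Train once on English, then evaluate the same model on every target language:
# finetune_on_english(model, xnli_en_train)
# scores = {lang: accuracy(model, xnli_test[lang]) for lang in ["ar", "de", "sw", "zh"]}
```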
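The retrieval tasks (BUCC and Tatoeba, Section 3.2) are evaluated without any fine-tuning by matching sentence representations with cosine similarity. The sketch below illustrates one plausible way to do this under the same library assumptions; the pooled hidden layer, the helper names, and the sentence lists are illustrative choices, not the exact procedure used in the paper.

```python
import torch
from transformers import AutoTokenizer, AutoModel

MODEL_NAME = "bert-base-multilingual-cased"
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
encoder = AutoModel.from_pretrained(MODEL_NAME, output_hidden_states=True)

@torch.no_grad()
def embed(sentences, layer=7):
    # Mean-pool the token embeddings of one middle layer; middle layers tend to
    # align better across languages (see the layer analysis in the appendix).
    batch = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
    hidden = encoder(**batch).hidden_states[layer]           # (batch, seq_len, dim)
    mask = batch["attention_mask"].unsqueeze(-1).float()
    pooled = (hidden * mask).sum(dim=1) / mask.sum(dim=1)
    return torch.nn.functional.normalize(pooled, dim=-1)     # unit length -> dot = cosine

def retrieve(source_sentences, target_sentences):
    # For each source sentence, return the index of its nearest target sentence.
    sims = embed(source_sentences) @ embed(target_sentences).T
    return sims.argmax(dim=-1)

# preds = retrieve(tatoeba_en, tatoeba_xx)   # hypothetical parallel sentence lists
# acc = (preds == torch.arange(len(tatoeba_en))).float().mean()   # accuracy = 1 - error rate
```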
dance of in-language data and a requirement for annotation projection. Translate-train multi-task We also experiment with a multi-task version of the translate-train setting where we fine-tune mBERT on the combined translated training data of all languages jointly. # 4.2. Baselines We evaluate a number of strong baselines and state-of- the-art models. The approaches we consider learn mul- tilingual representations via self-supervision or leverage translations—either for representation learning or for train- ing models in the source or target language. We focus on models that learn deep contextual representations as these have achieved state-of-the-art results on many tasks. For comparability among the representation learning ap- proaches, we focus on models that learn a multilingual embedding space between all languages in XTREME. We en- courage future work to focus on these languages to capture as much language diversity as possible. We report hyper- parameters in the appendix. All hyper-parameter tuning is done on English validation data. We encourage authors evaluating on XTREME to do the same. Translate-test Alternatively, we train the English BERT- Large (Devlin et al., 2019) model on the English training data and evaluate it on test data that we translated from the target language to English using our in-house MT system. In-language model For the POS, NER, and TyDiQA- GoldP tasks where target-language training data is available, we fine-tune mBERT on monolingual data in the target language to estimate how useful target language labelled data is compared to labelled data in a source language. In-language few-shot In many cases, it may be possible to procure a small number of labelled examples in the target language (Eisenschlos et al., 2019). To evaluate the viabil- ity of such an approach, we additionally compare against an mBERT model fine-tuned on 1,000 target language ex- amples for the tasks where monolingual training data is available in the target languages. mBERT Multilingual BERT (Devlin et al., 2019) is a trans- former model (Vaswani et al., 2017) that has been pretrained on the Wikipedias of 104 languages using masked language modelling (MLM). In-language multi-task For the tasks where monolingual training data is available, we additionally compare against an mBERT model that is jointly trained on the combined training data of all languages. XLM XLM (Lample & Conneau, 2019) uses a similar pretraining objective as mBERT with a larger model, a larger shared vocabulary, and trained on the same Wikipedia data covering 100 languages. XLM-R XLM-R Large (Conneau et al., 2020) is similar to XLM but was trained on more than a magnitude more data from the web covering 100 languages. MMTE The massively multilingual translation encoder is part of an NMT model that has been trained on in-house parallel data of 103 languages extracted from the web (Ari- vazhagan et al., 2019). For transfer, we fine-tune the encoder of the model (Siddhant et al., 2020). Human performance For XNLI, PAWS-X, and XQuAD, we obtain human performance estimates from the En- glish datasets they are derived from, MNLI, PAWS-X, and SQuAD respectively (Nangia & Bowman, 2019; Zhang et al., 2019; Rajpurkar et al., 2016).11 For TyDiQA-GoldP, we use the performance estimate of Clark et al. (2020). For MLQA, as answers are annotated using the same format as SQuAD, we employ the same human performance estimate. For POS tagging, we adopt 97% as a canonical estimate of human performance based on Manning (2011). 
We are not able to obtain human performance estimates for NER as annotations have been automatically generated and for sentence retrieval as identifying a translation among a large number of documents is too time-consuming. Translate-train For many language pairs, an MT model may be available, which can be used to obtain data in the tar- get language. To evaluate the impact of using such data, we translate the English training data into the target language using our in-house MT system. We then fine-tune mBERT on the translated data. We provide details on how we align answer spans in the source and target language for the QA tasks in the appendix. We do not provide translation-based baselines for structured prediction tasks due to an abun- # 4.3. Results Overall results We show the main results in Table 2. XLM- R is the best-performing zero-shot transfer model and gen- erally improves upon mBERT significantly. The improve- ment is smaller, however, for the structured prediction tasks. MMTE achieves performance competitive with mBERT on most tasks, with stronger results on XNLI, POS, and BUCC. 10Future work may also consider multi-source transfer, which is interesting particularly for low-resource languages, and transfer to unknown languages or unknown language-task combinations. 11Performance may differ across languages due to many factors but English performance still serves as a reasonable proxy. XTREME: A Benchmark for Evaluating Cross-lingual generalization Table 2. Overall results of baselines across all XTREME tasks. Translation-based baselines are not meaningful for sentence retrieval. We provide in-language baselines where target language training data is available. Note that for the QA tasks, translate-test performance is not directly comparable to the other scores as a small number of test questions were discarded and alignment is measured on the English data. Sentence retrieval TyDiQA-GoldP BUCC Tatoeba Pair sentence Structured prediction Question answering Model Avg XNLI PAWS-X POS NER XQuAD MLQA Metrics Acc. Acc. F1 F1 F1 / EM F1 / EM F1 / EM F1 Acc. Cross-lingual zero-shot transfer (models are trained on English data) mBERT XLM XLM-R Large MMTE 59.8 55.7 68.2 59.5 65.4 69.1 79.2 67.4 81.9 80.9 86.4 81.3 71.5 71.3 73.8 73.5 62.2 61.2 65.4 58.3 64.5 / 49.4 59.8 / 44.3 76.6 / 60.8 64.4 / 46.2 61.4 / 44.2 48.5 / 32.6 71.6 / 53.2 60.3 / 41.4 59.7 / 43.9 43.6 / 29.1 65.1 / 45.0 58.1 / 43.8 56.7 56.8 66.0 59.8 38.7 32.6 57.3 37.9 Translate-train (models are trained on English training data translated to the target language) mBERT mBERT, multi-task - - 74.6 75.1 86.3 88.9 - - - - 70.0 / 56.0 72.4 / 58.3 65.6 / 48.0 67.6 / 49.8 55.1 / 42.1 64.2 / 49.3 - - - - Translate-test (models are trained on English data and evaluated on target language data translated to English) BERT-large - 76.8 84.4 - - 76.3 / 62.1 72.9 / 55.3 72.1 / 56.0 - - In-language models (models are trained on the target language training data) mBERT, 1000 examples mBERT mBERT, multi-task - - - - - - - - - 87.6 89.8 91.5 77.9 88.3 89.1 - - - - - - 58.7 / 46.5 74.5 / 62.7 77.6 / 68.0 - - - - - - # Human 92.8 97.5 97.0 91.2 / 82.3 91.2 / 82.3 90.1 / - If a strong MT system is available, translating the training sets provides improvements over using the same model with zero-shot transfer. Translating the test data provides similar benefits compared to translating the training data and is particularly effective for the more complex QA tasks, while being more expensive during inference time. 
While using an MT system as a black box leads to strong baselines, the MT system could be further improved in the context of data augmentation.

For the tasks where in-language training data is available, multilingual models trained on in-language data outperform zero-shot transfer models. However, zero-shot transfer models nevertheless outperform multilingual models trained on only 1,000 in-language examples on the complex QA tasks as long as more samples in English are available. For the structured prediction tasks, 1,000 in-language examples enable the model to achieve performance that is similar to being trained on the full labelled dataset, similar to findings for classification (Eisenschlos et al., 2019). Finally, multi-task learning on the Translate-train and In-language setting generally improves upon single language training.

Table 3. The cross-lingual transfer gap (lower is better) of different models on XTREME tasks. The transfer gap is the difference between performance on the English test set and the average performance on the other languages. A transfer gap of 0 indicates perfect cross-lingual transfer. For the QA datasets, we only show EM scores. The average gaps are computed over the sentence classification and QA tasks.

| Model | XNLI | PAWS-X | XQuAD | MLQA | TyDiQA-GoldP | Avg | POS | NER |
| mBERT | 16.5 | 14.1 | 25.0 | 27.5 | 22.2 | 21.1 | 25.5 | 24.3 |
| XLM-R | 10.2 | 12.4 | 16.3 | 19.1 | 13.3 | 14.3 | 23.6 | 19.8 |
| Translate-train | 7.3 | 9.0 | 17.6 | 22.2 | 24.2 | 16.1 | – | – |
| Translate-test | 6.7 | 12.0 | 16.3 | 18.3 | 11.2 | 12.9 | – | – |

Cross-lingual transfer gap For a number of representative models, we show the cross-lingual transfer gap, i.e. the difference between the performance on the English test set and all other languages, in Table 3.12 While powerful models such as XLM-R reduce the gap significantly compared to mBERT for challenging tasks such as XQuAD and MLQA, they do not have the same impact on the syntactic structured prediction tasks. On the classification tasks, the transfer learning gap is lowest, indicating that there may be less headroom for progress on these tasks. The use of MT reduces the gap across all tasks. Overall, a large gap remains for all approaches, which indicates much potential for work on cross-lingual transfer.

12 This comparison should be taken with a grain of salt, as scores across languages are not directly comparable for the tasks where test sets differ, i.e. POS, NER, MLQA, and TyDiQA-GoldP, and differences in scores may not be linearly related.

# 5. Analyses

We conduct a series of analyses investigating the limitations of state-of-the-art cross-lingual models.

Figure 1. An overview of XLM-R's performance on the XTREME tasks across all languages in each task. We highlight an estimate of human performance, performance on the English test set, the average of all languages excluding English, and the family of each language. Performance on pseudo test sets for XNLI and XQuAD is shown with slightly transparent markers.

Best zero-shot model analysis We show the performance of the best zero-shot transfer model, XLM-R Large, broken down by task and language in Figure 1. The figure illustrates why it is important to evaluate general-purpose multilingual representations across a diverse range of tasks and languages: On XNLI, probably the most common standard cross-lingual evaluation task, and PAWS-X, scores cluster in a relatively small range—even considering pseudo test sets for XNLI. However, scores for the remaining tasks have significantly wider spread, particularly as we include pseudo test sets. For TyDiQA-GoldP, English performance is lowest in comparison; the high performance on members of the Austronesian and Uralic language families (Indonesian and Finnish) may be due to less complex Wikipedia context passages for these languages. Across tasks, we generally observe higher performance on Indo-European languages and lower performance for other language families, particularly for Sino-Tibetan, Japonic, Koreanic, and Niger-Congo languages. Some of these difficulties may be due to tokenisation and an under-representation of ideograms in the joint sentencepiece vocabulary, which has been shown to be important in a cross-lingual model's performance (Artetxe et al., 2020; Conneau et al., 2020). We observe similar trends for mBERT, for which we show the same graph in the appendix.

Table 4. Accuracy of mBERT on POS tag trigrams and 4-grams in the target language dev data that appeared and did not appear in the English training data. We show the performance on English, the average across all other languages, and their difference.

| | trigram, seen | trigram, unseen | 4-gram, seen | 4-gram, unseen |
| en | 90.3 | 63.0 | 88.1 | 67.5 |
| avg w/o en | 50.6 | 12.1 | 44.3 | 18.3 |
| difference | 39.7 | 50.9 | 43.7 | 49.2 |

Correlation with pretraining data size We calculate the Pearson correlation coefficient ρ of the model performance and the number of Wikipedia articles (see appendix) in each language and show results in Figure 2.13 For mBERT, which was pretrained on Wikipedia, we observe a high correlation for most tasks (ρ ≈ 0.8) except for the structured prediction tasks where ρ ≈ 0.35. We observe similar trends for XLM and XLM-R, with lower numbers for XLM-R due to the different pretraining domain (see the appendix). This indicates that current models are not able to fully leverage the information extracted from the pretraining data to transfer to syntactic tasks.

13 We observe similar correlations when using the number of tokens in Wikipedia instead.

Figure 2. Performance of mBERT across tasks and languages in comparison to the number of Wikipedia articles for each language. We show tasks with a Pearson correlation coefficient ρ > 0.7 on the left and others on the right. Numbers across tasks are not directly comparable. We remove the x axis labels of overlapping languages for clarity. We additionally plot the linear fit for each task (curved due to the logarithmic scale of the x axis).
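As a small, self-contained illustration of the correlation analysis above (not the analysis code used for the paper), Pearson's ρ between per-language task scores and Wikipedia sizes can be computed as follows; the task scores below are made-up placeholders, and SciPy is an assumed dependency.

```python
from scipy.stats import pearsonr

# Number of Wikipedia articles per language in millions (cf. Table 5 in the appendix).
wiki_articles_m = {"sw": 0.05, "hi": 0.13, "ru": 1.58, "de": 2.37}
# Hypothetical per-language scores of some model on one task (placeholders only).
task_scores = {"sw": 61.0, "hi": 66.5, "ru": 71.3, "de": 78.0}

langs = sorted(wiki_articles_m)
x = [wiki_articles_m[lang] for lang in langs]
y = [task_scores[lang] for lang in langs]

rho, p_value = pearsonr(x, y)
print(f"Pearson's rho = {rho:.2f} (p = {p_value:.3f})")
```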
Analysis of language characteristics We analyze results based on different language families and writing scripts in Figure 3. For mBERT, we observe the best transfer performance on branches of the Indo-European language family such as Germanic, Romance and Slavic languages. In contrast, cross-lingual transfer performance on low-resource language families such as Niger-Congo and Kra-Dai is still low. Looking at scripts, we find that the performance on syntactic tasks differs among popular scripts such as Latin and ideogram scripts. For example in the NER task, mBERT performs better on data in Latin script than that in Chinese or Japanese ideograms. This indicates that the current models still have difficulty transferring word-level syntactic information across languages written in different scripts.

Figure 3. Performance of mBERT across tasks grouped by language families (left) and scripts (right). The number of languages per group is in brackets and the groups are from low-resource to high-resource on the x-axis. We additionally plot the 3rd order polynomial fit for the minimum and maximum values for each group.

Errors across languages For XNLI and XQuAD where the other test sets are translations from English, we analyze whether approaches make the same type of errors in the source and target languages. To this end, we explore whether examples that are correctly and incorrectly predicted in English are correctly predicted in other languages. On the XNLI dev set, mBERT correctly predicts on average 71.8% of examples that were correctly predicted in English. For examples that were misclassified, the model's performance is about random. On average, predictions on XNLI are consistent between English and another language for 68.3% of examples. On the XQuAD test set, mBERT correctly predicts around 60% of examples that were correctly predicted in English and 20% of examples that were incorrectly predicted. While some of these are plausible spans, more work needs to focus on achieving consistent predictions across languages.

Generalization to unseen tag combinations and entities We analyze possible reasons for the less successful transfer on structured prediction tasks. The Universal Dependencies dataset used for POS tagging uses a common set of 17 POS tags for all languages, so a model is not required to generalize to unseen tags at test time. However, a model may be required to generalize to unseen tag combinations at test time, for instance due to differences in word order between languages. We gauge how challenging such generalization is by computing a model's accuracy for POS tag n-grams in the target language dev data that were not seen in the English training data. We calculate values for tag trigrams and 4-grams and show accuracy scores for mBERT in Table 4. We observe the largest differences in performance for unseen trigrams and 4-grams, which highlights that existing cross-lingual models struggle to transfer to the syntactic characteristics of other languages. For NER, we estimate how well models generalize to unseen entities at test time. We compute mBERT's accuracy on entities in the target language dev data that were not seen in the English training data. We observe the largest difference between performance on seen and unseen entities for Indonesian and Swahili. Isolating for confounding factors such as entity length, frequency, and Latin script, we find the largest differences in performance for Swahili and Basque. Together, this indicates that the model may struggle to generalize to entities that are more characteristic of the target language. We show the detailed results for both analyses in the appendix.

# 6. Conclusions

As we have highlighted in our analysis, a model's cross-lingual transfer performance varies significantly both between tasks and languages. XTREME is a first step towards obtaining a more accurate estimate of a model's cross-lingual generalization ability. While XTREME is still inherently limited by the data coverage of its constituent tasks for many low-resource languages, XTREME nevertheless provides significantly broader coverage and more fine-grained analysis tools to encourage research on the cross-lingual generalization ability of models. We have released the code for XTREME and scripts for fine-tuning models on tasks in XTREME, which should serve to catalyze future research.

# Acknowledgements

We'd like to thank Jon Clark for sharing with us the TyDiQA Gold Passage data and for valuable feedback. We would also like to thank Sam Bowman, Sebastian Goodman, and Tal Linzen for their feedback. JH and GN are sponsored by the Air Force Research Laboratory under agreement number FA8750-19-2-0200.

References

Agić, Ž. and Schluter, N. Baselines and test data for cross-lingual inference. In Proceedings of LREC 2018, 2018.

Aharoni, R., Johnson, M., and Firat, O. Massively Multilingual Neural Machine Translation. In Proceedings of NAACL 2019, 2019.

Arivazhagan, N., Bapna, A., Firat, O., Lepikhin, D., Johnson, M., Krikun, M., Chen, M. X., Cao, Y., Foster, G., Cherry, C., Macherey, W., Chen, Z., and Wu, Y. Massively Multilingual Neural Machine Translation in the Wild: Findings and Challenges. arXiv preprint arXiv:1907.05019, 2019.

Artetxe, M. and Schwenk, H. Massively Multilingual Sentence Embeddings for Zero-Shot Cross-Lingual Transfer and Beyond. Transactions of the ACL 2019, 2019.

Artetxe, M., Labaka, G., and Agirre, E. Learning bilingual word embeddings with (almost) no bilingual data. In Proceedings of ACL 2017, pp. 451–462, 2017.

Artetxe, M., Labaka, G., and Agirre, E. A robust self-learning method for fully unsupervised cross-lingual mappings of word embeddings.
In Proceedings of ACL 2018, pp. 789–798, 2018. Artetxe, M., Ruder, S., and Yogatama, D. On the Cross- lingual Transferability of Monolingual Representations. In Proceedings of ACL 2020, 2020. Barnes, J., Klinger, R., and Schulte im Walde, S. Bilin- gual sentiment embeddings: Joint projection of senti- ment across languages. In Proceedings of ACL 2018, pp. 2483–2493, Melbourne, Australia, 2018. Association for Computational Linguistics. Conneau, A., Rinott, R., Lample, G., Williams, A., Bowman, S., Schwenk, H., and Stoyanov, V. XNLI: Evaluating cross-lingual sentence representations. In Proceedings of EMNLP 2018, pp. 2475–2485, 2018b. Conneau, A., Khandelwal, K., Goyal, N., Chaudhary, V., Wenzek, G., Guzm´an, F., Grave, E., Ott, M., Zettlemoyer, L., and Stoyanov, V. Unsupervised Cross-lingual Repre- sentation Learning at Scale. In Proceedings of ACL 2020, 2020. Czarnowska, P., Ruder, S., Grave, E., Cotterell, R., and Copestake, A. Don’t forget the long tail! a comprehen- sive analysis of morphological generalization in bilingual lexicon induction. In Proceedings of EMNLP 2019, pp. 973–982, 2019. Devlin, J., Chang, M.-W., Lee, K., and Toutanova, K. BERT: Pre-training of deep bidirectional transformers for lan- guage understanding. In Proceedings of NAACL 2019, 2019. Eisenschlos, J., Ruder, S., Czapla, P., Kadras, M., Gugger, S., and Howard, J. MultiFiT: Efficient Multi-lingual Language Model Fine-tuning. In Proceedings of EMNLP 2019, 2019. Eriguchi, A., Johnson, M., Firat, O., Kazawa, H., and Macherey, W. Zero-shot cross-lingual classification using multilingual neural machine translation. arXiv preprint arXiv:1809.04686, 2018. Faruqui, M. and Dyer, C. Improving vector space word representations using multilingual correlation. In Pro- ceedings of EACL 2014, pp. 462–471, 2014. Glavaˇs, G., Litschko, R., Ruder, S., and Vuli´c, I. How to (Properly) Evaluate Cross-Lingual Word Embeddings: On Strong Baselines, Comparative Analyses, and Some Misconceptions. In Proceedings of ACL 2019, 2019. Gouws, S., Bengio, Y., and Corrado, G. BilBOWA: Fast bilingual distributed representations without word align- ments. In Proceedings of ICML 2015, pp. 748–756, 2015. Gururangan, S., Swayamdipta, S., Levy, O., Schwartz, R., Bowman, S. R., and Smith, N. A. Annotation Artifacts in Natural Language Inference Data. In Proceedings of NAACL-HLT 2018, 2018. J. H., Choi, E., Collins, M., Garrette, D., Kwiatkowski, T., Nikolaev, V., and Palomaki, J. TyDi QA: A Benchmark for Information-Seeking Question Answer- ing in Typologically Diverse Languages. In Transactions of the Association of Computational Linguistics, 2020. Guzm´an, F., Chen, P.-J., Ott, M., Pino, J., Lample, G., Koehn, P., Chaudhary, V., and Ranzato, M. The FLoRes Evaluation Datasets for Low-Resource Machine Transla- tion: Nepali-English and Sinhala-English. In Proceedings of EMNLP 2019, pp. 6100–6113, 2019. Conneau, A., Lample, G., Ranzato, M., Denoyer, L., and J´egou, H. Word translation without parallel data. In Proceedings of ICLR 2018, 2018a. Howard, J. and Ruder, S. Universal language model fine- tuning for text classification. In Proceedings of ACL 2018, pp. 328–339, 2018. XTREME: A Benchmark for Evaluating Cross-lingual generalization Hsu, T.-y., Liu, C.-l., and Lee, H.-y. Zero-shot Reading Comprehension by Cross-lingual Transfer Learning with Multi-lingual Language Representation Model. In Pro- ceedings of EMNLP 2019, pp. 5935–5942, 2019. Mohammad, S. M., Salameh, M., and Kiritchenko, S. How translation alters sentiment. 
Journal of Artificial Intelli- gence Research, 55:95–130, 2016. Kementchedjhieva, Y., Hartmann, M., and Søgaard, A. Lost in evaluation: Misleading benchmarks for bilingual dic- tionary induction. In Proceedings of EMNLP 2019, pp. 3327–3332, 2019. Nangia, N. and Bowman, S. R. Human vs. Muppet: A Conservative Estimate of Human Performance on the In Proceedings of ACL 2019, pp. GLUE Benchmark. 4566–4575, 2019. Inducing Crosslingual Distributed Representations of Words. In Proceedings of COLING 2012, 2012. Nivre, J., Abrams, M., Agi´c, ˇZ., Ahrenberg, L., Antonsen, L., Aranzabe, M. J., Arutie, G., Asahara, M., Ateyah, L., Attia, M., et al. Universal dependencies 2.2. 2018. Koppel, M. and Ordan, N. Translationese and its di- alects. In Proceedings of ACL 2011, pages=1318–1326, year=2011, organization=Association for Computational Linguistics. Pan, X., Zhang, B., May, J., Nothman, J., Knight, K., and Ji, H. Cross-lingual name tagging and linking for 282 languages. In Proceedings of ACL 2017, pp. 1946–1958, 2017. Lample, G. and Conneau, A. Cross-lingual Language Model Pretraining. In Proceedings of NeurIPS 2019, 2019. Lee, K., Yoon, K., Park, S., and Hwang, S. W. Semi- supervised training data generation for multilingual ques- tion answering. In Proceedings of LREC 2018, pp. 2758– 2762, 2018. Peters, M., Neumann, M., Iyyer, M., Gardner, M., Clark, C., Lee, K., and Zettlemoyer, L. Deep contextualized word representations. In Proceedings of NAACL 2018, pp. 2227–2237, 2018. Pires, T., Schlinger, E., and Garrette, D. How multilingual is Multilingual BERT? In Proceedings of ACL 2019, 2019. Lewis, P., Ouz, B., Rinott, R., Riedel, S., and Schwenk, H. MLQA: Evaluating Cross-lingual Extractive Question Answering. arXiv preprint arXiv:1910.07475, 2019. Popovi´c, M. chrF: character n-gram f-score for automatic MT evaluation. In Proceedings of the Tenth Workshop on Statistical Machine Translation, pp. 392–395, Lisbon, Portugal, 2015. Lin, Y.-H., Chen, C.-Y., Lee, J., Li, Z., Zhang, Y., Xia, M., Rijhwani, S., He, J., Zhang, Z., Ma, X., Anastasopoulos, A., Littell, P., and Neubig, G. Choosing Transfer Lan- guages for Cross-Lingual Learning. In Proceedings of ACL 2019, 2019. Luong, T., Pham, H., and Manning, C. D. Bilingual word representations with monolingual quality in mind. In Pro- ceedings of the 1st Workshop on Vector Space Modeling for Natural Language Processing, pp. 151–159, 2015. Manning, C. D. Part-of-speech tagging from 97% to 100%: In International con- is it time for some linguistics? ference on intelligent text processing and computational linguistics, pp. 171–189. Springer, 2011. McCann, B., Bradbury, J., Xiong, C., and Socher, R. Learned in translation: Contextualized word vectors. In Proceedings of NIPS 2017, pp. 6294–6305, 2017. McDonald, R., Petrov, S., and Hall, K. Multi-source transfer of delexicalized dependency parsers. In Proceedings of EMNLP 2011, pp. 62–72, 2011. Mikolov, T., Le, Q. V., and Sutskever, I. Exploiting simi- larities among languages for machine translation. arXiv preprint arXiv:1309.4168, 2013. Rahimi, A., Li, Y., and Cohn, T. Massively Multilingual Transfer for NER. In Proceedings of ACL 2019, 2019. Rajpurkar, P., Zhang, J., Lopyrev, K., and Liang, P. SQuAD: 100,000+ Questions for Machine Comprehension of Text. In Proceedings of EMNLP 2016, 2016. Ruder, S., Vuli´c, I., and Søgaard, A. A Survey of Cross- lingual Word Embedding Models. Journal of Artificial Intelligence Research, 65:569–631, 2019. Schuster, T., Ram, O., Barzilay, R., and Globerson, A. 
Cross-Lingual Alignment of Contextual Word Embed- dings, with Applications to Zero-shot Dependency Pars- ing. In Proceedings of NAACL 2019, 2019. Schwenk, H. and Li, X. A Corpus for Multilingual Docu- ment Classification in Eight Languages. In Proceedings of LREC 2018, 2018. Siddhant, A., Johnson, M., Tsai, H., Arivazhagan, N., Riesa, J., Bapna, A., Firat, O., and Raman, K. Evaluating the Cross-Lingual Effectiveness of Massively Multilingual Neural Machine Translation. In Proceedings of AAAI 2020, 2020. XTREME: A Benchmark for Evaluating Cross-lingual generalization Smith, L., Giorgi, S., Solanki, R., Eichstaedt, J., Schwartz, H. A., Abdul-Mageed, M., Buffone, A., and Ungar, L. Does well-being translate on twitter? In Proceedings of EMNLP 2016, pp. 2042–2047, 2016. Zhang, Y., Baldridge, J., and He, L. PAWS: Paraphrase adversaries from word scrambling. In Proceedings of NAACL 2019, pp. 1298–1308, 2019. Snyder, B., Naseem, T., and Barzilay, R. Unsupervised multilingual grammar induction. In Proceedings of ACL 2009, pp. 73–81, 2009. Zweigenbaum, P., Sharoff, S., and Rapp, R. Overview of the second bucc shared task: Spotting parallel sentences in comparable corpora. In Proceedings of the 10th Workshop on Building and Using Comparable Corpora, pp. 60–67, 2017. T¨ackstr¨om, O., Das, D., Petrov, S., McDonald, R., and Nivre, J. Token and Type Constraints for Cross-Lingual Part-of-Speech Tagging. In Transactions of the Associa- tion for Computational Linguistics, 2013. Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, Ł., and Polosukhin, I. Attention Is All You Need. In Proceedings of NIPS 2017, 2017. Zweigenbaum, P., Sharoff, S., and Rapp, R. Overview of the third bucc shared task: Spotting parallel sentences in comparable corpora. In Proceedings of 11th Workshop on Building and Using Comparable Corpora, pp. 39–42, 2018. Vuli´c, I., Glavaˇs, G., Reichart, R., and Korhonen, A. Do We Really Need Fully Unsupervised Cross-Lingual Embed- dings? In Proceedings of EMNLP 2019, 2019. Wang, A., Pruksachatkun, Y., Nangia, N., Singh, A., Michael, J., Hill, F., Levy, O., and Bowman, S. R. Super- glue: A stickier benchmark for general-purpose language understanding systems. In Proceedings of NeurIPS 2019, 2019a. Wang, A., Singh, A., Michael, J., Hill, F., Levy, O., and Bowman, S. R. GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding. In Proceedings of ICLR 2019, 2019b. Williams, A., Nangia, N., and Bowman, S. R. A Broad- Coverage Challenge Corpus for Sentence Understanding through Inference. In Proceedings of NAACL-HLT 2018, 2018. Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., et al. Huggingface’s transformers: State-of-the-art natural language processing. arXiv preprint arXiv:1910.03771, 2019. Wu, S. and Dredze, M. Beto, Bentz, Becas: The Surprising Cross-Lingual Effectiveness of BERT. In Proceedings of EMNLP 2019, 2019. Yang, Y., Zhang, Y., Tar, C., and Baldridge, J. PAWS-X: A cross-lingual adversarial dataset for paraphrase identifi- cation. In Proceedings of EMNLP 2019, pp. 3685–3690, 2019. Zhang, M., Liu, Y., Luan, H., and Sun, M. Earth mover’s distance minimization for unsupervised bilingual lexicon induction. In Proceedings of EMNLP 2017, pp. 1934– 1945, 2017. XTREME: A Benchmark for Evaluating Cross-lingual generalization # A. 
Languages

We show a detailed overview of languages in the cross-lingual benchmark including interesting typological differences in Table 5. Wikipedia information is taken from Wikipedia14 and linguistic information from WALS Online15. XTREME includes members of the Afro-Asiatic, Austro-Asiatic, Austronesian, Dravidian, Indo-European, Japonic, Kartvelian, Kra-Dai, Niger-Congo, Sino-Tibetan, Turkic, and Uralic language families as well as of two isolates, Basque and Korean.

# B. Hyper-parameters

Table 6 summarizes the hyper-parameters of baseline and state-of-the-art models. We refer to XLM-100 as XLM, and XLM-R-large as XLM-R in our paper to simplify the notation.

mBERT We use the cased version, which covers 104 languages, has 12 layers, 768 hidden units per layer, 12 attention heads, a 110k shared WordPiece vocabulary, and 110M parameters.16 The model was trained using Wikipedia data in all 104 languages, oversampling low-resource languages with an exponential smoothing factor of 0.7. We generally fine-tune mBERT for two epochs, with a training batch size of 32 and a learning rate of 2e-5. For training BERT models on the QA tasks, we use the original BERT codebase. For all other tasks, we use the Transformers library (Wolf et al., 2019).

XLM and XLM-R We use the XLM and XLM-R Large versions that cover 100 languages, use a 200k shared BPE vocabulary, and that have been trained with masked language modelling.17 We fine-tune both for two epochs with a learning rate of 3e-5 and an effective batch size of 16. In contrast to XLM, XLM-R does not use language embeddings. We use the Transformers library for training XLM and XLM-R models on all tasks.

# C. Translations for QA datasets

We use an in-house translation tool to obtain translations for our datasets. For the question answering tasks (XQuAD and MLQA), the answer span is often not recoverable if the context is translated directly. We experimented with enclosing the answer span in the English context in quotes (Lee et al., 2018; Lewis et al., 2019) but found that quotes were often dropped in translations (at different rates depending on the language). We found that enclosing the answer span in HTML tags (e.g. <b> and </b>) worked more reliably. If this fails, as a back-off we fuzzy match the translated answer with the context similar to (Hsu et al., 2019). If the minimal edit distance between the closest match and the translated answer is larger than min(10, answer_len/2), we drop the example. On the whole, using this combination, we recover more than 97% of all answer spans in training and test data.

Figure 4. An overview of mBERT's performance on the XTREME tasks for the languages of each task. We highlight an estimate of human performance, performance on the English test set, the average of all languages excluding English, and the family of each language. Performance on pseudo test sets for XNLI and XQuAD is shown with slightly transparent markers.

# E. mBERT performance across tasks and languages

We show the performance of mBERT across all tasks and languages of XTREME in Figure 4.

14 https://meta.wikimedia.org/wiki/List_of_Wikipedias
15 https://wals.info/languoid
16 https://github.com/google-research/bert/blob/master/multilingual.md
17 https://github.com/facebookresearch/XLM

# D. Performance on translated test sets

We show results comparing the performance of mBERT and translate-train (mBERT) baselines on the XQuAD test sets with automatically translated test sets in Table 7. Performance on the automatically translated test sets underestimates the performance of mBERT by 2.9 F1 / 0.2 EM points but overestimates the performance of the translate-train baseline by 4.0 F1 / 6.7 EM points.
The biggest part of this margin is explained by the difference in scores on the Thai test set. Overall, this indicates that automatically translated test sets are useful as a proxy for cross-lingual performance but may not be reliable for evaluating models that have been trained on translations as these have learnt to exploit the biases of translationese. 14https://meta.wikimedia.org/wiki/List_of_ Wikipedias # 15https://wals.info/languoid 16https://github.com/google-research/bert/ blob/master/multilingual.md 17https://github.com/facebookresearch/XLM # E. mBERT performance across tasks and languages We show the performance of mBERT across all tasks and languages of XTREME in Table 4. XTREME: A Benchmark for Evaluating Cross-lingual generalization Table 5. Statistics about languages in the cross-lingual benchmark. Languages belong to 12 language families and two isolates, with Indo-European (IE) having the most members. Diacritics / special characters: Language adds diacritics (additional symbols to letters). Compounding: Language makes extensive use of word compounds. Bound words / clitics: Function words attach to other words. Inflection: Words are inflected to represent grammatical meaning (e.g. case marking). Derivation: A single token can represent entire phrases or sentences. Language ISO 639-1 code # Wikipedia articles (in millions) Script Language family Diacritics / special characters Extensive compound- ing Bound words / clitics Inflec- tion Deriva- tion Afrikaans Arabic Basque Bengali Bulgarian Burmese Dutch English Estonian Finnish French Georgian German Greek Hebrew Hindi Hungarian Indonesian Italian Japanese Javanese Kazakh Korean Malay Malayalam Mandarin Marathi Persian Portuguese Russian Spanish Swahili Tagalog Tamil Telugu Thai Turkish Urdu Vietnamese Yoruba af ar eu bn bg my nl en et fi fr ka de el he hi hu id it ja jv kk ko ms ml zh mr fa pt ru es sw tl ta te th tr ur vi yo 0.09 1.02 0.34 0.08 0.26 0.05 1.99 5.98 0.20 0.47 2.16 0.13 2.37 0.17 0.25 0.13 0.46 0.51 1.57 1.18 0.06 0.23 0.47 0.33 0.07 1.09 0.06 0.70 1.02 1.58 1.56 0.05 0.08 0.12 0.07 0.13 0.34 0.15 1.24 0.03 Latin Arabic Latin Brahmic Cyrillic Brahmic Latin Latin Latin Latin Latin Georgian Latin Greek Hebrew Devanagari Latin Latin Latin Ideograms Brahmic Arabic Hangul Latin Brahmic Chinese ideograms Devanagari Perso-Arabic Latin Cyrillic Latin Latin Brahmic Brahmic Brahmic Brahmic Latin Perso-Arabic Latin Arabic IE: Germanic Afro-Asiatic Basque IE: Indo-Aryan IE: Slavic Sino-Tibetan IE: Germanic IE: Germanic Uralic Uralic IE: Romance Kartvelian IE: Germanic IE: Greek Afro-Asiatic IE: Indo-Aryan Uralic Austronesian IE: Romance Japonic Austronesian Turkic Koreanic Austronesian Dravidian Sino-Tibetan IE: Indo-Aryan IE: Iranian IE: Romance IE: Slavic IE: Romance Niger-Congo Austronesian Dravidian Dravidian Kra-Dai Turkic IE: Indo-Aryan Austro-Asiatic Niger-Congo X X X X X X X X X X X X X X X X X X X X X X X X X X X X X X X X X X X X X X X X X X X X X X X X X X X X X X X X X X X X X X X X X X X X X X X X X X X X X X X X X X X X X X X X X X X X X X X X X X X X X X # datasets with language 3 7 3 3 4 1 3 9 3 3 6 2 8 5 3 6 4 4 3 4 1 1 5 2 2 8 3 2 3 7 7 3 1 3 4 4 5 4 6 1 XTREME: A Benchmark for Evaluating Cross-lingual generalization Table 6. Hyper-parameters of baseline and state-of-the-art models. We do not use XLM-15 and XLM-R-Base in our experiments. 
Model Parameters Langs Vocab size Layers BERT-large mBERT MMTE XLM-15 XLM-100 XLM-R-Base XLM-R-Large 364,353,862 178,566,653 191,733,123 346,351,384 827,696,960 470,295,954 816,143,506 1 104 103 15 100 100 100 28,996 119,547 64,000 95,000 200,000 250,002 250,002 24 12 6 12 12 12 24 60 50 F1 scores Figure 5. Comparison of mBERT’s sentence representations by averaging word embeddings in each layer in the BUCC task. # F. Correlation with pretraining data size We show the Pearson correlation coefficient ρ of mBERT, XLM, and XLM-R with the number of Wikipedia articles in Table 9. XLM and mBERT were pretrained on Wikipedia, while XLM-R was pretrained on data from the web. # I. Sentence representations across all layers # G. Generalization to unseen tag combinations We show the performance of mBERT on POS tag trigrams and 4-grams that were seen and not seen in the English training data in Table 10. # H. Generalization to unseen entities We show the performance of mBERT on entities in the tar- get language NER dev data that were seen and not seen in the English NER training data in Table 11. For simplicity, we count an entity as occurring in the English training data if a subset of at least two tokens matches with an entity in the English training data. As most matching entities in the target language data only consist of up to two tokens, are somewhat frequent, and consist only of Latin characters, we provide the performance on all entities fitting each criterion respectively for comparison. For all target languages in the table except Spanish, entities that appeared in the English training data are more likely to be tagged correctly than ones that did not. The differences are largest for two languages that are typologically distant to English, Indonesian (id) and Swahili (sw). For most languages, entities that appear in the English training data are similarly likely to be correctly classified as entities that are either frequent, appear in Latin characters, or are short. However, for Swahili and Basque (eu), mBERT does much better on entities that appeared in the English training data compared to the comparison enti- ties. Another interesting case is Georgian (ka), which uses a unique script. The NER model is very good at recognizing entities that are written in Latin script but performs less well on entities in Georgian script. For sentence retrieval tasks, we analyze whether the multi- lingual sentence representations obtained from all layers are well-aligned in the embedding spaces. Without fine-tuning on any parallel sentences at all, we explore three ways of extracting the sentence representations from all the models: (1) the embeddings of the first token in the last layer, also known as [CLS] token; (2) the average word embeddings in each layer; (3) the concatenation of the average word embeddings in the bottom, middle, and top 4 layers, i.e., Layer 1 to 4 (bottom), Layer 5 to 8 (middle), Layer 9 to 12 (top). Figure 5 shows the F1 scores of the average word embeddings in each layer of mBERT in the BUCC task. We observe that the average word embeddings in the middle lay- ers, e.g., Layer 6 to 8, perform better than that in the bottom or the top layers. In Table 14, we show the performance of these three types of sentence embeddings in the BUCC task. The embeddings of the CLS token perform relatively bad in cross-lingual retrieval tasks. 
We conjecture that the CLS embeddings highly abstract the semantic meaning of a sen- tence, while they lose the token-level information which is important for matching two translated sentences in two lan- guages. With respect to the concatenation of average word embeddings from four continuous layers, We also observe that embeddings from the middle layers perform better than that from the bottom and top layers. Average word embed- dings in the middle individual layer perform comparative to the concatenated embeddings from the middle four layers. # I.1. Language Families and Scripts We also report the performance of XLM-R in all the tasks across different language families and writing scripts in Figure 6. XTREME: A Benchmark for Evaluating Cross-lingual generalization Table 7. Comparison of F1 and EM scores of mBERT and translate-train (mBERT) baselines on XQuAD test sets (gold), which were translated by professional translators and automatically translated test sets (auto). Test set es de el ru tr ar vi th zh hi avg mBERT translate-train gold auto gold auto 75.6 / 56.9 76.1 / 58.7 80.2 / 63.1 80.7 / 66.0 70.6 / 54.0 64.3 / 49.9 75.6 / 60.7 71.1 / 58.9 62.6 / 44.9 57.9 / 42.5 70.0 / 53.0 69.3 / 54.5 71.3 / 53.3 68.3 / 51.8 75.0 / 59.7 75.7 / 61.5 55.4 / 40.1 55.6 / 42.9 68.9 / 54.8 71.2 / 59.1 61.5 / 45.1 62.1 / 48.6 68.0 / 51.1 74.3 / 60.7 69.5 / 49.6 68.6 / 54.3 75.6 / 56.2 76.8 / 64.0 42.7 / 33.5 41.1 / 32.6 36.9 / 33.5 79.5 / 74.8 58.0 / 48.3 48.5 / 47.7 66.2 / 56.6 59.3 / 58.0 59.2 / 46.0 54.1 / 40.9 69.6 / 55.4 69.1 / 55.2 62.6 / 47.2 59.7 / 47.0 68.7 / 54.6 72.7 / 61.3 Table 8. Comparison of accuracy scores of mBERT baseline on XNLI test sets (gold), which were translated by professional translators and automatically translated test sets (auto) in 14 languages. BLEU and chrF scores are computed to measure the translation quality between gold and automatically translated test sets. Languages zh es de ar ur ru bg el fr hi sw th tr vi avg auto Acc. gold Acc. 69.1 67.8 74.7 73.5 72.8 70.0 66.5 64.3 64.5 57.2 71.6 67.8 70.2 68.0 67.7 65.3 74.3 73.4 65.1 58.9 50.2 49.7 54.5 54.1 60.0 60.9 72.7 69.3 66.7 64.3 BLEU chrF 40.92 35.96 43.46 67.92 30.94 60.28 32.35 59.64 20.13 48.21 22.62 50.38 45.04 67.52 60.29 75.34 47.91 69.58 29.55 53.85 31.25 59.84 10.65 54.89 15.39 51.46 56.93 69.37 34.82 58.87 100 v0} — a as Â¥ . u . my y 60 ss a=, _@ 55 8 ' x --* = . ° + -# ,° S : $ x B op ——— 2 oo . + TyDIQA Tatocba * BUCC Language families sorted by # of Wikipedia documents 100 0 — - boat . 6 ’ . ny 60 x Be nt “x g . * . . x 5 a = : ——_ 0 \XNLI POS « XQUAD + TyDIQA Tatoeba < PAWS-X = NER e MLOA * BUCC SP SF SO EC SF FF SS SS § Yr SV Or Se er Language scripts sorted by # of Wikipedia documents 100 100 v0} — a as 0 — - boat Â¥ . u . my . 6 ’ . y 60 ss a=, _@ 55 8 ' x --* ny 60 x Be nt “x = . ° + -# ,° g . * . . S : $ x x 5 op ——— 2 oo . a = : ——_ + TyDIQA Tatocba 0 \XNLI POS « XQUAD + TyDIQA Tatoeba * BUCC < PAWS-X = NER e MLOA * BUCC SP SF SO EC SF FF SS SS § Yr SV Or Se er Language families sorted by # of Wikipedia documents Language scripts sorted by # of Wikipedia documents Figure 6. Performance of XLM-R across tasks grouped by language families (left) and scripts (right). The number of languages per group is in brackets and the groups are from low-resource to high-resource on the x-axis. We additionally plot the 3rd order polynomial fit for the minimum and maximum values for each group. XTREME: A Benchmark for Evaluating Cross-lingual generalization # J. Results for each task and language Table 9. 
Pearson correlation coefficients (ρ) of zero-shot transfer performance and Wikipedia size across datasets and models. We show the detailed results for all tasks and languages in Tables 12 (XNLI), 15 (PAWS-X), 20 (POS), 21 (NER), 17 (XQuAD), 19 (MLQA), 18 (TyDiQA-GoldP), 16 (BUCC), and 13 (Tatoeba). XNLI PAWS-X POS NER XQuAD MLQA TyDiQA-GoldP BUCC Tatoeba mBERT XLM XLM-R 0.79 0.80 0.75 0.81 0.76 0.79 0.36 0.32 0.22 0.35 0.29 0.27 0.80 0.74 0.50 0.87 0.73 0.76 0.82 0.52 0.14 0.95 0.61 0.36 0.68 0.68 0.49 Table 10. Accuracy of mBERT on the target language dev data on POS tag trigrams and 4-grams that appeared and did not appear in the English training data. We show the average performance across all non-English languages and the difference of said average compared to the English performance on the bottom. trigram, seen trigram, unseen 4-gram, seen en 90.3 63.0 88.1 67.5 af ar bg de el es et eu he hi hu id it ja ko mr nl pt ru ta te tr ur zh 68.1 22.0 63.1 77.8 59.6 68.6 60.7 32.8 52.7 38.7 55.5 60.8 75.5 16.3 22.0 31.7 75.5 76.2 69.1 30.3 57.8 41.2 30.6 29.0 8.2 0.7 14.6 47.2 9.1 10.6 14.4 7.1 35.7 13.0 28.8 16.6 12.8 0.0 2.9 0.0 24.1 14.9 4.8 0.0 0.0 6.2 18.3 0.0 64.1 14.9 56.1 73.0 52.5 62.4 53.1 28.7 44.0 32.6 46.9 54.7 71.8 12.3 14.7 25.5 71.0 71.2 63.8 24.5 48.7 33.9 22.3 21.7 24.2 4.6 23.9 48.7 14.2 24.9 31.9 8.1 27.4 12.5 23.7 21.6 23.5 1.0 3.8 3.3 37.8 30.6 20.6 4.2 24.7 10.1 10.9 3.9 # 4-gram, unseen # avg diff 50.6 39.7 12.1 50.9 44.3 43.7 18.3 49.2 XTREME: A Benchmark for Evaluating Cross-lingual generalization Table 11. Comparison of accuracies for entities in the target language NER dev data that were seen in the English NER training data (a); were not seen in the English NER training data (b); only consist of up to two tokens (c); only consist of Latin characters (d); and occur at least twice in the dev data (e). We only show languages where the sets (a–e) contain at least 100 entities each. We show the difference between (a) and (b) and the minimum difference between (a) and (c-e). he en pt nl ms ka it id hu fr fi eu et es el de af ru sw tr (a) Seen (b) Not seen 94.7 82.1 88.3 80.2 91.4 74.8 91.9 84.6 76.3 80.4 88.3 78.9 83.6 69.4 85.3 79.8 90.5 80.1 78.2 56.5 90.7 78.3 89.4 58.0 88.4 81.5 92.3 70.2 88.6 75.0 93.5 82.9 88.6 82.3 83.9 68.5 96.3 66.6 85.2 73.7 (a) − (b) 12.6 8.1 16.5 7.2 -4.1 9.4 14.1 5.5 10.4 21.7 12.3 31.5 6.9 22.1 13.6 10.6 6.4 15.4 29.7 11.6 (c) Short (d) Latin (e) Freq min((a) − (c–e)) 86.5 83.6 87.3 7.4 82.9 81.2 80.6 5.4 80.3 87.5 81.9 3.9 88.2 86.2 91.6 0.3 86.6 80.0 83.4 3.7 81.7 79.5 79.4 6.6 72.5 70.3 68.8 11.0 83.9 80.3 85.7 0.4 88.6 81.1 77.3 1.9 66.3 77.2 66.8 1.0 83.7 79.9 86.0 4.7 85.8 61.8 56.5 3.6 87.2 82.6 88.8 0.4 72.5 89.6 74.3 2.7 89.1 76.3 81.3 0.5 87.6 84.2 87.1 5.9 87.8 83.0 84.4 0.8 78.0 83.8 76.5 0.1 65.7 70.0 49.1 26.4 83.1 75.0 81.9 2.2 vi 91.4 73.4 18.0 84.6 74.9 78.6 6.8 Table 12. XNLI accuracy scores for each language. 
Model en ar bg de el es fr hi ru sw th tr ur vi zh mBERT XLM XLMR MMTE 80.8 82.8 88.7 79.6 64.3 66.0 77.2 64.9 68.0 71.9 83.0 70.4 70.0 72.7 82.5 68.2 65.3 70.4 80.8 67.3 73.5 75.5 83.7 71.6 73.4 74.3 82.2 69.5 58.9 62.5 75.6 63.5 67.8 69.9 79.1 66.2 49.7 58.1 71.2 61.9 54.1 65.5 77.4 66.2 60.9 66.4 78.0 63.6 57.2 59.8 71.7 60.0 69.3 70.7 79.3 69.7 67.8 70.2 78.2 69.2 Translate-train (multi-task) Translate-train Translate-test 81.9 80.8 85.9 73.8 73.6 73.1 77.6 76.6 76.6 77.6 77.4 76.9 75.9 75.7 75.3 79.1 78.1 78.0 77.8 77.4 77.5 70.7 71.9 69.1 75.4 75.2 74.8 70.5 69.4 68.0 70.0 70.9 67.1 74.3 75.3 73.5 67.4 67.2 66.4 77.0 75.0 76.6 77.6 74.1 76.3 avg 65.4 69.1 79.2 67.5 75.1 74.6 76.8 Table 13. Tatoeba results (Accuracy) for each language af Lang. 42.7 BERT 43.2 XLM XLMR 58.2 ar 25.8 18.2 47.5 bg 49.3 40 71.6 bn 17 13.5 43 de 77.2 66.2 88.8 el 29.8 25.6 61.8 es 68.7 58.4 75.7 et 29.3 24.8 52.2 eu 25.5 17.1 35.8 fa 46.1 32.2 70.5 fi 39 32.2 71.6 fr 66.3 54.5 73.7 he 41.9 32.1 66.4 hi 34.8 26.5 72.2 hu 38.7 30.1 65.4 id 54.6 45.9 77 it 58.4 56.5 68.3 jv ka kk ko ml mr nl pt ru sw ta te th tl tr ur vi 17.6 BERT 22.4 XLM XLMR 14.1 20.5 22.9 52.1 27.1 17.9 48.5 38.5 25.5 61.4 19.8 20.1 65.4 20.9 13.9 56.8 68 59.6 80.8 69.9 63.9 82.2 61.2 44.8 74.1 11.5 12.6 20.3 14.3 20.2 26.4 16.2 12.4 35.9 13.7 31.8 29.4 16 14.8 36.7 34.8 26.2 65.7 31.6 18.1 24.3 62 47.1 74.7 ja 42 40 60.6 zh 71.6 42.2 68.3 XTREME: A Benchmark for Evaluating Cross-lingual generalization Table 14. Three types of sentence embeddings from mBERT in BUCC tasks: (1) CLS token embeddings in the last layer; (2) Average word embeddings in the middle layers, i.e., Layer 6, 7, 8; (3) the concatenation of average word embeddings in the continuous four layers, i.e., Layer 1-4 (bottom layers), Layer 5-8 (middle layers), Layer 9-12 (top layers). Type de fr zh ru CLS Layer 6 Layer 7 Layer 8 Layer 1-4 Layer 5-8 Layer 9-12 3.88 51.29 62.51 64.32 6.98 63.12 53.97 4.73 56.32 62.62 62.46 12.3 63.42 52.68 0.89 41.38 49.99 50.49 12.05 52.84 44.18 2.15 38.81 51.84 53.58 4.33 51.67 43.13 Table 15. PAWS-X accuracy scores for each language. Model en de es fr ja ko zh avg mBERT XLM XLMR MMTE 94.0 94.0 94.7 93.1 85.7 85.9 89.7 85.1 87.4 88.3 90.1 87.2 87.0 87.4 90.4 86.9 73.0 69.3 78.7 72.0 69.6 64.8 79.0 69.2 77.0 76.5 82.3 75.9 81.9 80.9 86.4 81.3 Translate-train Translate-train (multi-task) Translate-test 94.0 94.5 93.5 87.5 90.5 88.2 89.4 91.6 89.3 89.6 91.7 87.4 78.6 84.4 78.4 81.6 83.9 76.6 83.5 85.8 77.6 86.3 88.9 84.4 Table 16. BUCC results (F1 scores) for each languages. Model de fr ru zh avg 62.5 BERT XLM 56.3 XLMR 67.5 MMTE 67.9 62.6 63.9 66.5 63.9 51.8 60.6 73.5 54.3 50.0 46.6 56.7 53.3 56.7 56.8 66.0 59.8 XTREME: A Benchmark for Evaluating Cross-lingual generalization Table 17. XQuAD results (F1 / EM) for each language. 
Model en ar de el es hi ru th tr vi zh avg mBERT XLM XLMR MMTE 83.5 / 72.2 74.2 / 62.1 86.5 / 75.7 80.1 / 68.1 61.5 / 45.1 61.4 / 44.7 68.6 / 49.0 63.2 / 46.2 70.6 / 54.0 66.0 / 49.7 80.4 / 63.4 68.8 / 50.3 62.6 / 44.9 57.5 / 39.1 79.8 / 61.7 61.3 / 35.9 75.5 / 56.9 68.2 / 49.8 82.0 / 63.9 72.4 / 52.5 59.2 / 46.0 56.6 / 40.3 76.7 / 59.7 61.3 / 47.2 71.3 / 53.3 65.3 / 48.2 80.1 / 64.3 68.4 / 45.2 42.7 / 33.5 35.4 / 24.5 74.2 / 62.8 48.4 / 35.9 55.4 / 40.1 57.9 / 41.2 75.9 / 59.3 58.1 / 40.9 69.5 / 49.6 65.8 / 47.6 79.1 / 59.0 70.9 / 50.1 58.0 / 48.3 49.7 / 39.7 59.3 / 50.0 55.8 / 36.4 64.5 / 49.4 59.8 / 44.3 76.6 / 60.8 64.4 / 46.2 Translate-train Translate-train (multi-task) Translate-test 83.5 / 72.2 86.0 / 74.5 87.9 / 77.1 68.0 / 51.1 71.0 / 54.1 73.7 / 58.8 75.6 / 60.7 78.8 / 63.9 79.8 / 66.7 70.0 / 53.0 74.2 / 56.1 79.4 / 65.5 80.2 / 63.1 82.4 / 66.2 82.0 / 68.4 69.6 / 55.4 71.3 / 56.2 74.9 / 60.1 75.0 / 59.7 78.1 / 63.0 79.9 / 66.7 36.9 / 33.5 38.1 / 34.5 64.6 / 50.0 68.9 / 54.8 70.6 / 55.7 67.4 / 49.6 75.6 / 56.2 78.5 / 58.8 76.3 / 61.5 66.2 / 56.6 67.7 / 58.7 73.7 / 59.1 70.0 / 56.0 72.4 / 58.3 76.3 / 62.1 Table 18. TyDiQA-GoldP results (F1 / EM) for each language. Model en ar bn fi id ko ru sw te avg mBERT XLM XLM-R MMTE 75.3 / 63.6 66.9 / 53.9 71.5 / 56.8 62.9 / 49.8 62.2 / 42.8 59.4 / 41.2 67.6 / 40.4 63.1 / 39.2 49.3 / 32.7 27.2 / 15.0 64.0 / 47.8 55.8 / 41.9 59.7 / 45.3 58.2 / 41.4 70.5 / 53.2 53.9 / 42.1 64.8 / 45.8 62.5 / 45.8 77.4 / 61.9 60.9 / 47.6 58.8 / 50.0 14.2 / 5.1 31.9 / 10.9 49.9 / 42.6 60.0 / 38.8 49.2 / 30.7 67.0 / 42.1 58.9 / 37.9 57.5 / 37.9 39.4 / 21.6 66.1 / 48.1 63.1 / 47.2 49.6 / 38.4 15.5 / 6.9 70.1 / 43.6 54.2 / 45.8 59.7 / 43.9 43.6 / 29.1 65.1 / 45.0 58.1 / 43.8 Translate-train Translate-train (multi-task) Translate-test 75.3 / 63.6 73.2 / 62.5 75.9 / 65.9 61.5 / 44.1 71.8 / 54.2 68.8 / 49.6 31.9 / 31.9 49.7 / 36.3 66.7 / 48.1 62.6 / 49.0 68.1 / 53.6 72.0 / 56.6 68.6 / 52.0 72.3 / 55.2 76.8 / 60.9 53.2 / 41.3 58.6 / 47.8 69.2 / 55.7 53.1 / 33.9 64.3 / 45.3 71.4 / 54.3 61.9 / 45.5 66.8 / 48.9 73.3 / 53.8 27.4 / 17.5 53.3 / 40.2 75.1 / 59.2 55.1 / 42.1 64.2 / 49.3 72.1 / 56.0 Monolingual Monolingual few-shot Joint monolingual 75.3 / 63.6 63.1 / 50.9 77.6 / 69.3 80.5 / 67.0 61.3 / 44.8 82.7 / 69.4 71.1 / 60.2 58.7 / 49.6 79.6 / 69.9 75.6 / 64.1 51.4 / 38.1 79.2 / 67.8 81.3 / 70.4 70.4 / 58.1 68.9 / 72.7 59.0 / 49.6 45.4 / 38.4 68.9 / 59.4 72.1 / 56.2 56.9 / 42.6 75.8 / 59.2 75.0 / 66.7 55.4 / 46.3 81.9 / 74.3 80.2 / 66.4 65.2 / 49.6 83.4 / 70.3 74.5 / 62.7 58.7 / 46.5 77.6 / 68.0 # Table 19. MLQA results (F1 / EM) for each language. vi Model en ar de es hi zh avg mBERT XLM XLM-R MMTE 80.2 / 67.0 68.6 / 55.2 83.5 / 70.6 78.5 / – 52.3 / 34.6 42.5 / 25.2 66.6 / 47.1 56.1 / – 59.0 / 43.8 50.8 / 37.2 70.1 / 54.9 58.4 / – 67.4 / 49.2 54.7 / 37.9 74.1 / 56.6 64.9 / – 50.2 / 35.3 34.4 / 21.1 70.6 / 53.1 46.2 / – 61.2 / 40.7 48.3 / 30.2 74 / 52.9 59.4 / – 59.6 / 38.6 40.5 / 21.9 62.1 / 37.0 58.3 / – 61.4 / 44.2 48.5 / 32.6 71.6 / 53.2 60.3 / 41.4 Translate-train Translate-train (multi-task) Translate-test 80.2 / 67.0 80.7 / 67.7 83.8 / 71.0 55.0 / 35.6 58.9 / 39.0 65.3 / 46.4 64.4 / 49.4 66.0 / 51.6 71.2 / 54.0 70.0 / 52.0 71.3 / 53.7 73.9 / 55.9 60.1 / 43.4 62.4 / 45.0 71.0 / 55.1 65.7 / 45.5 67.9 / 47.6 70.6 / 54.0 63.9 / 42.7 66.0 / 43.9 67.2 / 50.6 65.6 / 47.9 67.6 / 49.8 71.9 / 55.3 Table 20. 
POS results (Accuracy) for each language af ar bg de el en es et eu fa fi fr he hi hu id 56.2 63.1 67.5 65.9 85.0 85.0 88.1 87.2 85.2 85.8 88.5 85.8 81.1 84.3 86.3 77.7 95.5 95.4 96.1 96.6 86.9 85.8 88.3 85.8 79.1 78.3 86.5 81.6 60.7 62.8 72.5 61.9 66.7 64.7 70.6 67.3 78.9 78.4 85.8 81.1 84.2 82.8 87.2 84.3 56.2 65.9 68.3 57.3 67.2 66.2 76.4 76.4 78.3 77.3 82.6 78.1 71.0 70.2 72.4 73.5 ja kk ko mr nl pt ru ta te th tl tr ur vi yo zh 70.5 70.2 78.1 70.5 49.6 50.1 53.9 59.3 69.4 68.7 80.8 74.4 88.6 88.1 89.5 83.2 86.2 84.9 87.6 86.1 85.5 86.5 89.5 88.1 59.0 59.8 65.2 63.7 75.9 76.8 86.6 81.9 41.7 55.2 47.2 43.1 81.4 76.3 92.2 80.3 68.5 66.4 76.3 71.8 57.0 61.2 70.3 61.1 53.2 52.4 56.8 56.2 55.7 20.5 24.6 51.9 61.6 65.4 25.7 68.1 # Lang. # mBERT 86.6 88.5 XLM 89.8 XLMR 86.2 MMTE # mBERT 49.2 49.0 XLM 15.9 XLMR 48.6 MMTE 88.4 87.4 89.4 89.2 # avg 71.5 71.3 73.8 73.5 XTREME: A Benchmark for Evaluating Cross-lingual generalization Table 21. NER results (F1 Score) for each language Lang. en mBERT 85.2 82.6 XLM 84.7 XLMR 77.9 MMTE ka mBERT 64.6 XLM 67.7 71.6 XLMR MMTE 60.9 af 77.4 74.9 78.9 74.9 kk 45.8 57.2 56.2 43.9 ar 41.1 44.8 53.0 41.8 ko 59.6 26.3 60.0 58.2 bg bn de el es et eu fa fi fr he 77.0 76.7 81.4 75.1 70.0 70.0 78.8 64.9 78.0 78.1 78.8 71.9 72.5 73.5 79.5 68.3 77.4 74.8 79.6 71.8 75.4 74.8 79.1 74.9 66.3 62.3 60.9 62.6 46.2 49.2 61.9 45.6 77.2 79.6 79.2 75.2 79.6 78.5 80.5 73.9 56.6 57.7 56.8 54.2 ml mr ms my nl pt ru sw ta te th 52.3 59.4 67.8 44.8 58.2 62.4 68.1 58.5 72.7 69.6 57.1 68.3 45.2 47.6 54.3 42.9 81.8 81.2 84.0 74.8 80.8 77.9 81.9 72.9 64.0 63.5 69.1 58.2 67.5 68.4 70.5 66.3 50.7 53.6 59.5 48.1 48.5 49.6 55.8 46.9 3.6 0.3 1.3 3.9 hi 65.0 66.1 73.0 66.2 tl 71.7 78.6 73.2 64.1 hu 76.4 76.5 79.8 73.8 tr 71.8 71.0 76.1 61.9 id 53.5 53.1 53.0 47.9 ur 36.9 43.0 56.4 37.2 it 81.5 80.7 81.3 74.1 vi 71.8 70.1 79.4 68.1 ja 29.0 23.6 23.2 31.2 yo 44.9 26.5 33.6 32.1 jv 66.4 63.0 62.5 63.9 zh 42.7 32.4 33.1 28.9
{ "id": "1809.04686" }
2003.10555
ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators
Masked language modeling (MLM) pre-training methods such as BERT corrupt the input by replacing some tokens with [MASK] and then train a model to reconstruct the original tokens. While they produce good results when transferred to downstream NLP tasks, they generally require large amounts of compute to be effective. As an alternative, we propose a more sample-efficient pre-training task called replaced token detection. Instead of masking the input, our approach corrupts it by replacing some tokens with plausible alternatives sampled from a small generator network. Then, instead of training a model that predicts the original identities of the corrupted tokens, we train a discriminative model that predicts whether each token in the corrupted input was replaced by a generator sample or not. Thorough experiments demonstrate this new pre-training task is more efficient than MLM because the task is defined over all input tokens rather than just the small subset that was masked out. As a result, the contextual representations learned by our approach substantially outperform the ones learned by BERT given the same model size, data, and compute. The gains are particularly strong for small models; for example, we train a model on one GPU for 4 days that outperforms GPT (trained using 30x more compute) on the GLUE natural language understanding benchmark. Our approach also works well at scale, where it performs comparably to RoBERTa and XLNet while using less than 1/4 of their compute and outperforms them when using the same amount of compute.
http://arxiv.org/pdf/2003.10555
Kevin Clark, Minh-Thang Luong, Quoc V. Le, Christopher D. Manning
cs.CL
ICLR 2020
null
cs.CL
20200323
20200323
0 2 0 2 r a M 3 2 ] L C . s c [ 1 v 5 5 5 0 1 . 3 0 0 2 : v i X r a Published as a conference paper at ICLR 2020 ELECTRA: PRE-TRAINING TEXT ENCODERS AS DISCRIMINATORS RATHER THAN GENERATORS Kevin Clark Stanford University [email protected] Minh-Thang Luong Google Brain [email protected] Quoc V. Le Google Brain [email protected] # Christopher D. Manning Stanford University & CIFAR Fellow [email protected] # ABSTRACT Masked language modeling (MLM) pre-training methods such as BERT corrupt the input by replacing some tokens with [MASK] and then train a model to re- construct the original tokens. While they produce good results when transferred to downstream NLP tasks, they generally require large amounts of compute to be effective. As an alternative, we propose a more sample-efficient pre-training task called replaced token detection. Instead of masking the input, our approach cor- rupts it by replacing some tokens with plausible alternatives sampled from a small generator network. Then, instead of training a model that predicts the original identities of the corrupted tokens, we train a discriminative model that predicts whether each token in the corrupted input was replaced by a generator sample or not. Thorough experiments demonstrate this new pre-training task is more ef- ficient than MLM because the task is defined over all input tokens rather than just the small subset that was masked out. As a result, the contextual representa- tions learned by our approach substantially outperform the ones learned by BERT given the same model size, data, and compute. The gains are particularly strong for small models; for example, we train a model on one GPU for 4 days that outperforms GPT (trained using 30x more compute) on the GLUE natural lan- guage understanding benchmark. Our approach also works well at scale, where it performs comparably to RoBERTa and XLNet while using less than 1/4 of their compute and outperforms them when using the same amount of compute. # INTRODUCTION Current state-of-the-art representation learning methods for language can be viewed as learning denoising autoencoders (Vincent et al., 2008). They select a small subset of the unlabeled input sequence (typically 15%), mask the identities of those tokens (e.g., BERT; Devlin et al. (2019)) or attention to those tokens (e.g., XLNet; Yang et al. (2019)), and then train the network to recover the original input. While more effective than conventional language-model pre-training due to learning bidirectional representations, these masked language modeling (MLM) approaches incur a substan- tial compute cost because the network only learns from 15% of the tokens per example. As an alternative, we propose replaced token detection, a pre-training task in which the model learns to distinguish real input tokens from plausible but synthetically generated replacements. Instead of masking, our method corrupts the input by replacing some tokens with samples from a proposal distribution, which is typically the output of a small masked language model. This corruption proce- dure solves a mismatch in BERT (although not in XLNet) where the network sees artificial [MASK] tokens during pre-training but not when being fine-tuned on downstream tasks. We then pre-train the network as a discriminator that predicts for every token whether it is an original or a replacement. In contrast, MLM trains the network as a generator that predicts the original identities of the corrupted tokens. 
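To make the contrast concrete, the short sketch below constructs both training signals for a single toy sentence in plain Python. It is purely illustrative: the sentence, the masked positions, and the sampled replacements are invented for this example and are not taken from the paper or its released code.

```python
# Toy illustration of the two pre-training signals (plain Python, no ML libraries).
# The sentence, masked positions, and generator proposals are invented placeholders.

original = ["the", "chef", "cooked", "the", "meal"]
masked_positions = [1, 2]                      # positions selected for masking
generator_proposals = {1: "chef", 2: "ate"}    # plausible fills from a small MLM

# Input seen by a masked language model: [MASK] at the selected positions.
masked_input = ["[MASK]" if i in masked_positions else tok
                for i, tok in enumerate(original)]

# Corrupted input seen by the discriminator: masked positions replaced by samples.
corrupted = list(original)
for pos, token in generator_proposals.items():
    corrupted[pos] = token

# MLM target: predict the original identity, but only at the masked positions.
mlm_targets = {pos: original[pos] for pos in masked_positions}

# Replaced token detection target: one binary label per position of the corrupted
# input. A sampled token that happens to equal the original counts as "original".
rtd_targets = ["replaced" if corrupted[i] != original[i] else "original"
               for i in range(len(original))]

print(corrupted)    # ['the', 'chef', 'ate', 'the', 'meal']
print(mlm_targets)  # {1: 'chef', 2: 'cooked'}
print(rtd_targets)  # ['original', 'original', 'replaced', 'original', 'original']
```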
A key advantage of our discriminative task is that the model learns from all input tokens instead of just the small masked-out subset, making it more computationally efficient. Although our 1 Published as a conference paper at ICLR 2020 90 7 XtNet 200ksteps _ _300ksteps _ 400ksteps_ | qi eee a SOT) TT Fe Eats XLNet A-Large tm) RoBERTa ROBERTA 100k steps 100k sees ! | 300k steps 500K steps 85 4 85 1m ' le ' ' i ee ' ' ' 80-4 80 in ' i ! 1 1 i 1 75 + @BERT-Small 75 40 1 1 i ' f @ELMo ° ' 704 704! 1 eGlove m= Replaced Token Detection Pre-training iP 1 @—® Masked Language Model Pre-training 1 ' T T T T T T T T T TT T T T 0 1 2 3 4 5 6 7 8 0 1 2 3 4 Pre-train FLOPs 1e20 Pre-train FLOPs 1e21 # ov 5 a w 2 3 [c) Figure 1: Replaced token detection pre-training consistently outperforms masked language model pre-training given the same compute budget. The left figure is a zoomed-in view of the dashed box. approach is reminiscent of training the discriminator of a GAN, our method is not adversarial in that the generator producing corrupted tokens is trained with maximum likelihood due to the difficulty of applying GANs to text (Caccia et al., 2018). We call our approach ELECTRA1 for “Efficiently Learning an Encoder that Classifies Token Re- placements Accurately.” As in prior work, we apply it to pre-train Transformer text encoders (Vaswani et al., 2017) that can be fine-tuned on downstream tasks. Through a series of ablations, we show that learning from all input positions causes ELECTRA to train much faster than BERT. We also show ELECTRA achieves higher accuracy on downstream tasks when fully trained. Most current pre-training methods require large amounts of compute to be effective, raising con- cerns about their cost and accessibility. Since pre-training with more compute almost always re- sults in better downstream accuracies, we argue an important consideration for pre-training methods should be compute efficiency as well as absolute downstream performance. From this viewpoint, we train ELECTRA models of various sizes and evaluate their downstream performance vs. their compute requirement. In particular, we run experiments on the GLUE natural language understand- ing benchmark (Wang et al., 2019) and SQuAD question answering benchmark (Rajpurkar et al., 2016). ELECTRA substantially outperforms MLM-based methods such as BERT and XLNet given the same model size, data, and compute (see Figure 1). For example, we build an ELECTRA-Small model that can be trained on 1 GPU in 4 days.2 ELECTRA-Small outperforms a comparably small BERT model by 5 points on GLUE, and even outperforms the much larger GPT model (Radford et al., 2018). Our approach also works well at large scale, where we train an ELECTRA-Large model that performs comparably to RoBERTa (Liu et al., 2019) and XLNet (Yang et al., 2019), de- spite having fewer parameters and using 1/4 of the compute for training. Training ELECTRA-Large further results in an even stronger model that outperforms ALBERT (Lan et al., 2019) on GLUE and sets a new state-of-the-art for SQuAD 2.0. Taken together, our results indicate that the discrim- inative task of distinguishing real data from challenging negative samples is more compute-efficient and parameter-efficient than existing generative approaches for language representation learning. # 2 METHOD We first describe the replaced token detection pre-training task; see Figure 2 for an overview. We suggest and evaluate several modeling improvements for this method in Section 3.2. 
1Code and pre-trained weights will be released at https://github.com/google-research/ electra 2It has 1/20th the parameters and requires 1/135th the pre-training compute of BERT-Large. 2 Published as a conference paper at ICLR 2020 the —»[MASK] original chef —» chef original Generator Discriminator cooked —> [MASK] (typically a (ELECTRA) replaced the —» the small MLM) original meal —> meal original Published as a conference paper at ICLR 2020 the —»[MASK] original chef —» chef original Generator Discriminator cooked —> [MASK] (typically a (ELECTRA) replaced the —» the small MLM) original meal —> meal original Figure 2: An overview of replaced token detection. The generator can be any model that produces an output distribution over tokens, but we usually use a small masked language model that is trained jointly with the discriminator. Although the models are structured like in a GAN, we train the generator with maximum likelihood rather than adversarially due to the difficulty of applying GANs to text. After pre-training, we throw out the generator and only fine-tune the discriminator (the ELECTRA model) on downstream tasks. Our approach trains two neural networks, a generator G and a discriminator D. Each one primarily consists of an encoder (e.g., a Transformer network) that maps a sequence on input tokens 2 = [1,..., Un] into a sequence of contextualized vector representations h(a) = [h1,..., hn]. For a given position t, (in our case only positions where x; = [MASK] ), the generator outputs a probability for generating a particular token x, with a softmax layer: pa(a|a) = exp (e(a)"ha(a)s) />oexp (e(a")"he(a)s) pa(a|a) = exp (e(a)"ha(a)s) />oexp (e(a")"he(a)s) where e denotes token embeddings. For a given position t, the discriminator predicts whether the token xt is “real,” i.e., that it comes from the data rather than the generator distribution, with a sigmoid output layer: D(x, t) = sigmoid(wT hD(x)t) The generator is trained to perform masked language modeling (MLM). Given an input x = [x1, x2, ..., xn], MLM first select a random set of positions (integers between 1 and n) to mask out m = [m1, ..., mk].3 The tokens in the selected positions are replaced with a [MASK] token: we denote this as xmasked = REPLACE(x, m, [MASK]). The generator then learns to predict the original identities of the masked-out tokens. The discriminator is trained to distinguish tokens in the data from tokens that have been replaced by generator samples. More specifically, we create a corrupted example xcorrupt by replacing the masked-out tokens with generator samples and train the discriminator to predict which tokens in xcorrupt match the original input x. Formally, model inputs are constructed according to xmasked = REPLACE(x, m, [MASK]) xcorrupt = REPLACE(x, m, ˆx) mi ∼ unif{1, n} for i = 1 to k ˆxi ∼ pG(xi|xmasked) for i ∈ m and the loss functions are Lim («, 0) = E (x —benc(oie™) iem Loisc(@, 9p) = E (> =1(a5™" = x) log D(a™", t) — 1(xS™" £ 2) log(1 — pies s)) t=1 Although similar to the training objective of a GAN, there are several key differences. First, if the generator happens to generate the correct token, that token is considered “real” instead of “fake”; we found this formulation to moderately improve results on downstream tasks. More importantly, the generator is trained with maximum likelihood rather than being trained adversarially to fool the discriminator. Adversarially training the generator is challenging because it is impossible to back- propagate through sampling from the generator. 
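In code, the two losses above take only a few lines. The sketch below is a self-contained PyTorch rendering of the objective as just described; the tiny embedding-based "encoders", the sizes, the masking draw, and the weight on the discriminator term are illustrative stand-ins rather than the authors' released implementation (Appendix A reports a discriminator weight of 50). Only the loss construction itself follows the text.

```python
# Minimal PyTorch sketch of the ELECTRA pre-training losses described above.
# TinyEncoder is a stand-in for the Transformer encoders used in the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

vocab_size, hidden, seq_len, batch = 100, 32, 8, 4
MASK_ID = 0  # reserved id for [MASK]; real token ids below are drawn from 1..vocab_size-1

class TinyEncoder(nn.Module):
    """Stand-in for a Transformer encoder: token embeddings + one linear layer."""
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, hidden)
        self.proj = nn.Linear(hidden, hidden)
    def forward(self, ids):                        # (B, T) -> (B, T, hidden)
        return torch.tanh(self.proj(self.emb(ids)))

generator_body = TinyEncoder()
generator_head = nn.Linear(hidden, vocab_size)     # softmax over the vocabulary
discriminator_body = TinyEncoder()
discriminator_head = nn.Linear(hidden, 1)          # sigmoid "original vs. replaced" score

x = torch.randint(1, vocab_size, (batch, seq_len))          # original token ids
mask = torch.rand(batch, seq_len) < 0.15                    # ~15% of positions masked out
mask[:, 0] = True                                           # keep the toy mask non-empty

# x_masked = REPLACE(x, m, [MASK])
x_masked = x.masked_fill(mask, MASK_ID)

# Generator: masked language modeling loss over the masked positions only.
gen_logits = generator_head(generator_body(x_masked))       # (B, T, vocab)
mlm_loss = F.cross_entropy(gen_logits[mask], x[mask])

# Sample replacement tokens from the generator. Sampling is a discrete step,
# so the discriminator loss cannot send gradients back into the generator.
with torch.no_grad():
    sampled = torch.distributions.Categorical(logits=gen_logits[mask]).sample()
x_corrupt = x.clone()
x_corrupt[mask] = sampled

# Discriminator: binary "was this token replaced?" loss over *all* positions.
# A sampled token that happens to equal the original is labelled "original".
is_replaced = (x_corrupt != x).float()
disc_logits = discriminator_head(discriminator_body(x_corrupt)).squeeze(-1)
disc_loss = F.binary_cross_entropy_with_logits(disc_logits, is_replaced)

# The two terms are combined with a weight on the discriminator loss
# (placeholder value; see the combined objective below and Appendix A).
total_loss = mlm_loss + 50.0 * disc_loss
```

The `no_grad` sampling step makes explicit that the generator is updated only through its maximum-likelihood MLM loss, which is exactly the combined objective given next.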
Although we experimented circumventing this issue Typically k = [0.15], i.e., 15% of the tokens are masked out. 3 Published as a conference paper at ICLR 2020 by using reinforcement learning to train the generator (see Appendix F), this performed worse than maximum-likelihood training. Lastly, we do not supply the generator with a noise vector as input, as is typical with a GAN. We minimize the combined loss min θG,θD x∈X LMLM(x, θG) + λLDisc(x, θD) over a large corpus X of raw text. We approximate the expectations in the losses with a single sample. We don’t back-propagate the discriminator loss through the generator (indeed, we can’t because of the sampling step). After pre-training, we throw out the generator and fine-tune the discriminator on downstream tasks. 3 EXPERIMENTS 3.1 EXPERIMENTAL SETUP We evaluate on the General Language Understanding Evaluation (GLUE) benchmark (Wang et al., 2019) and Stanford Question Answering (SQuAD) dataset (Rajpurkar et al., 2016). GLUE contains a variety of tasks covering textual entailment (RTE and MNLI) question-answer entailment (QNLI), paraphrase (MRPC), question paraphrase (QQP), textual similarity (STS), sentiment (SST), and lin- guistic acceptability (CoLA). See Appendix C for more details on the GLUE tasks. Our evaluation metrics are Spearman correlation for STS, Matthews correlation for CoLA, and accuracy for the other GLUE tasks; we generally report the average score over all tasks. For SQuAD, we evaluate on versions 1.1, in which models select the span of text answering a question, and 2.0, in which some questions are unanswerable by the passage. We use the standard evaluation metrics of Exact-Match (EM) and F1 scores. For most experiments we pre-train on the same data as BERT, which consists of 3.3 Billion tokens from Wikipedia and BooksCorpus (Zhu et al., 2015). However, for our Large model we pre-trained on the data used for XLNet (Yang et al., 2019), which extends the BERT dataset to 33B tokens by including data from ClueWeb (Callan et al., 2009), CommonCrawl, and Gigaword (Parker et al., 2011). All of the pre-training and evaluation is on English data, although we think it would be interesting to apply our methods to multilingual data in the future. Our model architecture and most hyperparameters are the same as BERT’s. For fine-tuning on GLUE, we add simple linear classifiers on top of ELECTRA. For SQuAD, we add the question- answering module from XLNet on top of ELECTRA, which is slightly more sophisticated than BERT’s in that it jointly rather than independently predicts the start and end positions and has a “answerability” classifier added for SQuAD 2.0. Some of our evaluation datasets are small, which means accuracies of fine-tuned models can vary substantially depending on the random seed. We therefore report the median of 10 fine-tuning runs from the same pre-trained checkpoint for each result. Unless stated otherwise, results are on the dev set. See the appendix for further training details and hyperparameter values. 3.2 MODEL EXTENSIONS We improve our method by proposing and evaluating several extensions to the model. Unless stated otherwise, these experiments use the same model size and training data as BERT-Base. Weight Sharing We propose improving the efficiency of the pre-training by sharing weights be- tween the generator and discriminator. If the generator and discriminator are the same size, all of the transformer weights can be tied. 
However, we found it to be more efficient to have a small genera- tor, in which case we only share the embeddings (both the token and positional embeddings) of the generator and discriminator. In this case we use embeddings the size of the discriminator’s hidden states.4 The “input” and “output” token embeddings of the generator are always tied as in BERT. We compare the weight tying strategies when the generator is the same size as the discriminator. We train these models for 500k steps. GLUE scores are 83.6 for no weight tying, 84.3 for tying token embeddings, and 84.4 for tying all weights. We hypothesize that ELECTRA benefits from 4We add linear layers to the generator to project the embeddings into generator-hidden-sized representations. 4 Published as a conference paper at ICLR 2020 Which generator size works best? Comparison of Training Algorithms 85 84-4 84- 83-4 824 g 824 $ 804 e ra) w Discriminator 3e17y " 3 784 Loss oO 804 Discriminator Size — ELECTRA oo eo 768 76+ — Adversarial ELECTRA 79 4 ma 512 — Two-Stage ELECTRA dA 256 745 == BERT 7a + r r r i << a a ns a unigram 32 64 128 256 512 7681024 0 1 2 3 4 5 6 Generator Size Pre-Train FLOPs le19 # g S nw w # oO Figure 3: Left: GLUE scores for different generator/discriminator sizes (number of hidden units). Interestingly, having a generator smaller than the discriminator improves results. Right: Comparison of different training algorithms. As our focus is on efficiency, the x-axis shows FLOPs rather than train steps (e.g., ELECTRA is trained for fewer steps than BERT because it includes the generator). tied token embeddings because masked language modeling is particularly effective at learning these representations: while the discriminator only updates tokens that are present in the input or are sampled by the generator, the generator’s softmax over the vocabulary densely updates all token embeddings. On the other hand, tying all encoder weights caused little improvement while incurring the significant disadvantage of requiring the generator and discriminator to be the same size. Based on these findings, we use tied embeddings for further experiments in this paper. Smaller Generators If the generator and discriminator are the same size, training ELECTRA would take around twice as much compute per step as training only with masked language mod- eling. We suggest using a smaller generator to reduce this factor. Specifically, we make models smaller by decreasing the layer sizes while keeping the other hyperparameters constant. We also explore using an extremely simple “unigram” generator that samples fake tokens according their frequency in the train corpus. GLUE scores for differently-sized generators and discriminators are shown in the left of Figure 3. All models are trained for 500k steps, which puts the smaller gen- erators at a disadvantage in terms of compute because they require less compute per training step. Nevertheless, we find that models work best with generators 1/4-1/2 the size of the discriminator. We speculate that having too strong of a generator may pose a too-challenging task for the discriminator, preventing it from learning as effectively. In particular, the discriminator may have to use many of its parameters modeling the generator rather than the actual data distribution. Further experiments in this paper use the best generator size found for the given discriminator size. Training Algorithms Lastly, we explore other training algorithms for ELECTRA, although these did not end up improving results. 
The proposed training objective jointly trains the generator and discriminator. We experiment with instead using the following two-stage training procedure: 1. Train only the generator with LMLM for n steps. 2. Initialize the weights of the discriminator with the weights of the generator. Then train the discriminator with LDisc for n steps, keeping the generator’s weights frozen. Note that the weight initialization in this procedure requires having the same size for the generator and discriminator. We found that without the weight initialization the discriminator would some- times fail to learn at all beyond the majority class, perhaps because the generator started so far ahead of the discriminator. Joint training on the other hand naturally provides a curriculum for the dis- criminator where the generator starts off weak but gets better throughout training. We also explored training the generator adversarially as in a GAN, using reinforcement learning to accommodate the discrete operations of sampling from the generator. See Appendix F for details. Results are shown in the right of Figure 3. During two-stage training, downstream task performance notably improves after the switch from the generative to the discriminative objective, but does not end up outscoring joint training. Although still outperforming BERT, we found adversarial training to underperform maximum-likelihood training. Further analysis suggests the gap is caused by two 5 Published as a conference paper at ICLR 2020 Model Train / Infer FLOPs Speedup Params Train Time + Hardware GLUE ELMo GPT BERT-Small BERT-Base 3.3e18 / 2.6e10 4.0e19 / 3.0e10 1.4e18 / 3.7e9 6.4e19 / 2.9e10 19x / 1.2x 1.6x / 0.97x 45x / 8x 1x / 1x 96M 117M 14M 110M 14d on 3 GTX 1080 GPUs 25d on 8 P6000 GPUs 4d on 1 V100 GPU 4d on 16 TPUv3s 71.2 78.8 75.1 82.2 ELECTRA-Small 50% trained 25% trained 12.5% trained 6.25% trained ELECTRA-Base 1.4e18 / 3.7e9 7.1e17 / 3.7e9 3.6e17 / 3.7e9 1.8e17 / 3.7e9 8.9e16 / 3.7e9 6.4e19 / 2.9e10 45x / 8x 90x / 8x 181x / 8x 361x / 8x 722x / 8x 1x / 1x 14M 14M 14M 14M 14M 110M 4d on 1 V100 GPU 2d on 1 V100 GPU 1d on 1 V100 GPU 12h on 1 V100 GPU 6h on 1 V100 GPU 4d on 16 TPUv3s 79.9 79.0 77.7 76.0 74.1 85.1 Table 1: Comparison of small models on the GLUE dev set. BERT-Small/Base are our implemen- tation and use the same hyperparameters as ELECTRA-Small/Base. Infer FLOPs assumes single length-128 input. Training times should be taken with a grain of salt as they are for different hard- ware and with sometimes un-optimized code. ELECTRA performs well even when trained on a single GPU, scoring 5 GLUE points higher than a comparable BERT model and even outscoring the much larger GPT model. problems with adversarial training. First, the adversarial generator is simply worse at masked lan- guage modeling; it achieves 58% accuracy at masked language modeling compared to 65% accuracy for an MLE-trained one. We believe the worse accuracy is mainly due to the poor sample efficiency of reinforcement learning when working in the large action space of generating text. Secondly, the adversarially trained generator produces a low-entropy output distribution where most of the proba- bility mass is on a single token, which means there is not much diversity in the generator samples. Both of these problems have been observed in GANs for text in prior work (Caccia et al., 2018). # 3.3 SMALL MODELS As a goal of this work is to improve the efficiency of pre-training, we develop a small model that can be quickly trained on a single GPU. 
Starting with the BERT-Base hyperparameters, we shortened the sequence length (from 512 to 128), reduced the batch size (from 256 to 128), reduced the model’s hidden dimension size (from 768 to 256), and used smaller token embeddings (from 768 to 128). To provide a fair comparison, we also train a BERT-Small model using the same hyperparameters. We train BERT-Small for 1.5M steps, so it uses the same training FLOPs as ELECTRA-Small, which was trained for 1M steps.5 In addition to BERT, we compare against two less resource-intensive pre-training methods based on language modeling: ELMo (Peters et al., 2018) and GPT (Radford et al., 2018).6 We also show results for a base-sized ELECTRA model comparable to BERT-Base. Results are shown in Table 1. See Appendix D for additional results, including stronger small-sized and base-sized models trained with more compute. ELECTRA-Small performs remarkably well given its size, achieving a higher GLUE score than other methods using substantially more compute and parameters. For example, it scores 5 points higher than a comparable BERT-Small model and even outperforms the much larger GPT model. ELECTRA-Small is trained mostly to convergence, with models trained for even less time (as little as 6 hours) still achieving reasonable performance. While small models distilled from larger pre-trained transformers can also achieve good GLUE scores (Sun et al., 2019b; Jiao et al., 2019), these models require first expending substantial compute to pre-train the larger teacher model. The results also demonstrate the strength of ELECTRA at a moderate size; our base-sized ELECTRA model substantially outperforms BERT-Base and even outperforms BERT-Large (which gets 84.0 GLUE score). We hope ELECTRA’s ability to achieve strong results with relatively little compute will broaden the accessibility of developing and applying pre-trained models in NLP. 5ELECTRA requires more FLOPs per step because it consists of the generator as well as the discriminator. 6GPT is similar in size to BERT-Base, but is trained for fewer steps. 6 Published as a conference paper at ICLR 2020 Model Train FLOPs Params CoLA SST MRPC STS QQP MNLI QNLI RTE Avg. BERT RoBERTa-100K RoBERTa-500K XLNet 1.9e20 (0.27x) 335M 60.6 6.4e20 (0.90x) 356M 66.1 356M 68.0 3.2e21 (4.5x) 360M 69.0 3.9e21 (5.4x) 93.2 88.0 95.6 91.4 96.4 90.9 97.0 90.8 90.0 91.3 86.6 92.2 92.0 89.3 92.1 92.2 90.2 92.2 92.3 90.8 92.3 94.0 94.7 94.9 70.4 84.0 82.7 87.9 86.6 88.9 85.9 89.1 BERT (ours) 7.1e20 (1x) ELECTRA-400K 7.1e20 (1x) ELECTRA-1.75M 3.1e21 (4.4x) 335M 67.0 335M 69.3 335M 69.1 95.9 89.1 96.0 90.6 96.9 90.8 91.2 91.5 89.6 92.1 92.4 90.5 92.6 92.4 90.9 93.5 94.5 95.0 79.5 87.2 86.8 89.0 88.0 89.5 Table 2: Comparison of large models on the GLUE dev set. ELECTRA and RoBERTa are shown for different numbers of pre-training steps, indicated by the numbers after the dashes. ELECTRA performs comparably to XLNet and RoBERTa when using less than 1/4 of their pre-training compute and outperforms them when given a similar amount of pre-training compute. BERT dev results are from Clark et al. (2019). 
Model Train FLOPs CoLA SST MRPC STS QQP MNLI QNLI RTE WNLI Avg.* Score BERT RoBERTa ALBERT XLNet 1.9e20 (0.06x) 60.5 3.2e21 (1.02x) 67.8 3.1e22 (10x) 69.1 3.9e21 (1.26x) 70.2 94.9 85.4 96.7 89.8 97.1 91.2 97.1 90.5 86.5 89.3 86.7 91.9 90.2 90.8 92.0 90.5 91.3 92.6 90.4 90.9 92.7 95.4 – – 70.1 65.1 88.2 89.0 89.2 91.8 88.5 92.5 79.8 88.1 89.0 89.1 80.5 88.1 – – ELECTRA 3.1e21 (1x) 71.7 97.1 90.7 92.5 90.8 91.3 95.8 89.8 92.5 89.5 89.4 Table 3: GLUE test-set results for large models. Models in this table incorporate additional tricks such as ensembling to improve scores (see Appendix B for details). Some models do not have QNLI scores because they treat QNLI as a ranking task, which has recently been disallowed by the GLUE benchmark. To compare against these models, we report the average score excluding QNLI (Avg.*) in addition to the GLUE leaderboard score (Score). “ELECTRA” and “RoBERTa” refer to the fully-trained ELECTRA-1.75M and RoBERTa-500K models. 3.4 LARGE MODELS We train big ELECTRA models to measure the effectiveness of the replaced token detection pre- training task at the large scale of current state-of-the-art pre-trained Transformers. Our ELECTRA- Large models are the same size as BERT-Large but are trained for much longer. In particular, we train a model for 400k steps (ELECTRA-400K; roughly 1/4 the pre-training compute of RoBERTa) and one for 1.75M steps (ELECTRA-1.75M; similar compute to RoBERTa). We use a batch size 2048 and the XLNet pre-training data. We note that although the XLNet data is similar to the data used to train RoBERTa, the comparison is not entirely direct. As a baseline, we trained our own BERT-Large model using the same hyperparameters and training time as ELECTRA-400K. Results on the GLUE dev set are shown in Table 2. ELECTRA-400K performs comparably to RoBERTa and XLNet. However, it took less than 1/4 of the compute to train ELECTRA-400K as it did to train RoBERTa and XLNet, demonstrating that ELECTRA’s sample-efficiency gains hold at large scale. Training ELECTRA for longer (ELECTRA-1.75M) results in a model that outscores them on most GLUE tasks while still requiring less pre-training compute. Surprisingly, our baseline BERT model scores notably worse than RoBERTa-100K, suggesting our models may benefit from more hyperparameter tuning or using the RoBERTa training data. ELECTRA’s gains hold on the GLUE test set (see Table 3), although these comparisons are less apples-to-apples due to the additional tricks employed by the models (see Appendix B). Results on SQuAD are shown in Table 4. Consistent, with the GLUE results, ELECTRA scores better than masked-language-modeling-based methods given the same compute resources. For ex- ample, ELECTRA-400K outperforms RoBERTa-100k and our BERT baseline, which use similar amounts of pre-training compute. ELECTRA-400K also performs comparably to RoBERTa-500K despite using less than 1/4th of the compute. 
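The compute claim can be checked directly from the pre-training FLOP figures reported in Table 2 (the two-line snippet below just restates that arithmetic):

```python
# Ratio of pre-training compute, using the Train FLOPs column of Table 2.
electra_400k_flops = 7.1e20
roberta_500k_flops = 3.2e21
print(electra_400k_flops / roberta_500k_flops)  # ~0.22, i.e. less than 1/4
```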
Unsurprisingly, training ELECTRA longer improves results further: ELECTRA-1.75M scores higher than previous models on the SQuAD 2.0 bench- 7 Published as a conference paper at ICLR 2020 Model Train FLOPs Params SQuAD 1.1 dev EM F1 SQuAD 2.0 dev EM F1 F1 BERT-Base BERT SpanBERT XLNet-Base XLNet RoBERTa-100K RoBERTa-500K ALBERT 6.4e19 (0.09x) 1.9e20 (0.27x) 7.1e20 (1x) 6.6e19 (0.09x) 3.9e21 (5.4x) 6.4e20 (0.90x) 3.2e21 (4.5x) 3.1e22 (44x) 110M 335M 335M 117M 360M 356M 356M 235M 80.8 84.1 88.8 81.3 89.7 – 88.9 89.3 88.5 90.9 94.6 – 95.1 94.0 94.6 94.8 – 79.0 85.7 78.5 87.9 – 86.5 87.4 – 81.8 88.7 – 90.6 87.7 89.4 90.2 – 80.0 85.7 – 87.9 – 86.8 88.1 – 83.0 88.7 – 90.7 – 89.8 90.9 BERT (ours) ELECTRA-Base ELECTRA-400K ELECTRA-1.75M 7.1e20 (1x) 6.4e19 (0.09x) 7.1e20 (1x) 3.1e21 (4.4x) 335M 110M 335M 335M 88.0 84.5 88.7 89.7 93.7 90.8 94.2 94.9 84.7 80.5 86.9 88.0 87.5 83.3 89.6 90.6 – – – 88.7 – – – 91.4 Table 4: Results on the SQuAD for non-ensemble models. mark. ELECTRA-Base also yields strong results, scoring substantially better than BERT-Base and XLNet-Base, and even surpassing BERT-Large according to most metrics. ELECTRA generally performs better at SQuAD 2.0 than 1.1. Perhaps replaced token detection, in which the model distinguishes real tokens from plausible fakes, is particularly transferable to the answerability clas- sification of SQuAD 2.0, in which the model must distinguish answerable questions from fake unan- swerable questions. 3.5 EFFICIENCY ANALYSIS We have suggested that posing the training objective over a small subset of tokens makes masked language modeling inefficient. However, it isn’t entirely obvious that this is the case. After all, the model still receives a large number of input tokens even though it predicts only a small number of masked tokens. To better understand where the gains from ELECTRA are coming from, we compare a series of other pre-training objectives that are designed to be a set of “stepping stones” between BERT and ELECTRA. • ELECTRA 15%: This model is identical to ELECTRA except the discriminator loss only comes from the 15% of the tokens that were masked out of the input. In other words, the sum in the discriminator loss LDisc is over i ∈ m instead of from 1 to n.7 Replace MLM: This objective is the same as masked language modeling except instead of replacing masked-out tokens with [MASK], they are replaced with tokens from a generator model. This objective tests to what extent ELECTRA’s gains come from solving the dis- crepancy of exposing the model to [MASK] tokens during pre-training but not fine-tuning. • All-Tokens MLM: Like in Replace MLM, masked tokens are replaced with generator sam- ples. Furthermore, the model predicts the identity of all tokens in the input, not just ones that were masked out. We found it improved results to train this model with an explicit copy mechanism that outputs a copy probability D for each token using a sigmoid layer. The model’s output distribution puts D weight on the input token plus 1 − D times the output of the MLM softmax. This model is essentially a combination of BERT and ELEC- TRA. Note that without generator replacements, the model would trivially learn to make predictions from the vocabulary for [MASK] tokens and copy the input for other ones. Results are shown in Table 5. First, we find that ELECTRA is greatly benefiting from having a loss defined over all input tokens rather than just a subset: ELECTRA 15% performs much worse than ELECTRA. 
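The gap between ELECTRA and the ELECTRA 15% ablation comes down to which positions the discriminator loss is summed over. A minimal sketch of that one-line difference is shown below; the tensors are random placeholders standing in for the discriminator outputs and labels, not values from the paper.

```python
# ELECTRA vs. "ELECTRA 15%": same per-position discriminator loss, summed either
# over all positions or only over the masked-out subset. Placeholder tensors only.
import torch
import torch.nn.functional as F

batch, seq_len = 4, 8
disc_logits = torch.randn(batch, seq_len)                    # scores D(x_corrupt, t)
is_replaced = torch.randint(0, 2, (batch, seq_len)).float()  # replaced-token labels
masked = torch.rand(batch, seq_len) < 0.15                   # the masked-out positions
masked[:, 0] = True                                          # keep the toy subset non-empty

per_token = F.binary_cross_entropy_with_logits(disc_logits, is_replaced, reduction="none")

full_loss = per_token.mean()            # ELECTRA: loss defined over all input tokens
subset_loss = per_token[masked].mean()  # ELECTRA 15%: loss only over the masked subset
```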
Secondly, we find that BERT performance is being slightly harmed from the pre-train fine-tune mismatch from [MASK] tokens, as Replace MLM slightly outperforms BERT. We note that BERT (including our implementation) already includes a trick to help with the pre-train/fine- tune discrepancy: masked tokens are replaced with a random token 10% of the time and are kept the 7We also trained a discriminator that learns from a random 15% of the input tokens distinct from the subset that was originally masked out; this model performed slightly worse. 8 Published as a conference paper at ICLR 2020 Model ELECTRA All-Tokens MLM Replace MLM ELECTRA 15% BERT GLUE score 85.0 84.3 82.4 82.4 82.2 Table 5: Compute-efficiency experiments (see text for details). & 86 G7 844 5 64 so 824 Fj 2% 804 € 1 $754 5 3 784 £44 g w g wy 3 76 e34 2 704 O74 £24 o 72- m= ELECTRA s — ELECTRA-256 704 © BERT 5 1 65 4 — _ BERT-256 68 Tt T wy 0-—>—7—t T rr 128 256 384 512 768 iH 128 256 384 512 768 o 123 4 5 Hidden State Size Hidden State Size Pre-Train FLOPs 1¢18 Figure 4: Left and Center: Comparison of BERT and ELECTRA for different model sizes. Right: A small ELECTRA model converges to higher downstream accuracy than BERT, showing the im- provement comes from more than just faster training. same 10% of the time. However, our results suggest these simple heuristics are insufficient to fully solve the issue. Lastly, we find that All-Tokens MLM, the generative model that makes predictions over all tokens instead of a subset, closes most of the gap between BERT and ELECTRA. In total, these results suggest a large amount of ELECTRA’s improvement can be attributed to learning from all tokens and a smaller amount can be attributed to alleviating the pre-train fine-tune mismatch. The improvement of ELECTRA over All-Tokens MLM suggests that the ELECTRA’s gains come from more than just faster training. We study this further by comparing BERT to ELECTRA for various model sizes (see Figure 4, left). We find that the gains from ELECTRA grow larger as the models get smaller. The small models are trained fully to convergence (see Figure 4, right), showing that ELECTRA achieves higher downstream accuracy than BERT when fully trained. We speculate that ELECTRA is more parameter-efficient than BERT because it does not have to model the full distribution of possible tokens at each position, but we believe more analysis is needed to completely explain ELECTRA’s parameter efficiency. 4 RELATED WORK Self-Supervised Pre-training for NLP Self-supervised learning has been used to learn word rep- resentations (Collobert et al., 2011; Pennington et al., 2014) and more recently contextual represen- tations of words though objectives such as language modeling (Dai & Le, 2015; Peters et al., 2018; Howard & Ruder, 2018). BERT (Devlin et al., 2019) pre-trains a large Transformer (Vaswani et al., 2017) at the masked-language modeling task. There have been numerous extensions to BERT. For example, MASS (Song et al., 2019) and UniLM (Dong et al., 2019) extend BERT to generation tasks by adding auto-regressive generative training objectives. ERNIE (Sun et al., 2019a) and SpanBERT (Joshi et al., 2019) mask out contiguous sequences of token for improved span representations. This idea may be complementary to ELECTRA; we think it would be interesting to make ELECTRA’s generator auto-regressive and add a “replaced span detection” task. 
Instead of masking out input tokens, XLNet (Yang et al., 2019) masks attention weights such that the input sequence is auto- regressively generated in a random order. However, this method suffers from the same inefficiencies as BERT because XLNet only generates 15% of the input tokens in this way. Like ELECTRA, XL- Net may alleviate BERT’s pretrain-finetune discrepancy by not requiring [MASK] tokens, although this isn’t entirely clear because XLNet uses two “streams” of attention during pre-training but only one for fine-tuning. Recently, models such as TinyBERT (Jiao et al., 2019) and MobileBERT (Sun et al., 2019b) show that BERT can effectively be distilled down to a smaller model. In contrast, we focus more on pre-training speed rather than inference speed, so we train ELECTRA-Small from scratch. 9 Published as a conference paper at ICLR 2020 Generative Adversarial Networks GANs (Goodfellow et al., 2014) are effective at generating high-quality synthetic data. Radford et al. (2016) propose using the discriminator of a GAN in downstream tasks, which is similar to our method. GANs have been applied to text data (Yu et al., 2017; Zhang et al., 2017), although state-of-the-art approaches still lag behind standard maximum- likelihood training (Caccia et al., 2018; Tevet et al., 2018). Although we do not use adversarial learning, our generator is particularly reminiscent of MaskGAN (Fedus et al., 2018), which trains the generator to fill in tokens deleted from the input. Contrastive Learning Broadly, contrastive learning methods distinguish observed data points from fictitious negative samples. They have been applied to many modalities including text (Smith & Eisner, 2005), images (Chopra et al., 2005), and video (Wang & Gupta, 2015; Sermanet et al., 2017) data. Common approaches learn embedding spaces where related data points are similar (Saunshi et al., 2019) or models that rank real data points over negative samples (Collobert et al., 2011; Bordes et al., 2013). ELECTRA is particularly related to Noise-Contrastive Estimation (NCE) (Gutmann & Hyv¨arinen, 2010), which also trains a binary classifier to distinguish real and fake data points. Word2Vec (Mikolov et al., 2013), one of the earliest pre-training methods for NLP, uses contrastive learning. In fact, ELECTRA can be viewed as a massively scaled-up version of Continuous Bag- of-Words (CBOW) with Negative Sampling. CBOW also predicts an input token given surrounding context and negative sampling rephrases the learning task as a binary classification task on whether the input token comes from the data or proposal distribution. However, CBOW uses a bag-of- vectors encoder rather than a transformer and a simple proposal distribution derived from unigram token frequencies instead of a learned generator. # 5 CONCLUSION We have proposed replaced token detection, a new self-supervised task for language representation learning. The key idea is training a text encoder to distinguish input tokens from high-quality nega- tive samples produced by an small generator network. Compared to masked language modeling, our pre-training objective is more compute-efficient and results in better performance on downstream tasks. It works well even when using relatively small amounts of compute, which we hope will make developing and applying pre-trained text encoders more accessible to researchers and practi- tioners with less access to computing resources. 
We also hope more future work on NLP pre-training will consider efficiency as well as absolute performance, and follow our effort in reporting compute usage and parameter counts along with evaluation metrics. # ACKNOWLEDGEMENTS We thank Allen Nie, Prajit Ramachandran, audiences at the CIFAR LMB meeting and U. de Montr´eal, and the anonymous reviewers for their thoughtful comments and suggestions. We thank Matt Peters for answering our questions about ELMo, Alec Radford for answers about GPT, Naman Goyal and Myle Ott for answers about RoBERTa, Zihang Dai for answers about XLNet, Zhenzhong Lan for answers about ALBERT, and Danqi Chen and Mandar Joshi for answers about SpanBERT. Kevin is supported by a Google PhD Fellowship. # REFERENCES Antoine Bordes, Nicolas Usunier, Alberto Garc´ıa-Dur´an, Jason Weston, and Oksana Yakhnenko. Translating embeddings for modeling multi-relational data. In NeurIPS, 2013. Avishek Joey Bose, Huan Ling, and Yanshuai Cao. Adversarial contrastive estimation. In ACL, 2018. Massimo Caccia, Lucas Caccia, William Fedus, Hugo Larochelle, Joelle Pineau, and Laurent Char- lin. Language GANs falling short. arXiv preprint arXiv:1811.02549, 2018. Jamie Callan, Mark Hoy, Changkuk Yoo, and Le Zhao. Clueweb09 data set, 2009. URL https: //lemurproject.org/clueweb09.php/. 10 Published as a conference paper at ICLR 2020 Daniel M. Cer, Mona T. Diab, Eneko Agirre, I˜nigo Lopez-Gazpio, and Lucia Specia. Semeval- 2017 task 1: Semantic textual similarity multilingual and crosslingual focused evaluation. In SemEval@ACL, 2017. Sumit Chopra, Raia Hadsell, and Yann LeCun. Learning a similarity metric discriminatively, with application to face verification. CVPR, 2005. Kevin Clark, Minh-Thang Luong, Urvashi Khandelwal, Christopher D. Manning, and Quoc V. Le. BAM! Born-again multi-task networks for natural language understanding. In ACL, 2019. Ronan Collobert, Jason Weston, L´eon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel P. Kuksa. Natural language processing (almost) from scratch. JMLR, 2011. Andrew M Dai and Quoc V Le. Semi-supervised sequence learning. In NeurIPS, 2015. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. In NAACL-HLT, 2019. William B. Dolan and Chris Brockett. Automatically constructing a corpus of sentential paraphrases. In IWP@IJCNLP, 2005. Li Dong, Nan Yang, Wenhui Wang, Furu Wei, Xiaodong Liu, Yu Wang, Jianfeng Gao, Ming Zhou, and Hsiao-Wuen Hon. Unified language model pre-training for natural language understanding and generation. In NeurIPS, 2019. William Fedus, Ian J. Goodfellow, and Andrew M. Dai. MaskGAN: Better text generation via filling in the . In ICLR, 2018. Danilo Giampiccolo, Bernardo Magnini, Ido Dagan, and William B. Dolan. The third pascal recog- nizing textual entailment challenge. In ACL-PASCAL@ACL, 2007. Ian J. Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron C. Courville, and Yoshua Bengio. Generative adversarial nets. In NeurIPS, 2014. Michael Gutmann and Aapo Hyv¨arinen. Noise-contrastive estimation: A new estimation principle for unnormalized statistical models. In AISTATS, 2010. Jeremy Howard and Sebastian Ruder. Universal language model fine-tuning for text classification. In ACL, 2018. Shankar lease: First-Quora-Dataset-Release-Question-Pairs. Iyer, Nikhil Dandekar, and Kornl Csernai. Question pairs, 2017. 
URL re- https://data.quora.com/ First Quora dataset Xiaoqi Jiao, Yichun Yin, Lifeng Shang, Xin Jiang, Xiao Chen, Linlin Li, Fang Wang, and Qun Liu. Tinybert: Distilling bert for natural language understanding. arXiv preprint arXiv:1909.10351, 2019. Mandar Joshi, Danqi Chen, Yinhan Liu, Daniel S Weld, Luke Zettlemoyer, and Omer Levy. arXiv preprint SpanBERT: Improving pre-training by representing and predicting spans. arXiv:1907.10529, 2019. Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Sori- cut. ALBERT: A lite bert for self-supervised learning of language representations. arXiv preprint arXiv:1909.11942, 2019. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. RoBERTa: A robustly optimized BERT pre- training approach. arXiv preprint arXiv:1907.11692, 2019. Tomas Mikolov, Kai Chen, Gregory S. Corrado, and Jeffrey Dean. Efficient estimation of word representations in vector space. In ICLR Workshop Papers, 2013. Robert Parker, David Graff, Junbo Kong, Ke Chen, and Kazuaki Maeda. English gigaword, fifth edition. Technical report, Linguistic Data Consortium, Philadelphia, 2011. 11 Published as a conference paper at ICLR 2020 Jeffrey Pennington, Richard Socher, and Christopher Manning. Glove: Global vectors for word representation. In EMNLP, 2014. Matthew E Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. Deep contextualized word representations. In NAACL-HLT, 2018. Jason Phang, Thibault F´evry, and Samuel R Bowman. Sentence encoders on STILTs: Supplemen- tary training on intermediate labeled-data tasks. arXiv preprint arXiv:1811.01088, 2018. Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. In ICLR, 2016. Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. Improving language under- standing by generative pre-training. https://blog.openai.com/language-unsupervised, 2018. Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy S. Liang. Squad: 100, 000+ questions for machine comprehension of text. In EMNLP, 2016. Nikunj Saunshi, Orestis Plevrakis, Sanjeev Arora, Mikhail Khodak, and Hrishikesh Khandeparkar. A theoretical analysis of contrastive unsupervised representation learning. In ICML, 2019. Pierre Sermanet, Corey Lynch, Yevgen Chebotar, Jasmine Hsu, Eric Jang, Stefan Schaal, and Sergey Levine. Time-contrastive networks: Self-supervised learning from video. ICRA, 2017. Noah A. Smith and Jason Eisner. Contrastive estimation: Training log-linear models on unlabeled data. In ACL, 2005. Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Y. Ng, and Christopher Potts. Recursive deep models for semantic compositionality over a sentiment treebank. In EMNLP, 2013. Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, and Tie-Yan Liu. MASS: Masked sequence to sequence pre-training for language generation. In ICML, 2019. Yu Sun, Shuohuan Wang, Yukun Li, Shikun Feng, Xuyi Chen, Han Zhang, Xin Tian, Danxiang Zhu, Hao Tian, and Hua Wu. Ernie: Enhanced representation through knowledge integration. arXiv preprint arXiv:1904.09223, 2019a. Zhiqing Sun, Hongkun Yu, Xiaodan Song, Renjie Liu, Yiming Yang, and Denny Zhou. Mobile- BERT: Task-agnostic compression of bert for resource limited devices, 2019b. URL https: //openreview.net/forum?id=SJxjVaNKwB. 
Guy Tevet, Gavriel Habib, Vered Shwartz, and Jonathan Berant. Evaluating text gans as language models. In NAACL-HLT, 2018. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. In NeurIPS, 2017. Pascal Vincent, Hugo Larochelle, Yoshua Bengio, and Pierre-Antoine Manzagol. Extracting and composing robust features with denoising autoencoders. In ICML, 2008. Alex Wang, Amapreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R Bowman. GLUE: A multi-task benchmark and analysis platform for natural language understanding. In ICLR, 2019. Xiaolong Wang and Abhinav Gupta. Unsupervised learning of visual representations using videos. ICCV, 2015. Alex Warstadt, Amanpreet Singh, and Samuel R. Bowman. Neural network acceptability judgments. arXiv preprint arXiv:1805.12471, 2018. Adina Williams, Nikita Nangia, and Samuel R. Bowman. A broad-coverage challenge corpus for sentence understanding through inference. In NAACL-HLT, 2018. 12 Published as a conference paper at ICLR 2020 Ronald J. Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 8(3-4):229–256, 1992. Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, and Quoc V Le. XLNet: Generalized autoregressive pretraining for language understanding. In NeurIPS, 2019. Lantao Yu, Weinan Zhang, Jun Wang, and Yingrui Yu. SeqGAN: Sequence generative adversarial nets with policy gradient. In AAAI, 2017. Yizhe Zhang, Zhe Gan, Kai Fan, Zhi Chen, Ricardo Henao, Dinghan Shen, and Lawrence Carin. Adversarial feature matching for text generation. In ICML, 2017. Yukun Zhu, Ryan Kiros, Richard S. Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Tor- ralba, and Sanja Fidler. Aligning books and movies: Towards story-like visual explanations by watching movies and reading books. ICCV, 2015. # A PRE-TRAINING DETAILS The following details apply to both our ELECTRA models and BERT baselines. We mostly use the same hyperparameters as BERT. We set λ, the weight for the discriminator objective in the loss to 50.8 We use dynamic token masking with the masked positions decided on-the-fly instead of during preprocessing. Also, we did not use the next sentence prediction objective proposed in the original BERT paper, as recent work has suggested it does not improve scores (Yang et al., 2019; Liu et al., 2019). For our ELECTRA-Large model, we used a higher mask percent (25 instead of 15) because we noticed the generator was achieving high accuracy with 15% masking, resulting in very few replaced tokens. We searched for the best learning rate for the Base and Small models out of [1e-4, 2e-4, 3e-4, 5e-4] and selected λ out of [1, 10, 20, 50, 100] in early experiments. Otherwise we did no hyperparameter tuning beyond the experiments in Section 3.2. The full set of hyperparameters are listed in Table 6. # B FINE-TUNING DETAILS For Large-sized models, we used the hyperparameters from Clark et al. (2019) for the most part. However, after noticing that RoBERTa (Liu et al., 2019) uses more training epochs (up to 10 rather than 3) we searched for the best number of train epochs out of [10, 3] for each task. For SQuAD, we decreased the number of train epochs to 2 to be consistent with BERT and RoBERTa. 
For Base- sized models we searched for a learning rate out of [3e-5, 5e-5, 1e-4, 1.5e-4] and the layer-wise learning-rate decay out of [0.9, 0.8, 0.7], but otherwise used the same hyperparameters as for Large models. We found the small models benefit from a larger learning rate and searched for the best one out of [1e-4, 2e-4, 3e-4, 5e-3]. With the exception of number of train epochs, we used the same hyperparameters for all tasks. In contrast, previous research on GLUE such as BERT, XLNet, and RoBERTa separately searched for the best hyperparameters for each task. We expect our results would improve slightly if we performed the same sort of additional hyperparameter search. The full set of hyperparameters is listed in Table 7. Following BERT, we do not show results on the WNLI GLUE task for the dev set results, as it is difficult to beat even the majority classifier using a standard fine-tuning-as-classifier approach. For the GLUE test set results, we apply the standard tricks used by many of the GLUE leaderboard submissions including RoBERTa (Liu et al., 2019), XLNet (Yang et al., 2019), and ALBERT (Lan et al., 2019). Specifically: • For RTE and STS we use intermediate task training (Phang et al., 2018), starting from an ELECTRA checkpoint that has been fine-tuned on MNLI. For RTE, we found it helpful to combine this with a lower learning rate of 2e-5. 8As a binary classification task instead of the 30,000-way classification task in MLM, the discriminator’s loss was typically much lower than the generator’s. 13 Published as a conference paper at ICLR 2020 Hyperparameter Small Base Large Number of layers 12 2 24 Hidden Size 256 768 024 FEN inner hidden size 1024 3072 4096 Attention heads 4 2 6 Attention head size 64 64 64 Embedding Size 128 768 024 Generator Size (multiplier for hidden-size, 1/4 B /4 FFN-size, and num-attention-heads) Mask percent 15 5 25 Learning Rate Decay Linear Linear Linear Warmup steps 10000 0000 0000 Learning Rate Se-4 2e-4 2e-4 Adam € le-6 e-6 e-6 Adam 6; 0.9 0.9 0.9 Adam 65 0.999 0.999 0.999 Attention Dropout 0.1 0.1 0.1 Dropout 0.1 0.1 0.1 Weight Decay 0.01 0.01 0.01 Batch Size 128 256 2048 Train Steps (BERT/ELECTRA) 1.45M/IM_ 1M/766K 464K/400K Table 6: Pre-train hyperparameters. We also train an ELECTRA-Large model for 1.75M steps (other hyperparameters are identical). Hyperparameter GLUE Value Learning Rate 3e-4 for Small, 1e-4 for Base, Se-5 for Large Adam € le-6 Adam 6, 0.9 Adam 85 0.999 Layerwise LR decay 0.8 for Base/Small, 0.9 for Large Learning rate decay _—_ Linear Warmup fraction 0.1 Attention Dropout 0.1 Dropout 0.1 Weight Decay 0 Batch Size 32 Train Epochs 10 for RTE and STS, 2 for SQuAD, 3 for other tasks Table 7: Fine-tune hyperparameters • For WNLI, we follow the trick described in Liu et al. (2019) where we extract candidate antecedents for the pronoun using rules and train a model to score the correct antecedent highly. However, different from Liu et al. (2019), the scoring function is not based on MLM probabilities. Instead, we fine-tune ELECTRA’s discriminator so it assigns high scores to the tokens of the correct antecedent when the correct antecedent replaces the pronoun. 
For example, if the Winograd schema is “the trophy could not fit in the suitcase because it was too big,” we train the discriminator so it gives a high score to “trophy” in “the trophy could not fit in the suitcase because the trophy was too big” but a low score to “suitcase” in “the trophy could not fit in the suitcase because the suitcase was too big.” • For each task we ensemble the best 10 of 30 models fine-tuned with different random seeds but initialized from the same pre-trained checkpoint. While these tricks do improve scores, they make having clear scientific comparisons more difficult because they require extra work to implement, require lots of compute, and make results less apples- 14 Published as a conference paper at ICLR 2020 to-apples because different papers implement the tricks differently. We therefore also report results for ELECTRA-1.75M with the only trick being dev-set model selection (best of 10 models), which is the setting BERT used to report results, in Table 8. For our SQuAD 2.0 test set submission, we fine-tuned 20 models from the same pre-trained check- point and submitted the one with the best dev set score. # C DETAILS ABOUT GLUE We provide further details about the GLUE benchmark tasks below • CoLA: Corpus of Linguistic Acceptability (Warstadt et al., 2018). The task is to determine whether a given sentence is grammatical or not. The dataset contains 8.5k train examples from books and journal articles on linguistic theory. • SST: Stanford Sentiment Treebank (Socher et al., 2013). The tasks is to determine if the sentence is positive or negative in sentiment. The dataset contains 67k train examples from movie reviews. • MRPC: Microsoft Research Paraphrase Corpus (Dolan & Brockett, 2005). The task is to predict whether two sentences are semantically equivalent or not. The dataset contains 3.7k train examples from online news sources. STS: Semantic Textual Similarity (Cer et al., 2017). The tasks is to predict how seman- tically similar two sentences are on a 1-5 scale. The dataset contains 5.8k train examples drawn from new headlines, video and image captions, and natural language inference data. • QQP: Quora Question Pairs (Iyer et al., 2017). The task is to determine whether a pair of questions are semantically equivalent. The dataset contains 364k train examples from the community question-answering website Quora. • MNLI: Multi-genre Natural Language Inference (Williams et al., 2018). Given a premise sentence and a hypothesis sentence, the task is to predict whether the premise entails the hypothesis, contradicts the hypothesis, or neither. The dataset contains 393k train examples drawn from ten different sources. • QNLI: Question Natural Language Inference; constructed from SQuAD (Rajpurkar et al., 2016). The task is to predict whether a context sentence contains the answer to a question sentence. The dataset contains 108k train examples from Wikipedia. • RTE: Recognizing Textual Entailment (Giampiccolo et al., 2007). Given a premise sen- tence and a hypothesis sentence, the task is to predict whether the premise entails the hy- pothesis or not. The dataset contains 2.5k train examples from a series of annual textual entailment challenges. # D FURTHER RESULTS ON GLUE We report results for ELECTRA-Base and ELECTRA-Small on the GLUE test set in Table 8. 
Furthermore, we push the limits of base-sized and small-sized models by training them on the XLNet data instead of wikibooks and for much longer (4e6 train steps); these models are called ELECTRA-Base++ and ELECTRA-Small++ in the table. For ELECTRA-Small++ we also in- creased the sequence length to 512; otherwise the hyperparameters are the same as the ones listed in Table 6. Lastly, the table contains results for ELECTRA-1.75M without the tricks described in Appendix B. Consistent with dev-set results in the paper, ELECTRA-Base outperforms BERT-Large while ELECTRA-Small outperforms GPT in terms of average score. Unsurprisingly, the ++ models perform even better. The small model scores are even close to TinyBERT (Jiao et al., 2019) and Mo- bileBERT (Sun et al., 2019b). These models learn from BERT-Base using sophisticated distillation procedures. Our ELECTRA models, on the other hand, are trained from scratch. Given the success of distilling BERT, we believe it would be possible to build even stronger small pre-trained models by distilling ELECTRA. ELECTRA appears to be particularly effective at CoLA. In CoLA the goal is to distinguish linguistically acceptable sentences from ungrammatical ones, which fairly closely matches ELECTRA’s pre-training task of identifying fake tokens, perhaps explaining ELECTRA’s strength at the task. 15 Published as a conference paper at ICLR 2020 Model Train FLOPs Params CoLA SST MRPC STS QQP MNLI QNLI RTE Avg. TinyBERT MobileBERT GPT BERT-Base BERT-Large SpanBERT 6.4e19+ (45x+) 14.5M 51.1 6.4e19+ (45x+) 25.3M 51.1 117M 45.4 4.0e19 (29x) 110M 52.1 6.4e19 (45x) 335M 60.5 1.9e20 (135x) 335M 64.3 7.1e20 (507x) 93.1 82.6 92.6 84.5 91.3 75.7 93.5 84.8 94.9 85.4 94.8 87.9 83.7 89.1 84.6 84.8 88.3 84.3 80.0 88.5 82.1 85.8 89.2 84.6 86.5 89.3 86.7 89.9 89.5 87.7 90.4 91.6 88.1 90.5 92.7 94.3 70.0 80.6 70.4 81.0 56.0 75.9 66.4 80.9 70.1 83.3 79.0 85.9 54.6 14M ELECTRA-Small 55.6 14M ELECTRA-Small++ 3.3e19 (18x) 110M 59.7 ELECTRA-Base 6.4e19 (45x) ELECTRA-Base++ 3.3e20 (182x) 110M 64.6 ELECTRA-1.75M 3.1e21 (2200x) 330M 68.1 1.4e18 (1x) 89.1 83.7 91.1 84.9 93.4 86.7 96.0 88.1 96.7 89.2 80.3 88.0 79.7 84.6 88.0 81.6 87.7 89.1 85.8 90.2 89.5 88.5 91.7 90.4 90.7 87.7 88.3 92.7 93.1 95.5 60.8 78.0 63.6 79.7 73.1 83.5 75.2 85.7 86.1 88.6 Table 8: Results for models on the GLUE test set. Only models with single-task finetuning (no ensembling, task-specific tricks, etc.) are shown. # E COUNTING FLOPS We chose to measure compute usage in terms of floating point operations (FLOPs) because it is a measure agnostic to the particular hardware, low-level optimizations, etc. However, it is worth not- ing that in some cases abstracting away hardware details is a drawback because hardware-centered optimizations can be key parts of a model’s design, such as the speedup ALBERT (Lan et al., 2019) gets by tying weights and thus reducing communication overhead between TPU workers. We used TensorFlow’s FLOP-counting capabilities9 and checked the results with by-hand computation. We made the following assumptions: • An “operation” is a mathematical operation, not a machine instruction. For example, an exp is one op like an add, even though in practice the exp might be slower. We believe this assumption does not substantially change compute estimates because matrix-multiplies dominate the compute for most models. Similarly, we count matrix-multiplies as 2 ∗ m ∗ n FLOPs instead of m ∗ n as one might if considering fused multiply-add operations. 
• The backwards pass takes the same number of FLOPs as the forward pass. This assumption is not exactly right (e.g., for softmax cross entropy loss the backward pass is faster), but importantly, the forward/backward pass FLOPs really are the same for matrix-multiplies, which is most of the compute anyway. • We assume "dense" embedding lookups (i.e., multiplication by a one-hot vector). In practice, sparse embedding lookups are much slower than constant time; on some hardware accelerators dense operations are actually faster than sparse lookups.

9 See https://www.tensorflow.org/api_docs/python/tf/profiler

# F ADVERSARIAL TRAINING

Here we detail attempts to adversarially train the generator instead of using maximum likelihood. In particular we train the generator G to maximize the discriminator loss L_Disc. As our discriminator isn't precisely the same as the discriminator of a GAN (see the discussion in Section 2), this method is really an instance of Adversarial Contrastive Estimation (Bose et al., 2018) rather than Generative Adversarial Training. It is not possible to adversarially train the generator by back-propagating through the discriminator (e.g., as in a GAN trained on images) due to the discrete sampling from the generator, so we use reinforcement learning instead.

Our generator is different from most text generation models in that it is non-autoregressive: predictions are made independently. In other words, rather than taking a sequence of actions where each action generates a token, the generator takes a single giant action of generating all tokens simultaneously, where the probability for the action factorizes as the product of generator probabilities for each token. To deal with this enormous action space, we make the following simplifying assumption: that the discriminator's prediction D(x^corrupt, t) depends only on the token x_t and the non-replaced tokens {x_i : i ∉ m}, i.e., it does not depend on other generated tokens {x̂_i : i ∈ m ∧ i ≠ t}. This isn't too bad of an assumption because a relatively small number of tokens are replaced, and it greatly simplifies credit assignment when using reinforcement learning. Notationally, we show this assumption (in a slight abuse of notation) by writing D(x̂_t | x^masked) for the discriminator predicting whether the generated token x̂_t equals the original token x_t given the masked context x^masked. A useful consequence of this assumption is that the discriminator score for non-replaced tokens (D(x_t | x^masked) for t ∉ m) is independent of p_G, because we are assuming it does not depend on any replaced token. Therefore these tokens can be ignored when training G to maximize L_Disc. During training we seek to find

argmax_{θ_G} L_Disc = argmax_{θ_G} E_{x, m, x̂} [ Σ_{t=1}^{n} −1(x_t^corrupt = x_t) log D(x^corrupt, t) − 1(x_t^corrupt ≠ x_t) log(1 − D(x^corrupt, t)) ]

Using the simplifying assumption, we approximate the above by finding the argmax of

E_{x, m, x̂} [ Σ_{t∈m} −1(x̂_t = x_t) log D(x̂_t | x^masked) − 1(x̂_t ≠ x_t) log(1 − D(x̂_t | x^masked)) ] = E_{x, m} Σ_{t∈m} E_{x̂_t∼p_G} R(x̂_t, x_t)

where R(x̂_t, x_t) = −log D(x̂_t | x^masked) if x̂_t = x_t, and −log(1 − D(x̂_t | x^masked)) otherwise.

In short, the simplifying assumption allows us to decompose the loss over the individual generated tokens. We cannot directly find argmax_{θ_G} using gradient ascent because it is impossible to back-propagate through discrete sampling of x̂. Instead, we use policy gradient reinforcement learning (Williams, 1992).
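As a side note, the per-token reward R just defined, and the baseline-subtracted weight used in the REINFORCE update described next, can be sketched in plain Python as follows. Function names and scalar inputs are hypothetical; in practice the probability D would come from ELECTRA's discriminator head.

```python
import math

def per_token_reward(x_hat, x, d_prob):
    """Reward for one sampled token, following Appendix F:
    -log D(x_hat | x_masked)       if the generator sampled the original token,
    -log(1 - D(x_hat | x_masked))  otherwise.
    `d_prob` is the discriminator's probability that the token is original."""
    if x_hat == x:
        return -math.log(d_prob)
    return -math.log(1.0 - d_prob)

def reinforce_weight(x_hat, x, d_prob, baseline):
    """Scalar multiplying grad log p_G(x_hat | x_masked) in the REINFORCE
    update: the reward minus a learned, position-dependent baseline."""
    return per_token_reward(x_hat, x, d_prob) - baseline

# Toy usage with made-up numbers: the generator sampled the wrong token and
# the discriminator assigns it probability 0.2 of being original.
w = reinforce_weight(x_hat="cat", x="dog", d_prob=0.2, baseline=0.5)
```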
In particular, we use the REINFORCE gradient

∇_{θ_G} L_Disc ≈ E_{x, m} Σ_{t∈m} E_{x̂_t∼p_G} ∇_{θ_G} log p_G(x̂_t | x^masked) [ R(x̂_t, x_t) − b(x^masked, t) ]

where b is a learned baseline implemented as b(x^masked, t) = −log sigmoid(w^T h_G(x^masked)_t), where h_G(x^masked) are the outputs of the generator's Transformer encoder. The baseline is trained with cross-entropy loss to match the reward for the corresponding position. We approximate the expectations with a single sample and learn θ_G with gradient ascent. Despite receiving no explicit feedback about which generated tokens are correct, we found the adversarial training resulted in a fairly accurate generator (for a 256-hidden-size generator, the adversarially trained one achieves 58% accuracy at masked language modeling while the same sized MLE generator gets 65%). However, using this generator did not improve over the MLE-trained one on downstream tasks (see the right of Figure 3 in the main paper).

# G EVALUATING ELECTRA AS A MASKED LANGUAGE MODEL

This section details some initial experiments in evaluating ELECTRA as a masked language model. Using slightly different notation from the main paper, given a context c consisting of a text sequence with one token x masked out, the discriminator loss can be written as

L_Disc = − Σ_{x∈vocab} [ (1 − p_mask) p_data(x|c) log D(x, c)    // unmasked token
  + p_mask p_data(x|c) p_G(x|c) log D(x, c)    // generator samples the correct token
  + p_mask (1 − p_data(x|c)) p_G(x|c) log(1 − D(x, c)) ]    // generator samples an incorrect token

Finding the critical points of this loss with respect to D shows that for a fixed generator the optimal discriminator is

D(x, c) = p_data(x|c) (a + p_G(x|c)) / (a p_data(x|c) + p_G(x|c))

which means

p_data(x|c) = D(x, c) p_G(x|c) / (a (1 − D(x, c)) + p_G(x|c))

where a = (1 − p_mask)/p_mask is the number of unmasked tokens for every masked token. We can use this expression to evaluate ELECTRA as a masked language model by selecting

argmax_{x∈vocab} D(x, c) p_G(x|c) / (a (1 − D(x, c)) + p_G(x|c))

as the model's prediction for a given context. In practice, selecting over the whole vocabulary is very expensive, so we instead take the argmax over the top 100 predictions from the generator10 (a small illustrative sketch of this selection rule is given after Appendix H). Using this method, we compared ELECTRA-Base and BERT-Base on the Wikipedia+BooksCorpus dataset. We found that BERT slightly outperformed ELECTRA at masked language modeling (77.9% vs 75.5% accuracy). It is possible that the assumption of an optimal discriminator, which is certainly far from correct, is harming ELECTRA's accuracy under this evaluation scheme. However, perhaps it is not too surprising that a model like BERT that is trained specifically for generation performs better at generation, while a model with a discriminative objective like ELECTRA is better at being fine-tuned on discriminative tasks. We think comparisons of BERT's and ELECTRA's MLM predictions might be an interesting way to uncover more about the differences between ELECTRA and BERT encoders in future work.

# H NEGATIVE RESULTS

We briefly describe a few ideas that did not look promising in our initial experiments: • We initially attempted to make BERT more efficient by strategically masking out tokens (e.g., masking out rarer tokens more frequently, or training a model to guess which tokens BERT would struggle to predict if they were masked out). This resulted in fairly minor speedups over regular BERT.
• Given that ELECTRA seemed to benefit (up to a certain point) from having a weaker generator (see Section 3.2), we explored raising the temperature of the generator's output softmax or disallowing the generator from sampling the correct token. Neither of these improved results. • We tried adding a sentence-level contrastive objective. For this task, we kept 20% of input sentences unchanged rather than noising them with the generator. We then added a prediction head to the model that predicted if the entire input was corrupted or not. Surprisingly, this slightly decreased scores on downstream tasks.

10 For ELECTRA-Base, this means the upper bound for accuracy is around 95%.
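As a small illustration of the masked-language-model scoring rule from Appendix G (referenced there), here is a hedged sketch with hypothetical inputs; in practice the candidate probabilities and discriminator scores would come from the generator and discriminator heads, restricted to the generator's top 100 candidates.

```python
def electra_mlm_prediction(candidates, p_gen, d_prob, p_mask=0.15):
    """Pick the token maximizing D(x,c) * p_G(x|c) / (a*(1 - D(x,c)) + p_G(x|c)),
    where a = (1 - p_mask) / p_mask, over the generator's top candidates
    (Appendix G). `candidates`, `p_gen`, and `d_prob` are parallel lists."""
    a = (1.0 - p_mask) / p_mask
    best_token, best_score = None, float("-inf")
    for tok, pg, d in zip(candidates, p_gen, d_prob):
        score = d * pg / (a * (1.0 - d) + pg)
        if score > best_score:
            best_token, best_score = tok, score
    return best_token

# Toy usage over three hypothetical top-generator candidates:
print(electra_mlm_prediction(["cat", "dog", "bird"],
                             [0.5, 0.3, 0.2],
                             [0.90, 0.95, 0.40]))
```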
{ "id": "1811.01088" }
2003.08934
NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis
We present a method that achieves state-of-the-art results for synthesizing novel views of complex scenes by optimizing an underlying continuous volumetric scene function using a sparse set of input views. Our algorithm represents a scene using a fully-connected (non-convolutional) deep network, whose input is a single continuous 5D coordinate (spatial location $(x,y,z)$ and viewing direction $(\theta, \phi)$) and whose output is the volume density and view-dependent emitted radiance at that spatial location. We synthesize views by querying 5D coordinates along camera rays and use classic volume rendering techniques to project the output colors and densities into an image. Because volume rendering is naturally differentiable, the only input required to optimize our representation is a set of images with known camera poses. We describe how to effectively optimize neural radiance fields to render photorealistic novel views of scenes with complicated geometry and appearance, and demonstrate results that outperform prior work on neural rendering and view synthesis. View synthesis results are best viewed as videos, so we urge readers to view our supplementary video for convincing comparisons.
http://arxiv.org/pdf/2003.08934
Ben Mildenhall, Pratul P. Srinivasan, Matthew Tancik, Jonathan T. Barron, Ravi Ramamoorthi, Ren Ng
cs.CV, cs.GR
ECCV 2020 (oral). Project page with videos and code: http://tancik.com/nerf
null
cs.CV
20200319
20200803
0 2 0 2 # NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis g u A 3 Ben Mildenhall'* Pratul P. Srinivasan!* Matthew Tancik!* Jonathan T. Barron? Ravi Ramamoorthi? Ren Ng # 1UC Berkeley 2Google Research 3UC San Diego ] # V C . s c [ 2 v 4 3 9 8 0 . 3 0 0 2 : v i X r a Abstract. We present a method that achieves state-of-the-art results for synthesizing novel views of complex scenes by optimizing an under- lying continuous volumetric scene function using a sparse set of input views. Our algorithm represents a scene using a fully-connected (non- convolutional) deep network, whose input is a single continuous 5D coor- dinate (spatial location (x, y, z) and viewing direction (θ, φ)) and whose output is the volume density and view-dependent emitted radiance at that spatial location. We synthesize views by querying 5D coordinates along camera rays and use classic volume rendering techniques to project the output colors and densities into an image. Because volume rendering is naturally differentiable, the only input required to optimize our repre- sentation is a set of images with known camera poses. We describe how to effectively optimize neural radiance fields to render photorealistic novel views of scenes with complicated geometry and appearance, and demon- strate results that outperform prior work on neural rendering and view synthesis. View synthesis results are best viewed as videos, so we urge readers to view our supplementary video for convincing comparisons. Keywords: scene representation, view synthesis, image-based render- ing, volume rendering, 3D deep learning # 1 Introduction In this work, we address the long-standing problem of view synthesis in a new way by directly optimizing parameters of a continuous 5D scene representation to minimize the error of rendering a set of captured images. We represent a static scene as a continuous 5D function that outputs the radiance emitted in each direction (θ, φ) at each point (x, y, z) in space, and a density at each point which acts like a differential opacity controlling how much radiance is accumulated by a ray passing through (x, y, z). Our method optimizes a deep fully-connected neural network without any convolutional layers (often referred to as a multilayer perceptron or MLP) to represent this function by regressing from a single 5D coordinate (x, y, z, θ, φ) to a single volume density and view-dependent RGB color. To render this neural radiance field (NeRF) * Authors contributed equally to this work. B. Mildenhall, P. P. Srinivasan, M. Tancik et al. Input Images Optimize NeRF > Render new views Fig. 1: We present a method that optimizes a continuous 5D neural radiance field representation (volume density and view-dependent color at any continuous location) of a scene from a set of input images. We use techniques from volume rendering to accumulate samples of this scene representation along rays to render the scene from any viewpoint. Here, we visualize the set of 100 input views of the synthetic Drums scene randomly captured on a surrounding hemisphere, and we show two novel views rendered from our optimized NeRF representation. from a particular viewpoint we: 1) march camera rays through the scene to generate a sampled set of 3D points, 2) use those points and their corresponding 2D viewing directions as input to the neural network to produce an output set of colors and densities, and 3) use classical volume rendering techniques to accumulate those colors and densities into a 2D image. 
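As a rough illustration of the three rendering steps just listed, here is a schematic NumPy sketch of rendering a single ray; `radiance_field` is a stand-in for the learned MLP (returning per-point colors and densities), and the whole function is an illustration, not the paper's TensorFlow implementation.

```python
import numpy as np

def render_ray(radiance_field, origin, direction, t_near, t_far, n_samples=64):
    """Schematic NeRF ray rendering: (1) sample points along the camera ray,
    (2) query the field for color and density, (3) alpha-composite the result."""
    # (1) Stratified samples: one uniform draw inside each of N evenly spaced bins.
    bins = np.linspace(t_near, t_far, n_samples + 1)
    t = np.random.uniform(bins[:-1], bins[1:])                   # (N,)
    points = origin[None, :] + t[:, None] * direction[None, :]   # (N, 3)

    # (2) Query the (stand-in) radiance field at each point and view direction.
    rgb, sigma = radiance_field(points, np.broadcast_to(direction, points.shape))

    # (3) Numerical volume rendering (alpha compositing).
    deltas = np.append(np.diff(t), 1e10)                   # distances between samples
    alpha = 1.0 - np.exp(-sigma * deltas)                  # opacity of each segment
    trans = np.cumprod(np.append(1.0, 1.0 - alpha))[:-1]   # accumulated transmittance
    weights = trans * alpha
    return (weights[:, None] * rgb).sum(axis=0)            # (3,) rendered pixel color

# A dummy field (random colors, constant density) just to exercise the code.
dummy_field = lambda p, d: (np.random.rand(len(p), 3), np.full(len(p), 0.1))
color = render_ray(dummy_field, np.zeros(3), np.array([0.0, 0.0, -1.0]), 2.0, 6.0)
```

Minimizing the squared difference between such rendered colors and the observed pixel colors is what drives the optimization described next.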
Because this process is naturally differentiable, we can use gradient descent to optimize this model by minimizing the error between each observed image and the corresponding views rendered from our representation. Minimizing this error across multiple views encourages the network to predict a coherent model of the scene by assigning high volume densities and accurate colors to the locations that contain the true underlying scene content. Figure 2 visualizes this overall pipeline. We find that the basic implementation of optimizing a neural radiance field representation for a complex scene does not converge to a sufficiently high- resolution representation and is inefficient in the required number of samples per camera ray. We address these issues by transforming input 5D coordinates with a positional encoding that enables the MLP to represent higher frequency func- tions, and we propose a hierarchical sampling procedure to reduce the number of queries required to adequately sample this high-frequency scene representation. Our approach inherits the benefits of volumetric representations: both can represent complex real-world geometry and appearance and are well suited for gradient-based optimization using projected images. Crucially, our method over- comes the prohibitive storage costs of discretized voxel grids when modeling complex scenes at high-resolutions. In summary, our technical contributions are: – An approach for representing continuous scenes with complex geometry and materials as 5D neural radiance fields, parameterized as basic MLP networks. – A differentiable rendering procedure based on classical volume rendering tech- niques, which we use to optimize these representations from standard RGB images. This includes a hierarchical sampling strategy to allocate the MLP’s capacity towards space with visible scene content. NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis – A positional encoding to map each input 5D coordinate into a higher dimen- sional space, which enables us to successfully optimize neural radiance fields to represent high-frequency scene content. We demonstrate that our resulting neural radiance field method quantitatively and qualitatively outperforms state-of-the-art view synthesis methods, including works that fit neural 3D representations to scenes as well as works that train deep convolutional networks to predict sampled volumetric representations. As far as we know, this paper presents the first continuous neural scene representation that is able to render high-resolution photorealistic novel views of real objects and scenes from RGB images captured in natural settings. # 2 Related Work A promising recent direction in computer vision is encoding objects and scenes in the weights of an MLP that directly maps from a 3D spatial location to an implicit representation of the shape, such as the signed distance [6] at that location. However, these methods have so far been unable to reproduce realistic scenes with complex geometry with the same fidelity as techniques that represent scenes using discrete representations such as triangle meshes or voxel grids. In this section, we review these two lines of work and contrast them with our approach, which enhances the capabilities of neural scene representations to produce state-of-the-art results for rendering complex realistic scenes. 
A similar approach of using MLPs to map from low-dimensional coordinates to colors has also been used for representing other graphics functions such as im- ages [44], textured materials [12,31,36,37], and indirect illumination values [38]. Neural 3D shape representations Recent work has investigated the im- plicit representation of continuous 3D shapes as level sets by optimizing deep networks that map xyz coordinates to signed distance functions [15,32] or occu- pancy fields [11,27]. However, these models are limited by their requirement of access to ground truth 3D geometry, typically obtained from synthetic 3D shape datasets such as ShapeNet [3]. Subsequent work has relaxed this requirement of ground truth 3D shapes by formulating differentiable rendering functions that allow neural implicit shape representations to be optimized using only 2D im- ages. Niemeyer et al. [29] represent surfaces as 3D occupancy fields and use a numerical method to find the surface intersection for each ray, then calculate an exact derivative using implicit differentiation. Each ray intersection location is provided as the input to a neural 3D texture field that predicts a diffuse color for that point. Sitzmann et al. [42] use a less direct neural 3D representation that simply outputs a feature vector and RGB color at each continuous 3D coordinate, and propose a differentiable rendering function consisting of a recurrent neural network that marches along each ray to decide where the surface is located. Though these techniques can potentially represent complicated and high- resolution geometry, they have so far been limited to simple shapes with low geometric complexity, resulting in oversmoothed renderings. We show that an al- ternate strategy of optimizing networks to encode 5D radiance fields (3D volumes 3 4 B. Mildenhall, P. P. Srinivasan, M. Tancik et al. with 2D view-dependent appearance) can represent higher-resolution geometry and appearance to render photorealistic novel views of complex scenes. View synthesis and image-based rendering Given a dense sampling of views, photorealistic novel views can be reconstructed by simple light field sam- ple interpolation techniques [21,5,7]. For novel view synthesis with sparser view sampling, the computer vision and graphics communities have made significant progress by predicting traditional geometry and appearance representations from observed images. One popular class of approaches uses mesh-based representa- tions of scenes with either diffuse [48] or view-dependent [2,8,49] appearance. Differentiable rasterizers [4,10,23,25] or pathtracers [22,30] can directly optimize mesh representations to reproduce a set of input images using gradient descent. However, gradient-based mesh optimization based on image reprojection is often difficult, likely because of local minima or poor conditioning of the loss land- scape. Furthermore, this strategy requires a template mesh with fixed topology to be provided as an initialization before optimization [22], which is typically unavailable for unconstrained real-world scenes. Another class of methods use volumetric representations to address the task of high-quality photorealistic view synthesis from a set of input RGB images. Volumetric approaches are able to realistically represent complex shapes and materials, are well-suited for gradient-based optimization, and tend to produce less visually distracting artifacts than mesh-based methods. 
Early volumetric approaches used observed images to directly color voxel grids [19,40,45]. More recently, several methods [9,13,17,28,33,43,46,52] have used large datasets of mul- tiple scenes to train deep networks that predict a sampled volumetric represen- tation from a set of input images, and then use either alpha-compositing [34] or learned compositing along rays to render novel views at test time. Other works have optimized a combination of convolutional networks (CNNs) and sampled voxel grids for each specific scene, such that the CNN can compensate for dis- cretization artifacts from low resolution voxel grids [41] or allow the predicted voxel grids to vary based on input time or animation controls [24]. While these volumetric techniques have achieved impressive results for novel view synthe- sis, their ability to scale to higher resolution imagery is fundamentally limited by poor time and space complexity due to their discrete sampling — rendering higher resolution images requires a finer sampling of 3D space. We circumvent this problem by instead encoding a continuous volume within the parameters of a deep fully-connected neural network, which not only produces significantly higher quality renderings than prior volumetric approaches, but also requires just a fraction of the storage cost of those sampled volumetric representations. # 3 Neural Radiance Field Scene Representation We represent a continuous scene as a 5D vector-valued function whose input is a 3D location x = (x, y, z) and 2D viewing direction (θ, φ), and whose output is an emitted color c = (r, g, b) and volume density σ. In practice, we express NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis 5D Input Output Volume Rendering Position + Direction Color + Density Rendering Loss (x.9.2,08) > Mr RGBo) Rwy be" a Lin 1 O Pine LRN 2 Oli Beet. | LAX ay Distance ; (d) Fig. 2: An overview of our neural radiance field scene representation and differ- entiable rendering procedure. We synthesize images by sampling 5D coordinates (location and viewing direction) along camera rays (a), feeding those locations into an MLP to produce a color and volume density (b), and using volume ren- dering techniques to composite these values into an image (c). This rendering function is differentiable, so we can optimize our scene representation by mini- mizing the residual between synthesized and ground truth observed images (d). direction as a 3D Cartesian unit vector d. We approximate this continuous 5D scene representation with an MLP network FΘ : (x, d) → (c, σ) and optimize its weights Θ to map from each input 5D coordinate to its corresponding volume density and directional emitted color. We encourage the representation to be multiview consistent by restricting the network to predict the volume density σ as a function of only the location x, while allowing the RGB color c to be predicted as a function of both location and viewing direction. To accomplish this, the MLP FΘ first processes the input 3D coordinate x with 8 fully-connected layers (using ReLU activations and 256 channels per layer), and outputs σ and a 256-dimensional feature vector. This feature vector is then concatenated with the camera ray’s viewing direction and passed to one additional fully-connected layer (using a ReLU activation and 128 channels) that output the view-dependent RGB color. See Fig. 3 for an example of how our method uses the input viewing direction to represent non-Lambertian effects. As shown in Fig. 
4, a model trained without view dependence (only x as input) has difficulty representing specularities.

# 4 Volume Rendering with Radiance Fields

Our 5D neural radiance field represents a scene as the volume density and directional emitted radiance at any point in space. We render the color of any ray passing through the scene using principles from classical volume rendering [16]. The volume density σ(x) can be interpreted as the differential probability of a ray terminating at an infinitesimal particle at location x. The expected color C(r) of camera ray r(t) = o + td with near and far bounds tn and tf is:

C(r) = ∫_{tn}^{tf} T(t) σ(r(t)) c(r(t), d) dt, where T(t) = exp( −∫_{tn}^{t} σ(r(s)) ds ). (1)

[Fig. 3 panels: (a) View 1, (b) View 2, (c) Radiance Distributions] Fig. 3: A visualization of view-dependent emitted radiance. Our neural radiance field representation outputs RGB color as a 5D function of both spatial position x and viewing direction d. Here, we visualize example directional color distributions for two spatial locations in our neural representation of the Ship scene. In (a) and (b), we show the appearance of two fixed 3D points from two different camera positions: one on the side of the ship (orange insets) and one on the surface of the water (blue insets). Our method predicts the changing specular appearance of these two 3D points, and in (c) we show how this behavior generalizes continuously across the whole hemisphere of viewing directions.

The function T(t) denotes the accumulated transmittance along the ray from tn to t, i.e., the probability that the ray travels from tn to t without hitting any other particle. Rendering a view from our continuous neural radiance field requires estimating this integral C(r) for a camera ray traced through each pixel of the desired virtual camera.

We numerically estimate this continuous integral using quadrature. Deterministic quadrature, which is typically used for rendering discretized voxel grids, would effectively limit our representation's resolution because the MLP would only be queried at a fixed discrete set of locations. Instead, we use a stratified sampling approach where we partition [tn, tf] into N evenly-spaced bins and then draw one sample uniformly at random from within each bin:

t_i ∼ U[ tn + ((i−1)/N)(tf − tn), tn + (i/N)(tf − tn) ]. (2)

Although we use a discrete set of samples to estimate the integral, stratified sampling enables us to represent a continuous scene representation because it results in the MLP being evaluated at continuous positions over the course of optimization. We use these samples to estimate C(r) with the quadrature rule discussed in the volume rendering review by Max [26]:

Ĉ(r) = Σ_{i=1}^{N} T_i (1 − exp(−σ_i δ_i)) c_i, where T_i = exp( −Σ_{j=1}^{i−1} σ_j δ_j ), (3)

where δ_i = t_{i+1} − t_i is the distance between adjacent samples. This function for calculating Ĉ(r) from the set of (c_i, σ_i) values is trivially differentiable and reduces to traditional alpha compositing with alpha values α_i = 1 − exp(−σ_i δ_i).

[Fig. 4 panels: Ground Truth, Complete Model, No View Dependence, No Positional Encoding] Fig. 4: Here we visualize how our full model benefits from representing view-dependent emitted radiance and from passing our input coordinates through a high-frequency positional encoding. Removing view dependence prevents the model from recreating the specular reflection on the bulldozer tread.
Removing the positional encoding drastically decreases the model's ability to represent high frequency geometry and texture, resulting in an oversmoothed appearance.

# 5 Optimizing a Neural Radiance Field

In the previous section we have described the core components necessary for modeling a scene as a neural radiance field and rendering novel views from this representation. However, we observe that these components are not sufficient for achieving state-of-the-art quality, as demonstrated in Section 6.4. We introduce two improvements to enable representing high-resolution complex scenes. The first is a positional encoding of the input coordinates that assists the MLP in representing high-frequency functions, and the second is a hierarchical sampling procedure that allows us to efficiently sample this high-frequency representation.

# 5.1 Positional encoding

Despite the fact that neural networks are universal function approximators [14], we found that having the network FΘ directly operate on xyzθφ input coordinates results in renderings that perform poorly at representing high-frequency variation in color and geometry. This is consistent with recent work by Rahaman et al. [35], which shows that deep networks are biased towards learning lower frequency functions. They additionally show that mapping the inputs to a higher dimensional space using high frequency functions before passing them to the network enables better fitting of data that contains high frequency variation.

We leverage these findings in the context of neural scene representations, and show that reformulating FΘ as a composition of two functions FΘ = F′Θ ∘ γ, one learned and one not, significantly improves performance (see Fig. 4 and Table 2). Here γ is a mapping from R into a higher dimensional space R^{2L}, and F′Θ is still simply a regular MLP. Formally, the encoding function we use is:

γ(p) = ( sin(2^0 πp), cos(2^0 πp), ..., sin(2^{L−1} πp), cos(2^{L−1} πp) ). (4)

This function γ(·) is applied separately to each of the three coordinate values in x (which are normalized to lie in [−1, 1]) and to the three components of the Cartesian viewing direction unit vector d (which by construction lie in [−1, 1]). In our experiments, we set L = 10 for γ(x) and L = 4 for γ(d).

A similar mapping is used in the popular Transformer architecture [47], where it is referred to as a positional encoding. However, Transformers use it for a different goal of providing the discrete positions of tokens in a sequence as input to an architecture that does not contain any notion of order. In contrast, we use these functions to map continuous input coordinates into a higher dimensional space to enable our MLP to more easily approximate a higher frequency function. Concurrent work on a related problem of modeling 3D protein structure from projections [51] also utilizes a similar input coordinate mapping.

# 5.2 Hierarchical volume sampling

Our rendering strategy of densely evaluating the neural radiance field network at N query points along each camera ray is inefficient: free space and occluded regions that do not contribute to the rendered image are still sampled repeatedly. We draw inspiration from early work in volume rendering [20] and propose a hierarchical representation that increases rendering efficiency by allocating samples proportionally to their expected effect on the final rendering.
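As an aside, a minimal NumPy sketch of the positional encoding γ from Eq. (4) in Sec. 5.1 above, assuming the inputs are already normalized to [−1, 1]; this is an illustration, not the released code.

```python
import numpy as np

def positional_encoding(p, num_freqs):
    """Map each scalar coordinate p to (sin(2^0 pi p), cos(2^0 pi p), ...,
    sin(2^(L-1) pi p), cos(2^(L-1) pi p)), applied element-wise (Eq. 4)."""
    freqs = 2.0 ** np.arange(num_freqs) * np.pi           # (L,)
    angles = p[..., None] * freqs                         # (..., L)
    return np.concatenate([np.sin(angles), np.cos(angles)], axis=-1)

x_enc = positional_encoding(np.array([0.1, -0.4, 0.7]), num_freqs=10)  # gamma(x), L = 10
d_enc = positional_encoding(np.array([0.0, 0.0, -1.0]), num_freqs=4)   # gamma(d), L = 4
```

Each component of a 3-vector thus becomes a 2L-dimensional feature, i.e. 60 numbers in total for x with L = 10 and 24 for d with L = 4.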
Instead of just using a single network to represent the scene, we simultaneously optimize two networks: one "coarse" and one "fine". We first sample a set of Nc locations using stratified sampling, and evaluate the "coarse" network at these locations as described in Eqns. 2 and 3. Given the output of this "coarse" network, we then produce a more informed sampling of points along each ray where samples are biased towards the relevant parts of the volume. To do this, we first rewrite the alpha composited color from the coarse network Ĉ_c(r) in Eqn. 3 as a weighted sum of all sampled colors c_i along the ray:

Ĉ_c(r) = Σ_{i=1}^{Nc} w_i c_i, where w_i = T_i (1 − exp(−σ_i δ_i)). (5)

Normalizing these weights as ŵ_i = w_i / Σ_{j=1}^{Nc} w_j produces a piecewise-constant PDF along the ray. We sample a second set of Nf locations from this distribution using inverse transform sampling, evaluate our "fine" network at the union of the first and second set of samples, and compute the final rendered color of the ray Ĉ_f(r) using Eqn. 3 but using all Nc + Nf samples. This procedure allocates more samples to regions we expect to contain visible content. This addresses a similar goal as importance sampling, but we use the sampled values as a nonuniform discretization of the whole integration domain rather than treating each sample as an independent probabilistic estimate of the entire integral.

# Implementation details

We optimize a separate neural continuous volume representation network for each scene. This requires only a dataset of captured RGB images of the scene, the corresponding camera poses and intrinsic parameters, and scene bounds (we use ground truth camera poses, intrinsics, and bounds for synthetic data, and use the COLMAP structure-from-motion package [39] to estimate these parameters for real data). At each optimization iteration, we randomly sample a batch of camera rays from the set of all pixels in the dataset, and then follow the hierarchical sampling described in Sec. 5.2 to query Nc samples from the coarse network and Nc + Nf samples from the fine network. We then use the volume rendering procedure described in Sec. 4 to render the color of each ray from both sets of samples. Our loss is simply the total squared error between the rendered and true pixel colors for both the coarse and fine renderings:

L = Σ_{r∈R} [ ||Ĉ_c(r) − C(r)||_2^2 + ||Ĉ_f(r) − C(r)||_2^2 ], (6)

where R is the set of rays in each batch, and C(r), Ĉ_c(r), and Ĉ_f(r) are the ground truth, coarse volume predicted, and fine volume predicted RGB colors for ray r respectively. Note that even though the final rendering comes from Ĉ_f(r), we also minimize the loss of Ĉ_c(r) so that the weight distribution from the coarse network can be used to allocate samples in the fine network.

In our experiments, we use a batch size of 4096 rays, each sampled at Nc = 64 coordinates in the coarse volume and Nf = 128 additional coordinates in the fine volume. We use the Adam optimizer with a learning rate that begins at 5 × 10^−4 and decays exponentially to 5 × 10^−5 over the course of optimization (other Adam hyperparameters are left at default values of β1 = 0.9, β2 = 0.999, and ε = 10^−7). The optimization for a single scene typically takes around 100–300k iterations to converge on a single NVIDIA V100 GPU (about 1–2 days).

# 6 Results

We quantitatively (Table 1) and qualitatively (Figs. 8 and 6) show that our method outperforms prior work, and provide extensive ablation studies to validate our design choices (Table 2).
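Before turning to the results, a brief back-reference to Sec. 5.2: a hedged NumPy sketch of the inverse transform sampling used to draw the Nf fine samples from the normalized coarse weights. The helper name and its inputs are ours, not the paper's released code.

```python
import numpy as np

def sample_fine(bins, weights, n_fine):
    """Draw n_fine sample locations from the piecewise-constant PDF whose
    probability mass in bin i is proportional to the coarse weight w_i.
    `bins` has length len(weights) + 1 and delimits the bins along the ray."""
    pdf = weights / weights.sum()
    cdf = np.concatenate([[0.0], np.cumsum(pdf)])         # (Nc + 1,)
    u = np.random.uniform(size=n_fine)                    # uniform samples in [0, 1)
    idx = np.searchsorted(cdf, u, side="right") - 1       # which bin each u falls in
    idx = np.clip(idx, 0, len(weights) - 1)
    # Place each sample proportionally inside its bin (invert the CDF linearly).
    frac = (u - cdf[idx]) / np.maximum(pdf[idx], 1e-10)
    return bins[idx] + frac * (bins[idx + 1] - bins[idx])

t_fine = sample_fine(np.linspace(2.0, 6.0, 65), np.random.rand(64), n_fine=128)
```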
We urge the reader to view our supplementary video to better appreciate our method’s significant improvement over baseline methods when rendering smooth paths of novel views. # 6.1 Datasets Synthetic renderings of objects We first show experimental results on two datasets of synthetic renderings of objects (Table 1, “Diffuse Synthetic 360◦” and “Realistic Synthetic 360◦”). The DeepVoxels [41] dataset contains four Lamber- tian objects with simple geometry. Each object is rendered at 512 × 512 pixels from viewpoints sampled on the upper hemisphere (479 as input and 1000 for testing). We additionally generate our own dataset containing pathtraced images of eight objects that exhibit complicated geometry and realistic non-Lambertian materials. Six are rendered from viewpoints sampled on the upper hemisphere, and two are rendered from viewpoints sampled on a full sphere. We render 100 views of each scene as input and 200 for testing, all at 800 × 800 pixels. 9 B. Mildenhall, P. P. Srinivasan, M. Tancik et al. Diffuse Synthetic 360◦ [41] Realistic Synthetic 360◦ Real Forward-Facing [28] PSNR↑ SSIM↑ LPIPS↓ PSNR↑ SSIM↑ LPIPS↓ PSNR↑ SSIM↑ LPIPS↓ 0.846 33.20 0.893 29.62 0.911 34.38 0.947 40.15 Table 1: Our method quantitatively outperforms prior work on datasets of both synthetic and real images. We report PSNR/SSIM (higher is better) and LPIPS [50] (lower is better). The DeepVoxels [41] dataset consists of 4 diffuse ob- jects with simple geometry. Our realistic synthetic dataset consists of pathtraced renderings of 8 geometrically complex objects with complex non-Lambertian ma- terials. The real dataset consists of handheld forward-facing captures of 8 real- world scenes (NV cannot be evaluated on this data because it only reconstructs objects inside a bounded volume). Though LLFF achieves slightly better LPIPS, we urge readers to view our supplementary video where our method achieves better multiview consistency and produces fewer artifacts than all baselines. Real images of complex scenes We show results on complex real-world scenes captured with roughly forward-facing images (Table 1, “Real Forward- Facing”). This dataset consists of 8 scenes captured with a handheld cellphone (5 taken from the LLFF paper and 3 that we capture), captured with 20 to 62 images, and hold out 1/8 of these for the test set. All images are 1008×756 pixels. # 6.2 Comparisons To evaluate our model we compare against current top-performing techniques for view synthesis, detailed below. All methods use the same set of input views to train a separate network for each scene except Local Light Field Fusion [28], which trains a single 3D convolutional network on a large dataset, then uses the same trained network to process input images of new scenes at test time. Neural Volumes (NV) [24] synthesizes novel views of objects that lie en- tirely within a bounded volume in front of a distinct background (which must be separately captured without the object of interest). It optimizes a deep 3D convolutional network to predict a discretized RGBα voxel grid with 1283 sam- ples as well as a 3D warp grid with 323 samples. The algorithm renders novel views by marching camera rays through the warped voxel grid. Scene Representation Networks (SRN) [42] represent a continuous scene as an opaque surface, implicitly defined by a MLP that maps each (x, y, z) co- ordinate to a feature vector. 
They train a recurrent neural network to march along a ray through the scene representation by using the feature vector at any 3D coordinate to predict the next step size along the ray. The feature vector from the final step is decoded into a single color for that point on the surface. Note that SRN is a better-performing followup to DeepVoxels [41] by the same authors, which is why we do not include comparisons to DeepVoxels. NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis Ship Lego Microphone Materials # Ground Truth NeRF (ours) LLFF [28] SRN [42] NV [24] Fig. 5: Comparisons on test-set views for scenes from our new synthetic dataset generated with a physically-based renderer. Our method is able to recover fine details in both geometry and appearance, such as Ship’s rigging, Lego’s gear and treads, Microphone’s shiny stand and mesh grille, and Material ’s non- Lambertian reflectance. LLFF exhibits banding artifacts on the Microphone stand and Material ’s object edges and ghosting artifacts in Ship’s mast and inside the Lego object. SRN produces blurry and distorted renderings in every case. Neural Volumes cannot capture the details on the Microphone’s grille or Lego’s gears, and it completely fails to recover the geometry of Ship’s rigging. B. Mildenhall, P. P. Srinivasan, M. Tancik et al. Fern T-Rex Orchid Ground Truth NeRF (ours) LLFF [28] SRN [42] Fig. 6: Comparisons on test-set views of real world scenes. LLFF is specifically designed for this use case (forward-facing captures of real scenes). Our method is able to represent fine geometry more consistently across rendered views than LLFF, as shown in Fern’s leaves and the skeleton ribs and railing in T-rex. Our method also correctly reconstructs partially occluded regions that LLFF struggles to render cleanly, such as the yellow shelves behind the leaves in the bottom Fern crop and green leaves in the background of the bottom Orchid crop. Blending between multiples renderings can also cause repeated edges in LLFF, as seen in the top Orchid crop. SRN captures the low-frequency geometry and color variation in each scene but is unable to reproduce any fine detail. NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis Local Light Field Fusion (LLFF) [28] LLFF is designed for producing pho- torealistic novel views for well-sampled forward facing scenes. It uses a trained 3D convolutional network to directly predict a discretized frustum-sampled RGBα grid (multiplane image or MPI [52]) for each input view, then renders novel views by alpha compositing and blending nearby MPIs into the novel viewpoint. # 6.3 Discussion We thoroughly outperform both baselines that also optimize a separate network per scene (NV and SRN) in all scenarios. Furthermore, we produce qualitatively and quantitatively superior renderings compared to LLFF (across all except one metric) while using only their input images as our entire training set. The SRN method produces heavily smoothed geometry and texture, and its representational power for view synthesis is limited by selecting only a single depth and color per camera ray. The NV baseline is able to capture reasonably detailed volumetric geometry and appearance, but its use of an underlying ex- plicit 1283 voxel grid prevents it from scaling to represent fine details at high resolutions. 
LLFF specifically provides a "sampling guideline" to not exceed 64 pixels of disparity between input views, so it frequently fails to estimate correct geometry in the synthetic datasets which contain up to 400-500 pixels of disparity between views. Additionally, LLFF blends between different scene representations for rendering different views, resulting in perceptually-distracting inconsistency as is apparent in our supplementary video.

The biggest practical tradeoffs between these methods are time versus space. All compared single scene methods take at least 12 hours to train per scene. In contrast, LLFF can process a small input dataset in under 10 minutes. However, LLFF produces a large 3D voxel grid for every input image, resulting in enormous storage requirements (over 15GB for one "Realistic Synthetic" scene). Our method requires only 5 MB for the network weights (a relative compression of 3000× compared to LLFF), which is even less memory than the input images alone for a single scene from any of our datasets.

# 6.4 Ablation studies

We validate our algorithm's design choices and parameters with an extensive ablation study in Table 2. We present results on our "Realistic Synthetic 360◦" scenes. Row 9 shows our complete model as a point of reference. Row 1 shows a minimalist version of our model without positional encoding (PE), view-dependence (VD), or hierarchical sampling (H). In rows 2–4 we remove these three components one at a time from the full model, observing that positional encoding (row 2) and view-dependence (row 3) provide the largest quantitative benefit followed by hierarchical sampling (row 4). Rows 5–6 show how our performance decreases as the number of input images is reduced. Note that our method's performance using only 25 input images still exceeds NV, SRN, and LLFF across all metrics when they are provided with 100 images (see supplementary material). In rows 7–8 we validate our choice of the maximum frequency L used in our positional encoding for x (the maximum frequency used for d is scaled proportionally). Only using 5 frequencies reduces performance, but increasing the number of frequencies from 10 to 15 does not improve performance. We believe the benefit of increasing L is limited once 2^L exceeds the maximum frequency present in the sampled input images (roughly 1024 in our data).

Row | Input | #Im. | L | (Nc, Nf) | PSNR↑ | SSIM↑ | LPIPS↓
1) No PE, VD, H | xyz | 100 | - | (256, -) | 26.67 | 0.906 | 0.136
2) No Pos. Encoding | xyzθφ | 100 | - | (64, 128) | 28.77 | 0.924 | 0.108
3) No View Dependence | xyz | 100 | 10 | (64, 128) | 27.66 | 0.925 | 0.117
4) No Hierarchical | xyzθφ | 100 | 10 | (256, -) | 30.06 | 0.938 | 0.109
5) Far Fewer Images | xyzθφ | 25 | 10 | (64, 128) | 27.78 | 0.925 | 0.107
6) Fewer Images | xyzθφ | 50 | 10 | (64, 128) | 29.79 | 0.940 | 0.096
7) Fewer Frequencies | xyzθφ | 100 | 5 | (64, 128) | 30.59 | 0.944 | 0.088
8) More Frequencies | xyzθφ | 100 | 15 | (64, 128) | 30.81 | 0.946 | 0.096
9) Complete Model | xyzθφ | 100 | 10 | (64, 128) | 31.01 | 0.947 | 0.081

Table 2: An ablation study of our model. Metrics are averaged over the 8 scenes from our realistic synthetic dataset. See Sec. 6.4 for detailed descriptions.

# 7 Conclusion

Our work directly addresses deficiencies of prior work that uses MLPs to represent objects and scenes as continuous functions.
We demonstrate that represent- ing scenes as 5D neural radiance fields (an MLP that outputs volume density and view-dependent emitted radiance as a function of 3D location and 2D viewing direction) produces better renderings than the previously-dominant approach of training deep convolutional networks to output discretized voxel representations. Although we have proposed a hierarchical sampling strategy to make render- ing more sample-efficient (for both training and testing), there is still much more progress to be made in investigating techniques to efficiently optimize and ren- der neural radiance fields. Another direction for future work is interpretability: sampled representations such as voxel grids and meshes admit reasoning about the expected quality of rendered views and failure modes, but it is unclear how to analyze these issues when we encode scenes in the weights of a deep neural network. We believe that this work makes progress towards a graphics pipeline based on real world imagery, where complex scenes could be composed of neural radiance fields optimized from images of actual objects and scenes. Acknowledgements We thank Kevin Cao, Guowei Frank Yang, and Nithin Raghavan for comments and discussions. RR acknowledges funding from ONR grants N000141712687 and N000142012529 and the Ronald L. Graham Chair. BM is funded by a Hertz Foundation Fellowship, and MT is funded by an NSF Graduate Fellowship. Google provided a generous donation of cloud com- pute credits through the BAIR Commons program. We thank the following NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis Blend Swap users for the models used in our realistic synthetic dataset: gregzaal (ship), 1DInc (chair), bryanajones (drums), Herberhold (ficus), erickfree (hot- dog), Heinzelnisse (lego), elbrujodelatribu (materials), and up3d.de (mic). # References 1. Abadi, M., Agarwal, A., Barham, P., Brevdo, E., Chen, Z., Citro, C., Corrado, G.S., Davis, A., Dean, J., Devin, M., Ghemawat, S., Goodfellow, I., Harp, A., Irving, G., Isard, M., Jia, Y., Jozefowicz, R., Kaiser, L., Kudlur, M., Levenberg, J., Man´e, D., Monga, R., Moore, S., Murray, D., Olah, C., Schuster, M., Shlens, J., Steiner, B., Sutskever, I., Talwar, K., Tucker, P., Vanhoucke, V., Vasudevan, V., Vi´egas, F., Vinyals, O., Warden, P., Wattenberg, M., Wicke, M., Yu, Y., Zheng, X.: TensorFlow: Large-scale machine learning on heterogeneous systems (2015) 2. Buehler, C., Bosse, M., McMillan, L., Gortler, S., Cohen, M.: Unstructured lumi- graph rendering. In: SIGGRAPH (2001) 3. Chang, A.X., Funkhouser, T., Guibas, L., Hanrahan, P., Huang, Q., Li, Z., Savarese, S., Savva, M., Song, S., Su, H., et al.: Shapenet: An information-rich 3d model repository. arXiv:1512.03012 (2015) 4. Chen, W., Gao, J., Ling, H., Smith, E.J., Lehtinen, J., Jacobson, A., Fidler, S.: Learning to predict 3D objects with an interpolation-based differentiable renderer. In: NeurIPS (2019) 5. Cohen, M., Gortler, S.J., Szeliski, R., Grzeszczuk, R., Szeliski, R.: The lumigraph. In: SIGGRAPH (1996) 6. Curless, B., Levoy, M.: A volumetric method for building complex models from range images. In: SIGGRAPH (1996) 7. Davis, A., Levoy, M., Durand, F.: Unstructured light fields. In: Eurographics (2012) 8. Debevec, P., Taylor, C.J., Malik, J.: Modeling and rendering architecture from pho- tographs: A hybrid geometry-and image-based approach. In: SIGGRAPH (1996) 9. 
Flynn, J., Broxton, M., Debevec, P., DuVall, M., Fyffe, G., Overbeck, R., Snavely, N., Tucker, R.: DeepView: view synthesis with learned gradient descent. In: CVPR (2019) 10. Genova, K., Cole, F., Maschinot, A., Sarna, A., Vlasic, D., , Freeman, W.T.: Un- supervised training for 3D morphable model regression. In: CVPR (2018) 11. Genova, K., Cole, F., Sud, A., Sarna, A., Funkhouser, T.: Local deep implicit functions for 3d shape. In: CVPR (2020) 12. Henzler, P., Mitra, N.J., Ritschel, T.: Learning a neural 3d texture space from 2d exemplars. In: CVPR (2020) 13. Henzler, P., Rasche, V., Ropinski, T., Ritschel, T.: Single-image tomography: 3d volumes from 2d cranial x-rays. In: Eurographics (2018) 14. Hornik, K., Stinchcombe, M., White, H.: Multilayer feedforward networks are uni- versal approximators. Neural Networks (1989) 15. Jiang, C., Sud, A., Makadia, A., Huang, J., Nießner, M., Funkhouser, T.: Local implicit grid representations for 3d scenes. In: CVPR (2020) 16. Kajiya, J.T., Herzen, B.P.V.: Ray tracing volume densities. Computer Graphics (SIGGRAPH) (1984) 17. Kar, A., H¨ane, C., Malik, J.: Learning a multi-view stereo machine. In: NeurIPS (2017) 18. Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: ICLR (2015) 15 B. Mildenhall, P. P. Srinivasan, M. Tancik et al. 16 19. Kutulakos, K.N., Seitz, S.M.: A theory of shape by space carving. International Journal of Computer Vision (2000) 20. Levoy, M.: Efficient ray tracing of volume data. ACM Transactions on Graphics (1990) 21. Levoy, M., Hanrahan, P.: Light field rendering. In: SIGGRAPH (1996) 22. Li, T.M., Aittala, M., Durand, F., Lehtinen, J.: Differentiable monte carlo ray tracing through edge sampling. ACM Transactions on Graphics (SIGGRAPH Asia) (2018) 23. Liu, S., Li, T., Chen, W., Li, H.: Soft rasterizer: A differentiable renderer for image- based 3D reasoning. In: ICCV (2019) 24. Lombardi, S., Simon, T., Saragih, J., Schwartz, G., Lehrmann, A., Sheikh, Y.: Neural volumes: Learning dynamic renderable volumes from images. ACM Trans- actions on Graphics (SIGGRAPH) (2019) 25. Loper, M.M., Black, M.J.: OpenDR: An approximate differentiable renderer. In: ECCV (2014) 26. Max, N.: Optical models for direct volume rendering. IEEE Transactions on Visu- alization and Computer Graphics (1995) 27. Mescheder, L., Oechsle, M., Niemeyer, M., Nowozin, S., Geiger, A.: Occupancy networks: Learning 3D reconstruction in function space. In: CVPR (2019) 28. Mildenhall, B., Srinivasan, P.P., Ortiz-Cayon, R., Kalantari, N.K., Ramamoorthi, R., Ng, R., Kar, A.: Local light field fusion: Practical view synthesis with prescrip- tive sampling guidelines. ACM Transactions on Graphics (SIGGRAPH) (2019) 29. Niemeyer, M., Mescheder, L., Oechsle, M., Geiger, A.: Differentiable volumetric rendering: Learning implicit 3D representations without 3D supervision. In: CVPR (2019) 30. Nimier-David, M., Vicini, D., Zeltner, T., Jakob, W.: Mitsuba 2: A retargetable forward and inverse renderer. ACM Transactions on Graphics (SIGGRAPH Asia) (2019) 31. Oechsle, M., Mescheder, L., Niemeyer, M., Strauss, T., Geiger, A.: Texture fields: Learning texture representations in function space. In: ICCV (2019) 32. Park, J.J., Florence, P., Straub, J., Newcombe, R., Lovegrove, S.: DeepSDF: Learn- ing continuous signed distance functions for shape representation. In: CVPR (2019) 33. Penner, E., Zhang, L.: Soft 3D reconstruction for view synthesis. ACM Transactions 33. Penner, E., Zhang, L.: Soft 3D reconstruction for view synthesis. 
ACM Transactions on Graphics (SIGGRAPH Asia) (2017) on Graphics (SIGGRAPH Asia) (2017) 34. Porter, T., Duff, T.: Compositing digital images. Computer Graphics (SIG- GRAPH) (1984) 35. Rahaman, N., Baratin, A., Arpit, D., Dr¨axler, F., Lin, M., Hamprecht, F.A., Ben- gio, Y., Courville, A.C.: On the spectral bias of neural networks. In: ICML (2018) 36. Rainer, G., Ghosh, A., Jakob, W., Weyrich, T.: Unified neural encoding of BTFs. Computer Graphics Forum (Eurographics) (2020) 37. Rainer, G., Jakob, W., Ghosh, A., Weyrich, T.: Neural BTF compression and interpolation. Computer Graphics Forum (Eurographics) (2019) 38. Ren, P., Wang, J., Gong, M., Lin, S., Tong, X., Guo, B.: Global illumination with radiance regression functions. ACM Transactions on Graphics (2013) 39. Sch¨onberger, J.L., Frahm, J.M.: Structure-from-motion revisited. In: CVPR (2016) 40. Seitz, S.M., Dyer, C.R.: Photorealistic scene reconstruction by voxel coloring. In- ternational Journal of Computer Vision (1999) 41. Sitzmann, V., Thies, J., Heide, F., Nießner, M., Wetzstein, G., Zollh¨ofer, M.: Deep- voxels: Learning persistent 3D feature embeddings. In: CVPR (2019) 42. Sitzmann, V., Zollhoefer, M., Wetzstein, G.: Scene representation networks: Con- tinuous 3D-structure-aware neural scene representations. In: NeurIPS (2019) NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis 43. Srinivasan, P.P., Tucker, R., Barron, J.T., Ramamoorthi, R., Ng, R., Snavely, N.: Pushing the boundaries of view extrapolation with multiplane images. In: CVPR (2019) 44. Stanley, K.O.: Compositional pattern producing networks: A novel abstraction of development. Genetic programming and evolvable machines (2007) 45. Szeliski, R., Golland, P.: Stereo matching with transparency and matting. In: ICCV (1998) 46. Tulsiani, S., Zhou, T., Efros, A.A., Malik, J.: Multi-view supervision for single-view reconstruction via differentiable ray consistency. In: CVPR (2017) 47. Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, L., Polosukhin, I.: Attention is all you need. In: NeurIPS (2017) 48. Waechter, M., Moehrle, N., Goesele, M.: Let there be color! Large-scale texturing of 3D reconstructions. In: ECCV (2014) 49. Wood, D.N., Azuma, D.I., Aldinger, K., Curless, B., Duchamp, T., Salesin, D.H., Stuetzle, W.: Surface light fields for 3D photography. In: SIGGRAPH (2000) 50. Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. In: CVPR (2018) 51. Zhong, E.D., Bepler, T., Davis, J.H., Berger, B.: Reconstructing continuous distri- butions of 3D protein structure from cryo-EM images. In: ICLR (2020) 52. Zhou, T., Tucker, R., Flynn, J., Fyffe, G., Snavely, N.: Stereo magnification: Learn- ing view synthesis using multiplane images. ACM Transactions on Graphics (SIG- GRAPH) (2018) # A Additional Implementation Details Network Architecture Fig. 7 details our simple fully-connected architecture. Volume Bounds Our method renders views by querying the neural radiance field representation at continuous 5D coordinates along camera rays. For exper- iments with synthetic images, we scale the scene so that it lies within a cube of side length 2 centered at the origin, and only query the representation within this bounding volume. Our dataset of real images contains content that can ex- ist anywhere between the closest point and infinity, so we use normalized device coordinates to map the depth range of these points into [−1, 1]. 
This shifts all the ray origins to the near plane of the scene, maps the perspective rays of the camera to parallel rays in the transformed volume, and uses disparity (inverse depth) instead of metric depth, so all coordinates are now bounded. Training Details For real scene data, we regularize our network by adding random Gaussian noise with zero mean and unit variance to the output σ values (before passing them through the ReLU) during optimization, finding that this slightly improves visual performance for rendering novel views. We implement our model in Tensorflow [1]. Rendering Details To render new views at test time, we sample 64 points per ray through the coarse network and 64 + 128 = 192 points per ray through the fine network, for a total of 256 network queries per ray. Our realistic synthetic 17 18 B. Mildenhall, P. P. Srinivasan, M. Tancik et al. + oo , | | | | | | | | | i | Fig. 7: A visualization of our fully-connected network architecture. Input vectors are shown in green, intermediate hidden layers are shown in blue, output vectors are shown in red, and the number inside each block signifies the vector’s dimen- sion. All layers are standard fully-connected layers, black arrows indicate layers with ReLU activations, orange arrows indicate layers with no activation, dashed black arrows indicate layers with sigmoid activation, and “+” denotes vector concatenation. The positional encoding of the input location (γ(x)) is passed through 8 fully-connected ReLU layers, each with 256 channels. We follow the DeepSDF [32] architecture and include a skip connection that concatenates this input to the fifth layer’s activation. An additional layer outputs the volume den- sity σ (which is rectified using a ReLU to ensure that the output volume density is nonnegative) and a 256-dimensional feature vector. This feature vector is con- catenated with the positional encoding of the input viewing direction (γ(d)), and is processed by an additional fully-connected ReLU layer with 128 channels. A final layer (with a sigmoid activation) outputs the emitted RGB radiance at position x, as viewed by a ray with direction d. dataset requires 640k rays per image, and our real scenes require 762k rays per image, resulting in between 150 and 200 million network queries per rendered image. On an NVIDIA V100, this takes approximately 30 seconds per frame. # B Additional Baseline Method Details Neural Volumes (NV) [24] We use the NV code open-sourced by the authors at https://github.com/facebookresearch/neuralvolumes and follow their procedure for training on a single scene without time dependence. Scene Representation Networks (SRN) [42] We use the SRN code open- sourced by the authors at https://github.com/vsitzmann/scene-representation-networks and follow their procedure for training on a single scene. Local Light Field Fusion (LLFF) [28] We use the pretrained LLFF model open-sourced by the authors at https://github.com/Fyusion/LLFF. NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis Quantitative Comparisons The SRN implementation published by the au- thors requires a significant amount of GPU memory, and is limited to an image resolution of 512 × 512 pixels even when parallelized across 4 NVIDIA V100 GPUs. We compute quantitative metrics for SRN at 512 × 512 pixels for our synthetic datasets and 504 × 376 pixels for the real datasets, in comparison to 800 × 800 and 1008 × 752 respectively for the other methods that can be run at higher resolutions. 
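As a concrete companion to the architecture description in Fig. 7 above, the following is a minimal PyTorch sketch of the per-point network: eight 256-wide fully-connected ReLU layers with the positional encoding of the input location re-concatenated at the fifth layer's activation, a rectified density output, and a view-dependent color branch with a 128-channel layer and a sigmoid output. Class and variable names, the exact positional-encoding variant, and the default frequency counts are illustrative assumptions rather than values taken from the released code.

```python
# Sketch of the NeRF MLP described in Fig. 7 (PyTorch). Names, the
# positional-encoding variant, and default frequency counts are assumptions.
import torch
import torch.nn as nn


def positional_encoding(x, num_freqs):
    """One common form of gamma(.): concatenate sin/cos at dyadic frequencies."""
    feats = [x]
    for k in range(num_freqs):
        feats.append(torch.sin((2.0 ** k) * x))
        feats.append(torch.cos((2.0 ** k) * x))
    return torch.cat(feats, dim=-1)


class NeRFMLP(nn.Module):
    def __init__(self, pos_freqs=10, dir_freqs=4, width=256):
        super().__init__()
        self.pos_freqs, self.dir_freqs = pos_freqs, dir_freqs
        pos_dim = 3 * (1 + 2 * pos_freqs)   # size of gamma(x)
        dir_dim = 3 * (1 + 2 * dir_freqs)   # size of gamma(d)

        # Eight fully-connected ReLU layers, each with 256 channels; gamma(x)
        # is concatenated to the fifth layer's activation (skip connection).
        dims = []
        for i in range(8):
            in_dim = pos_dim if i == 0 else width
            if i == 5:
                in_dim = width + pos_dim
            dims.append(in_dim)
        self.layers = nn.ModuleList(nn.Linear(d, width) for d in dims)

        self.sigma_head = nn.Linear(width, 1)       # volume density
        self.feature = nn.Linear(width, width)      # 256-d feature, no activation
        self.dir_layer = nn.Linear(width + dir_dim, 128)
        self.rgb_head = nn.Linear(128, 3)

    def forward(self, x, d):
        gx = positional_encoding(x, self.pos_freqs)
        gd = positional_encoding(d, self.dir_freqs)
        h = gx
        for i, layer in enumerate(self.layers):
            if i == 5:                               # skip connection
                h = torch.cat([h, gx], dim=-1)
            h = torch.relu(layer(h))
        sigma = torch.relu(self.sigma_head(h))       # rectified, non-negative
        feat = self.feature(h)
        h = torch.relu(self.dir_layer(torch.cat([feat, gd], dim=-1)))
        rgb = torch.sigmoid(self.rgb_head(h))        # emitted RGB in [0, 1]
        return rgb, sigma


# Example: query a batch of 1024 sample points with unit view directions.
pts = torch.randn(1024, 3)
dirs = torch.nn.functional.normalize(torch.randn(1024, 3), dim=-1)
rgb, sigma = NeRFMLP()(pts, dirs)
```

In the full pipeline the coarse and fine networks are separate instances of such a module, and the zero-mean, unit-variance Gaussian noise regularization mentioned in the training details above would be added to the density output before the ReLU during optimization.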
# C NDC ray space derivation

We reconstruct real scenes with “forward facing” captures in the normalized device coordinate (NDC) space that is commonly used as part of the triangle rasterization pipeline. This space is convenient because it preserves parallel lines while converting the z axis (camera axis) to be linear in disparity. Here we derive the transformation which is applied to rays to map them from camera space to NDC space.

The standard 3D perspective projection matrix for homogeneous coordinates is:

M = \begin{pmatrix} \frac{n}{r} & 0 & 0 & 0 \\ 0 & \frac{n}{t} & 0 & 0 \\ 0 & 0 & \frac{-(f+n)}{f-n} & \frac{-2fn}{f-n} \\ 0 & 0 & -1 & 0 \end{pmatrix} (7)

where n, f are the near and far clipping planes and r and t are the right and top bounds of the scene at the near clipping plane. (Note that this is in the convention where the camera is looking in the −z direction.) To project a homogeneous point (x, y, z, 1)^T, we left-multiply by M and then divide by the fourth coordinate:

M \begin{pmatrix} x \\ y \\ z \\ 1 \end{pmatrix} = \begin{pmatrix} \frac{n}{r} x \\ \frac{n}{t} y \\ \frac{-(f+n)}{f-n} z - \frac{2fn}{f-n} \\ -z \end{pmatrix} (8)

\text{project} \rightarrow \begin{pmatrix} \frac{n}{r} \frac{x}{-z} \\ \frac{n}{t} \frac{y}{-z} \\ \frac{f+n}{f-n} - \frac{2fn}{f-n} \frac{1}{-z} \end{pmatrix} (9)

The projected point is now in normalized device coordinate (NDC) space, where the original viewing frustum has been mapped to the cube [−1, 1]^3.

Our goal is to take a ray o + td and calculate a ray origin o′ and direction d′ in NDC space such that for every t, there exists a new t′ for which π(o + td) = o′ + t′d′ (where π is projection using the above matrix). In other words, the projection of the original ray and the NDC space ray trace out the same points (but not necessarily at the same rate).

Let us rewrite the projected point from Eqn. 9 as (a_x x/z, a_y y/z, a_z + b_z/z)^T. The components of the new origin o′ and direction d′ must satisfy:

\begin{pmatrix} a_x \frac{o_x + t d_x}{o_z + t d_z} \\ a_y \frac{o_y + t d_y}{o_z + t d_z} \\ a_z + \frac{b_z}{o_z + t d_z} \end{pmatrix} = \begin{pmatrix} o'_x + t' d'_x \\ o'_y + t' d'_y \\ o'_z + t' d'_z \end{pmatrix} (10)

To eliminate a degree of freedom, we decide that t′ = 0 and t = 0 should map to the same point. Substituting t = 0 and t′ = 0 into Eqn. 10 directly gives our NDC space origin o′:

o' = \begin{pmatrix} o'_x \\ o'_y \\ o'_z \end{pmatrix} = \begin{pmatrix} a_x \frac{o_x}{o_z} \\ a_y \frac{o_y}{o_z} \\ a_z + \frac{b_z}{o_z} \end{pmatrix} = \pi(o). (11)

This is exactly the projection π(o) of the original ray’s origin. By substituting this back into Eqn. 10 for arbitrary t, we can determine the values of t′ and d′:

t' d'_x = a_x \left( \frac{o_x + t d_x}{o_z + t d_z} - \frac{o_x}{o_z} \right) = \frac{t d_z}{o_z + t d_z} \, a_x \left( \frac{d_x}{d_z} - \frac{o_x}{o_z} \right) (12)

t' d'_y = a_y \left( \frac{o_y + t d_y}{o_z + t d_z} - \frac{o_y}{o_z} \right) = \frac{t d_z}{o_z + t d_z} \, a_y \left( \frac{d_y}{d_z} - \frac{o_y}{o_z} \right) (13)

t' d'_z = \frac{b_z}{o_z + t d_z} - \frac{b_z}{o_z} = \frac{t d_z}{o_z + t d_z} \left( -\frac{b_z}{o_z} \right) (14)

Factoring out a common expression that depends only on t gives us:

t' = \frac{t d_z}{o_z + t d_z} = 1 - \frac{o_z}{o_z + t d_z} (15)

d' = \begin{pmatrix} a_x \left( \frac{d_x}{d_z} - \frac{o_x}{o_z} \right) \\ a_y \left( \frac{d_y}{d_z} - \frac{o_y}{o_z} \right) \\ -\frac{b_z}{o_z} \end{pmatrix} (16)

Note that, as desired, t′ = 0 when t = 0. Additionally, we see that t′ → 1 as t → ∞. Going back to the original projection matrix, our constants are:

a_x = -\frac{n}{r} (17)

a_y = -\frac{n}{t} (18)

a_z = \frac{f+n}{f-n} (19)

b_z = \frac{2fn}{f-n} (20)

Using the standard pinhole camera model, we can reparameterize as:

a_x = -\frac{f_{cam}}{W/2} (21)

a_y = -\frac{f_{cam}}{H/2} (22)

where W and H are the width and height of the image in pixels and f_{cam} is the focal length of the camera. In our real forward facing captures, we assume that the far scene bound is infinity (this costs us very little since NDC uses the z dimension to represent inverse depth, i.e., disparity). In this limit the z constants simplify to:

a_z = 1 (23)

b_z = 2n
(24) Combining everything together: — feam Ox W/2 oz — San ou 14 — feam (+ w/2 _ feam (4 foes oO. — feam Ox W/2 oz of = | — San ou (25) — feam (+ Ox w/2 d= | _ feam (4 _ oy foes (26) oO. One final detail in our implementation: we shift o to the ray’s intersection with the near plane at z = —n (before this NDC conversion) by taking 0, = 0+ tnd for tr, = —(n+0,)/d,. Once we convert to the NDC ray, this allows us to simply sample t’ linearly from 0 to 1 in order to get a linear sampling in disparity from n to oo in the original space. 21 B. Mildenhall, P. P. Srinivasan, M. Tancik et al. Cube - # Pedestal # Ground Truth NeRF (ours) LLFF [28] SRN [42] NV [24] Fig. 8: Comparisons on test-set views for scenes from the DeepVoxels [41] syn- thetic dataset. The objects in this dataset have simple geometry and perfectly diffuse reflectance. Because of the large number of input images (479 views) and simplicity of the rendered objects, both our method and LLFF [28] perform nearly perfectly on this data. LLFF still occasionally presents artifacts when in- terpolating between its 3D volumes, as in the top inset for each object. SRN [42] and NV [24] do not have the representational power to render fine details. # D Additional Results Per-scene breakdown Tables 3, 4, 5, and 6 include a breakdown of the quanti- tative results presented in the main paper into per-scene metrics. The per-scene breakdown is consistent with the aggregate quantitative metrics presented in the paper, where our method quantitatively outperforms all baselines. Although LLFF achieves slightly better LPIPS metrics, we urge readers to view our sup- plementary video where our method achieves better multiview consistency and produces fewer artifacts than all baselines. NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis SSIM↑ Chair Pedestal Cube Vase Chair Pedestal Cube Vase Chair Pedestal Cube Vase 33.45 36.67 35.15 36.11 42.65 Table 3: Per-scene quantitative results from the DeepVoxels [41] dataset. The “scenes” in this dataset are all diffuse objects with simple geometry, rendered from texture-mapped meshes captured by a 3D scanner. The metrics for the DeepVoxels method are taken directly from their paper, which does not report LPIPS and only reports two significant figures for SSIM. PSNR↑ Chair Drums Ficus Hotdog Lego Materials Mic Ship 26.85 26.81 26.96 17.18 20.73 20.60 27.78 30.71 28.33 22.58 24.79 23.93 23.22 27.48 31.41 21.79 21.13 28.72 32.91 28.65 36.18 33.00 25.01 30.13 SSIM↑ Ship Chair Drums Ficus Hotdog Lego Materials Mic 0.757 0.849 0.766 0.947 0.923 0.910 0.784 0.946 0.944 0.910 0.873 0.916 0.964 0.965 0.948 0.823 0.896 0.890 0.980 0.856 0.974 0.967 0.925 0.964 LPIPS↓ Ship Chair Drums Ficus Hotdog Lego Materials Mic 0.299 0.149 0.267 0.063 0.100 0.106 0.276 0.107 0.109 0.162 0.214 0.109 0.084 0.061 0.064 0.218 0.130 0.126 0.028 0.206 0.121 0.046 0.091 0.044 Table 4: Per-scene quantitative results from our realistic synthetic dataset. The “scenes” in this dataset are all objects with more complex gometry and non- Lambertian materials, rendered using Blender’s Cycles pathtracer. 23 B. Mildenhall, P. P. Srinivasan, M. Tancik et al. 
24 PSNR↑ SRN [42] LLFF [28] Ours Room Fern Leaves Fortress Orchids Flower T-Rex Horns 24.33 18.24 21.37 27.29 22.87 24.63 25.46 28.42 24.70 19.52 22.85 24.15 27.40 26.80 27.45 32.70 25.17 20.92 26.63 29.40 31.16 17.37 18.52 20.36 SSIM↑ Room Fern Leaves Fortress Orchids Flower T-Rex Horns 0.761 0.611 0.883 0.742 0.857 0.840 0.932 0.753 0.880 0.828 0.948 0.792 # LPIPS↓ Room Fern Leaves Fortress Orchids Flower T-Rex Horns 0.376 0.298 0.288 0.240 0.174 0.222 0.193 0.268 0.249 0.219 Table 5: Per-scene quantitative results from our real image dataset. The scenes in this dataset are all captured with a forward-facing handheld cellphone. NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis PSNR↑ 1) No PE, VD, H 2) No Pos. Encoding 3) No View Dependence 4) No Hierarchical 5) Far Fewer Images 6) Fewer Images 7) Fewer Frequencies 8) More Frequencies 9) Complete Model Ship Chair Drums Ficus Hotdog Lego Materials Mic 25.12 28.16 32.24 28.44 26.55 30.76 33.16 30.33 25.72 28.62 32.65 30.06 27.73 31.74 35.24 31.32 26.57 30.47 32.77 30.92 27.67 32.33 34.91 32.19 28.26 31.66 36.06 32.19 32.86 35.78 32.87 28.34 32.91 28.65 36.18 33.00 25.17 23.11 29.32 24.54 25.91 23.41 29.25 24.55 24.39 22.62 23.70 27.45 25.29 30.73 29.92 24.65 30.13 25.01 26.38 27.75 29.93 31.42 27.97 31.53 30.77 32.50 32.54 24.69 27.79 24.96 29.22 26.55 28.54 29.77 29.54 29.62 1) No PE, VD, H 2) No Pos. Encoding 3) No View Dependence 4) No Hierarchical 5) Far Fewer Images 6) Fewer Images 7) Fewer Frequencies 8) More Frequencies 9) Complete Model Ship Chair Drums Ficus Hotdog Lego Materials Mic 0.810 0.955 0.955 0.919 0.824 0.968 0.956 0.938 0.828 0.962 0.961 0.948 0.844 0.973 0.969 0.951 0.832 0.972 0.966 0.956 0.847 0.979 0.971 0.963 0.853 0.973 0.972 0.959 0.980 0.853 0.973 0.967 0.980 0.856 0.974 0.967 0.926 0.896 0.953 0.918 0.938 0.906 0.956 0.914 0.922 0.895 0.911 0.948 0.928 0.965 0.962 0.921 0.964 0.925 0.882 0.903 0.947 0.951 0.930 0.957 0.947 0.961 0.961 0.905 0.933 0.912 0.944 0.925 0.941 0.952 0.948 0.949 LPIPS↓ 1) No PE, VD, H 2) No Pos. Encoding 3) No View Dependence 4) No Hierarchical 5) Far Fewer Images 6) Fewer Images 7) Fewer Frequencies 8) More Frequencies 9) Complete Model Ship Chair Drums Ficus Hotdog Lego Materials Mic 0.168 0.261 0.084 0.104 0.095 0.104 0.261 0.041 0.124 0.076 0.148 0.220 0.073 0.112 0.075 0.177 0.249 0.039 0.130 0.065 0.173 0.229 0.035 0.123 0.058 0.166 0.223 0.029 0.121 0.051 0.029 0.087 0.143 0.055 0.219 0.027 0.261 0.116 0.047 0.158 0.028 0.206 0.121 0.046 0.091 0.084 0.050 0.113 0.056 0.082 0.057 0.038 0.045 0.044 0.178 0.128 0.088 0.072 0.081 0.055 0.071 0.050 0.050 0.111 0.079 0.102 0.080 0.079 0.068 0.060 0.064 0.063 Table 6: Per-scene quantitative results from our ablation study. The scenes used here are the same as in Table 4. 25
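To complement the ray-space derivation in Appendix C, here is a small NumPy sketch of the camera-space-to-NDC conversion it arrives at, including the preliminary shift of each origin to the near plane described at the end of that appendix. The function and argument names are illustrative; the formulas assume a camera looking down the −z axis and a far plane at infinity (a_z = 1, b_z = 2n), as in the text.

```python
import numpy as np


def rays_to_ndc(origins, directions, focal, width, height, near):
    """Map camera-space rays (o, d) to NDC rays (o', d').

    Assumes the camera looks down -z and the far plane is at infinity,
    so a_z = 1 and b_z = 2n. Shapes: (N, 3) for origins and directions.
    """
    # Shift each origin to the ray's intersection with the plane z = -n.
    t_near = -(near + origins[..., 2]) / directions[..., 2]
    origins = origins + t_near[..., None] * directions

    ox, oy, oz = origins[..., 0], origins[..., 1], origins[..., 2]
    dx, dy, dz = directions[..., 0], directions[..., 1], directions[..., 2]

    ax = -focal / (width / 2.0)
    ay = -focal / (height / 2.0)

    o_ndc = np.stack([ax * ox / oz,
                      ay * oy / oz,
                      1.0 + 2.0 * near / oz], axis=-1)
    d_ndc = np.stack([ax * (dx / dz - ox / oz),
                      ay * (dy / dz - oy / oz),
                      -2.0 * near / oz], axis=-1)
    return o_ndc, d_ndc
```

With rays in this space, sampling t′ linearly in [0, 1] corresponds to sampling linearly in disparity from n to infinity in the original space, as noted at the end of Appendix C.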
{ "id": "1512.03012" }
2003.08380
TTTTTackling WinoGrande Schemas
We applied the T5 sequence-to-sequence model to tackle the AI2 WinoGrande Challenge by decomposing each example into two input text strings, each containing a hypothesis, and using the probabilities assigned to the "entailment" token as a score of the hypothesis. Our first (and only) submission to the official leaderboard yielded 0.7673 AUC on March 13, 2020, which is the best known result at this time and beats the previous state of the art by over five points.
http://arxiv.org/pdf/2003.08380
Sheng-Chieh Lin, Jheng-Hong Yang, Rodrigo Nogueira, Ming-Feng Tsai, Chuan-Ju Wang, Jimmy Lin
cs.CL, cs.LG
null
null
cs.CL
20200318
20200318
2020: 0 2 0 2 r a M 8 1 ] L C . s c [ 1 v 0 8 3 8 0 . 3 0 0 2 : v i X r a arXiv:2003.08380v1 # TTTTTackling WinoGrande Schemas Sheng-Chieh Lin∗1, Jheng-Hong Yang∗1, Rodrigo Nogueira2, Ming-Feng Tsai1, Chuan-Ju Wang1 and Jimmy Lin2 1Research Center for Information Technology Innovation, Academia Sinica 2David R. Cheriton School of Computer Science, University of Waterloo # Abstract We applied the T5 sequence-to-sequence model [5] to tackle the AI2 WinoGrande Challenge [6] by decomposing each example into two input text strings, each con- taining a hypothesis, and using the probabilities assigned to the “entailment” token as a score of the hypothesis. Our first (and only) submission to the official leader- board yielded 0.7673 AUC on March 13, 2020, which is the best known result at this time and beats the previous state of the art by over five points.2 # 1 Introduction Other than encoder-only pretrained transformer architectures [1, 3, 9], encoder–decoder style pre- trained transformers [2, 5] have been proven to be effective in text generation tasks as well as comprehension tasks. This paper describes our submission to the commonsense reasoning task leaderboard of the AI2 WinoGrande Challenge [6], which uses the text-to-text transfer transformer (T5); our approach currently represents the state of the art. In T5 [5], NLP tasks are formulated as text-to-text problems, where the inputs are cast into natural language templates that contain the task descriptors. Concretely, Raffel et al. provide the following example for MNLI [7], where the goal is to predict whether a premise implies (“entailment”) or contradicts (“contradiction”) a hypothesis, or neither (“neutral”). Thus, a training example becomes: “mnli premise: I hate pigeons. hypothesis: My feelings towards pigeons are filled with animosity.” with “entailment” as the corresponding ground truth target output. In other words, a token represent- ing each class is directly used as the prediction target. # 2 Approach The natural language template approach enables various options to formulate the WinoGrande com- monsense reasoning task as a text-to-text problem with T5. Here we adopt a formulation similar to the MNLI template. Consider a concrete example: He never comes to my home, but I always go to his house because the _ is smaller. Option1: home; Option2: house In this case, the correct replacement for _ is Option1. We decompose the above problem into two source–target training examples, where _ is replaced with each option and annotated with the correct answer as the target token, as shown in Table 1. In addition, we reformulate each example into ∗Contributed equally. 2https://leaderboard.allenai.org/winogrande/submissions/public a commonsense reasoning “template” with two statements: hypothesis (from _ to the end of the original problem statement) and premise (the remaining part of the original problem statement). Note that the bold and colored fonts are for clarity only; those tokens are not marked in any way in the model input. Source Target hypothesis: home is smaller. premise: He never comes to my home, but I always go to his house because the hypothesis: house is smaller. premise: He never comes to my home, but I always go to his house because the entailment contradiction Table 1: Decomposing WinoGrande problems into training instances for T5. At inference (test) time, we also decompose the problem into two inputs, where each input is formu- lated in exactly the same manner as in Table 1, with either one of the answer options. 
We then feed each into T5 to predict a target token. In this scenario, there are four possible outcomes: 1. one produces “entailment” and the other “contradiction”, 2. one produces “entailment” or “contradiction” and the other some other token, 3. both produce some other tokens, and 4. both produce the same token, either “entailment” or “contradiction”. Ideally, T5 would produce contrastive tokens for each input pair, as in case (1), which allows us to unambiguously select the final answer. However, the model might produce the same tokens for each input, or even tokens not in the predefined set, as in cases (2) to (4). To deal with these cases, we apply a softmax over the logits of the pair of predefined target tokens, similar to Nogueira et al. [4]. From this, we can compute the probabilities of the predefined target tokens (in the case of Table 1, “entailment” and “contradiction”). Then, we compare the probabilities across both input instances, and in cases (2) to (4), we select the instance that has a higher probability as the correct answer. This general problem setup allows us to choose the target tokens, which may have an impact on the prediction accuracy [4]. In addition to selecting “entailment” vs. “contradiction” as the target, we also tried the contrastive pair “true” vs. ”false”. In our experiment, we fine-tune T5-3B on Google Colab’s TPU v2 with a batch size of 16, a learning rate of 2 · 10−4, and save model checkpoints every 5000 steps. It takes 130k steps to converge for the XL data size (see below). At inference time, we use greedy decoding and select for evaluation the model checkpoint that achieves the highest score on the development set. We did not experiment with T5-11B due to limited computational resources. # 3 Results Experimental results on the WinoGrande development set are reported in Table 2 for different train- ing data sizes. Note that we fine-tune the model for each training data size separately. A Xunder the “logit” column indicates that we used the softmax over the target tokens as described above. Without this technique, given the original two-choice question, if T5 outputs the same tokens for the two processed inputs, we simply assign Option1 as the answer. The table also reports “zero-shot” performance, i.e., performing inference on the development set without any model fine tuning. Con- dition #2 represents our submission to the official leaderboard, which achieves 0.7673 AUC on the held-out test set. From these results, we see that the logit trick clearly improves performance, which is consistent with the observations of Nogueira et al. [4]. In fact, applying this technique in the zero-shot setting yields performance that is clearly better than random. Another interesting finding is that the choice of target token appears to have an impact on performance, which is also consistent with the above work. Since using true/false as the target token (conditions #3 and #4) did not improve performance much over conditions with entailment/contradiction, we did not run all data size conditions given our limited computational resources. 2 Condition Training size Condition Answer token #1 #2 logit Zero-Shot XS 0.657 0.718 S 0.693 0.740 M 0.757 0.788 L 0.809 0.837 0.506 0.608 entailment/contradiction X XL 0.840 0.854 #3 #4 true/false X 0.477 0.566 0.676 0.723 - - - - - - 0.852 0.865 Our leaderboard results on test set - 0.683 0.705 0.776 0.824 0.846 Table 2: Main results on the WinoGrande development set. Condition #2 is our current submission. 
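The decomposition and scoring procedure described above can be sketched as follows, using the Hugging Face Transformers API as a stand-in for the original TensorFlow T5 codebase (an assumption; the paper fine-tunes T5-3B on TPUs, and the model would first be fine-tuned on the decomposed training pairs of Table 1). The checkpoint name is a placeholder, and instead of reading the first-step logits directly, this sketch scores each hypothesis by the sequence log-likelihood of the full target word, which reduces to the same comparison when the target happens to be a single token.

```python
# Hypothetical sketch of the decomposition and scoring described above.
# Hugging Face Transformers stands in for the original T5 codebase; the
# checkpoint name is a placeholder (the paper uses a fine-tuned T5-3B).
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small").eval()


def decompose(sentence, option1, option2):
    """Split a WinoGrande problem at '_' into two hypothesis/premise inputs."""
    premise, tail = sentence.split("_")
    return [f"hypothesis: {opt}{tail} premise: {premise.strip()}"
            for opt in (option1, option2)]


@torch.no_grad()
def label_logprob(source, target):
    """Log-likelihood of generating `target` given `source`."""
    inputs = tokenizer(source, return_tensors="pt")
    labels = tokenizer(target, return_tensors="pt").input_ids
    out = model(**inputs, labels=labels)
    # `loss` is the mean token cross-entropy; negate and rescale to a sum.
    return -out.loss.item() * labels.shape[-1]


def predict(sentence, option1, option2):
    sources = decompose(sentence, option1, option2)
    # Score each input by how strongly the model prefers "entailment" over
    # "contradiction" (softmax over the two label likelihoods), then pick
    # the option whose input scores higher.
    scores = []
    for src in sources:
        ent = label_logprob(src, "entailment")
        con = label_logprob(src, "contradiction")
        scores.append(torch.softmax(torch.tensor([ent, con]), dim=0)[0].item())
    return 1 if scores[0] >= scores[1] else 2


sentence = ("He never comes to my home, but I always go to his house "
            "because the _ is smaller.")
print(predict(sentence, "home", "house"))  # expected: 1
```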
Looking at the current WinoGrande leaderboard, it appears that the previous state of the art is based on RoBERTa [3], which can be characterized as an encoder-only transformer architecture. Since T5- 3B is larger than RoBERTa, it cannot be ruled out that model size alone explains the performance gain. However, when coupled with the observations of Nogueira et al. [4], T5’s “generative capa- bility”, i.e., its ability to generate fluent text, honed through pretraining, seems to play an important role. The fact that the choice of target tokens affects prediction accuracy is consistent with this observation. How and why is the subject of ongoing work. # 4 Implications Collectively, the success of large pretrained neural models, both encoder-only BERT-like architec- tures as well as encoder–decoder architectures like T5, raise interesting questions for the pursuit of commonsense reasoning. Researchers have discovered that previous models perform well on benchmark datasets because they pick up on incidental biases in the dataset that have nothing to do with the task; in contrast, the WinoGrande dataset has devoted considerable effort to reducing such biases, which may allow models to (inadvertently) “cheat” (for example, using simple statisti- cal associations). While it is certainly true that datasets over-estimate the commonsense reasoning capabilities of modern models [6], there are alternative and complementary explanations as well: It has been a fundamental assumption of the research community that commonsense reasoning is difficult because it comprises tacit rather than explicit knowledge [8]. That is, commonsense knowledge—like water is wet and that a tuba is usually too big to fit in a backpack—is not written down anywhere (unlike, say, factual knowledge, which can be modeled in a knowledge graph). As a result—the reasoning goes—data-driven techniques (even neural models) will be of limited use due to the paucity of relevant corpora. Yet, previous encoder-only architectures like RoBERTa that exploit a language modeling objective (that is, relying only on explicit textual knowledge) can clearly make headway in a commonsense reasoning task, and we can further improve upon these approaches with a sequence-to-sequence model. This leaves us with two possible explanations: despite careful controls, the WinoGrande challenge still contains incidental biases that these more sophisticated models can exploit, or that we are genuinely making at least some progress in commonsense reasoning. The latter, in particular, challenges the notion that commonsense knowledge is (mostly) tacit. Perhaps it is the case that in a humongous corpus of natural language text, someone really has written about trying to stuff a tuba in a backpack? # Acknowledgments This research was supported in part by the Canada First Research Excellence Fund and the Natural Sciences and Engineering Research Council (NSERC) of Canada. We would like to thank Google Colab for providing support in terms of computational resources. # References [1] J. Devlin, M.-W. Chang, K. Lee, and K. Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North 3 American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota, June 2019. [2] M. Lewis, Y. Liu, N. Goyal, M. Ghazvininejad, A. Mohamed, O. Levy, V. Stoyanov, and L. Zettlemoyer. 
BART: Denoising sequence-to-sequence pre-training for natural language gen- eration, translation, and comprehension. arXiv:1910.13461, 2019. [3] Y. Liu, M. Ott, N. Goyal, J. Du, M. Joshi, D. Chen, O. Levy, M. Lewis, L. Zettlemoyer, and V. Stoyanov. RoBERTa: A robustly optimized BERT pretraining approach. arXiv:1907.11692, 2019. [4] R. Nogueira, Z. Jiang, and J. Lin. Document ranking with a pretrained sequence-to-sequence model. arXiv:2003.06713, 2020. [5] C. Raffel, N. Shazeer, A. Roberts, K. Lee, S. Narang, M. Matena, Y. Zhou, W. Li, and P. J. Liu. Exploring the limits of transfer learning with a unified text-to-text transformer. arXiv:1910.10683, 2019. [6] K. Sakaguchi, R. L. Bras, C. Bhagavatula, and Y. Choi. WinoGrande: An adversarial Winograd schema challenge at scale. arXiv:1907.10641, 2019. [7] A. Williams, N. Nangia, and S. Bowman. A broad-coverage challenge corpus for sentence understanding through inference. In Proceedings of the 2018 Conference of the North Ameri- can Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1112–1122, New Orleans, Louisiana, June 2018. [8] T. Winograd. Understanding natural language. Cognitive Psychology, 3(1):1 – 191, 1972. [9] Z. Yang, Z. Dai, Y. Yang, J. Carbonell, R. Salakhutdinov, and Q. V. Le. XLNet: Generalized autoregressive pretraining for language understanding. arXiv:1906.08237, 2019. 4
{ "id": "1910.13461" }
2003.08237
Fixing the train-test resolution discrepancy: FixEfficientNet
This paper provides an extensive analysis of the performance of the EfficientNet image classifiers with several recent training procedures, in particular one that corrects the discrepancy between train and test images. The resulting network, called FixEfficientNet, significantly outperforms the initial architecture with the same number of parameters. For instance, our FixEfficientNet-B0 trained without additional training data achieves 79.3% top-1 accuracy on ImageNet with 5.3M parameters. This is a +0.5% absolute improvement over the Noisy student EfficientNet-B0 trained with 300M unlabeled images. An EfficientNet-L2 pre-trained with weak supervision on 300M unlabeled images and further optimized with FixRes achieves 88.5% top-1 accuracy (top-5: 98.7%), which establishes the new state of the art for ImageNet with a single crop. These improvements are thoroughly evaluated with cleaner protocols than the one usually employed for Imagenet, and particular we show that our improvement remains in the experimental setting of ImageNet-v2, that is less prone to overfitting, and with ImageNet Real Labels. In both cases we also establish the new state of the art.
http://arxiv.org/pdf/2003.08237
Hugo Touvron, Andrea Vedaldi, Matthijs Douze, Hervé Jégou
cs.CV, cs.LG
null
null
cs.CV
20200318
20201118
0 2 0 2 v o N 8 1 ] V C . s c [ 5 v 7 3 2 8 0 . 3 0 0 2 : v i X r a # FIXING THE TRAIN-TEST RESOLUTION DISCREPANCY: FIXEFFICIENTNET Hugo Touvron, Andrea Vedaldi, Matthijs Douze, Herv´e J´egou # Facebook AI Research # ABSTRACT This paper provides an extensive analysis of the perfor- mance of the EfficientNet image classifiers with several re- cent training procedures, in particular one that corrects the discrepancy between train and test images [1]. The resulting network, called FixEfficientNet, significantly outperforms the initial architecture with the same number of parameters. For instance, our FixEfficientNet-B0 trained without ad- ditional training data achieves 79.3% top-1 accuracy on Im- ageNet with 5.3M parameters. This is a +0.5% absolute improvement over the Noisy student EfficientNet-B0 trained with 300M unlabeled images. An EfficientNet-L2 pre-trained with weak supervision on 300M unlabeled images and fur- ther optimized with FixRes achieves 88.5% top-1 accuracy (top-5: 98.7%), which establishes the new state of the art for ImageNet with a single crop. These improvements are thoroughly evaluated with cleaner protocols than the one usually employed for Imagenet, and particular we show that our improvement remains in the experimental setting of ImageNet-v2, that is less prone to overfitting, and with ImageNet Real Labels. In both cases we also establish the new state of the art. 1. INTRODUCTION In order to obtain the best possible performance from Con- volutional neural nets (CNNs), the training and testing data distributions should match. However, in image recognition, data pre-processing procedures are often different for train- ing and testing: the most popular practice is to extract a rect- angle with random coordinates from the image to artificially increase the amount of training data. This Region of Classifi- cation (RoC) is then resized to obtain an image, or crop, of a fixed size (in pixels) that is fed to the CNN. At test time, the RoC is instead set to a square covering the central part of the image, which results in the extraction of a center crop. Thus, while the crops extracted at training and test time have the same size, they arise from different RoCs, which skews the data distribution seen by the CNN. Our FixEfficientNet-B7 --* Noisy Student (EfficientNet-B7) 86) BA ee AdvProp (EfficientNet-B7) ra g FixPNASNet PNASNet _-““AmoebaNet-A FixResNet-50-sws 0 8 : {| ResNet-50-sws i aa “ResNeXt-101 | FixResNet-50-~~ ‘ ee i rae I ye *Xception ! *DenseNet-201 0 i) “Inception-resnet-v2 x o ResNet-152 ImageNet Top-1 Accuracy (%) 6 ! ResNet-50 i “Inception-v2 | 74 4 0 20 40 60 80 160 Number of Parameters (Millions) Fig. 1. Improvement brought by FixRes (in bold) to several popular architectures from the literature. Our FixEfficientNet (orange curve) surpasses all EfficientNet models, including the models trained with Noisy student (red curve) and adver- sarial examples (blue curve). The sws models are from [2]. Tables 1 and 2 report results on larger models. which jointly optimizes the choice of resolutions and scales at training and test time, while keeping the same RoC sampling. We apply this method to the recent EfficientNet [4] archi- tecture, which offers an excellent compromise between num- ber of parameters and accuracy. This evaluation paper shows that properly combining FixRes and EfficientNet further im- proves the state of the art [4]. 
Noticeably, • We report the best performance without external data on ImageNet (top1: 85.7%); Over the years, training and testing pre-processing proce- dures have evolved, but so far they have been optimized sepa- rately [3]. Touvron et al. show [1] that this separate optimiza- tion has a detrimental effect on the test-time performance of models. They address this problem with the FixRes method, • We report the best accuracy (top1: 88.5%) with ex- ternal data on ImageNet, and with ImageNet with Reallabels [5] ; • We achieve state-of-the-art compromises between ac- curacy and number of parameters, see Figure 1; • We validate the significance of our results on the ImageNet-v2 test set, an improved evaluation setup that clearly separates the validation and test sets. Fix- EfficientNet achieves the best performance. This paper is organized as follows. In Section 2 we in- troduce the corrected training procedure for EfficientNet, that produces FixEfficientNet. Section 3 analyzes our extensive evaluation and compare FixEfficientNet with the state of the art. Section 4 concludes the paper. # 2. TRAINING WITH FIXRES: UPDATES Recent research in image classification tends towards larger networks and higher resolution images [6, 7, 8]. For instance, the state-of-the-art in the ImageNet ILSVRC 2012 bench- mark is currently held by the EfficientNet-L2 [8] architecture with 480M parameters using 800×800 images for training. Similarly, the state-of-the-art model learned from scratch is currently EfficientNet-B8 [9] with 88M parameters using 672×672 images for training. In this note, we focus on the EfficientNet architecture [4] due to its good accuracy/cost trade-off and its popularity. Data augmentation is routinely employed at training time In to improve model generalization and reduce overfitting. this note, we use the same augmentation setup as in the orig- inal FixRes paper [1]. In addition, we have integrated label smoothing, which is orthogonal to the approach. FixRes is a very simple fine-tuning that re-trains the classifier or a few top layers at the target resolution. Therefore, it has several advantages: 1. it is computationally cheap, the back-propagation is not performed on the whole network; 2. it works with any CNN classification architecture and is complementary with the other tricks mentioned above; 3. it can be applied on a CNN that comes from a possibly non reproducible source. # 3. EXPERIMENTS We experiment on the ImageNet-2012 benchmark [10], and report standard performance metrics (top-1 and top-5 accura- cies) on a single image crop. # 3.1. Experimental Setting We focus on the EfficientNet [4] architectures. In the liter- ature, wo versions provide the best performance: Efficient- Net trained with adversarial examples [9], and Efficient- Net trained with Noisy student [8] pre-trained in a weakly- supervised fashion on 300 million unlabeled images. Table 1. Results on ImageNet with extra training data. We start from pre-trained models [8] learned using 300M additional unlabeled images (single crop evaluation). See Section 3.3 about the significance of these results. 
l e d o M s m a r a p # s e r n i a r t EfficientNet [8] FixEfficientNet test Top-1 Top-5 test Top-1 Top-5 (%) res (%) (%) res (%) 5.3M 224 224 B0 7.8M 240 240 B1 9.2M 260 260 B2 12M 300 300 B3 19M 380 380 B4 30M 456 456 B5 43M 528 528 B6 B7 66M 600 600 L2 480M 475 800 78.8 81.5 82.4 84.1 85.3 86.1 86.4 86.9 88.4 94.5 95.8 96.3 96.9 97.5 97.8 97.9 98.1 98.7 320 384 420 472 472 576 680 632 600 80.2 82.6 83.6 85.0 85.9 86.4 86.7 87.1 88.5 95.4 96.5 96.9 97.4 97.7 97.9 98.0 98.2 98.7 Table 2. Results on ImageNet without external data (single Crop evaluation). FixEfficientNet outperforms the previous EfficientNet AdvProp [9] state of the art in this setup, see Section 3.3 for the significance of these results. l e d o M s m a r a p # s e r n i a r t EfficientNet [9] FixEfficientNet test Top-1 Top-5 test Top-1 Top-5 (%) res (%) (%) res (%) 5.3M 224 224 B0 7.8M 240 240 B1 9.2M 260 260 B2 12M 300 300 B3 19M 380 380 B4 30M 456 456 B5 43M 528 528 B6 B7 66M 600 600 B8 87.4M 672 672 77.6 79.6 80.5 81.9 83.3 84.3 84.8 85.2 85.5 93.3 94.3 95.0 95.6 96.4 97.0 97.1 97.2 97.3 320 384 420 472 512 576 576 632 800 79.3 81.3 82.0 83.0 84.0 84.7 84.9 85.3 85.7 94.6 95.7 96.0 96.4 97.0 97.2 97.3 97.4 97.6 We start from the EfficientNet models in rwightman’s GitHub repository [11]. These models have been converted from the original Tensorflow to PyTorch. Training. We mostly follow the FixRes [1] training proto- col. The only difference is that we combine the FixRes data- augmentation with label smoothing during the fine-tuning. # 3.2. Comparison with the state of the art Table 1 and Table 2 compare our results with those of the Ef- ficientNet reported in the literature. All our FixEfficientNets outperform the corresponding EfficientNet (see Figure 1). As a result and to the best of our knowledge, our FixEfficientNet- L2 surpasses all other results reported in the literature. It achieves 88.5% Top-1 accuracy and 98.7% Top-5 accuracy on the ImageNet-2012 validation benchmark [10]. Table 3. Results on ImageNet Real labels [5]. l e d o M No Extra-Training Data EfficientNet [8] Top-5 Top-1 (%) (%) FixEfficientNet Top-5 Top-1 (%) (%) Extra-Training Data EfficientNet [8] Top-5 Top-1 (%) (%) FixEfficientNet Top-5 Top-1 (%) (%) B0 B1 B2 B3 B4 B5 B6 B7 B8 L2 83.7 85.1 86.0 87.2 88.3 88.9 89.3 89.4 89.6 95.8 96.4 96.8 97.4 97.9 98.2 98.3 98.3 98.3 85.8 87.0 87.7 88.3 89.2 89.4 89.6 89.7 90.0 96.8 97.4 97.6 98.0 98.3 98.4 98.4 98.5 98.6 84.5 86.7 87.3 88.4 89.4 89.7 89.8 90.1 96.4 97.2 97.6 98.0 98.4 98.5 98.5 98.6 86.5 88.1 88.8 89.2 89.8 90.0 90.1 90.3 97.3 98.0 98.2 98.4 98.5 98.6 98.6 98.7 90.6 98.8 90.9 98.8 Clean labels. In order to complement this evaluation, Ta- ble 3 present the results with the ImageNet clean labels pro- posed by Beyer et all. [5]. With 90.9% Top-1 accuracy and 98.8% Top-5 accuracy FixEfficientNet-L2 surpasses all other results reported in the literature with this labels. # 3.3. Significance of the results Several runs of the same training incur variations of about 0.1 accuracy points on Imagenet due to random initialization and mini-batch sampling. In general, since the Imagenet 2012 test set is not available, most works tune the hyper-parameters on the validation set, ie. there is no distinction between valida- tion and test set. This setting, while widely adopted, is not legitimate and can cause overfitting to go unnoticed. EfficientNets employ Neural Architecture Search, which significantly enlarges the hyper-parameter space. 
Addition- ally, the ImageNet validation images were used to filter the images from the unlabelled set [8]. Therefore the pre-trained models may benefit from more overfitting on the validation set. We quantify this in the experiments presented below. Since we use pre-trained EfficientNet for our initializa- tion, our results are comparable to those from the Noisy Stu- dent [8], which uses the same degree of overfitting, but not directly with other semi-supervised approaches like that of Yalniz et al. [2]. # 3.4. Evaluation on ImageNet-V2 The ImageNet-V2 [17] dataset was introduced to overcome the lack of a test split in the Imagenet dataset. ImageNet-V2 consists of 3 novel test sets that replace the ImageNet test set, which is no longer available. They were carefully designed to match the characteristics of the original test set. One of these test sets, Matched Frequency is the closest to the Im- ageNet validation set. To ensure that observed improvements are not due to overfitting, we evaluate all our models on the Matched Frequency version of the ImageNet-v2 [17] dataset. We evaluate the other methods in the same way. We present the results in Tables 4 and 5. —* Noisy Student 80, —— FixRes-Noisy Student —— FixRes-Billion Scale —* Billion Scale —+— EfficientNet —*— ResNet + NASNet —— RegNety ~ a ImageNet-V2 Top-1 Accuracy (%) ~ 3 a & 60 75 85 80 ImageNet Top-1 Accuracy (%) Fig. 2. Evidence of overfitting on Imagenet-val: We com- pare the results obtained on ImageNet (x-axis) and the re- sults obtained on ImageNet-v2 (y-axis), without FixRes for different models [12, 13, 8, 14, 15, 2, 16, 1]. For a given per- formance on Imagenet-val, overfitted models tend to have a lower performance on ImageNet-v2 and therefore are below the approaches that generalize better. The original study of [17] shows that there is significant overfitting of various models to the Imagenet 2012 valuation set, but that it does not impact the relative order of the models. Quantifying the overfitting on Imagenet. As mentioned earlier, several choices in the Noisy Student [8] method are prone to overfitting. We verify this hypothesis and quantify its extent by comparing the relative accuracy of this approach with another semi-supervised approach [2] both on ImageNet and ImageNet-V2 [17]. Without overfitting, models performing similarly on Im- agenet should also have similar performances on ImageNet- V2 [17]. However, for a comparable performance on Ima- geNet, when evaluating on ImageNet-V2, the Billion scale models of Yalniz et al. [2] outperform the EfficientNets from Noisy Student. For example, FixResNeXt-101 32x4d [1] has the same performance as EfficientNet-B3 [8] on ImageNet but on ImageNet-V2 FixResNeXt-101 32x4d [1] is better (+0.7% Top-1 accuracy). This shows that the EfficientNet Noisy student [8] tends to overfit and does not generalize as well as the (prior) semi- supervised work [2] or other works of the literature. Figure 2 illustrates this effect. The FixRes fine-tuning procedure is neutral with respect to overfitting: overfitted models remain overfitted and conversely. Table 4. Results on ImageNet-V2 [17] Matched Fre- quency with extra-training data. We start from pre-trained models [8] that have been learned using 300M additional unlabeled images (single crop evaluation). 
l e d o M s m a r a p # s e r n i a r t EfficientNet [8] FixEfficientNet test Top-1 Top-5 test Top-1 Top-5 (%) res (%) (%) res (%) 5.3M 224 224 B0 7.8M 240 240 B1 9.2M 260 260 B2 12M 300 300 B3 19M 380 380 B4 30M 456 456 B5 43M 528 528 B6 B7 66M 600 600 L2 480M 475 800 67.7 70.9 72.3 73.9 75.7 76.8 77.3 78.5 80.3 88.1 90.1 91.1 91.9 93.1 93.6 93.9 94.4 95.8 320 384 420 472 472 576 680 632 600 69.4 72.7 73.6 75.0 76.2 77.0 77.5 78.6 80.8 89.6 91.4 92.0 93.0 93.6 94.0 94.3 94.7 96.1 Table 5. quency without external data (single Crop evaluation). l e d o M s m a r a p # s e r n i a r t EfficientNet [9] FixEfficientNet test Top-1 Top-5 test Top-1 Top-5 (%) res (%) (%) res (%) 5.3M 224 224 B0 7.8M 240 240 B1 9.2M 260 260 B2 12M 300 300 B3 19M 380 380 B4 30M 456 456 B5 43M 528 528 B6 B7 66M 600 600 B8 87.4M 672 672 65.5 67.5 68.9 70.9 72.9 74.6 75.4 76.1 76.1 85.6 87.8 88.4 89.4 91.0 92.0 92.4 93.0 92.7 320 384 420 472 512 576 576 632 800 67.8 70.1 70.8 72.7 73.9 75.1 75.4 75.8 75.9 87.9 89.6 90.2 90.9 91.8 92.4 92.6 93.2 93.0 Comparison with the state of the art. Despite overfit- ting, EfficientNet remains very competitive on ImageNet-V2, as reported in Table 6. Interestingly, the FixEfficientNet-L2 that we fine-tuned from EfficientNet establishes the new state of the art with additional data on this benchmark. # 4. CONCLUSION The ”Fixing Resolution” is a method that improves the per- formance of any model. It is a method that is applied as a fine-tuning step after the conventional training, during a few epochs only, which makes it very flexible. It is easily inte- grated into any existing training pipeline. In our paper we proposed a thorough evaluation of the combination of the cur- rent state-of-the-art models, namely EfficientNet, with this improved training method. We provide an open-source implementation of our method 1. # 1http://github.com/facebookresearch/FixRes Table 6. Performance comparison and state of the art on ImageNet-v2, single crop with external data, sorted by top-1 accuracy. NS: Noisy Student [8]. BS: Billion-scale [2]. | Model size Top-1 (%) Top-5 (%) # 5. REFERENCES [1] Hugo Touvron, Andrea Vedaldi, Matthijs Douze, and H´erve J´egou, “Fixing the train-test resolution discrep- ancy,” Advances in Neural Information Processing Sys- tems, 2019. [2] Ismet Zeki Yalniz, Herv´e J´egou, Kan Chen, Manohar Paluri, and Dhruv Kumar Mahajan, “Billion-scale semi- arXiv supervised learning for image classification,” preprint arXiv:1905.00546, 2019. [3] Ekin Dogus Cubuk, Barret Zoph, Dandelion Man´e, Vi- jay Vasudevan, and Quoc V. Le, “Autoaugment: Learn- ing augmentation policies from data,” arXiv preprint arXiv:1805.09501, 2018. [4] Mingxing Tan and Quoc V. Le, “Efficientnet: Rethink- ing model scaling for convolutional neural networks,” arXiv preprint arXiv:1905.11946, 2019. [5] Lucas Beyer, Olivier J. H´enaff, A. Kolesnikov, Xiaohua Zhai, and Aaron van den Oord, “Are we done with Im- ageNet?,” arXiv preprint arXiv:2006.07159, 2020. [6] Yanping Huang, Yonglong Cheng, Dehao Chen, Hy- oukJoong Lee, Jiquan Ngiam, Quoc V. Le, and Zhifeng training of giant neural Chen, networks using pipeline parallelism,” arXiv preprint arXiv:1811.06965, 2018. [7] Dhruv Mahajan, Ross Girshick, Vignesh Ramanathan, Kaiming He, Manohar Paluri, Yixuan Li, Ashwin Bharambe, and Laurens van der Maaten, “Exploring the limits of weakly supervised pretraining,” in European Conference on Computer Vision, 2018. [8] Qizhe Xie, Eduard H. Hovy, Minh-Thang Luong, and Quoc V. 
Le, “Self-training with noisy stu- dent improves imagenet classification,” arXiv preprint arXiv:1911.04252, 2019. [9] Cihang Xie, Mingxing Tan, Boqing Gong, Jiang Wang, “Adversarial ex- arXiv preprint Alan L. Yuille, and Quoc V. Le, amples improve image recognition,” arXiv:1911.09665, 2019. [10] Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexan- der C. Berg, and Li Fei-Fei, “Imagenet large scale visual recognition challenge,” International journal of Com- puter Vision, 2015. [11] “Pre-trained efficientnet models,” https://github.com/rwightman/ pytorch-image-models/, 03-01. Accessed: 2020- [12] Barret Zoph, V. Vasudevan, Jonathon Shlens, and “Learning transferable architectures for Quoc V. Le, scalable image recognition,” Conference on Computer Vision and Pattern Recognition, 2018. [13] Chenxi Liu, Barret Zoph, Maxim Neumann, Jonathon Shlens, Wei Hua, Li-Jia Li, Li Fei-Fei, Alan Yuille, Jonathan Huang, and Kevin Murphy, “Progressive neu- ral architecture search,” in International Conference on Computer Vision, September 2018. [14] Ilija Radosavovic, Raj Prateek Kosaraju, Ross B. Gir- shick, Kaiming He, and Piotr Doll´ar, “Designing net- work design spaces,” arXiv preprint arXiv:2003.13678, 2020. [15] Ekin D. Cubuk, Barret Zoph, Jonathon Shlens, and Quoc V. Le, “Randaugment: Practical automated data arXiv augmentation with a reduced search space,” preprint arXiv:1909.13719, 2019. [16] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun, “Deep residual learning for image recognition,” in Conference on Computer Vision and Pattern Recogni- tion, June 2016. [17] Benjamin Recht, Rebecca Roelofs, Ludwig Schmidt, and Vaishaal Shankar, “Do imagenet classifiers gen- eralize to imagenet?,” in International Conference on Machine Learning, 2019.
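To make the fine-tuning step of Section 2 concrete, below is a minimal PyTorch sketch of a FixRes-style adjustment: keep the pretrained backbone frozen and re-train only the final layers at the larger test resolution, with label smoothing as mentioned in the training protocol. A torchvision ResNet-50 stands in for the EfficientNet models, and the resolution, optimizer, and hyper-parameters are illustrative placeholders rather than the paper's settings.

```python
# Illustrative FixRes-style fine-tuning at the test resolution (PyTorch).
# A torchvision ResNet-50 stands in for EfficientNet; values are placeholders.
import torch
import torch.nn as nn
import torchvision

test_res = 320                      # larger resolution used at test time
model = torchvision.models.resnet50(pretrained=True)

# Freeze the backbone; re-train only the classifier ("the classifier or a
# few top layers"). Real data would use RandomResizedCrop(test_res) and flips.
for p in model.parameters():
    p.requires_grad = False
for p in model.fc.parameters():
    p.requires_grad = True

criterion = nn.CrossEntropyLoss(label_smoothing=0.1)   # label smoothing
optimizer = torch.optim.SGD(model.fc.parameters(), lr=1e-3, momentum=0.9)

# One fine-tuning step on dummy data at the new resolution.
images = torch.randn(8, 3, test_res, test_res)
targets = torch.randint(0, 1000, (8,))
loss = criterion(model(images), targets)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

At evaluation time the model is then run on single center crops at the same enlarged resolution, which is what corrects the train/test discrepancy.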
{ "id": "1911.04252" }
2003.07990
Watching the World Go By: Representation Learning from Unlabeled Videos
Recent single image unsupervised representation learning techniques show remarkable success on a variety of tasks. The basic principle in these works is instance discrimination: learning to differentiate between two augmented versions of the same image and a large batch of unrelated images. Networks learn to ignore the augmentation noise and extract semantically meaningful representations. Prior work uses artificial data augmentation techniques such as cropping, and color jitter which can only affect the image in superficial ways and are not aligned with how objects actually change e.g. occlusion, deformation, viewpoint change. In this paper, we argue that videos offer this natural augmentation for free. Videos can provide entirely new views of objects, show deformation, and even connect semantically similar but visually distinct concepts. We propose Video Noise Contrastive Estimation, a method for using unlabeled video to learn strong, transferable single image representations. We demonstrate improvements over recent unsupervised single image techniques, as well as over fully supervised ImageNet pretraining, across a variety of temporal and non-temporal tasks. Code and the Random Related Video Views dataset are available at https://www.github.com/danielgordon10/vince
http://arxiv.org/pdf/2003.07990
Daniel Gordon, Kiana Ehsani, Dieter Fox, Ali Farhadi
cs.CV
null
null
cs.CV
20200318
20200507
0 2 0 2 # Watching the World Go By: Representation Learning from Unlabeled Videos y a M 7 Daniel Gordon1 Kiana Ehsani1 Dieter Fox1,2 Ali Farhadi1 1University of Washington 2Nvidia {xkcd,kianae,fox,ali}@cs.washington.edu ] # V C . s c [ 2 v 0 9 9 7 0 . 3 0 0 2 : v i X 1 r a Abstract. Recent unsupervised representation learning techniques show remarkable success on a variety of tasks. The basic principle in these works is instance discrimination: learning to differentiate between two augmented versions of the same image and a large batch of unrelated images. Prior work uses artificial data augmentation techniques such as cropping, and color jitter which can only affect the image in superficial ways and are not aligned with how objects actually change e.g. occlu- sion, deformation, viewpoint change. In this paper, we argue that videos offer this natural augmentation for free. Videos can provide entirely new views of objects, show deformation, and even connect semantically sim- ilar but visually distinct concepts. We propose Video Noise Contrastive Estimation, a method for using unlabeled video to learn strong, trans- ferable single image representations. We demonstrate improvements over recent unsupervised single image techniques, as well as over fully su- pervised ImageNet pretraining, across a variety of temporal and non-temporal tasks.1 # Introduction The world seen through our eyes is constantly changing. As we move through and interact with the world, we see much more than a single static image: ob- jects rotate revealing occluded regions, deform, the surroundings change, and we ourselves move. Our internal visual systems are constantly seeing temporally coherent images. Yet many popular computer vision models learn representa- tions which are limited to inference on single images, lacking temporal context. Visual representations learned from static images will be inherently limited to an understanding of the world as many unrelated static snapshots. This is especially true of recent unsupervised learning techniques [3, 7, 14, 16, 17, 29, 37, 42], all of which train on a set of highly-curated, well-balanced data: ImageNet [9]. Scaling up single-image techniques to larger, less-curated datasets like Instagram-1B [26] has not provided large improvements in performance [14]. There is only so much that can be learned from a single image: no amount of artificial augmentation can show a new view of an object or what might happen next in a scene. This dichotomy can be seen in Figure 1. 1 Code and the Random Related Video Views dataset are available at https://github.com/danielgordon10/vince. 2 D. Gordon et al. Standard Contrastive Loss Video Noise Contrastive Estimation Image Augmentations = H Color, Crop, Flip A v v Temporal Changes ¥ Deformation, New Views, Related Objects Y Fig. 1: The standard unsupervised learning setup learns to separate multiple aug- mentations of the same image. Our method uses truly novel views and temporal consistency which single images cannot provide. In order to move beyond this limitation, we argue that video supplies signif- icantly more semantically meaningful content than a single image. With video, we can see how the world changes, find connections between images, and more directly observe the underlying scene. Prior work using temporal cues has shown success in learning from unlabeled videos [30, 40, 36], but has not been able to surpass supervised pretraining. 
On the other hand, single image techniques have shown improvements over state-of-the-art by using Noise Contrastive Estima- tion [12] (NCE). In this work, we merge the two concepts with Video Noise Contrastive Estimation (VINCE), a method for using unlabeled videos as a ba- sis for learning visual representations. Instead of predicting whether two feature vectors come from the same underlying image, we task our network with pre- dicting whether two images originate from the same video. Not only does this allow our method to learn how a single object might change, it also enables learning which things might be in a scene together, e.g. cats are more likely to be in videos with dogs than with sharks. Additionally, we generalize the NCE technique to operate on multiple positive pairs from a single source. To facilitate this learning, we construct Random Related Video Views (R2V2), a set 960,000 frames from 240,000 uncurated videos. Using our learning technique, we achieve across-the-board improvements over the recent Momentum Contrast method [14] as well as over a network pretrained on supervised ImageNet on diverse tasks such as scene classification, activity recognition, and object tracking. Watching the World Go By: Representation Learning from Unlabeled Videos # 2 Related Work # 2.1 Noise Contrastive Estimation (NCE) The NCE loss [12] is at the center of many recent representation learning meth- ods [3, 7, 14, 16, 17, 29, 37, 42]. Similar to the triplet loss [6], the basic principle behind NCE is to maximize the similarity between an anchor data point and a positive data point while minimizing similarity to all other (negative) points. A challenge for using NCE in an unsupervised fashion is devising a way to construct positive pairs. Pairs should be different enough that a network learns a non-trivial representation, but structured enough that the learned representa- tion is useful for downstream tasks. A standard approach used by [3, 7, 14] is to generate the pairs via artificial data augmentation techniques such as color jitter, cropping, and flipping. Contrastive Multiview Coding [37] uses multiple “views” of a single source image such as intensity (L), color (ab), depth, or segmentation, training separate encoders for each view. PIRL [29] uses the jig- saw technique [31] to break the image into non-overlapping regions and learns a shared representation for the full image and the shuffled image patches. Similarly, Contrastive Predictive Coding (CPC) [16] uses crops of an image as “context” and predicts features for the unseen portions of the image. We provide a more natural data augmentation by using multiple frames from a single video. As a video progresses, the objects in the scene, the background, and the camera it- self may move, providing new views. Whereas augmentations on an image are constrained by a single snapshot in time, using different frames from a single video gives entirely new information about the scene. Additionally, rather than restricting our method to only use two frames from a video, we generalize the NCE technique to use many images from a single video, resulting in more com- putational reuse and a better final representation (AMDIM [3] similarly makes multiple comparisons per pair, but each anchor has only one positive). # 2.2 Unsupervised Learning Using Video Cues In contrast with supervised learning which requires hand-labeling, self-supervised and unsupervised learning acquire their labels for free. 
These techniques can create datasets which are orders of magnitude larger than comparable fully- supervised datasets. Whereas self-supervised learning requires extra setup during data generation [11, 32, 35], unsupervised learning can use existing data with- out the need for any specific generation constraints. Unsupervised single image methods such as auto-encoders [22], colorization [44], GANs [33], jigsaw [31], and NCE [42] rely on properties of the images themselves and can be applied to arbi- trary image datasets. However these image datasets cannot represent temporal information, nor can they show novel object views or occlusions. Video data automatically provides temporal cohesion which can be used as additional supervisory signal to learn these phenomena. There is a long history of using videos for low level [24, 27, 36] and high-level tasks [30, 40]. One of the most common unsupervised setups is using the present to predict the future. The 3 4 D. Gordon et al. Natural Language Processing community has embraced language modeling as an unsupervised task which has resulted in numerous breakthroughs [10, 28, 34]. However, similar systems applied to unlabeled videos have not revolutionized computer vision. These representations still underperform supervised methods due to several issues. Primarily, neighboring video frames do not change nearly as much as neighboring words in a sentence, so a network which learns the identity function would perform well at next frame prediction. Additionally, words are reused and can thus be tokenized in an effective way whereas images never repeat, especially between two disparate video sources. To avoid these issues, many have opted for other methods. Anand et al. [2] use the NCE loss to discriminate between temporally near frames and temporally far frames of ATARI gameplay but do not compare across games. Han et al. [13] also use the NCE loss and the CPC technique on a 3D-ResNet to learn spatio- temporal features. Aside from the NCE approach, other works have proposed alternative video training tasks. Misra et al. [30] shuffle the frames of a video and train a network to predict whether they are correctly temporally ordered. Wang et al. [40] and Vondrick et al. [38] use cycle consistency and color as a form of tracking points from one frame onto another. Earlier work from Wang et al. [39] uses hand-crafted features to track patches of a video and learn a correspondence between the patches. Our approach is inspired by these works but focuses on learning a semantic representation of the entire scene based on what is present in a single frame from the video. If a network can consistently represent visually dissimilar images from the same video with similar vectors, then not only has it learned how to recognize what is in each image, but it can also represent what might happen in the past or future of that scene. # 3 Methods In order to learn a semantically meaningful representation, we exploit the natural augmentations provided by unlabeled videos. In this section, we first outline the dataset generation process. We then describe the learning algorithm used to train our representation. # 3.1 Dataset Using ImageNet as a basis for representation learning has shown remarkable suc- cess both with supervised pretraining as well as unsupervised learning. However, even without labels, the images of ImageNet have been hand selected and are unnaturally balanced. 
Improving learned representations using existing techniques may require significantly larger datasets [14], but obtaining data with similar properties automatically and at scale is not practical. Instead, we turn to unlabeled videos as a source of additional supervision. In order to train on a diverse set of realistic video frames, we collect a new dataset which we call Random Related Video Views (R2V2). We use the following fast and automated procedure to generate the images in our dataset:

1. Use YouTube Search to find videos for a set of queries, and download the top K videos licensed under the Creative Commons. In practice we use the ImageNet 1K classes.
2. Filter out videos with static images using a simple threshold over the percent of pixels which change between two frames. This removes videos of static images, which is common for music uploads.
3. Pick a random point in the video and extract T images with a gap of G seconds between each image. In practice T = 4 and G = 5.

Using this procedure, we are able to construct R2V2 in under a day on a single machine.

| Dataset | Number of Images (Train) | Number of Videos (Train) | Number of Categories | Mean Image Size |
|---|---|---|---|---|
| ImageNet 1K [9] | 1.3 M | 0 | 1000 | (428, 406) |
| YouTube 8M [1] | 0 | 3.7 M | 3862 | - |
| Kinetics 400 [20] | 0 | 0.22 M | 400 | - |
| GOT-10k [19] | 1.4 M | >9 K | 563 | (1600, 912) |
| R2V2 (Ours) | 0.96 M | 0.24 M | - | (467, 280) |

Table 1: A comparison of various image and video datasets. While we have neither the most images nor the most videos, we provide good diversity between videos which is crucial for learning a strong, generic image representation. The GOT-10k [19] training set contains 9,000 video clips, but multiple clips may originate from a single source video.

Using ImageNet synsets for search queries provides reasonable visual diversity, but could be substituted with another set of queries. While we acknowledge that using YouTube’s Search feature is not truly random, this procedure resulted in significantly more diverse samples than using existing datasets like YouTube8M [1], which is heavily unbalanced with unnatural videos like “Video Games” and “Cartoons.” We do no additional data cleaning to ensure that the videos or extracted images actually contain the search term (many do not), nor do we search for “high interest” video segments as in Misra et al. [30]. We also discard the search term itself as a form of supervision. We find that a gap of 5 seconds between each saved image typically results in visually distinct but semantically related images. A much shorter gap results in images which are less individually distinct, and a much longer gap may result in large and unpredictable changes. A sample from each dataset can be seen in the supplemental material. We compare R2V2 with other popular datasets in Table 1. Because our dataset is constructed automatically, we can easily gather more data (more frames per video, more videos overall). In this work we limit the scale to roughly that of comparable datasets.

# 3.2 Noise Contrastive Estimation (NCE) Learning

Given a dataset of diverse video frames, we learn a representation which takes advantage of the structure of the data. We choose the Noise Contrastive Estimation technique [12], which has been popular in many recent works [3, 7, 14, 16, 17, 29, 37, 42], augmented with temporal supervision. The standard NCE implementation (used in [7, 3, 17]) uses the following procedure. First, a batch of anchor images A are selected.
Second, a batch of positive images P are selected, one for each anchor. Positive matches for one example are reused as negatives for the other samples without the need to recompute the features. The NCE loss for a single batch is shown in equation 1, where sim(X, Y) is any similarity metric between the two inputs. Gradients flow through the positive pairs (pulling the vectors together) as well as the negative pairs (pushing the vectors away from each other).

$$\mathcal{L}_{\text{NCE}} = -\frac{1}{n} \sum_{i=1}^{n} \log \frac{e^{\text{sim}(A_i, P_i)}}{\sum_{j=1}^{n} e^{\text{sim}(A_i, P_j)}} \qquad (1)$$

As in other works, we use the cosine similarity of the feature embeddings of the data points (as seen in equation 2) as the similarity metric due to its computational efficiency [3, 7, 14, 16, 29, 42]. The similarity is rescaled by a temperature τ to create peaked softmax distributions. f and g are neural networks.

$$\text{sim}(X, Y) = \frac{f(X)^\top g(Y)}{\tau \,\lVert f(X) \rVert \, \lVert g(Y) \rVert} \qquad (2)$$

# 3.3 Multi-Frame NCE

All recent works perform some sort of transformation on a single image to create Anchor-Positive pairs for the NCE loss [3, 7, 14, 16, 29, 42]. We refer to this as “Same Frame.” We differ from these works by using multiple images from a single video to form our pairs. This allows our network to see truly different views, deformations, similar objects, and larger scene changes. Additionally, this encodes temporal consistency as the semantic contents of a video are unlikely to change suddenly. For example in a video of two cats playing, the camera may focus on one cat, or may even pan to a previously unseen dog, but it is unlikely to pan to a shark. Note that in practice, we select frames with replacement, so it is still possible to pair an image with itself, making our potential pairs a strict superset of those in prior works.

# 3.4 Memory Banks and Momentum Contrast

NCE-based methods benefit greatly from large pools of negatives because this increases the likelihood of finding at least one hard negative for each positive example. In some works [29, 42], negatives are sampled from a large memory bank which was filled with earlier outputs of the network. The NCE loss can be modified to use negatives from prior batches as shown in equation 3 for a memory bank of negatives N1...m.

$$\mathcal{L}_{\text{NCE}} = -\frac{1}{n} \sum_{i=1}^{n} \log \frac{e^{\text{sim}(A_i, P_i)}}{\sum_{j=1}^{n} e^{\text{sim}(A_i, P_j)} + \sum_{l=1}^{m} e^{\text{sim}(A_i, N_l)}} \qquad (3)$$

Fig. 2: Left: Standard NCE using “Same Frame” where all correct pairs come from the same image. Middle: Standard NCE using “Multi-Frame” where correct pairs come from the same video. Right: Multi-Frame Multi-Pair NCE which uses more than one positive pair per video, resulting in more positives per batch. The gray boxes indicate the true match pairs. The MoCo Memory Bank adds more negatives for each anchor.

As the network trains, its output distribution will shift. A potential issue when using a memory bank is the network learns a simple classifier between the current distribution and an old one. Momentum Contrast (MoCo) [14] alleviates this issue by using a quickly updating primary network (f in equation 2) and a slowly updating secondary network (g). f is updated based on the NCE loss in equation 3 and g is updated using a momentum rule g ← αg + (1 − α)f. The memory bank is filled with previous outputs from the slowly changing network g, reducing the likelihood that f will be able to learn a simple recent batch/old batch classifier. For more details, see [14].
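To make equations 1–3 and the momentum rule above concrete, here is a minimal PyTorch-style sketch. It is illustrative only: the function names, the use of cross_entropy as the log-softmax over positives and negatives, and the default values for τ and α are assumptions made for this example, not the authors' released code.

```python
# A minimal sketch of equations 1-3 (cosine-similarity NCE with a memory bank)
# and the MoCo-style momentum update described above. Names and defaults are
# assumptions for illustration.
import torch
import torch.nn.functional as F

def nce_loss(anchors, positives, memory_bank, tau=0.07):
    """anchors, positives: [n, d] embeddings from f and g; memory_bank: [m, d]."""
    a = F.normalize(anchors, dim=1)                  # cosine similarity via L2-normalized dot products
    p = F.normalize(positives, dim=1)
    mem = F.normalize(memory_bank, dim=1)
    logits_pos = a @ p.t() / tau                     # [n, n]: diagonal holds sim(A_i, P_i)
    logits_neg = a @ mem.t() / tau                   # [n, m]: memory-bank negatives (equation 3)
    logits = torch.cat([logits_pos, logits_neg], 1)  # [n, n + m]
    targets = torch.arange(a.size(0), device=a.device)  # positive for row i sits at column i
    return F.cross_entropy(logits, targets)          # = -mean log softmax of the positive entry

@torch.no_grad()
def momentum_update(f_net, g_net, alpha=0.999):
    """g <- alpha * g + (1 - alpha) * f, the slowly-updating secondary encoder."""
    for pf, pg in zip(f_net.parameters(), g_net.parameters()):
        pg.data.mul_(alpha).add_(pf.data, alpha=1 - alpha)
```

Because the other in-batch positives appear as columns of the logits matrix, they act as negatives for every other anchor, exactly as described for equation 1.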
# 3.5 Multi-Pair NCE

By using MoCo, we increase the number of useful negatives without a large computational cost. Yet MoCo only uses n positive pairs per batch of size n. We can further increase the number of positives per batch (while holding batch size constant) by simply selecting v videos and k samples from each video where k = n/v. By computing the pairwise similarity between each pair, we reuse each positive sample k times, resulting in k²v = n²/v positives per batch. In the extreme, every sample from a batch could belong to the same class, resulting in n² positives; however this causes noisier, more extreme gradients which makes training unstable. Using a simple block-diagonal mask as shown in Figure 2 and Algorithm 1, we can efficiently compute the similarities and NCE loss both between elements of the batch and across a memory bank, achieving a large number of positive comparisons per batch while retaining a large negative size. In practice, we notice no meaningful computational cost to this approach. We refer to this full method, using Multi-Pair on video data and the Multi-Frame learning procedure, as Video Noise Contrastive Estimation (VINCE).

Algorithm 1: Python-style pseudo code for Multi-Pair NCE.

```python
def multi_pair_nce(
    f_output,      # [v, k, d] output of f encoder
    g_output,      # [v, k, d] output of g encoder
    moco_mem,      # [m, d]
    mask,          # [n, (n + m)] block diagonal boolean matrix
    temperature):  # [1]
  f_output = f_output.reshape(v * k, d)                    # [n, d]
  g_output = g_output.reshape(v * k, d)                    # [n, d]
  compare = concatenate((g_output, moco_mem), axis=0)      # [(n + m), d]
  similarities = matmul(f_output, compare.T)               # [n, (n + m)]
  similarities /= temperature
  pos_similarities = similarities[mask]                    # [n, k]
  neg_similarities = similarities[!mask]                   # [n, (n + m - k)]
  exp_pos_sim = exp(pos_similarities)                      # [n, k]
  normalizing_constant = broadcast(                        # [n, k]
      reduce_sum(exp(neg_similarities), axis=1),
      shape(exp_pos_sim))
  score = exp_pos_sim / (exp_pos_sim + normalizing_constant)
  loss = -mean(log(score))
  return loss
```

In practice we use the Log-Sum-Exp trick for numerical stability but omit it here for clarity. broadcast repeats the input until it matches the provided dimensions. !mask flips the booleans of each point in the mask.

Using clusters of positives has the additional benefit of forcing each feature to match with multiple other features at once. For videos, this means a representation for a single image will be pulled towards some global video feature, resulting in a more consistent representation over a video. For single images, this more strongly enforces invariance to data augmentation.

# 4 Experiments

We evaluate our method (VINCE) on both single-image and temporal tasks by freezing our learned representation and adding a small network (in most cases a single linear layer) for adaptation to new end-tasks. Our learned representation transfers well to a variety of visual tasks, especially tasks which require temporal reasoning. To show this, we compare with multiple strong baselines:

– MoCo-IN: Network pretrained on ImageNet [9] using the MoCo algorithm.
– MoCo-R2V2: Network pretrained on R2V2 using the MoCo algorithm. This uses exactly the same data as VINCE but the Same Frame technique described in 3.3. We also prevent multiple images from the same video being in the MoCo Memory Bank at the same time.
– Sup-IN: Network pretrained on fully supervised ImageNet.
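Before turning to these comparisons, one more implementation note on Section 3.5: the block-diagonal mask consumed by Algorithm 1 can be built as in the following minimal sketch. The function name and the NumPy-based construction are assumptions for illustration, not the paper's code.

```python
# A minimal sketch of the block-diagonal positive mask from Section 3.5:
# v videos, k frames each, n = v * k samples, m memory-bank entries that
# are never positives.
import numpy as np

def block_diagonal_mask(v, k, m):
    video_ids = np.repeat(np.arange(v), k)                 # [n]: video index of each sample
    batch_mask = video_ids[:, None] == video_ids[None, :]  # [n, n]: True iff same video
    memory_mask = np.zeros((v * k, m), dtype=bool)         # memory entries are all negatives
    return np.concatenate([batch_mask, memory_mask], axis=1)  # [n, n + m]

mask = block_diagonal_mask(v=2, k=3, m=4)
print(mask.astype(int))  # 3x3 blocks of ones on the diagonal, zeros elsewhere
```

Each row contains exactly k True entries, matching the [n, k] shape of pos_similarities in Algorithm 1.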
To validate the benefits of unsupervised, uncleaned video, we additionally compare against an unsupervised, uncleaned image dataset. Since ImageNet itself required time, effort, and money to create, we construct a new static image dataset analogous to our video dataset. Specifically, we search Google Images for the ImageNet synsets and download the top K results for each category. We refer to this as MoCo-G.

| | ImageNet [9] (Linear, Top-1 Acc.) | SUN Scenes [43] (Linear, Top-1 Acc.) | Kinetics 400 [20] (LSTM → Linear, Top-1 Acc.) | OTB 2015 [41] (1x1 Conv, Precision) | OTB 2015 [41] (1x1 Conv, Success) |
|---|---|---|---|---|---|
| ResNet18: Sup-IN | 0.696 | 0.491 | 0.207 | 0.557 | 0.396 |
| ResNet18: MoCo-IN | 0.447 | 0.487 | 0.336 | 0.583 | 0.429 |
| ResNet18: MoCo-G | 0.393 | 0.444 | 0.313 | 0.551 | 0.413 |
| ResNet18: MoCo-R2V2 | 0.358 | 0.450 | 0.318 | 0.555 | 0.403 |
| ResNet18: VINCE (Ours) | 0.400 | 0.495 | 0.362 | 0.629 | 0.465 |
| ResNet18: Relative Gain over MoCo-R2V2 | 11.91% | 9.93% | 13.85% | 13.33% | 15.38% |
| ResNet50: Sup-IN | 0.762 | 0.593 | 0.305 | 0.458 | 0.320 |
| ResNet50: MoCo-V2-IN (our impl.) | 0.652 | 0.608 | 0.459 | 0.300 | 0.260 |
| ResNet50: MoCo-R2V2 | 0.536 | 0.581 | 0.456 | 0.386 | 0.299 |
| ResNet50: VINCE (Ours) | 0.544 | 0.611 | 0.491 | 0.402 | 0.300 |
| ResNet50: Relative Gain over MoCo-R2V2 | 1.36% | 5.29% | 7.72% | 4.15% | 0.33% |

Table 2: Comparison of representation performance across a variety of end tasks: image classification (ImageNet [9]), scene classification (SUN Scenes [43]), action recognition (Kinetics 400 [20]), and tracking (OTB 2015 [41]). We show improvements over MoCo trained on the same data on all tasks, and outperform MoCo trained on ImageNet as well as supervised pretraining on ImageNet on all tasks but ImageNet itself (and tracking for ResNet50). Each representation uses the same ResNet convolutional backbone, sharing weights across all tasks. Linear (for Kinetics, LSTM → Linear) classifiers are the only learned weights for each end task.

# 4.1 Target Tasks

We compare each method on several diverse end-tasks using both ResNet18 [15] and ResNet50 backbones. More training implementation details can be found in Appendix 1. Results for these tasks are shown in Table 2. We train all end-task models using Adam [21] and a shared learning rate schedule per task. For each dataset, we use standard data augmentation approaches (crop, flip, and color jitter, except for tracking). One overall trend to note is the relative gain over MoCo-R2V2. If the single-frame algorithm performed as well as our multi-frame method on temporal tasks, it would indicate minimal temporal understanding. However, the relative gains of VINCE over MoCo-R2V2 on Kinetics (13.85%) and tracking (13.33% and 15.38%) are higher than those on single-frame tasks, showing that our method can incorporate temporal cues.

ImageNet: For this task, we use our frozen learned representations, adding a single linear layer after the global average pool. Although none of the methods match the fully supervised performance of ResNet18 on ImageNet, they do achieve reasonable performance given only a single linear layer. It is unsurprising that MoCo pretrained on ImageNet images (MoCo-IN) outperforms our method (0.447 vs 0.400) due to the domain shift between pretrain and end-task. However MoCo pretrained on R2V2 (MoCo-R2V2) suffers nearly a 2× drop in accuracy (8.9%) compared to our method (4.7%), indicating that pretraining on multi-frame matching provides a clear benefit over single frame pairs. Our method, which has never seen an image from ImageNet before, still learns a representation which generalizes well to this new type of data. Even drawing images from a similar class distribution (MoCo-G) does not outperform our method.
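The frozen-feature evaluation described above can be summarized by the following minimal linear-probe sketch. The torchvision backbone stub, learning rate, and class count are illustrative assumptions rather than the exact end-task configuration.

```python
# A minimal linear-probe sketch: a single linear layer trained with Adam on top
# of frozen, global-average-pooled backbone features. Illustrative only.
import torch
import torch.nn as nn
import torchvision

backbone = torchvision.models.resnet18(weights=None)  # pretrained VINCE weights would be loaded here
backbone.fc = nn.Identity()                           # keep the 512-d pooled features
for p in backbone.parameters():
    p.requires_grad = False                           # representation stays frozen
backbone.eval()

probe = nn.Linear(512, 1000)                          # e.g. 1000 ImageNet classes
optimizer = torch.optim.Adam(probe.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

def probe_step(images, labels):
    with torch.no_grad():
        feats = backbone(images)                      # [batch, 512]
    loss = criterion(probe(feats), labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```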
SUN Scenes: SUN Scenes is a classification dataset in which each image is categorized into one of 397 possible scene types such as airplane cabin, bedroom, and coffee shop. Again we train a single linear layer on top of each pretrained network. This data is quite similar to ImageNet in that each image is well-curated and contains single, unambiguous subjects. As such, the ImageNet fully supervised baseline transfers quite well to SUN Scenes. However VINCE outperforms Sup-IN by a small margin. Again we note a large improvement of VINCE over MoCo-R2V2 (0.495 vs. 0.450). This shows that our method learns to recognize not just the main subject of an image but also the surrounding scene, which requires a richer understanding of the world.

Kinetics 400: This dataset consists of 10 second clips from YouTube videos and action labels for each segment. We first download each video and subsample each clip to one frame per second (10 frames per clip). We train a single layer LSTM [18] followed by a single linear layer to predict the action category for each segment. Kinetics acts as a crucial test to evaluate whether our model learns temporal cues. VINCE greatly outperforms all other methods, whereas traditional baselines such as fine-tuning supervised ImageNet do not adapt well at all. This shows that contrary to popular belief, representations pretrained on ImageNet may not be a good fit for other visual domains, especially on temporal tasks.

Object Tracking Benchmark (OTB) 2015: OTB 2015 is a popular tracking dataset. Given an initial bounding box around an arbitrary object in the first image, a model must locate the object in the following frames. We use the SiamFC [4] tracking algorithm on top of our learned representation. SiamFC first crops the initial bounding box and extracts spatial features using a CNN. For each frame, it localizes the object by extracting spatial features on the full frame and convolving the template features with the full image features. This process is similar to template matching [5] but in deep feature space. For a more complete explanation, see [4]. To extract these features, we use the outputs of each model from before the average pooling. We additionally use dilated convolutions rather than strides for the second and third ResNet18 block to preserve spatial information even though the initial representation was pretrained using strides. We add a single 1x1 convolution layer to each representation. As OTB 2015 is only a set of test videos, we train on the GOT-10k dataset [19], a dataset of 9000 training clips and 1.4 million images.

OTB is evaluated using two metrics – precision and success. Precision measures the percentage of frames where the (normalized) center error is less than a certain threshold, using an area-under-the-curve evaluation. Similarly, success measures the percentage of frames where the Intersection over Union is more than a certain threshold, again using the area-under-the-curve.

| Images Per Video | ImageNet | SUN Scene | Kinetics 400 | OTB 2015 Precision | OTB 2015 Success |
|---|---|---|---|---|---|
| 1: Same Frame | 0.358 | 0.450 | 0.318 | 0.555 | 0.403 |
| 2: Multi-Frame | 0.381 | 0.478 | 0.361 | 0.622 | 0.464 |
| 8: Multi-Frame Multi-Pair | 0.400 | 0.495 | 0.362 | 0.629 | 0.465 |

Table 3: Method ablation for VINCE. We compare using one source image with two augmentations (the standard approach), two different images, or a set of different images. Using Multi-Frame results in a large boost across the board. Multi-Frame Multi-Pair further increases the power of the representation. Note that all methods use the entire dataset, but only Multi-Frame methods use multiple images from a video within one batch.
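For readers unfamiliar with SiamFC, the matching step described in the OTB paragraph above — cross-correlating template features with search-frame features — amounts to a single conv2d call in deep feature space. The sketch below is a simplified illustration with assumed shapes, not the full tracker.

```python
# A minimal sketch of SiamFC-style matching: the template features act as a
# convolution kernel over the search-frame features (template matching in
# deep feature space). Shapes and names are assumptions.
import torch
import torch.nn.functional as F

def siamfc_response(template_feat, search_feat):
    """template_feat: [1, c, ht, wt]; search_feat: [1, c, hs, ws] with hs >= ht."""
    # conv2d with the template as the kernel is exactly a cross-correlation
    return F.conv2d(search_feat, template_feat)  # [1, 1, hs-ht+1, ws-wt+1]

template = torch.randn(1, 512, 6, 6)
search = torch.randn(1, 512, 22, 22)
resp = siamfc_response(template, search)
peak = torch.nonzero(resp[0, 0] == resp.max())   # predicted object location
```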
A representation which works well for SiamFC would have the property that the cross correlation of two images of the same object is high, but the cross correlation of two different objects, or a poorly cropped image of the same object, is low. Pretraining our representations on multiple frames from the same video coincides quite well with the first objective, however since we use cropped data augmentations, the representations tend to be somewhat invariant to poorly-cropped candidates. Still, the models perform quite well across a variety of difficult tracking instances. Our ResNet18 model transfers significantly better than all other methods indicating a clear benefit to using temporal cues during pretraining. We find, somewhat unexpectedly, that the ResNet18 network fares better than the ResNet50. A likely explanation for this is that the original SiamFC method uses AlexNet [23] with no padding in a fully-convolutional manner. When using ResNet, padding must be applied to keep the outputs the same dimensionality as the inputs. Thus, at training time the network may latch onto zero-padding cues which will not be applicable at test time. This becomes more of an issue the larger the receptive field, which is why ResNet50 struggles but ResNet18 is somewhat less affected.

# 4.2 Method Ablation

We validate the effectiveness of Multi-Frame (Sec. 3.3) and Multi-Pair (Sec. 3.5) learning by ablating the number of images from each video used in a batch of comparisons. Due to computational constraints, we only perform ablations on ResNet18. The results of this ablation are shown in Table 3. The first row is equivalent to the procedure done in MoCo [14] i.e. the anchor and positive pairs are two data augmentations of the same image. The second row uses the MoCo procedure as well, however the anchors and positives may be from different images from the same video. The third row uses our Multi-Pair NCE method taking 4 positives and 4 anchors from each video, resulting in 16 positive pairs. A pictorial representation can be seen in Figure 2. Note that when selecting images for row 2 and 3, we use sampling with replacement, making our method a strict super-set of MoCo.

We observe across-the-board improvements from both modifications to the MoCo approach. The majority of the improvement comes from using two non-identical frames for matching, but we still gain an additional improvement from using Multi-Pair NCE. Our intuition is that using the Multi-Pair NCE creates gradients that pull each feature towards a global video representation whereas the standard NCE remains more instance-based, only moving a representation in one direction at a time.

| Pretraining Data | ImageNet | SUN Scene | Kinetics 400 | OTB 2015 Precision | OTB 2015 Success |
|---|---|---|---|---|---|
| R2V2 IN-Queries | 0.400 | 0.495 | 0.362 | 0.629 | 0.465 |
| R2V2 YT8M URLs | 0.367 | 0.478 | 0.343 | 0.667 | 0.492 |
| Kinetics 400 URLs | 0.368 | 0.494 | 0.390 | 0.612 | 0.456 |

Table 4: Pretraining data ablation for VINCE. Each method uses exactly the same training setup, only substituting one data source for another. Since R2V2 uses ImageNet search queries, it outperforms the others on ImageNet. Similarly, pretraining on Kinetics 400 videos results in better end performance on Kinetics.
Thus, we would expect the Multi-Pair NCE features to be more holistically semantic whereas the standard NCE may retain more uniquely identifying features. In fact, we observe a larger performance gap on the more semantic ImageNet and SUN Scene tasks. In contrast, because the Kinetics model uses an LSTM to reason over all input images at once, instance- level features are equally useful as global video features for overall accuracy. # 4.3 Pretraining Data Ablation In Table 4 we explore the effect of different pretraining datasets on end-task performance. For each experiment, we use VINCE but use video data from three different sources: our method of searching ImageNet synset queries, using the URLs from YouTube 8M [1], and using the URLs from Kinetics 400 [20]. Again, we only test ResNet18. Our YouTube8M(YT8M) pretraining data uses the same filtering procedure as R2V2 and contains 5.8 million images from 1.4 million videos. As noted in Table 2 MoCo-IN results, using the same dataset for pre- training as the end-task results in a boost in performance on that specific task but does not indicate that the representation will be better on all tasks. We see this trend is true again when pretraining on Kinetics data. Similarly, since R2V2 uses ImageNet synset for search queries, pretraining on it performs better on ImageNet than the other less-aligned datasets. In general, this would indicate that given a large enough set of diverse videos, pretraining directly on the unlabeled source data would result in the best per- forming representation on that data. If this is not possible, then pretraining on a large external source of data may still result in a useful representation. It also indicates that the VINCE method works well on a variety of different pretraining datasets. The increased performance on tracking when using YT8M data could be explained by it simply having access to a larger number of video sources and frames. For generic object tracking, class diversity may be less important than number of samples because the class identity is ignored. Watching the World Go By: Representation Learning from Unlabeled Videos # 4.4 Qualitative Results We additionally provide two qualitative analyses to better understand the success and failure cases of VINCE: Nearest Neighbors, and t-SNE. Nearest Neighbors: We additionally query ImageNet Val and a set of test videos for nearest neighbor matches, taking at most one neighbor per video. We visualize the top 5 neighbors for VINCE, MoCo-R2V2, and MoCo-IN in Figure 3. We observe that VINCE seems to understand the semantics of an image more than MoCo-R2V2 and MoCo-IN. For instance, although MoCo-R2V2 and MoCo-IN find other control panels and buttons in query 1, they do not make the scene-level connection to car interiors as well as VINCE does. Query 2 shows an interesting quirk case of our method. Rather than matching the semantics of the image, VINCE relies on the news logo as a differentiating feature due to its discriminative nature. Each image in VINCE’s query 2 results is from a separate video, but from the same news source. For the ImageNet queries, despite never seeing ImageNet inputs during pretraining, VINCE is able to find good matches as well as MoCo-IN which was trained using only ImageNet inputs. t-SNE: Using a set of held-out video frames, we project the 64-D embedding space from VINCE to 2D using t-SNE [25] and visualize the formed clusters in Figure 4. 
Not only does this assist in verifying the quality of the embedding, it also serves as a visual method for evaluating the diversity of the dataset itself. The largest of the clusters seems to be the face cluster. YouTube is full of videos of people looking and talking directly to a camera, and our random subset reflects this pattern. Other interesting, yet unexpected clusters emerge as well such as cats (YouTube loves cats), hands (demo videos), and food (cooking videos). R2V2 ImageNet Query Image Top 5 Nearest Neighbors Query image Top 5 Nearest Neighbors VINCE VINCE MoCo R2v2 MoCo R2v2 MoCo-IN MoCo-N VINCE VINCE MoCo R2v2 MoCo R2v2 ‘MoCo IN MoCo-N Fig. 3: Nearest neighbor results for a sampling of query images from R2V2 and ImageNet using various models. VINCE shows a clear understanding of each image and finds highly relevant neighbors. 13 14 D. Gordon et al. Under Water Fig. 4: t-SNE embedding of images from R2V2 test set. # 5 Conclusions In this work we introduced Video Noise Contrastive Estimation, a process for us- ing unlabeled videos to learn an unsupervised image representation. By training on multiple images from the same video instance, we learn from more natu- ral changes such as deformation and viewpoint change rather than 2D artificial augmentations. To learn from a large variety of diverse video clips, we collect Random Related Video Views in a completely automated fashion. Using geo- metric video cues like structure from motion and optical flow could provide an even richer dataset, but we leave this as a promising future direction. We show across-the-board improvements over the recently proposed MoCo [14] technique on a wide variety of tasks, and we believe Video Noise Contrastive Estimation will extend to other unsupervised methods such as SimCLR [7] and PIRL [29] as well as other end-tasks. As representation learning techniques improve, we believe that videos – rather than images – will prove an invaluable resource for pushing the state-of-the-art forward. # 6 Acknowledgements This work is in part supported by the Nvidia Graduate Research Fellowship, NSF-NRI-1637479, NSF IIS 1652052, IIS 17303166, DARPA N66001-19-2-4031, 67102239, and gifts from Allen Institute for Artificial Intelligence. Watching the World Go By: Representation Learning from Unlabeled Videos # References 1. Abu-El-Haija, S., Kothari, N., Lee, J., Natsev, P., Toderici, G., Varadarajan, B., Vijayanarasimhan, S.: Youtube-8m: A large-scale video classification benchmark. arXiv preprint arXiv:1609.08675 (2016) 2. Anand, A., Racah, E., Ozair, S., Bengio, Y., Côté, M.A., Hjelm, R.D.: Unsuper- vised state representation learning in atari. In: Advances in Neural Information Processing Systems. pp. 8766–8779 (2019) 3. Bachman, P., Hjelm, R.D., Buchwalter, W.: Learning representations by maximiz- ing mutual information across views. In: Advances in Neural Information Process- ing Systems. pp. 15509–15519 (2019) 4. Bertinetto, L., Valmadre, J., Henriques, J.F., Vedaldi, A., Torr, P.H.: Fully- convolutional siamese networks for object tracking. In: European conference on computer vision. pp. 850–865. Springer (2016) 5. Briechle, K., Hanebeck, U.D.: Template matching using fast normalized cross cor- relation. In: SPIE Defense + Commercial Sensing (2001) 6. Chechik, G., Sharma, V., Shalit, U., Bengio, S.: Large scale online learning of image similarity through ranking. Journal of Machine Learning Research 11(Mar), 1109– 1135 (2010) 7. 
Chen, T., Kornblith, S., Norouzi, M., Hinton, G.: A simple framework for con- trastive learning of visual representations. arXiv preprint arXiv:2002.05709 (2020) 8. Chen, X., Fan, H., Girshick, R., He, K.: Improved baselines with momentum con- trastive learning (2020) 9. Deng, J., Dong, W., Socher, R., Li, L.J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: IEEE conference on computer vision and pattern recognition. pp. 248–255. Ieee (2009) 10. Devlin, J., Chang, M.W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidi- rectional transformers for language understanding. In: NAACL-HLT (2019) 11. Godard, C., Mac Aodha, O., Brostow, G.J.: Unsupervised monocular depth esti- mation with left-right consistency. In: IEEE Conference on Computer Vision and Pattern Recognition (July 2017) 12. Gutmann, M., Hyvärinen, A.: Noise-contrastive estimation: A new estimation prin- ciple for unnormalized statistical models. In: Thirteenth International Conference on Artificial Intelligence and Statistics. pp. 297–304 (2010) 13. Han, T., Xie, W., Zisserman, A.: Video representation learning by dense predictive coding. In: Proceedings of the IEEE International Conference on Computer Vision Workshops (2019) 14. He, K., Fan, H., Wu, Y., Xie, S., Girshick, R.: Momentum contrast for unsupervised visual representation learning. arXiv preprint arXiv:1911.05722 (2019) 15. He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: IEEE conference on computer vision and pattern recognition. pp. 770–778 (2016) 16. Hénaff, O.J., Razavi, A., Doersch, C., Eslami, S., Oord, A.v.d.: Data-efficient image recognition with contrastive predictive coding. arXiv preprint arXiv:1905.09272 (2019) 17. Hjelm, R.D., Fedorov, A., Lavoie-Marchildon, S., Grewal, K., Bachman, P., Trischler, A., Bengio, Y.: Learning deep representations by mutual information estimation and maximization. In: International Conference on Learning Represen- tations (2019), https://openreview.net/forum?id=Bklr3j0cKX 18. Hochreiter, S., Schmidhuber, J.: Long short-term memory. Neural Computation 9, 1735–1780 (1997) 15 16 D. Gordon et al. 16 19. Huang, L., Zhao, X., Huang, K.: Got-10k: A large high-diversity benchmark for generic object tracking in the wild. IEEE Transactions on Pattern Analysis and Machine Intelligence p. 1–1 (2019). https://doi.org/10.1109/tpami.2019.2957464, http://dx.doi.org/10.1109/TPAMI.2019.2957464 20. Kay, W., Carreira, J., Simonyan, K., Zhang, B., Hillier, C., Vijayanarasimhan, S., Viola, F., Green, T., Back, T., Natsev, P., et al.: The kinetics human action video dataset. arXiv preprint arXiv:1705.06950 (2017) 21. Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014) 22. Kramer, M.A.: Nonlinear principal component analysis using autoassociative neu- ral networks. AIChE journal 37(2), 233–243 (1991) 23. Krizhevsky, A., Sutskever, I., Hinton, G.E.: Imagenet classification with deep con- volutional neural networks (2012) 24. Li, Z., Dekel, T., Cole, F., Tucker, R., Snavely, N., Liu, C., Freeman, W.T.: Learning the depths of moving people by watching frozen people. In: IEEE Conference on Computer Vision and Pattern Recognition. pp. 4521–4530 (2019) 25. Maaten, L.v.d., Hinton, G.: Visualizing data using t-sne. Journal of machine learn- ing research 9(Nov), 2579–2605 (2008) 26. 
Mahajan, D., Girshick, R., Ramanathan, V., He, K., Paluri, M., Li, Y., Bharambe, A., van der Maaten, L.: Exploring the limits of weakly supervised pretraining. In: European Conference on Computer Vision. pp. 181–196 (2018) 27. Meister, S., Hur, J., Roth, S.: Unflow: Unsupervised learning of optical flow with a bidirectional census loss. In: Thirty-Second AAAI Conference on Artificial Intel- ligence (2018) 28. Mikolov, T., Sutskever, I., Chen, K., Corrado, G.S., Dean, J.: Distributed repre- sentations of words and phrases and their compositionality. In: Advances in neural information processing systems. pp. 3111–3119 (2013) 29. Misra, I., van der Maaten, L.: Self-supervised learning of pretext-invariant repre- sentations. arXiv preprint arXiv:1912.01991 (2019) 30. Misra, I., Zitnick, C.L., Hebert, M.: Shuffle and learn: unsupervised learning using temporal order verification. In: European Conference on Computer Vision. pp. 527–544. Springer (2016) 31. Noroozi, M., Favaro, P.: Unsupervised learning of visual representations by solving jigsaw puzzles. In: European Conference on Computer Vision. pp. 69–84. Springer (2016) 32. Pinto, L., Gandhi, D., Han, Y., Park, Y.L., Gupta, A.: The curious robot: Learn- ing visual representations via physical interactions. In: European conference on computer vision (2016) 33. Radford, A., Metz, L., Chintala, S.: Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434 (2015) 34. Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I.: Language models are unsupervised multitask learners (2019) 35. Schmidt, T., Newcombe, R., Fox, D.: Self-supervised visual descriptor learning for dense correspondence. IEEE Robotics and Automation Letters 2(2), 420–427 (2016) 36. Srivastava, N., Mansimov, E., Salakhudinov, R.: Unsupervised learning of video representations using lstms. In: International conference on machine learning. pp. 843–852 (2015) 37. Tian, Y., Krishnan, D., Isola, P.: Contrastive multiview coding. arXiv preprint arXiv:1906.05849 (2019) Watching the World Go By: Representation Learning from Unlabeled Videos 38. Vondrick, C., Shrivastava, A., Fathi, A., Guadarrama, S., Murphy, K.: Tracking emerges by colorizing videos. In: European Conference on Computer Vision. pp. 391–408 (2018) 39. Wang, X., Gupta, A.: Unsupervised learning of visual representations using videos. In: IEEE International Conference on Computer Vision. pp. 2794–2802 (2015) 40. Wang, X., Jabri, A., Efros, A.A.: Learning correspondence from the cycle- consistency of time. In: IEEE Conference on Computer Vision and Pattern Recog- nition. pp. 2566–2576 (2019) 41. Wu, Y., Lim, J., Yang, M.H.: Object tracking benchmark. IEEE Transactions on Pattern Analysis and Machine Intelligence 37(9), 1834–1848 (2015) 42. Wu, Z., Xiong, Y., Yu, S.X., Lin, D.: Unsupervised feature learning via non- parametric instance discrimination. In: IEEE Conference on Computer Vision and Pattern Recognition. pp. 3733–3742 (2018) 43. Xiao, J., Hays, J., Ehinger, K.A., Oliva, A., Torralba, A.: Sun database: Large-scale scene recognition from abbey to zoo. In: IEEE Conference on Computer Vision and Pattern Recognition. pp. 3485–3492. IEEE (2010) 44. Zhang, R., Isola, P., Efros, A.A.: Colorful image colorization. In: European confer- ence on computer vision. pp. 649–666. Springer (2016) 17 18 D. Gordon et al. 
# Appendix 1 Implementation Details

For training our ResNet18 [15] representations we use the network up to the global average pooling followed by a fully connected layer (512 x 512), LeakyReLU, and a final embedding layer (512 x 64). The features are L2-normalized and multiplied by τ = 1/0.07 as in MoCo [14]. We use 8 GPUs, a training batch size of 256, a MoCo Memory Bank of size 65536, and a g-network momentum of 0.999. For multi-GPU training we employ the Shuffle-BN technique, shuffling both the anchors and the positives to reduce the correlation between batches filled with multiple images from the same video. We use SGD with a learning rate of 0.03, momentum=0.9, and weight decay=0.0001. All of these hyperparameters are shared with MoCo [14]. All methods are trained for approximately 450k iterations. When selecting frames from a video, we pick with replacement. In addition to the natural augmentation, we perform standard data augmentation (color jitter, cropping, flipping) on the inputs. This prevents the network from relying too heavily on shared video statistics like mean frame color. After cropping, all images are resized to 224×224. It is worth noting that both VINCE and our data (R2V2) can be used with other network architectures or learning algorithms such as AMDIM [3], PIRL [29], or SimCLR [7]. We choose ResNet18 and MoCo due to implementation simplicity and relatively low computational constraints.

For ResNet50 training, we closely match the hyperparameters in MoCo v2 [8]. Specifically, we use the blur augmentation and stronger color augmentation and a cosine annealing learning rate for 286,000 iterations (200 epochs of ImageNet or equivalent on R2V2 regardless of the number of unique positives per batch). The batch size we use is 896 with an initial learning rate of 0.105 and an embedding dimensionality of 128. The temperature parameter used was τ = 1/0.2. We did not perform any hyperparameter searches for VINCE or R2V2, so these results may be suboptimal.

# Appendix 2 Dataset Samples

Existing datasets such as YouTube8M [1] and Kinetics400 [20] provide a large number of YouTube links over a diverse set of videos. However, these datasets are highly unbalanced and contain many videos undesirable for learning strong visual representations. For example, the second most common category in YouTube8M, comprising 540k of the 3.7 million training videos, is “Video Game,” and the fifth is “Cartoon” with 240k. The category “Minecraft” itself has over 57,000 videos in the dataset whereas the category “Pear” has only 138. Kinetics contains only videos of humans performing actions. Alternative datasets such as GOT-10k [19] provide a comparatively small number of videos but with dense annotations (in GOT-10k’s case for object tracking).

Fig. F1: Random sampling of pairs of images from videos in each dataset. In GOT-10k, sometimes different video clips are segments from the same original video as seen in the first and second sample. Images are square cropped for visualization purposes only.

# Appendix 2.1 Random Related Video Views Samples

We show more samples from R2V2 in Figure F2. Videos each have four images 150 frames apart. Each separate video (outlined in blue) lists its corresponding YouTube link.

Fig. F2: Sample from Random Related Video Views (train set).

Fig. F3: Precision (a) and Success (b) plots for OTB 2015 for various backbones.
# Appendix 3 Precision and Success Plots for OTB 2015 We provide full breakdowns of the Precision and Success of each method on OTB 2015 [41]. The values in the legend correspond to the (mean) area under the curve for each method. # Appendix 4 t-SNE We provide the full resolution t-SNE [25] image for further inspection. Best viewed on a screen. Fig. F4: Full Resolution t-SNE embedding of images from R2V2 test set.
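As a closing illustration of the training head described in Appendix 1 (global average pooling, a 512 x 512 fully connected layer, LeakyReLU, a 512 x 64 embedding layer, L2 normalization, and scaling by τ = 1/0.07), here is a minimal sketch. The module and argument names are assumptions, not the released implementation.

```python
# A minimal sketch of the projection head from Appendix 1. Illustrative only.
import torch.nn as nn
import torch.nn.functional as F

class ProjectionHead(nn.Module):
    def __init__(self, in_dim=512, hidden_dim=512, out_dim=64, tau=0.07):
        super().__init__()
        self.fc1 = nn.Linear(in_dim, hidden_dim)
        self.fc2 = nn.Linear(hidden_dim, out_dim)
        self.scale = 1.0 / tau          # embeddings are multiplied by tau = 1/0.07

    def forward(self, pooled_features):
        x = F.leaky_relu(self.fc1(pooled_features))
        x = self.fc2(x)
        return self.scale * F.normalize(x, dim=1)  # L2-normalized, temperature-scaled embedding
```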
{ "id": "1511.06434" }
2003.07820
Overview of the TREC 2019 deep learning track
The Deep Learning Track is a new track for TREC 2019, with the goal of studying ad hoc ranking in a large data regime. It is the first track with large human-labeled training sets, introducing two sets corresponding to two tasks, each with rigorous TREC-style blind evaluation and reusable test sets. The document retrieval task has a corpus of 3.2 million documents with 367 thousand training queries, for which we generate a reusable test set of 43 queries. The passage retrieval task has a corpus of 8.8 million passages with 503 thousand training queries, for which we generate a reusable test set of 43 queries. This year 15 groups submitted a total of 75 runs, using various combinations of deep learning, transfer learning and traditional IR ranking methods. Deep learning runs significantly outperformed traditional IR runs. Possible explanations for this result are that we introduced large training data and we included deep models trained on such data in our judging pools, whereas some past studies did not have such training data or pooling.
http://arxiv.org/pdf/2003.07820
Nick Craswell, Bhaskar Mitra, Emine Yilmaz, Daniel Campos, Ellen M. Voorhees
cs.IR, cs.CL, cs.LG
null
null
cs.IR
20200317
20200318
0 2 0 2 r a M 8 1 ] R I . s c [ 2 v 0 2 8 7 0 . 3 0 0 2 : v i X r a # OVERVIEW OF THE TREC 2019 DEEP LEARNING TRACK Nick Craswell1, Bhaskar Mitra1, Emine Yilmaz2, Daniel Campos1, and Ellen M. Voorhees3 1Microsoft AI & Research, {nickcr, bmitra, dacamp}@microsoft.com 2University College London, [email protected] 3NIST, [email protected] # ABSTRACT The Deep Learning Track is a new track for TREC 2019, with the goal of studying ad hoc ranking in a large data regime. It is the first track with large human-labeled training sets, introducing two sets corresponding to two tasks, each with rigorous TREC-style blind evaluation and reusable test sets. The document retrieval task has a corpus of 3.2 million documents with 367 thousand training queries, for which we generate a reusable test set of 43 queries. The passage retrieval task has a corpus of 8.8 million passages with 503 thousand training queries, for which we generate a reusable test set of 43 queries. This year 15 groups submitted a total of 75 runs, using various combinations of deep learning, transfer learning and traditional IR ranking methods. Deep learning runs significantly outperformed traditional IR runs. Possible explanations for this result are that we introduced large training data and we included deep models trained on such data in our judging pools, whereas some past studies did not have such training data or pooling. 1 # Introduction Deep learning methods, where a computational model learns an intricate representation of a large-scale dataset, have yielded dramatic improvements on the state of the art in speech recognition and computer vision. This has been fueled by the availability of large-scale datasets [LeCun et al., 2015] such as the ImageNet dataset [Deng et al., 2009] for computer vision and the Atari Arcade Learning Environment [Bellemare et al., 2013] for game playing. There has been significant interest in deep learning for ad-hoc ranking [Mitra and Craswell, 2018]. Work so far has largely been done with small data, proprietary data or synthetic data. With small data, there has been some discussion about whether deep learning methods really outperform strong traditional IR baselines [Yang et al., 2019a]. Using a proprietary set of document ranking data with 200,000 training queries [Mitra et al., 2017], a traditional IR baseline was beaten, but it was impossible for others to follow up on the work without a data release. Dietz et al. [2017] have a TREC task with enough training data to investigate such findings, but on synthetic rather than human-labeled data. Since significant questions remain about baselines and the required volume of human-labeled data, we argue that TREC is a good forum for studying such issues. When a large human-labeled dataset is made available, participants can investigate the role of data size by subsampling. Strong baselines are more than welcome at TREC and there is a blind one-shot evaluation to avoid overfitting. The TREC 2019 Deep Learning Track has two tasks: Document retrieval and passage retrieval. Each task has a dataset that is new to TREC, although the passage task is similar to the MS MARCO passage ranking leaderboard [Bajaj et al., 2016], but with a new test set in the TREC version with more comprehensive labeling. Both tasks are ad-hoc retrieval, meaning that there is a fixed document set, and the goal of the information retrieval system is to respond to each new query with results that would satisfy the querying user’s information need. 
Ad-hoc retrieval is a very common scenario in real-world search applications and in TREC. The main goals of the track are: 1) To provide large reusable datasets for training and evaluation of deep learning and traditional ranking methods in a large training data regime, 2) To perform a rigorous blind single-shot evaluation, where test labels don’t even exist until after all runs are submitted, to compare different ranking methods, and 3) To study this in both a traditional TREC setup with end-to-end retrieval and in a re-ranking setup that matches how some models may be deployed in practice. Comparing ad hoc retrieval methods in a large-data regime. The track should help us build our understanding of how retrieval methods can take advantage of large-scale data. It should also allow participants to compare various ranking methods such as: ML models vs. traditional IR—including pseudo-relevance feedback. • Deep learning vs. feature-based learning-to-rank (LTR) methods [Liu, 2009]. • Comparison of different deep learning architectures. • Comparison of different supervision approaches, such as fully supervised vs. semi-supervised vs. weakly supervised deep learning [Dehghani et al., 2017]. • Comparison of such models with all the training labels vs. using a subset of labels, to see how performance improves with more data. Comparing different methods for ad hoc search has always been a focus area at TREC, so our goal in this track is to continue that work. End-to-end retrieval vs. reranking. In real-world implementations of LTR methods, a common technique is to first retrieve the top-k documents for a query using relatively cheap “phase 1” ranker such as BM25, and then apply the full ML model to rerank the top-k documents in “phase 2”. This motivates us to offer two participation styles in the Deep Learning Track, which we also refer to as subtasks. One is to implement full end-to-end retrieval, perhaps by implementing both phase 1 and phase 2. This is interesting because a good implementation of phase 1 can enhance the end-to-end performance of the system, by enriching the candidate set for phase 2. It also encourages participants to consider alternatives to the two-phase approach, if it can improve efficiency and effectiveness. The other participation style is to only implement a top-k reranker. This approach is realistic in practice, in fact it is simply phase 2 of the end-to-end approach, for a fixed phase 1. This style of participation lowers the barrier to entry for participating groups who are interested in the LTR aspects of dealing with a large number of training queries, but are not interested in indexing a corpus or studying phase 1 issues. In this style of evaluation—sometimes referred to as telescoping [Matveeva et al., 2006]—participants are given the top-k results in both the training and test set. The interaction between deep learning models and traditional IR indexing data structures is also particularly interest- ing. Most applications of deep learning models in IR—with few exceptions e.g., [Boytsov et al., 2016, Zamani et al., 2018, Mitra et al., 2019, Nogueira et al., 2019]—have been constrained to the reranking setting. Encouraging future exploration of deep learning based ranking models under the full retrieval settings is an explicit goal of the Deep Learning Track. # 2 Task description The track has two tasks: Document retrieval and passage retrieval. Participants were allowed to submit up to three runs per task, although this was not strictly enforced. 
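To make the two-phase telescoping setup discussed above concrete, the following minimal sketch shows the control flow. The first_stage_search and neural_score functions are hypothetical stand-ins for a cheap phase 1 ranker (e.g. a BM25/Indri index) and a trained phase 2 relevance estimator; they are not code provided by the track.

```python
# A minimal sketch of two-phase (telescoping) ranking: cheap candidate
# generation followed by expensive reranking of only the top-k candidates.
def telescoping_rank(query, first_stage_search, neural_score, k=100):
    # Phase 1: cheap candidate generation over the full corpus (e.g. BM25 top-k).
    candidates = first_stage_search(query, k)            # list of (doc_id, doc_text)
    # Phase 2: the learned model only scores the k candidates.
    scored = [(doc_id, neural_score(query, text)) for doc_id, text in candidates]
    scored.sort(key=lambda pair: pair[1], reverse=True)  # higher score = more relevant
    return [doc_id for doc_id, _ in scored]
```

A good phase 1 ranker enriches the candidate set, which is why the full retrieval subtask can outperform pure reranking of a fixed candidate list.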
Participants were provided with an initial set of 200 test queries, then NIST later selected 43 queries during the pooling and judging process, based on budget constraints and with the goal of producing a reusable test collection. The same 200 queries were used for submissions in both tasks, while the selected 43 queries for each task were overlapping but not identical. The full judging process is described in Section 5. When submitting each run, participants also indicated what external data, pretrained models and other resources were used, as well as information on what style of model was used. Below we provide more detailed information about the document retrieval and passage retrieval tasks, as well as the datasets provided as part of these tasks.

# 2.1 Document retrieval task

The first task focuses on document retrieval—with two subtasks: (i) Full retrieval and (ii) top-100 reranking. In the full retrieval subtask, the runs are expected to rank documents based on their relevance to the query, where documents can be retrieved from the full document collection provided. This subtask models the end-to-end retrieval scenario. Note, although most full retrieval runs had 1000 results per query, the reranking runs had 100, so to make the AP and RR results more comparable across subtasks we truncated full retrieval runs by taking the top-100 results per query by score. These truncated runs were used in the main results table for the task (only), not in the TREC Appendix or in Section 5.

In the reranking subtask, participants were provided with an initial ranking of 100 documents, giving all participants the same starting point. The 100 were retrieved using Indri [Strohman et al., 2005] on the full corpus with Krovetz stemming and stopwords eliminated. Participants were expected to rerank the candidates w.r.t. their estimated relevance to the query. This is a common scenario in many real-world retrieval systems that employ a telescoping architecture [Matveeva et al., 2006, Wang et al., 2011]. The reranking subtask allows participants to focus on learning an effective relevance estimator, without the need for implementing an end-to-end retrieval system. It also makes the reranking runs more comparable, because they all rerank the same set of 100 candidates.

For judging, NIST’s pooling was across both subtasks, and they also identified additional documents for judging via classifier. Further, for queries with many relevant documents, additional documents were judged. These steps were carried out to identify a sufficiently comprehensive set of relevant results, to allow reliable future dataset reuse. Judgments were on a four-point scale:

[3] Perfectly relevant: Document is dedicated to the query, it is worthy of being a top result in a search engine.
[2] Highly relevant: The content of this document provides substantial information on the query.
[1] Relevant: Document provides some information relevant to the query, which may be minimal.
[0] Irrelevant: Document does not provide any useful information about the query.

# 2.2 Passage retrieval task

Similar to the document retrieval task, the passage retrieval task includes two subtasks: (i) full retrieval and (ii) top-1000 reranking. In the full retrieval subtask, given a query, the participants were expected to retrieve a ranked list of passages from the full collection based on their estimated likelihood of containing an answer to the question. Participants could submit up to 1000 passages per query for this end-to-end retrieval task.
In the top-1000 reranking subtask, 1000 passages per query were provided to participants, giving all participants the same starting point. The sets of 1000 were generated based on BM25 retrieval with no stemming as applied to the full collection. Participants were expected to rerank the 1000 passages based on their estimated likelihood of containing an answer to the query. In this subtask, we can compare different reranking methods based on the same initial set of 1000 candidates, with the same rationale as described for the document reranking subtask.

For judging, NIST’s pooling was across both subtasks, and they also identified additional passages for judging via classifier. Further, for queries with many relevant passages, additional passages were judged. These steps were carried out to identify a sufficiently comprehensive set of relevant results, to allow reliable future dataset reuse. Judgments were on a four-point scale:

[3] Perfectly relevant: The passage is dedicated to the query and contains the exact answer.
[2] Highly relevant: The passage has some answer for the query, but the answer may be a bit unclear, or hidden amongst extraneous information.
[1] Related: The passage seems related to the query but does not answer it.
[0] Irrelevant: The passage has nothing to do with the query.

# 3 Datasets

Both tasks have large training sets based on human relevance assessments, derived from MS MARCO. These are sparse, with no negative labels and often only one positive label per query, analogous to some real-world training data such as click logs. In the case of passage retrieval, the positive label indicates that the passage contains an answer to a query. In the case of document retrieval, we transferred the passage-level label to the corresponding source document that contained the passage. We do this under the assumption that a document with a relevant passage is a relevant document, although we note that our document snapshot was generated at a different time from the passage dataset, so there can be some mismatch. Despite this, in this year’s document retrieval task machine learning models seem to benefit from using the labels, when evaluated using NIST’s non-sparse, non-transferred labels. This suggests the transferred document labels are meaningful for our TREC task.

| File description | Document dataset: file size | Document dataset: number of records | Passage dataset: file size | Passage dataset: number of records |
|---|---|---|---|---|
| Collection | 22 GB | 3,213,835 | 2.9 GB | 8,841,823 |
| Train queries | 15 MB | 367,013 | 19.7 MB | 502,940 |
| Train qrels | 7.6 MB | 384,597 | 10.1 MB | 532,761 |
| Validation queries | 216 KB | 5,193 | 545 KB | 12,665 |
| Validation qrels | 27 MB | 519,300 | 1.1 MB | 59,273 |
| Test queries | 12 KB | 200 | 12 KB | 200 |

Table 1: Summary of statistics on TREC 2019 Deep Learning Track datasets.

| | Count |
|---|---|
| Number of groups | 11 |
| Number of total runs | 37 |
| Number of runs w/ category: nnlm | 18 |
| Number of runs w/ category: nn | 8 |
| Number of runs w/ category: trad | 11 |
| Number of runs w/ category: rerank | 11 |
| Number of runs w/ category: fullrank | 26 |

Table 2: Summary of statistics of runs for the two retrieval tasks at the TREC 2019 Deep Learning Track.

The passage corpus is the same as in the MS MARCO passage retrieval leaderboard. The document corpus is newly released for use in TREC. Each document has three fields: (i) URL, (ii) title, and (iii) body text. Table 1 provides descriptive statistics for the datasets. More details about the datasets—including directions for download—are available on the TREC 2019 Deep Learning Track website (https://microsoft.github.io/TREC-2019-Deep-Learning/).
Interested readers are also encouraged to refer to [Bajaj et al., 2016] for details on the original MS MARCO dataset.

# 4 Results and analysis

Submitted runs A total of 15 groups participated in the TREC 2019 Deep Learning Track, with an aggregate of 75 runs submitted across both tasks. Based on run submission surveys, we classify each run into one of three categories:

• nnlm: if the run employs large scale pre-trained neural language models, such as BERT [Devlin et al., 2018] or XLNet [Yang et al., 2019b]
• nn: if the run employs some form of neural network based approach—e.g., Duet [Mitra et al., 2017, Mitra and Craswell, 2019] or using word embeddings [Joulin et al., 2016]—but does not fall into the “nnlm” category
• trad: if the run exclusively uses traditional IR methods like BM25 [Robertson et al., 2009] and RM3 [Abdul-Jaleel et al., 2004].

We placed 33 (44%) runs in the “nnlm” category (32 using BERT and one using XLNet), 20 (27%) in the “nn” category, and the remaining 22 (29%) in the “trad” category. We further categorize runs based on subtask:

• rerank: if the run reranks the provided top-k candidates, or
• fullrank: if the run employs their own phase 1 retrieval system.

We find that only 21 (28%) submissions fall under the “rerank” category—while the remaining 54 (72%) are “fullrank”. Table 2 breaks down the submissions by category and task. We also encouraged some participants to run strong traditional IR baselines, and submit them as additional runs under the “BASELINE” group. Baseline runs for document ranking were:

bm25base BM25 [Robertson et al., 2009] with default parameters
The overall results are presented in Table 3 for document retrieval and Table 4 for passage retrieval. These tables include multiple metrics and run categories, which we now use in our analysis. Evaluation of deep learning and traditional ranking methods in a large training data regime An important goal of this track is to compare the performance of different types of model, using large human-labeled training sets, for the core IR task of ad-hoc search. Indeed this is the first time a TREC-style blind evaluation has been carried out to compare state-of-the-art neural and traditional IR methods. Figure 1a plots the NDCG@10 performance of the different runs for the document retrieval task, broken down by model type. In general, runs in the category “nnlm” outperform the “nn” runs, which outperform the “trad” runs. The best performing run of each category is indicated, with the best “nnlm” and “nn” models outperforming the best “trad” model by 29.4% and 14.8% respectively. The passage retrieval task reveals similar pattern. In Figure 1b, the gap between the best “nnlm” and “nn” runs and the best “trad” run is larger, at 37.4% and 23.7% respectively. One explanation for this could be that vocabulary mismatch between queries and relevant results is more likely in short text, so neural methods that can overcome such mismatch have a relatively greater advantage in passage retrieval. Another explanation could be that there is already a public leaderboard, albeit without test labels from NIST, for the passage task. Some TREC participants may have submitted neural models multiple times to the public leaderboard, and are well practiced for the passage ranking task. In query-level win-loss analysis for the document retrieval task (Figure 2) the best “nnlm” model outperforms the best “trad” run on 36 out of 43 test queries (i.e., 83.7%). Passage retrieval shows a similar pattern in Figure 3. Neither task has a large class of queries where the “nnlm” model performs worse, at least on this year’s data. However, more iterations of rigorous blind evaluation with strong “trad” baselines, plus more scrutiny of the benchmarking methods, would be required to convince us that this is true in general. Next, we analyze this year’s runs by representing each run as a vector of 43 NDCG@10 scores. In this vector space, two runs are similar if their NDCG vectors are similar, meaning they performed well and badly on the same queries. Using t-SNE [Maaten and Hinton, 2008] we then plot the runs in two dimensions, which gives us a visualization where similar runs will be closer together and dissimilar results further apart. This method of visualizing inter-model similarity was first proposed by Mitra et al. [2017] and we employ it to generate the plots in Figure 4. 5 Table 3: Document retrieval runs. RR (MS) is based on MS MARCO labels. All other metrics are based on NIST labels. 
run group subtask neural RR (MS) RR NDCG@10 NCG@100 AP IDST idst_bert_v3 IDST idst_bert_r1 IDST idst_bert_v2 IDST idst_bert_v1 IDST idst_bert_r2 h2oloo bm25exp_marcomb TU-Vienna TUW19-d3-re UCAS ucas_runid1 UCAS ucas_runid3 h2oloo bm25_marcomb h2oloo bm25exp_marco UCAS ucas_runid2 TU-Vienna TUW19-d2-re uogTr uogTrDNN6LM TU-Vienna TUW19-d1-re Microsoft ms_ensemble srchvrs srchvrs_run1 TU-Vienna TUW19-d2-f TU-Vienna TUW19-d3-f CMU dct_tp_bm25e2 srchvrs srchvrs_run2 BASELINE bm25tuned_rm3 CMU dct_qp_bm25e dct_tp_bm25e CMU uogTrDSSQE5LM uogTr TUW19-d1-f ms_duet uogTrDSS6pLM bm25tuned_prf bm25tuned_ax bm25base bm25base_rm3 runid1 bm25tuned bm25base_prf baseline bm25base_ax fullrank rerank fullrank fullrank rerank fullrank rerank rerank rerank fullrank fullrank rerank rerank fullrank rerank fullrank fullrank fullrank fullrank fullrank fullrank fullrank fullrank fullrank fullrank fullrank rerank fullrank fullrank fullrank fullrank fullrank rerank fullrank fullrank fullrank fullrank nnlm nnlm nnlm nnlm nnlm nnlm nn nnlm nnlm nnlm nnlm nnlm nn nnlm nn nn trad nn nn nn trad trad nn nn nnlm nn nn nnlm trad trad trad trad nnlm trad trad trad trad 0.4866 0.4889 0.4865 0.4874 0.4734 0.3518 0.4014 0.4422 0.4353 0.3591 0.3610 0.4315 0.3154 0.3187 0.3616 0.3725 0.3065 0.2886 0.3735 0.3402 0.3038 0.3396 0.3585 0.3530 0.3264 0.3190 0.2758 0.2803 0.3176 0.2889 0.2949 0.2405 0.3058 0.2930 0.2717 0.2795 0.2677 0.9612 0.9729 0.9612 0.9729 0.9729 0.8992 0.9457 0.9109 0.8992 0.9128 0.9031 0.9496 0.9147 0.8729 0.8915 0.8760 0.8715 0.8711 0.8929 0.8718 0.8715 0.8074 0.8915 0.8638 0.8895 0.8465 0.8101 0.8895 0.8005 0.7492 0.8046 0.7714 0.7811 0.8872 0.7774 0.8037 0.7424 0.7257 0.7189 0.7181 0.7175 0.7135 0.6456 0.6443 0.6437 0.6418 0.6403 0.6399 0.6350 0.6053 0.6046 0.5930 0.5784 0.5609 0.5596 0.5576 0.5544 0.5529 0.5485 0.5435 0.5424 0.5386 0.5383 0.5330 0.5323 0.5281 0.5245 0.5190 0.5169 0.5164 0.5140 0.5106 0.4823 0.4730 0.5800 0.5179 0.5947 0.5820 0.5179 0.6367 0.5179 0.5179 0.5179 0.6356 0.6191 0.5179 0.5179 0.5093 0.5179 0.4841 0.5599 0.4103 0.3045 0.4979 0.5572 0.5590 0.4924 0.4786 0.1839 0.2951 0.5179 0.1868 0.5576 0.5835 0.5170 0.5546 0.5179 0.5262 0.5303 0.5114 0.5148 # TU-Vienna Microsoft uogTr BASELINE BASELINE BASELINE BASELINE CCNU_IRGroup BASELINE BASELINE BITEM_DL BASELINE 0.3137 0.2915 0.3157 0.3119 0.2910 0.3190 0.2709 0.2642 0.2677 0.3229 0.3030 0.2526 0.2391 0.2488 0.2524 0.2369 0.2645 0.2050 0.1843 0.2244 0.2615 0.2700 0.2228 0.2098 0.1085 0.1647 0.2291 0.1129 0.2759 0.2816 0.2443 0.2772 0.2366 0.2318 0.2542 0.2168 0.2452 0.9 —%& nnim = nn 0.8 _% trad bestnnimrun YL o OE S07 8 ee oe 8 | = 0.6 best trad run 05 | | | 0.4 0.9 2% nnim = nn 0.8 best nnim run _% trad Segue nn nnn nnn nnn nn nnn nn nnn nnn nnn nn nnn nnn nn nnn enna naan ne o best nn run S074 |] } Itty) | tegeeeeee-- ee. 8 8 206 best trad run 05 | | 0.4 | I (a) Document retrieval task (b) Passage retrieval task Figure 1: NDCG@10 results, broken down by run type. Runs of type “nnlm”, meaning they use language models such as BERT, performed best on both tasks. Other neural network models “nn” and non-neural models “trad” had relatively lower performance this year. More iterations of evaluation and analysis would be needed to determine if this is a general result, but it is a strong start for the argument that deep learning methods may take over from traditional methods in IR applications. 
[Figure 2 plot omitted: per-query comparison of the best “nnlm” and “trad” runs; x-axis is NDCG@10.]
Figure 2: Comparison of the best “nnlm” and “trad” runs on individual test queries for the document retrieval task. Queries are sorted by difference in mean performance between “nnlm” and “trad” runs. Queries on which “nnlm” wins with large margin are at the top.

[Figure 3 plot omitted: per-query comparison of the best “nnlm” and “trad” runs; x-axis is NDCG@10.]
Figure 3: Comparison of the best “nnlm” and “trad” runs on individual test queries for the passage retrieval task.
Queries are sorted by difference in mean performance between “nnlm” and “trad”runs. Queries on which “nnlm” wins with large margin are at the top. 8 Table 4: Passage retrieval runs. RR (MS) is based on MS MARCO labels. All other metrics are based on NIST labels. neural RR (MS) group subtask RR NDCG@10 NCG@1000 run idst_bert_p1 idst_bert_p2 idst_bert_p3 p_exp_rm3_bert p_bert idst_bert_pr2 idst_bert_pr1 p_exp_bert test1 TUA1-1 runid4 runid3 TUW19-p3-f TUW19-p1-f TUW19-p3-re TUW19-p1-re TUW19-p2-f ICT-BERT2 srchvrs_ps_run2 TUW19-p2-re ICT-CKNRM_B ms_duet_passage ICT-CKNRM_B50 srchvrs_ps_run3 bm25tuned_prf_p bm25base_ax_p bm25tuned_ax_p bm25base_prf_p runid2 runid5 bm25tuned_rm3_p bm25base_rm3_p bm25base_p srchvrs_ps_run1 bm25tuned_p UNH_bm25 IDST IDST IDST h2oloo h2oloo IDST IDST h2oloo Brown TUA1 udel_fang udel_fang TU-Vienna TU-Vienna TU-Vienna TU-Vienna TU-Vienna ICTNET srchvrs TU-Vienna ICTNET Microsoft ICTNET srchvrs BASELINE BASELINE BASELINE BASELINE CCNU_IRGroup CCNU_IRGroup BASELINE BASELINE BASELINE srchvrs BASELINE TREMA-UNH fullrank fullrank fullrank fullrank fullrank rerank rerank fullrank rerank rerank rerank rerank fullrank fullrank rerank rerank fullrank fullrank fullrank rerank fullrank rerank fullrank fullrank fullrank fullrank fullrank fullrank rerank fullrank fullrank fullrank fullrank fullrank fullrank fullrank nnlm nnlm nnlm nnlm nnlm nnlm nnlm nnlm nnlm nnlm nnlm nnlm nn nn nn nn nn nnlm nnlm nn nnlm nn nnlm trad trad trad trad trad nnlm nnlm trad trad trad trad trad trad 0.4635 0.4631 0.4374 0.3582 0.3624 0.4209 0.4430 0.3564 0.3598 0.3622 0.3762 0.3725 0.3134 0.3187 0.3100 0.3180 0.3469 0.3846 0.3262 0.3424 0.2984 0.2473 0.2055 0.1883 0.1928 0.1888 0.1840 0.2007 0.2143 0.2068 0.2162 0.1590 0.2402 0.1902 0.2363 0.1803 0.9283 0.9283 0.9167 0.8884 0.8663 0.8818 0.9070 0.8671 0.8702 0.8702 0.8702 0.8663 0.8407 0.8360 0.8568 0.8516 0.8487 0.8743 0.8302 0.8611 0.8016 0.8065 0.7597 0.6942 0.6996 0.6516 0.6481 0.6211 0.8088 0.7999 0.6992 0.6683 0.7036 0.5597 0.6850 0.6036 0.7645 0.7632 0.7594 0.7422 0.7380 0.7379 0.7378 0.7336 0.7314 0.7314 0.7028 0.6975 0.6884 0.6756 0.6746 0.6746 0.6709 0.6650 0.6645 0.6615 0.6481 0.6137 0.6014 0.5558 0.5536 0.5511 0.5461 0.5372 0.5322 0.5252 0.5231 0.5180 0.5058 0.4990 0.4973 0.4495 0.8196 0.8203 0.8287 0.7939 0.7472 0.6864 0.6864 0.7465 0.6864 0.6864 0.6864 0.6864 0.7436 0.7436 0.6864 0.6864 0.7432 0.2491 0.6643 0.6864 0.2491 0.6864 0.3786 0.7240 0.7947 0.8194 0.8145 0.7901 0.6830 0.5440 0.7841 0.7976 0.7490 0.7240 0.7472 0.6957 — On both document and passage retrieval tasks, the runs appear to be first clustered by group—see Figures 4b and 4d. This is expected, as different runs from the same group are likely to employ variations of the same approach. In Figures 4a and 4c, runs also cluster together based on their categorization as “nnlm”, “nn”, and “trad”. End-to-end retrieval vs. reranking. Our datasets include top-k candidate result lists, with 100 candidates per query for document retrieval and 1000 candidates per query for passage retrieval. Runs that simply rerank the provided candidates are “rerank” runs, whereas runs that perform end-to-end retrieval against the corpus, with millions of potential results, are “fullrank” runs. We would expect that a “fullrank” run should be able to find a greater number of relevant candidates than we provided, achieving higher NCG@k. A multi-stage “fullrank” run should also be able to optimize the stages jointly, such that early stages produce candidates that later stages are good at handling. 
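As a sketch of this distinction, the skeleton below shows how a multi-stage “fullrank” system might be organized. The two callables are placeholders (for example, an inverted-index BM25 lookup and a BERT-style cross-encoder) and are assumptions for illustration, not code provided by the track; a “rerank” run effectively starts at phase 2 with the candidates we distributed.

```python
def two_stage_search(query, corpus_index, first_stage_retrieve, neural_rescore,
                     k1=1000, k2=10):
    """Generic retrieve-then-rerank pipeline ("fullrank" runs own both phases)."""
    # phase 1: cheap candidate generation over the full corpus
    candidates = first_stage_retrieve(query, corpus_index, k=k1)  # [(doc_id, text), ...]
    # phase 2: expensive neural scoring of the shortlist
    scored = [(doc_id, neural_rescore(query, text)) for doc_id, text in candidates]
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return [doc_id for doc_id, _ in scored[:k2]]
```

Joint tuning of the two phases amounts to choosing k1 and the phase 1 scorer so that the candidates passed forward are ones the neural stage handles well.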
According to Figure 5, “fullrank” did not achieve much better NDCG@10 performance than “rerank” runs. While it was possible for “fullrank” to achieve better NCG@k, it was also possible to make NCG@k worse, and achieving significantly higher NCG@k does not seem necessary to achieve good NDCG@10. Specifically, for the document retrieval task, the best “fullrank” run achieves only 0.9% higher NDCG@10 over the best “rerank’ run. For the passage retrieval task, the difference is 3.6%. The best NCG@100 for the document retrieval task is achieved by a well-tuned combination of BM25 [Robertson et al., 2009] and RM3 [Abdul-Jaleel et al., 2004] on top of document expansion using doc2query [Nogueira et al., 2019]—which improves by 22.9% on the metric relative to the set of 100 candidates provided for the reranking task. For the passage retrieval task, the best NCG@1000 is 20.7% higher than that of the provided reranking candidate set. 9 0.5030 0.5039 0.5046 0.5049 0.4677 0.4565 0.4571 0.4749 0.4567 0.4571 0.4383 0.4381 0.4196 0.4125 0.4113 0.4073 0.4157 0.2421 0.4090 0.3963 0.2289 0.3477 0.2429 0.3184 0.3684 0.3745 0.3632 0.3561 0.2671 0.2506 0.3377 0.3390 0.3013 0.2972 0.2903 0.2566 ee em e @ @ mim trad e ee ~ PY 5 | e a S|) oe e —e| + im Cm hd g e e s ° ee e e oe e e +++ % e + +t, # nw 5) a 5) —| 3 g s @ BASELINE ee @ = BITEMDL ~~ CCNU_IRGroup * =CMU @ \DsT ) ++ Microsoft oe = TU-Vienna » UCAS | @ heoloo x @ srchvrs x Vo suogTr oe * @ + 2% x e+ x * v fa ee? % + e ee, ° ° . latent dimension 1 latent dimension 1 (a) By model type on document retrieval task (b) By group name on document retrieval task em e e © onnlm A ° % s | trad 1 ee es e e e + 5 + 5 e 2) * 2, e@ aa Ele @ BASELINE S| +t oe S 2 & ps @ Brown g * g La! CCNU_IRGroup “ * ° = W v * ICTNET e + @ pst » y + Microsoft oo xx TREMA-UNH e x TU-Vienna e @ TuAL @ = h2oloo ? Pl Vosrchvrs e * @ udel_fang latent dimension 1 latent dimension 1 # (c) By model type on passage retrieval task (d) By group name on passage retrieval task Figure 4: Visualizing inter-run similarity using t-SNE. Each run is represented by a 43-dimensional vector of NDCG@10 performance on corresponding 43 test queries. The 43-dimensional vector is then reduced to two- dimensions and plotted using t-SNE. Runs that are submitted by the same group generally cluster together. Similarly, “nnlm”, “nn”, and “trad” runs also demonstrate similarities. Given this was the first ever Deep Learning Track at TREC, we are not yet seeing a strong advantage of “fullrank” over “rerank”. However, we hope that as the body of literature on neural methods for phase 1 retrieval (e.g., [Boytsov et al., 2016, Zamani et al., 2018, Mitra et al., 2019, Nogueira et al., 2019]) grows, we would see a larger number of runs with deep learning as an ingredient for phase 1 in future editions of this TREC track. NIST labels vs. Sparse MS MARCO labels. Our baseline human labels from MS MARCO often have one known positive result per query. We use these labels for training, but they are also available for test queries. Although our official evaluation uses NDCG@10 with NIST labels, we now compare this with reciprocal rank (RR) using MS MARCO labels, and RR using NIST labels. 
Our goal is to understand how changing the labeling scheme and metric affects the overall results of the track. Where there is any disagreement, we believe the NDCG results are more valid, since they evaluate the ranking more comprehensively, and a ranker that can only perform well on labels with exactly the same distribution as the training set is not robust enough for use in real-world applications, where real users will have opinions that are not necessarily identical to the preferences encoded in sparse training labels. In Figures 7 and 8, we observe general agreement between results using MS MARCO and NIST labels, i.e., runs that perform well on MS MARCO-style evaluation also tend to achieve good performance when evaluated under traditional TREC settings, and vice versa. This is good news, validating that the MS MARCO leaderboard results are at least somewhat indicative of results that are found with pooled judging.

[Figure 5 plots omitted: (a) NDCG@10 for runs on the document retrieval task, (b) NDCG@10 for runs on the passage retrieval task, (c) NCG@100 for runs on the document retrieval task, (d) NCG@1000 for runs on the passage retrieval task.]
Figure 5: Analyzing the impact of “fullrank” vs. “rerank” settings on retrieval performance. Figures (a) and (b) show the performance of different runs on the document and passage retrieval tasks, respectively. Figures (c) and (d) plot the NCG@100 and NCG@1000 metrics for the same runs for the two tasks, respectively. The runs are ordered by their NDCG@10 performance along the x-axis in all four plots. We observe that the best run under the “fullrank” setting outperforms the best run under the “rerank” setting for both document and passage retrieval tasks, although the gaps are relatively smaller compared to those in Figure 1. If we compare Figure (a) with (c) and Figure (b) with (d), we do not observe any evidence that the NCG metric is a good predictor of NDCG@10 performance.

# 5 Reusability of test collections

One goal of the track was to create traditional ad hoc test sets based on the MS MARCO dataset within available budgets. Since the Document Ranking and Passage Ranking tasks used different document sets, two separate test collections, one per task, were constructed. The two test collections started from a common set of topics, and each topic was judged by the same NIST assessor for both documents and passages, but assessing for documents and passages was done at different times. Further, the evaluation sets of topics (i.e., the topics over which evaluation scores are computed) overlap but are not identical in the two collections. Thus the collections created in the track are two separate, independent collections.

The runs submitted to the track consisted of ranked lists of items for each topic in the test set of 200 topics. NIST selected 52 topics from this set to be judged. The topics were selected by observing the behavior of submitted Document Ranking task runs on the entire test set when using the sparse MARCO judgments to evaluate runs. Test questions that had median MRR scores greater than 0.0 but no more than 0.5 were candidates to be judged.
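A hedged sketch of the two ingredients just described, reciprocal rank against the sparse MS MARCO positives and the median-RR filter used to pick candidate topics for judging, is given below. The rank cutoff and tie handling are assumptions; the track's own selection was done with its internal tooling.

```python
from statistics import median

def reciprocal_rank(ranked_ids, positives, cutoff=10):
    """RR of one ranked list: 1/rank of the first positive, or 0 if none is in the top cutoff."""
    for rank, doc_id in enumerate(ranked_ids[:cutoff], start=1):
        if doc_id in positives:
            return 1.0 / rank
    return 0.0

def candidate_topics(runs, sparse_positives, low=0.0, high=0.5):
    """runs: {run_id: {topic: ranked doc ids}}; sparse_positives: {topic: set of MARCO positives}.
    Keep topics whose median RR across runs is greater than `low` and no more than `high`."""
    selected = []
    for topic, positives in sparse_positives.items():
        rr_values = [reciprocal_rank(run[topic], positives) for run in runs.values()]
        if low < median(rr_values) <= high:
            selected.append(topic)
    return selected

# toy example with two runs and two topics
runs = {"runA": {"t1": ["D3", "D2", "D1"], "t2": ["D9", "D8"]},
        "runB": {"t1": ["D2", "D1", "D3"], "t2": ["D8", "D9"]}}
sparse_positives = {"t1": {"D1"}, "t2": {"D7"}}
print(candidate_topics(runs, sparse_positives))  # ['t1']; t2 has median RR 0.0 and is excluded
```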
The judgment process then proceeded as follows, where the items to be judged will generically be called ‘documents’ even though those documents were MS MARCO passages for the Passage Ranking task; a simplified sketch of the resulting per-topic loop is given after the list.

[Figure 6 scatter plots omitted: (a) Document retrieval task, (b) Passage retrieval task.]
Figure 6: Metrics agreement scatter plot, broken down by group. RR (MS) is reciprocal rank calculated with the sparse MS MARCO labels, while NDCG@10 is calculated using NIST labels.

[Figure 7 scatter-matrix plots omitted: pairwise comparisons of RR (MS), RR, and NDCG@10.]
Figure 7: Metrics agreement analysis, broken down by model type, for the document retrieval task. Kendall correlation (τ) indicates agreement between metrics on system ordering. RR (MS) is calculated using MS MARCO sparse labels, while RR and NDCG@10 are calculated using NIST labels.

[Figure 8 scatter-matrix plots omitted: pairwise comparisons of RR (MS), RR, and NDCG@10.]
Figure 8: Metrics agreement analysis, broken down by model type, for the passage retrieval task. Kendall correlation (τ) indicates agreement between metrics on system ordering. RR (MS) is calculated using MS MARCO sparse labels, while RR and NDCG@10 are calculated using NIST labels.

[1] For each question, create a top-10 pool across all runs in the task, and add any document that has a judgment in the MARCO sparse judgments. Call the size of this set P (which varies from topic to topic). The assessor judges these pool documents first, then another 100 documents selected using the University of Waterloo’s HiCAL [Abualsaud et al., 2018] system. HiCAL uses the current set of judgments to build a relevance model and then selects the unjudged document most likely to be relevant as the next document to judge. At the end of this stage there are R known relevant documents. If 2R < P, the judging is finished for this topic.

[2] Let G be the difference between the desired number of judgments, 2R + 100, and the number of documents that have been judged so far. Judge another G documents selected by HiCAL. Now the number of judgments for the topic is J = P + 100 + G and the new number of known relevant documents is R∗. If 2R∗ + 100 < J, assessment is finished for the topic. If R∗ ≈ J, then discard the topic because it will be too expensive to get “sufficiently complete” judgments for it.

[3] If a topic is still live, add a new increment proportional to the number of known relevant documents to the topic budget, and iterate, terminating when (if) the number of known relevant documents is less than half the number of judged documents.

[4] Terminate the entire process when assessors are out of time or have nothing left to judge.
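The following is the simplified sketch referred to above, under several assumptions: `hical_stream` stands in for HiCAL and is assumed to yield only not-yet-judged documents in its preferred order, `is_relevant` plays the role of the assessor, and the per-stage bookkeeping is collapsed into a single 2R + 100 stopping rule.

```python
def judge_topic(pool_docs, is_relevant, hical_stream, max_judgments=1000):
    """Judge an initial pool, then keep asking HiCAL for documents until the
    2R + 100 heuristic is satisfied or the per-topic budget runs out."""
    judged = list(pool_docs)
    # stage 1: the top-10 pool (plus MARCO-judged documents) and 100 HiCAL selections
    for _ in range(100):
        judged.append(next(hical_stream))
    relevant = {d for d in judged if is_relevant(d)}
    # iterate: stop once 2 * |relevant| + 100 <= |judged|, or the budget is exhausted
    while len(judged) < max_judgments and 2 * len(relevant) + 100 > len(judged):
        doc = next(hical_stream)
        judged.append(doc)
        if is_relevant(doc):
            relevant.add(doc)
    return judged, relevant
```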
The resulting evaluation set was the set of topics with at least three relevant documents and a ratio of R∗/J < 0.6. This process resulted in 43 topics in the evaluation set for both the Document Ranking and the Passage Ranking tasks, but as noted it is a slightly different 43 topics for the two tasks. Documents in the Document Ranking task were judged on a four-point scale of Irrelevant (0), Relevant (1), Highly Relevant (2), and Perfectly Relevant (3) where all but Irrelevant were treated as relevant in HiCAL and in computing binary-relevance-based measures. For the Passage Ranking task, passages were judged on a four-point scale of Irrel- evant (0), Related (the passage is on-topic but does not answer the question) (1), Highly Relevant (2), and Perfectly Relevant (3). In this task, only Highly and Perfectly Relevant were considered to be relevant for binary measures and by HiCAL, though nDCG scores did use a gain value of 1 for the Related passages. Table 5 gives counts of the number of documents judged and the number of relevant documents (using the definitions for binary relevance) found for each of the 52 topics that entered the process. HiCAL is a dynamic collection construction method, meaning that the document to be judged next is selected only after judgments for previous documents have been received. The Common Core track in TRECs 2017 and 2018 used a method based on multi-armed bandit optimization techniques, another dynamic method, with the similar goal of building high-quality, reusable, ad hoc test collections affordably [Voorhees, 2018]. That work showed two main issues to be overcome when building new collections with dynamic techniques: providing the assessors the opportunity to learn a topic before immutable judgments are rendered, and setting individual topic budgets when assessors judge at different rates and at different times but are subject to a single overall judgment budget. The first issue is less severe with (NIST’s modification of) HiCAL since assessors can change the value of any previously made judgment at any time; whenever a new relevance model is calculated, HiCAL uses the judgments current at the time of calculation. Nonetheless, top-10 pools provide both an opportunity for assessors to learn a topic and ensure that all measures based on document-level cutoffs less than or equal to ten are precise for all judged runs, and this motivated the use of pools in the first stage of the process. Setting per-topic judgment budgets continues to be a challenging problem. The stopping criterion of ending a topic once 2R+100 documents were judged was motivated by the heuristic observed by Waterloo in prior use of HiCAL [Cormack and Grossman, 2018] further supported by the Common Core track’s observation that a topic for which more than half of its judged documents are relevant is unlikely to be sufficiently judged2. Note that the process described above was the target process, but the practicalities of keeping assessors occupied meant that some topics received more judgments than they “deserved”. All judgments for non-excluded topics are included in the qrels file. # 5.1 Collection Robustness Our goal is to build general-purpose, reusable test collections at acceptable cost. In this context, general-purpose means a collection reliably ranks runs for a wide spectrum of evaluation measures, including recall-focused measures. Reusable means that runs that did not participate in the collection building process can be reliably ranked by the collection. 
Since costs in building a collection are generally dominated by the cost of human assessments, the number of relevance judgments required is used as the construction cost. 2We nonetheless included topics with a ratio of relevant to judged between 0.5 and 0.6 in the evaluation set because test collection stability tests suggest the collection is more stable with those topics than without them (likely because the total number of topics is greater with them) and to provide a greater diversity of topic sizes in the evaluation set. 15 Table 5: Judging statistics for the Document Ranking and Passage Ranking tasks. Given are the number of documents judged (any variant of) relevant, the total number of documents judged, and the fraction of judged documents that are relevant (Relevant Ratio). Topics were excluded from the evaluation set if they had fewer than 3 relevant or if the fraction of judged documents that are relevant was greater than 0.6. Data for excluded topics are given in gray. The final rows gives the total number of documents judged and the number of documents judged when not counting excluded topics. Document Ranking Passage Ranking # Relevant 53 767 168 165 341 61 42 25 25 240 151 578 23 324 76 177 3 183 34 1 195 202 392 51 52 42 178 5 115 24 283 44 381 40 432 335 242 41 385 93 55 562 7 386 55 2 440 276 38 20 199 426 # Judged 239 1476 404 346 420 218 174 168 157 578 378 885 144 723 228 415 190 446 171 183 376 415 700 161 204 176 412 337 314 173 372 188 708 234 466 395 416 183 664 280 163 1026 158 845 200 250 474 629 175 204 464 454 # Relevant 7 41 31 31 370 111 14 19 8 32 117 200 9 175 11 152 1 25 7 2 63 100 24 24 34 13 42 3 79 21 120 7 183 11 113 192 41 28 119 25 12 213 4 83 23 3 263 120 17 0 219 467 194 143 158 139 432 306 133 132 138 159 300 582 132 451 137 382 140 139 144 199 188 220 175 148 160 141 157 183 192 161 180 154 392 141 152 300 178 175 223 180 151 470 152 257 146 178 378 330 147 163 492 700 Topic 19335 47923 87181 87452 100983 104861 130510 131843 146187 148538 156493 168216 182539 183378 207786 264014 287683 359349 405717 423273 443396 451602 489204 490595 527433 573724 833860 855410 915593 962179 966413 1037798 1063750 1103812 1104031 1104492 1106007 1110199 1112341 1113437 1114646 1114819 1115776 1117099 1121402 1121709 1121986 1124210 1129237 1132213 1133167 1134787 Total judged: Final qrels size: Relevant Ratio 0.222 0.520 0.416 0.477 0.812 0.280 0.241 0.149 0.159 0.415 0.399 0.653 0.160 0.448 0.333 0.427 0.016 0.410 0.199 0.005 0.519 0.487 0.560 0.317 0.255 0.239 0.432 0.015 0.366 0.139 0.761 0.234 0.538 0.171 0.927 0.848 0.582 0.224 0.580 0.332 0.337 0.548 0.044 0.457 0.275 0.008 0.928 0.439 0.217 0.098 0.429 0.938 20,157 16,258 # Judged Relevant Ratio 0.036 0.287 0.196 0.223 0.856 0.363 0.105 0.144 0.058 0.201 0.390 0.344 0.068 0.388 0.080 0.398 0.007 0.180 0.049 0.010 0.335 0.455 0.137 0.162 0.212 0.092 0.268 0.016 0.411 0.130 0.667 0.045 0.467 0.078 0.743 0.640 0.230 0.160 0.534 0.139 0.079 0.453 0.026 0.323 0.158 0.017 0.696 0.364 0.116 0.000 0.445 0.667 11,904 9260 16 Leave-Out-Uniques (LOU) tests [Buckley et al., 2007, Zobel, 1998] are a way of analyzing the reusability of a collec- tion. In these tests, the relevant documents retrieved by only one participating team are removed from the qrels files and all runs are then evaluated using the reduced qrels. 
The reduced qrels are the qrels that would have resulted had the team not participated in the collection building process, and thus their submitted runs represent new runs with respect to the reduced qrels. If the ranking of runs using the reduced qrels is essentially the same as the ranking of runs using the original qrels over all participating teams, then the original collection is likely reusable. The similarity between rankings of runs is usually defined by the Kendall’s τ correlation between the rankings. Kendall’s τ is a measure of association that is proportional to the number of interchanges between adjacent items in one ranking that are required to turn that ranking into the other. τ scores are normalized such that a score of 1 designates perfect agreement, -1 designates rankings that are inverses of one another, and 0 designates rankings that are independent of one another. τ scores can be misleading in the case of system rankings of TREC submissions, however, because usually there are a set of very good runs and a set of very poor runs and each of those run sets always rank in the same order. Thus, in ad- dition to the τ score between the rankings, we also report drops, the largest (negative) difference in ranks experienced by some run [Voorhees, 2018]. A standard LOU test does not work for examining the collections built in the Deep Learning track because the HiCAL process does not depend on runs to provide documents and thus “unique relevant documents” is no longer a well- defined concept. A given team’s unique relevant documents can be removed from the depth-10 pools in the first stage, but then the HiCAL process must activated as it may select the removed documents to be judged in later stages. Since the HiCAL process is not deterministic (ties are broken randomly) and depends on the particular set of documents seen so far, the HiCAL process must be simulated multiple times using the original qrels’ judgments. The simulations proceeded as follows, where the entire process was performed separately for the Document Ranking and Passage Ranking collections. The original depth-10 pools (i.e., top-10 documents from all runs plus MARCO judgments) were fed to the HiCAL process for each of ten trials, where each trial used a separate initial seed for the random number generator. Within each trial, we tracked the documents encountered by HiCAL, creating a trace of the first 2500 documents encountered per topic. Any unjudged documents encountered by HiCAL were treated as not relevant. We created a qrels file from each trace by taking a prefix of the trace of length equal to the number of documents judged in the original qrels per topic. This resulted in 10 qrels files that could have resulted as the official qrels of the track (modulo the unjudged documents would have been judged). While these qrels are not identical to one another nor to the official qrels, they do rank systems very similarly. The leftmost segment of Table 6 shows the τ values and the drops for MAP scores over the set of ten trials3. The top part of the table gives statistics for the Document Ranking task collection and the bottom part for the Passage Ranking task collection. The rightmost segment of Table 6 gives the τ and maximum drop values for the experiments when one participating team is removed from the process. In these experiments, for each team in turn, we created initial pools consisting of the MARCO judged documents plus the top-10 documents from all runs except those runs submitted by the current team. 
This pool was fed to the HiCAL process for each of ten trials where the random number seed for a given trial was the same as in the all-teams simulation. As before, we created a trace of the documents that were encountered by HiCAL, and created a qrels file by taking a prefix of the trace of length equal to the number of documents judged in the official qrels. All runs were evaluated using this trial qrels, and the ranking induced by it was compared to the ranking induced by the official qrels. The table reports the smallest τ and largest maximum drop observed over all teams for that trial. In general, the ranking of systems is stable, providing support for the contention that the collections are reusable. A more detailed look at the variability in system rankings is given in Figure 9. The figure shows a heat map of the number of times a run was ranked at a given position over all simulation trials (120 trials for the Document Ranking collection and 130 trials for the Passage Ranking task). The ranks are plotted on the x-axis and the runs on the y-axis where they are sorted by their position in the ranking by the official qrels. The darker a plotted point the more times the run was ranked at that position. The figure makes it clear that a large majority of runs have a single dominant rank. When a run does have change ranks, it moves by a modest amount. # 5.2 Per-topic Budgets The qrels created from the simulations for the stability investigation were constructed to contain exactly the same number of judgments per topic as the official qrels contains for fair comparisons. But, of course, no such stopping criterion is available when first building a collection. The trace of documents encountered by HiCAL in the simulations provides a mechanism for exploring the effect of different stopping conditions on the final collection. We construct a qrels by applying a given stopping criterion to a document trace. For these experiments, all 52 topics start the process 3Prec(10) scores are identical over all trials because each trial starts with a depth-10 pool. 17 Table 6: Kendall’s τ and Maximum Drop in ranks observed in simulation trials. Each trial creates a qrels file of the same size as the official qrels, and the ranking of systems induced by that qrels is compared to the ranking induced by the official qrels. Using all team’s runs compared to the original (left columns) shows the effect of the nondeterminism of HiCAL. The remainder of the columns show the effect of omitting one team’s runs from the pools in the first stage. All vs. Official MAP τ 0.9915 0.9829 0.9801 0.9801 0.9829 0.9858 0.9886 0.9829 0.9801 0.9829 Drop 1 2 2 2 2 2 2 2 2 2 Omit Team vs. Official MAP Prec(10) τ 0.9856 0.9856 0.9856 0.9856 0.9827 0.9798 0.9856 0.9827 0.9856 0.9827 τ 0.9573 0.9659 0.9687 0.9687 0.9687 0.9687 0.9687 0.9687 0.9602 0.9659 Drop 3 3 3 3 3 3 3 3 3 3 Drop 5 5 5 5 5 5 5 5 4 5 a) Document Ranking task collection All vs. Official MAP τ 0.9970 0.9910 0.9880 0.9880 0.9880 0.9970 0.9940 0.9880 0.9880 0.9880 Drop 1 2 2 2 2 1 1 2 2 2 Omit Team vs. Official MAP Prec(10) τ 0.9939 0.9939 0.9939 0.9939 0.9939 0.9939 0.9939 0.9939 0.9939 0.9939 τ 0.9820 0.9819 0.9820 0.9820 0.9820 0.9820 0.9849 0.9820 0.9850 0.9820 Drop 2 2 2 2 2 2 2 2 2 2 Drop 2 2 2 2 2 2 2 2 2 2 b) Passage Ranking task collection and each may be included in the final qrels if the stopping criterion allows. Unjudged documents encountered in a simulation are treated as not relevant. 
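For the comparisons above, a small sketch of how a trial ranking can be scored against the official one is shown below. It uses SciPy's Kendall's τ and takes a run's "drop" to be the number of positions it falls relative to its official rank, which is one plausible reading of the definition used here.

```python
from scipy.stats import kendalltau

def compare_rankings(official_scores, trial_scores):
    """official_scores, trial_scores: {run_id: metric value (e.g., MAP) under each qrels}.
    Returns Kendall's tau between the two system orderings and the largest drop in rank
    experienced by any run when moving from the official to the trial ranking."""
    runs = sorted(official_scores)
    tau, _ = kendalltau([official_scores[r] for r in runs],
                        [trial_scores[r] for r in runs])
    rank_official = {r: i for i, r in enumerate(
        sorted(runs, key=official_scores.get, reverse=True))}
    rank_trial = {r: i for i, r in enumerate(
        sorted(runs, key=trial_scores.get, reverse=True))}
    max_drop = max(rank_trial[r] - rank_official[r] for r in runs)
    return tau, max_drop

official = {"A": 0.60, "B": 0.55, "C": 0.50, "D": 0.30}
trial    = {"A": 0.58, "B": 0.49, "C": 0.52, "D": 0.31}
print(compare_rankings(official, trial))  # tau ~ 0.67, max drop = 1 (run B falls one place)
```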
The simplest stopping criterion is to simply judge an equal number of documents per topic. Each of the X topics that starts the process gets totalBudget/X judgments, and a topic is included in the final qrels if at least some minimum number of relevant documents (we use 3) is found. The simplicity of this method arises from the fact that topics are independent of one another once the budget is determined, but equal allotment is known to be sub-optimal for finding the maximum possible viable topics since “small” topics will receive as many judgments as “large” topics. An alternative is a strict implementation of the process loosely followed in the track; we call this the Heuristic stopping criterion. In the Heuristic simulation experiments here, we capped the number of judgments any topic can receive at 1000, though that cap was never reached. Table 7 shows the number of judgments required and relative quality of the qrels created from these different stopping criteria for the two collections built in the track. Note that the only judgments available for these collection is from the Official qrels, so a method could never find more relevant than in the Official qrels. The statistics for the Document Ranking task collection are given in the top half of the table and for the Passage Ranking task collection in the bottom half. The statistics for the Official qrels is included in the table for reference. The qrels designated as “Original Size” are the same qrels as in the previous experiments above: pools are built from all runs but ten different trials of the HiCAL process, corresponding to ten different random number seeds, are tested. “Budget 400” and “Budget 500” correspond to a constant per-topic budget of 400 and 500 judgments respectively. The Total Number of Judgments column in the table gives the number of judgments used over all topics that start the process. These judgments must be made to determine whether a topic will be included in the final evaluation set, and 18 # Document Ranking collection, MAP Document Ranking collection, Prec(10) Passage Ranking collection, Prec(10) Passage Ranking collection, MAP Figure 9: Position at which runs ranked over all simulation trials. so must be accounted for in the budgeting process. The Number of Evaluation Topics is the number of topics that are included in the final qrels file based on the criterion’s specification. Original Size qrels always have the same number of judgments as the official qrels by construction, so the qrels built using that method in each trial has the same number of topics as the qrels from all other trials, namely the number of topics in the Official qrels. Constant budget qrels omit a topic only if the minimum number of relevant documents for a topic is not found. While it is possible for qrels created by a constant budget to differ in the number of topics, for the current collections each trial produced a qrels with the same number of topics as the other trials. The Heuristic method omits not only topics with too few relevant documents but topics with too many relevant as well. Again, different trials could lead to different numbers of topics in the qrels, but that did not happen in practice. The Heuristic method is the only method among those tested that can differ in the number of documents judged across trials. For that method, the table reports the mean number of judgments across the ten trials as well as the minimum and maximum number of judgments observed in a trial. 
The remaining columns in the table give the Kendall’s τ score and maximum drops for the ranking of systems produced by the test qrels as compared to the ranking produced by the Official qrels. As in the experiments above, the value reported is the smallest τ and largest drop observed across the ten trials. The main take-away from the results in Table 7 is that the HiCAL process is very stable across trials and is even robust to differences in stopping conditions within the ranges tested. The primary effect of the different stopping conditions is the inclusion or exclusion of topics affecting mean scores, not differences in individual topic scores. Averaging 19 Table 7: Effect of stopping criteria on qrels quality and number judgments required. Criterion Official Original Size Budget 400 Budget 500 Heuristic # Eval Topics 43 43 50 50 38 MAP τ — 0.9801 0.9316 0.9431 0.9260 Drop — 2 5 3 5 Prec(10) τ — 1.0000 0.9017 0.9017 Drop — 0 8 8 0.9565 2 a) Document Ranking task collection Criterion Official Original Size Budget 400 Budget 500 Heuristic # Eval Topics 43 43 49 49 46 MAP τ — 0.9880 0.9880 0.9880 0.9880 Drop — 2 1 1 1 Prec(10) τ — 1.0000 0.9727 0.9727 Drop — 0 3 3 0.9786 2 b) Passage Ranking task collection effects are the sole explanation for the differences in Prec(10) rankings: since the top-10 pool was always judged in all conditions, the only difference that can arise for a Prec(10) ranking is the change in the mean score when a topic is omitted from the evaluation set. A large majority of the topics omitted by the Heuristic method were eliminated by matching the condition |Relevant| > 0.6|Judged| once sufficiently many documents were judged (i.e., in step 2 above). LOU tests and other simulations are dependent on the results submitted to the track, so it is not possible to say with certainty that a given partially judged collection is reusable. Nonetheless, the current evidence suggests that the collections built in the Deep Learning track are high quality ad hoc collections. # 6 Conclusion The TREC 2019 Deep Learning Track introduced two large training datasets, for a document retrieval task and a passage retrieval task, generating two ad hoc test collections with good reusability. For both tasks, in the presence of large training data, this year’s non-neural network runs were outperformed by neural network runs. Among the neural approaches, the best-performing runs tended to use transfer learning, employing a pretrained language model such as BERT. In future it will be interesting to confirm and extend these results, understanding what mix of data and multi-stage training lead to the best overall performance. We compared reranking approaches to end-to-end retrieval approaches, and in this year’s track there was not a huge difference, with some runs performing well in both regimes. This is another result that would be interesting to track in future years, since we would expect that end-to-end retrieval should perform better if it can recall documents that are unavailable in a reranking subtask. This year there were not many non-neural runs, so it would be important in next year’s track to see more runs of all types, to further understand the relative performance of different approaches. Although this year’s test collections are of high quality, meaning that they are likely to give meaningful results when reused, overfitting can still be a problem if the test set is used multiple times during the development of a new retrieval approach. 
The most convincing way to show that a new approach is good is to submit TREC runs. There is no chance of overfitting, or any kind of repeated testing, because the test labels are not generated until after the submission deadline. Through a combination of test collection reuse (from past years) and blind evaluation (submitting runs) the Deep Learning Track is offering a framework for studying ad hoc search in the large data regime. 20 # References Nasreen Abdul-Jaleel, James Allan, W Bruce Croft, Fernando Diaz, Leah Larkey, Xiaoyan Li, Mark D Smucker, and Courtney Wade. Umass at trec 2004: Novelty and hard. 2004. Mustafa Abualsaud, Nimesh Ghelani, Haotian Zhang, Mark D Smucker, Gordon V Cormack, and Maura R Gross- man. A system for efficient high-recall retrieval. In The 41st international ACM SIGIR conference on research & development in information retrieval, pages 1317–1320, 2018. Payal Bajaj, Daniel Campos, Nick Craswell, Li Deng, Jianfeng Gao, Xiaodong Liu, Rangan Majumder, Andrew McNamara, Bhaskar Mitra, Tri Nguyen, et al. Ms marco: A human generated machine reading comprehension dataset. arXiv preprint arXiv:1611.09268, 2016. Marc G Bellemare, Yavar Naddaf, Joel Veness, and Michael Bowling. The arcade learning environment: An evaluation platform for general agents. Journal of Artificial Intelligence Research, 47:253–279, 2013. Leonid Boytsov, David Novak, Yury Malkov, and Eric Nyberg. Off the beaten path: Let’s replace term-based retrieval with k-nn search. In Proceedings of the 25th ACM International on Conference on Information and Knowledge Management, pages 1099–1108. ACM, 2016. Chris Buckley, Darrin Dimmick, Ian Soboroff, and Ellen Voorhees. Bias and the limits of pooling for large collections. Information retrieval, 10(6):491–508, 2007. Gordon V Cormack and Maura R Grossman. Beyond pooling. In The 41st International ACM SIGIR Conference on Research & Development in Information Retrieval, pages 1169–1172, 2018. Mostafa Dehghani, Hamed Zamani, Aliaksei Severyn, Jaap Kamps, and W Bruce Croft. Neural ranking models with weak supervision. In Proc. SIGIR, pages 65–74. ACM, 2017. Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In Proc. CVPR, pages 248–255. Ieee, 2009. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional trans- formers for language understanding. arXiv preprint arXiv:1810.04805, 2018. Laura Dietz, Manisha Verma, Filip Radlinski, and Nick Craswell. Trec complex answer retrieval overview. In Pro- ceedings of TREC, 2017. Armand Joulin, Edouard Grave, Piotr Bojanowski, and Tomas Mikolov. Bag of tricks for efficient text classification. arXiv preprint arXiv:1607.01759, 2016. Yann LeCun, Yoshua Bengio, and Geoffrey Hinton. Deep learning. Nature, 521(7553):436–444, 2015. Tie-Yan Liu. Learning to rank for information retrieval. Foundation and Trends in Information Retrieval, 3(3):225– 331, March 2009. Laurens van der Maaten and Geoffrey Hinton. Visualizing data using t-sne. Journal of Machine Learning Research, 9 (Nov):2579–2605, 2008. Irina Matveeva, Chris Burges, Timo Burkard, Andy Laucius, and Leon Wong. High accuracy retrieval with multiple nested ranker. In Proceedings of the 29th annual international ACM SIGIR conference on Research and development in information retrieval, pages 437–444. ACM, 2006. Bhaskar Mitra and Nick Craswell. An introduction to neural information retrieval. 
Foundations and Trends®) in Information Retrieval (to appear), 2018. Bhaskar Mitra and Nick Craswell. An updated duet model for passage re-ranking. arXiv preprint arXiv:1903.07666, 2019. Bhaskar Mitra, Fernando Diaz, and Nick Craswell. Learning to match using local and distributed representations of text for web search. In Proc. WWW, pages 1291–1299, 2017. Incorporating query term independence assumption for efficient retrieval and ranking using deep neural networks. arXiv preprint arXiv:1907.03693, 2019. Rodrigo Nogueira, Wei Yang, Jimmy Lin, and Kyunghyun Cho. Document expansion by query prediction. arXiv preprint arXiv:1904.08375, 2019. Stephen Robertson, Hugo Zaragoza, et al. The probabilistic relevance framework: Bm25 and beyond. Foundations and Trends® in Information Retrieval, 3(4):333-389, 2009. Corby Rosset, Damien Jose, Gargi Ghosh, Bhaskar Mitra, and Saurabh Tiwary. Optimizing query evaluations using reinforcement learning for web search. In Proc. SIGIR, pages 1193–1196. ACM, 2018. 21 Trevor Strohman, Donald Metzler, Howard Turtle, and W Bruce Croft. Indri: A language model-based search engine for complex queries. In Proceedings of the International Conference on Intelligent Analysis, volume 2, pages 2–6. Citeseer, 2005. Ellen M Voorhees. On building fair and reusable test collections using bandit techniques. In Proceedings of the 27th ACM International Conference on Information and Knowledge Management, pages 407–416, 2018. Lidan Wang, Jimmy Lin, and Donald Metzler. A cascade ranking model for efficient ranked retrieval. In Proceedings of the 34th international ACM SIGIR conference on Research and development in Information Retrieval, pages 105–114. ACM, 2011. Peilin Yang and Jimmy Lin. Reproducing and generalizing semantic term matching in axiomatic information retrieval. In European Conference on Information Retrieval, pages 369–381. Springer, 2019. Wei Yang, Kuang Lu, Peilin Yang, and Jimmy Lin. Critically examining the “neural hype”: Weak baselines and the additivity of effectiveness gains from neural ranking models. In Proc. SIGIR, pages 1129–1132. ACM, 2019a. Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, and Quoc V Le. Xlnet: Generalized autoregressive pretraining for language understanding. arXiv preprint arXiv:1906.08237, 2019b. Hamed Zamani, Mostafa Dehghani, W Bruce Croft, Erik Learned-Miller, and Jaap Kamps. From neural re-ranking to neural ranking: Learning a sparse representation for inverted indexing. In Proc. CIKM, pages 497–506. ACM, 2018. Zhaohao Zeng and Tetsuya Sakai. Bm25 pseudo relevance feedback using anserini at waseda university. In Proceed- ings The Open-Source IR Replicability Challenge (OSIRRC) Workshop, 2019. Justin Zobel. How reliable are the results of large-scale information retrieval experiments? In Proceedings of the 21st annual international ACM SIGIR conference on Research and development in information retrieval, pages 307–314, 1998. 22
{ "id": "1607.01759" }
2003.07853
Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation
Convolution exploits locality for efficiency at a cost of missing long range context. Self-attention has been adopted to augment CNNs with non-local interactions. Recent works prove it possible to stack self-attention layers to obtain a fully attentional network by restricting the attention to a local region. In this paper, we attempt to remove this constraint by factorizing 2D self-attention into two 1D self-attentions. This reduces computation complexity and allows performing attention within a larger or even global region. In companion, we also propose a position-sensitive self-attention design. Combining both yields our position-sensitive axial-attention layer, a novel building block that one could stack to form axial-attention models for image classification and dense prediction. We demonstrate the effectiveness of our model on four large-scale datasets. In particular, our model outperforms all existing stand-alone self-attention models on ImageNet. Our Axial-DeepLab improves 2.8% PQ over bottom-up state-of-the-art on COCO test-dev. This previous state-of-the-art is attained by our small variant that is 3.8x parameter-efficient and 27x computation-efficient. Axial-DeepLab also achieves state-of-the-art results on Mapillary Vistas and Cityscapes.
http://arxiv.org/pdf/2003.07853
Huiyu Wang, Yukun Zhu, Bradley Green, Hartwig Adam, Alan Yuille, Liang-Chieh Chen
cs.CV, cs.LG
ECCV 2020 camera-ready
null
cs.CV
20200317
20200806
0 2 0 2 g u A 6 ] V C . s c [ 2 v 3 5 8 7 0 . 3 0 0 2 : v i X r a # Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation Huiyu Wang!*, Yukun Zhu?, Bradley Green”, Hartwig Adam?, Alan Yuille, and Liang-Chieh Chen? 1 Johns Hopkins University 2 Google Research Abstract. Convolution exploits locality for efficiency at a cost of miss- ing long range context. Self-attention has been adopted to augment CNNs with non-local interactions. Recent works prove it possible to stack self-attention layers to obtain a fully attentional network by re- stricting the attention to a local region. In this paper, we attempt to remove this constraint by factorizing 2D self-attention into two 1D self- attentions. This reduces computation complexity and allows performing attention within a larger or even global region. In companion, we also propose a position-sensitive self-attention design. Combining both yields our position-sensitive axial-attention layer, a novel building block that one could stack to form axial-attention models for image classification and dense prediction. We demonstrate the effectiveness of our model on four large-scale datasets. In particular, our model outperforms all exist- ing stand-alone self-attention models on ImageNet. Our Axial-DeepLab improves 2.8% PQ over bottom-up state-of-the-art on COCO test-dev. This previous state-of-the-art is attained by our small variant that is 3.8× parameter-efficient and 27× computation-efficient. Axial-DeepLab also achieves state-of-the-art results on Mapillary Vistas and Cityscapes. Keywords: bottom-up panoptic segmentation, self-attention # Introduction Convolution is a core building block in computer vision. Early algorithms employ convolutional filters to blur images, extract edges, or detect features. It has been heavily exploited in modern neural networks [50,49] due to its efficiency and generalization ability, in comparison to fully connected models [2]. The success of convolution mainly comes from two properties: translation equivariance, and locality. Translation equivariance, although not exact [97], aligns well with the nature of imaging and thus generalizes the model to different positions or to images of different sizes. Locality, on the other hand, reduces parameter counts and M-Adds. However, it makes modeling long range relations challenging. Work done while an intern at Google. https://github.com/csrhddlam/axial-deeplab # 2 H. Wang et al. A rich set of literature has discussed approaches to modeling long range interactions in convolutional neural networks (CNNs). Some employ atrous con- volutions [34,77,67,13], larger kernel [70], or image pyramids [98,85], either de- signed by hand or searched by algorithms [103,12,60]. Another line of works adopts attention mechanisms. Attention shows its ability of modeling long range interactions in language modeling [83,88], speech recognition [22,11], and neu- ral captioning [92]. Attention has since been extended to vision, giving signifi- cant boosts to image classification [6], object detection [37], semantic segmen- tation [42], video classification [87], and adversarial defense [89]. These works enrich CNNs with non-local or long-range attention modules. Recently, stacking attention layers as stand-alone models without any spatial convolution has been proposed [68,38] and shown promising results. However, naive attention is computationally expensive, especially on large inputs. 
Ap- plying local constraints to attention, proposed by [68,38], reduces the cost and enables building fully attentional models. However, local constraints limit model receptive field, which is crucial to tasks such as segmentation, especially on high-resolution inputs. In this work, we propose to adopt axial-attention [33,42], which not only allows efficient computation, but recovers the large receptive field in stand-alone attention models. The core idea is to factorize 2D attention into two 1D attentions along height- and width-axis sequentially. Its efficiency enables us to attend over large regions and build models to learn long range or even global interactions. Additionally, most previous attention modules do not utilize positional information, which degrades attention’s ability in modeling position-dependent interactions, like shapes or objects at multiple scales. Recent works [68,38,6] introduce positional terms to attention, but in a context-agnostic way. In this paper, we augment the positional terms to be context-dependent, making our attention position-sensitive, with marginal costs. We show the effectiveness of our axial-attention models on ImageNet [73] for classification, and on three datasets (COCO [59], Mapillary Vistas [65], and Cityscapes [23]) for panoptic segmentation [48], instance segmentation, and se- mantic segmentation. In particular, on ImageNet, we build an Axial-ResNet by replacing the 3 × 3 convolution in all residual blocks [32] with our position- sensitive axial-attention layer, and we further make it fully attentional [68] by adopting axial-attention layers in the ‘stem’. As a result, our Axial-ResNet at- tains state-of-the-art results among stand-alone attention models on ImageNet. For segmentation tasks, we convert Axial-ResNet to Axial-DeepLab by replac- ing the backbones in Panoptic-DeepLab [19]. On COCO [59], our Axial-DeepLab outperforms the current bottom-up state-of-the-art, Panoptic-DeepLab [20], by 2.8% PQ on test-dev set. We also show state-of-the-art segmentation results on Mapillary Vistas [65], and Cityscapes [23]. To summarize, our contributions are four-fold: – The proposed method is the first attempt to build stand-alone attention models with large or global receptive field. – We propose position-sensitive attention layer that makes better use of posi- tional information without adding much computational cost. # Axial-DeepLab 3 – We show that axial attention works well, not only as a stand-alone model on image classification, but also as a backbone on panoptic segmentation, instance segmentation, and segmantic segmentation. – Our Axial-DeepLab improves significantly over bottom-up state-of-the-art on COCO, achieving comparable performance of two-stage methods. We also surpass previous state-of-the-art methods on Mapillary Vistas and Cityscapes. # 2 Related Work Top-down panoptic segmentation: Most state-of-the-art panoptic segmen- tation models employ a two-stage approach where object proposals are firstly generated followed by sequential processing of each proposal. We refer to such ap- proaches as top-down or proposal-based methods. Mask R-CNN [31] is commonly deployed in the pipeline for instance segmentation, paired with a light-weight stuff segmentation branch. For example, Panoptic FPN [47] incorporates a se- mantic segmentation head to Mask R-CNN [31], while Porzi et al . [71] append a light-weight DeepLab-inspired module [14] to the multi-scale features from FPN [58]. 
Additionally, some extra modules are designed to resolve the overlapping instance predictions by Mask R-CNN. TASCNet [52] and AUNet [55] propose a module to guide the fusion between ‘thing’ and ‘stuff’ predictions, while Liu et al . [64] adopt a Spatial Ranking module. UPSNet [91] develops an efficient parameter-free panoptic head for fusing ‘thing’ and ‘stuff’, which is further ex- plored by Li et al . [53] for end-to-end training of panoptic segmentation models. AdaptIS [80] uses point proposals to generate instance masks. Bottom-up panoptic segmentation: In contrast to top-down approaches, bottom-up or proposal-free methods for panoptic segmentation typically start with the semantic segmentation prediction followed by grouping ‘thing’ pixels into clusters to obtain instance segmentation. DeeperLab [93] predicts bound- ing box four corners and object centers for class-agnostic instance segmentation. SSAP [29] exploits the pixel-pair affinity pyramid [63] enabled by an efficient graph partition method [46]. BBFNet [8] obtains instance segmentation results by Watershed transform [84,4] and Hough-voting [5,51]. Recently, Panoptic- DeepLab [20], a simple, fast, and strong approach for bottom-up panoptic seg- mentation, employs a class-agnostic instance segmentation branch involving a simple instance center regression [45,82,66], coupled with DeepLab semantic segmentation outputs [13,15,16]. Panoptic-DeepLab has achieved state-of-the- art results on several benchmarks, and our method builds on top of it. Self-attention: Attention, introduced by [3] for the encoder-decoder in a neural sequence-to-sequence model, is developed to capture correspondence of tokens between two sequences. In contrast, self-attention is defined as applying attention to a single context instead of across multiple modalities. Its ability to directly encode long-range interactions and its parallelizability, has led to state-of-the-art performance for various tasks [83,40,26,69,75,25,56]. Recently, self-attention has been applied to computer vision, by augmenting CNNs with non-local or long-range modules. Non-local neural networks [87] show that self- attention is an instantiation of non-local means [10] and achieve gains on many # 4 H. Wang et al. vision tasks such as video classification and object detection. Additionally, [18,6] show improvements on image classification by combining features from self- attention and convolution. State-of-the-art results on video action recognition tasks [18] are also achieved in this way. On semantic segmentation, self-attention is developed as a context aggregation module that captures multi-scale con- text [42,27,102,99]. Efficient attention methods are proposed to reduce its com- plexity [76,42,56]. Additionally, CNNs augmented with non-local means [10] are shown to be more robust to adversarial attacks [89]. Besides discriminative tasks, self-attention is also applied to generative modeling of images [95,9,33]. Recently, [68,38] show that self-attention layers alone could be stacked to form a fully attentional model by restricting the receptive field of self-attention to a local square region. Encouraging results are shown on both image classification and object detection. In this work, we follow this direction of research and propose a stand-alone self-attention model with large or global receptive field, making self-attention models non-local again. Our models are evaluated on bottom-up panoptic segmentation and show significant improvements. 
# 3 Method

We begin by formally introducing our position-sensitive self-attention mechanism. Then, we discuss how it is applied to axial-attention and how we build stand-alone Axial-ResNet and Axial-DeepLab with axial-attention layers.

# 3.1 Position-Sensitive Self-Attention

Self-Attention: Self-attention mechanism is usually applied to vision models as an add-on to augment CNNs outputs [87,95,42]. Given an input feature map x ∈ R^{h×w×d_in} with height h, width w, and channels d_in, the output at position o = (i, j), y_o ∈ R^{d_out}, is computed by pooling over the projected input as:

y_o = \sum_{p \in \mathcal{N}} \mathrm{softmax}_p(q_o^\top k_p) \, v_p    (1)

where N is the whole location lattice, and queries q_o = W_Q x_o, keys k_o = W_K x_o, values v_o = W_V x_o are all linear projections of the input x_o, ∀o ∈ N. W_Q, W_K ∈ R^{d_q×d_in} and W_V ∈ R^{d_out×d_in} are all learnable matrices. The softmax_p denotes a softmax function applied to all possible p = (a, b) positions, which in this case is also the whole 2D lattice.

Self-attention pools values v_p globally, based on affinities x_o^\top W_Q^\top W_K x_p, allowing us to capture related but non-local context in the whole feature map, as opposed to convolution which only captures local relations. However, self-attention is extremely expensive to compute (O(h^2 w^2)) when the spatial dimension of the input is large, restricting its use to only high levels of a CNN (i.e., downsampled feature maps) or small images. Another drawback is that the global pooling does not exploit positional information, which is critical to capture spatial structures or shapes in vision tasks.

These two issues are mitigated in [68] by adding local constraints and positional encodings to self-attention. For each location o, a local m × m square region is extracted to serve as a memory bank for computing the output y_o. This significantly reduces its computation to O(hwm^2), allowing self-attention modules to be deployed as stand-alone layers to form a fully self-attentional neural network. Additionally, a learned relative positional encoding term is incorporated into the affinities, yielding a dynamic prior of where to look at in the receptive field (i.e., the local m × m square region). Formally, [68] proposes

y_o = \sum_{p \in \mathcal{N}_{m \times m}(o)} \mathrm{softmax}_p(q_o^\top k_p + q_o^\top r_{p-o}) \, v_p    (2)

where N_{m×m}(o) is the local m × m square region centered around location o = (i, j), and the learnable vector r_{p−o} ∈ R^{d_q} is the added relative positional encoding. The inner product q_o^\top r_{p−o} measures the compatibility from location p = (a, b) to location o = (i, j). We do not consider absolute positional encoding q_o^\top r_p, because it does not generalize well compared to the relative counterpart [68]. In the following paragraphs, we drop the term relative for conciseness.

In practice, d_q and d_out are much smaller than d_in, and one could extend single-head attention in Eq. (2) to multi-head attention to capture a mixture of affinities. In particular, multi-head attention is computed by applying N single-head attentions in parallel on x_o (with different W_Q^n, W_K^n, W_V^n, ∀n ∈ {1, 2, . . . , N}, for the n-th head), and then obtaining the final output z_o by concatenating the results from each head, i.e., z_o = concat_n(y_o^n). Note that positional encodings are often shared across heads, so that they introduce marginal extra parameters.

Position-Sensitivity: We notice that previous positional bias only depends on the query pixel x_o, not the key pixel x_p. However, the keys x_p could also have information about which location to attend to.
We therefore add a key-dependent positional bias term k_p^\top r^k_{p−o} to the affinities. Similarly, the values v_p do not contain any positional information in Eq. (2). In the case of large receptive fields or memory banks, it is unlikely that y_o contains the precise location from which v_p comes. Thus, previous models have to trade-off between using smaller receptive fields (i.e., small m × m regions) and throwing away precise spatial structures. In this work, we enable the output y_o to retrieve relative positions r^v_{p−o}, besides the content v_p, based on query-key affinities q_o^\top k_p. Formally,

y_o = \sum_{p \in \mathcal{N}_{m \times m}(o)} \mathrm{softmax}_p(q_o^\top k_p + q_o^\top r^q_{p-o} + k_p^\top r^k_{p-o}) \, (v_p + r^v_{p-o})    (3)

where the learnable r^k_{p−o} ∈ R^{d_q} is the positional encoding for keys, and r^v_{p−o} ∈ R^{d_out} is for values. Both vectors do not introduce many parameters, since they are shared across attention heads in a layer, and the number of local pixels |N_{m×m}(o)| is usually small. We call this design position-sensitive self-attention, which captures long range interactions with precise positional information at a reasonable computation overhead, as verified in our experiments.

Fig. 1. A non-local block (left) vs. our position-sensitive axial-attention applied along the width-axis (right). “⊗” denotes matrix multiplication, and “⊕” denotes element-wise sum. The softmax is performed on the last axis. Blue boxes denote 1 × 1 convolutions, and red boxes denote relative positional encoding. The channels d_in = 128, d_q = 8, and d_out = 16 are what we use in the first stage of ResNet after the ‘stem’

# 3.2 Axial-Attention

The local constraint, proposed by the stand-alone self-attention models [68], significantly reduces the computational costs in vision tasks and enables building fully self-attentional models. However, such a constraint sacrifices the global connection, making attention’s receptive field no larger than a depthwise convolution with the same kernel size. Additionally, the local self-attention, performed in local square regions, still has complexity quadratic to the region length, introducing another hyper-parameter to trade-off between performance and computation complexity. In this work, we propose to adopt axial-attention [42,33] in stand-alone self-attention, ensuring both global connection and efficient computation. Specifically, we first define an axial-attention layer on the width-axis of an image as simply a one-dimensional position-sensitive self-attention, and use a similar definition for the height-axis. To be concrete, the axial-attention layer along the width-axis is defined as follows.

y_o = \sum_{p \in \mathcal{N}_{1 \times m}(o)} \mathrm{softmax}_p(q_o^\top k_p + q_o^\top r^q_{p-o} + k_p^\top r^k_{p-o}) \, (v_p + r^v_{p-o})    (4)

One axial-attention layer propagates information along one particular axis. To capture global information, we employ two axial-attention layers consecutively for the height-axis and width-axis, respectively. Both of the axial-attention layers adopt the multi-head attention mechanism, as described above.

Axial-attention reduces the complexity to O(hwm). This enables a global receptive field, which is achieved by setting the span m directly to the whole input features. Optionally, one could also use a fixed m value, in order to reduce memory footprint on huge feature maps.
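To make Eq. (4) concrete, the following is a minimal, single-head sketch of the width-axis layer in PyTorch (the paper's implementation uses TensorFlow on TPUs). The class and variable names are ours; multi-head attention, the batch normalizations applied to the projections, and optional striding are omitted, and the code assumes the length of the attended axis equals the span.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AxialPositionSensitiveAttention(nn.Module):
    """Single-head sketch of Eq. (4): position-sensitive attention along the width-axis."""
    def __init__(self, d_in, d_q=8, d_out=16, span=56):
        super().__init__()
        self.d_q, self.d_out, self.span = d_q, d_out, span
        # 1x1 projections playing the role of W_Q, W_K, W_V.
        self.to_q = nn.Conv2d(d_in, d_q, 1, bias=False)
        self.to_k = nn.Conv2d(d_in, d_q, 1, bias=False)
        self.to_v = nn.Conv2d(d_in, d_out, 1, bias=False)
        # Relative positional encodings r^q, r^k (d_q-dim) and r^v (d_out-dim),
        # one learnable vector per relative offset in [-(span-1), span-1].
        self.r_q = nn.Parameter(0.02 * torch.randn(2 * span - 1, d_q))
        self.r_k = nn.Parameter(0.02 * torch.randn(2 * span - 1, d_q))
        self.r_v = nn.Parameter(0.02 * torch.randn(2 * span - 1, d_out))

    def forward(self, x):                                # x: (B, d_in, H, W), assumes W == span
        B, _, H, W = x.shape
        q, k, v = self.to_q(x), self.to_k(x), self.to_v(x)
        # Table of relative offsets: rel[o, p] = (p - o), shifted to a valid index.
        pos = torch.arange(W, device=x.device)
        rel = pos[None, :] - pos[:, None] + self.span - 1
        r_q, r_k, r_v = self.r_q[rel], self.r_k[rel], self.r_v[rel]   # each (W, W, dim)
        # Logits along the width-axis: q_o.k_p + q_o.r^q_{p-o} + k_p.r^k_{p-o}.
        logits = (torch.einsum('bchw,bchv->bhwv', q, k)
                  + torch.einsum('bchw,wvc->bhwv', q, r_q)
                  + torch.einsum('bchv,wvc->bhwv', k, r_k))
        attn = F.softmax(logits, dim=-1)                 # softmax over positions p
        # Output: sum_p attn_{o,p} * (v_p + r^v_{p-o}).
        out = (torch.einsum('bhwv,bchv->bchw', attn, v)
               + torch.einsum('bhwv,wvc->bchw', attn, r_v))
        return out                                       # (B, d_out, H, W)
```

Attention along the height-axis can reuse the same module by transposing the two spatial axes before and after the call, so a single implementation serves both directions.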
Fig. 2. An axial-attention block, which consists of two axial-attention layers operating along height- and width-axis sequentially. The channels d_in = 128, d_out = 16 is what we use in the first stage of ResNet after the ‘stem’. We employ N = 8 attention heads

Axial-ResNet: To transform a ResNet [32] to an Axial-ResNet, we replace the 3 × 3 convolution in the residual bottleneck block by two multi-head axial-attention layers (one for height-axis and the other for width-axis). Optional striding is performed on each axis after the corresponding axial-attention layer. The two 1×1 convolutions are kept to shuffle the features. This forms our (residual) axial-attention block, as illustrated in Fig. 2, which is stacked multiple times to obtain Axial-ResNets. Note that we do not use a 1 × 1 convolution in-between the two axial-attention layers, since matrix multiplications (W_Q, W_K, W_V) follow immediately. Additionally, the stem (i.e., the first strided 7 × 7 convolution and 3 × 3 max-pooling) in the original ResNet is kept, resulting in a conv-stem model where convolution is used in the first layer and attention layers are used everywhere else. In conv-stem models, we set the span m to the whole input from the first block, where the feature map is 56×56.

In our experiments, we also build a full axial-attention model, called Full Axial-ResNet, which further applies axial-attention to the stem. Instead of designing a special spatially-varying attention stem [68], we simply stack three axial-attention bottleneck blocks. In addition, we adopt local constraints (i.e., a local m×m square region as in [68]) in the first few blocks of Full Axial-ResNets, in order to reduce computational cost.

Axial-DeepLab: To further convert Axial-ResNet to Axial-DeepLab for segmentation tasks, we make several changes as discussed below.

Firstly, to extract dense feature maps, DeepLab [13] changes the stride and atrous rates of the last one or two stages in ResNet [32]. Similarly, we remove the stride of the last stage but we do not implement the ‘atrous’ attention module, since our axial-attention already captures global information for the whole input. In this work, we extract feature maps with output stride (i.e., the ratio of input resolution to the final backbone feature resolution) 16. We do not pursue output stride 8, since it is computationally expensive.

Secondly, we do not adopt the atrous spatial pyramid pooling module (ASPP) [14,15], since our axial-attention block could also efficiently encode the multi-scale or global information. We show in the experiments that our Axial-DeepLab without ASPP outperforms Panoptic-DeepLab [20] with and without ASPP.

Lastly, following Panoptic-DeepLab [20], we adopt exactly the same stem [81] of three convolutions, dual decoders, and prediction heads. The heads produce semantic segmentation and class-agnostic instance segmentation, and they are merged by majority voting [93] to form the final panoptic segmentation. In cases where the inputs are extremely large (e.g., 2177×2177) and memory is constrained, we resort to a large span m = 65 in all our axial-attention blocks.
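To connect Fig. 2 with the layer above, the sketch below stacks height- and width-axis attention inside a residual bottleneck, reusing the AxialPositionSensitiveAttention class from the previous code block. The channel widths, the single-head simplification, and the fixed 56×56 spatial size are illustrative assumptions rather than the paper's exact configuration; optional striding and the concatenation of eight 16-channel heads are omitted.

```python
import torch.nn as nn

class AxialBlock(nn.Module):
    """Residual axial-attention block in the spirit of Fig. 2 (channels are illustrative)."""
    def __init__(self, d_in=256, d_mid=128, span=56):
        super().__init__()
        self.down = nn.Sequential(nn.Conv2d(d_in, d_mid, 1, bias=False),
                                  nn.BatchNorm2d(d_mid), nn.ReLU(inplace=True))
        # Two axial layers, one per spatial axis; no 1x1 convolution in between.
        self.height_attn = AxialPositionSensitiveAttention(d_mid, d_q=8, d_out=d_mid, span=span)
        self.width_attn = AxialPositionSensitiveAttention(d_mid, d_q=8, d_out=d_mid, span=span)
        self.up = nn.Sequential(nn.Conv2d(d_mid, d_in, 1, bias=False), nn.BatchNorm2d(d_in))
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):                     # x: (B, d_in, span, span)
        y = self.down(x)
        # Height-axis attention: swap H and W, attend along the last axis, swap back.
        y = self.height_attn(y.transpose(2, 3)).transpose(2, 3)
        y = self.width_attn(y)                # width-axis attention
        return self.relu(x + self.up(y))      # residual connection
```

Stacking such blocks in place of the ResNet bottlenecks (and, for the full variant, replacing the stem with three of them) follows the construction described above.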
Note that we do not consider the axial span as a hyper-parameter because it is already sufficient to cover long range or even global context on several datasets, and setting a smaller span does not significantly reduce M-Adds. # 4 Experimental Results We conduct experiments on four large-scale datasets. We first report results with our Axial-ResNet on ImageNet [73]. We then convert the ImageNet pretrained Axial-ResNet to Axial-DeepLab, and report results on COCO [59], Mapillary Vistas [65], and Cityscapes [23] for panoptic segmentation, evaluated by panop- tic quality (PQ) [48]. We also report average precision (AP) for instance seg- mentation, and mean IoU for semantic segmentation on Mapillary Vistas and Cityscapes. Our models are trained using TensorFlow [1] on 128 TPU cores for ImageNet and 32 cores for panoptic segmentation. Training protocol: On ImageNet, we adopt the same training protocol as [68] for a fair comparison, except that we use batch size 512 for Full Axial- ResNets and 1024 for all other models, with learning rates scaled accordingly [30]. For panoptic segmentation, we strictly follow Panoptic-DeepLab [20], except using a linear warm up Radam [61] Lookahead [96] optimizer (with the same learning rate 0.001). All our results on panoptic segmentation use this setting. We note this change does not improve the results, but smooths our training curves. Panoptic-DeepLab yields similar result in this setting. # 4.1 ImageNet For ImageNet, we build Axial-ResNet-L from ResNet-50 [32]. In detail, we set din = 128, dout = 2dq = 16 for the first stage after the ‘stem’. We double them when spatial resolution is reduced by a factor of 2 [79]. Additionally, we multiply all the channels [36,74,35] by 0.5, 0.75, and 2, resulting in Axial- ResNet-{S, M, XL}, respectively. Finally, Stand-Alone Axial-ResNets are further generated by replacing the ‘stem’ with three axial-attention blocks where the first block has stride 2. Due to the computational cost introduced by the early layers, we set the axial span m = 15 in all blocks of Stand-Alone Axial-ResNets. We always use N = 8 heads [68]. In order to avoid careful initialization of WQ, WK, WV , rq, rk, rv, we use batch normalizations [43] in all attention layers. Tab. 1 summarizes our ImageNet results. The baselines ResNet-50 [32] (done by [68]) and Conv-Stem + Attention [68] are also listed. In the conv-stem setting, adding BN to attention layers of [68] slightly improves the performance by 0.3%. # Axial-DeepLab 9 Table 1. ImageNet validation set results. BN: Use batch normalizations in atten- tion layers. PS: Our position-sensitive self-attention. Full: Stand-alone self-attention models without spatial convolutions Method BN | PS | Full | Params | M-Adds | Top-1 Conv-Stem methods ResNet-50 [32,63] 25.6M 4.1B 76.9 Conv-Stem + Attention [68] 18.0M 3.5B TTA Conv-Stem + Attention v 18.0M 3.5B 77.7 Conv-Stem + PS-Attention v v 18.0M 3.7B 78.1 Conv-Stem + Axial-Attention v v 12.4M 2.8B 77.5 Fully self-attentional methods LR-Net-50 [38] v 23.3M 4.3B 77.3 Full Attention [63] v 18.0M 3.6B 77.6 Full Axial-Attention v v v 12.5M 3.3B 78.1 Our proposed position-sensitive self-attention (Conv-Stem + PS-Attention) fur- ther improves the performance by 0.4% at the cost of extra marginal compu- tation. Our Conv-Stem + Axial-Attention performs on par with Conv-Stem + Attention [68] while being more parameter- and computation-efficient. 
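As a rough illustration of how the Axial-ResNet-{S, M, L, XL} family above is parameterized, the helper below derives per-stage attention channels from the first-stage values, doubling them whenever the spatial resolution is halved and then applying a width multiplier. Reading "all the channels" as including d_q and d_out, and the four-stage layout, are our assumptions for the sake of the example.

```python
def axial_resnet_channels(width_mult=1.0, num_stages=4):
    """Illustrative per-stage (d_in, d_q, d_out) under a given width multiplier."""
    d_in, d_q, d_out = 128, 8, 16        # first stage after the stem (the L model)
    stages = []
    for _ in range(num_stages):
        stages.append((int(d_in * width_mult), int(d_q * width_mult), int(d_out * width_mult)))
        d_in, d_q, d_out = 2 * d_in, 2 * d_q, 2 * d_out   # double when resolution halves
    return stages

# S, M, L, XL roughly correspond to width multipliers 0.5, 0.75, 1.0, 2.0.
for name, w in [("S", 0.5), ("M", 0.75), ("L", 1.0), ("XL", 2.0)]:
    print(name, axial_resnet_channels(w))
```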
When comparing with other full self-attention models, our Full Axial-Attention out- performs Full Attention [68] by 0.5%, while being 1.44× more parameter-efficient and 1.09× more computation-efficient. Following [68], we experiment with different network widths (i.e., Axial- ResNets-{S,M,L,XL}), exploring the trade-off between accuracy, model parame- ters, and computational cost (in terms of M-Adds). As shown in Fig. 3, our pro- posed Conv-Stem + PS-Attention and Conv-Stem + Axial-Attention already outperforms ResNet-50 [32,68] and attention models [68] (both Conv-Stem + Attention, and Full Attention) at all settings. Our Full Axial-Attention further attains the best accuracy-parameter and accuracy-complexity trade-offs. # 4.2 COCO The ImageNet pretrained Axial-ResNet model variants (with different channels) are then converted to Axial-DeepLab model variant for panoptic segmentation tasks. We first demonstrate the effectiveness of our Axial-DeepLab on the chal- lenging COCO dataset [59], which contains objects with various scales (from less than 32 × 32 to larger than 96 × 96). Val set: In Tab. 2, we report our validation set results and compare with other bottom-up panoptic segmentation methods, since our method also belongs to the bottom-up family. As shown in the table, our single-scale Axial-DeepLab-S outperforms DeeperLab [93] by 8% PQ, multi-scale SSAP [29] by 5.3% PQ, and single-scale Panoptic-DeepLab by 2.1% PQ. Interestingly, our single-scale Axial- DeepLab-S also outperforms multi-scale Panoptic-DeepLab by 0.6% PQ while 10 # H. Wang et al. 79} te 794 =78 = 784 g eg” > > 877 77+ 5 5 < 16 Full Axial-Attention <6) —¥— Full Axial-Attention a Conv-Stem + Axial-Attention a =¥= Conv-Stem + Axial-Attention fd Conv-Stem + PS-Attention fd —e— Conv-Stem + PS-Attention 75 Conv-Stem + Attention —a— Conv-Stem + Attention Full Attention a= Full Attention 74 »» ResNet-50 ~@-» ResNet-50 10 20 30 40 50 2 4 6 8 10 12 Parameters (M) M-Adds (B) Fig. 3. Comparing parameters and M-Adds against accuracy on ImageNet classifi- cation. Our position-sensitive self-attention (Conv-Stem + PS-Attention) and axial- attention (Conv-Stem + Axial-Attention) consistently outperform ResNet-50 [32,68] and attention models [68] (both Conv-Stem + Attention, and Full Attention), across a range of network widths (i.e., different channels). Our Full Axial-Attention works the best in terms of both parameters and M-Adds Table 2. COCO val set. MS: Multi-scale inputs Method Backbone |MS]|Params M-Adds| PQ PQT™ PQ** DeeperLab [933] Xception-71 33.8 0 - - SSAP [29] ResNet-101 v 36.5 - - Panoptic-DeepLab [20]| Xception-71 46.7M 274.0B | 39.7 43.9 33.2 Panoptic-DeepLab [20]| Xception-71 ¥ | 46.7M 3081.4B]41.2 44.9 35.7 Axial-DeepLab-S Axial-ResNet-S 12.1M 110.4B /41.8 46.1 35.2 Axial-DeepLab-M Axial-ResNet-M 25.9M 209.9B |42.9 47.6 35.8 Axial-DeepLab-L Axial-ResNet-L 44.9M 343.9B |43.4 48.5 35.6 Axial-DeepLab-L Axial-ResNet-L | Y | 44.9M 3867.7B| 43.9 48.6 36.8 being 3.8× parameter-efficient and 27× computation-efficient (in M-Adds). In- creasing the backbone capacity (via large channels) continuously improves the performance. Specifically, our multi-scale Axial-DeepLab-L attains 43.9% PQ, outperforming Panoptic-DeepLab [20] by 2.7% PQ. Test-dev set: As shown in Tab. 3, our Axial-DeepLab variants show con- sistent improvements with larger backbones. 
Our multi-scale Axial-DeepLab-L attains the performance of 44.2% PQ, outperforming DeeperLab [93] by 9.9% PQ, SSAP [29] by 7.3% PQ, and Panoptic-DeepLab [20] by 2.8% PQ, setting a new state-of-the-art among bottom-up approaches. We also list several top- performing methods adopting the top-down approaches in the table for reference. Scale Stress Test: In order to verify that our model learns long range in- teractions, we perform a scale stress test besides standard testing. In the stress test, we train Panoptic-DeepLab (X-71) and our Axial-DeepLab-L with the stan- dard setting, but test them on out-of-distribution resolutions (i.e., resize the in- Axial-DeepLab 11 Table 3. COCO test-dev set. MS: Multi-scale inputs Aethod Backbone MS | PQ PQ™ ~~ PQ* Top-down panoptic segmentation methods TASCNet [52] ResNet-50 40.7 47.0 31.0 Panoptic-FPN [47] ResNet-101 40.9 48.3 29.7 AdaptlIS [80] ResNeXt-101 v 42.8 2 36.7 AUNet [55] ResNeXt-152 46.5 8 32.5 UPSNet [91] DCN-101 [24] v 46.6 2 36.7 Li et al. [53] DCN-101 [24] 47.2 5 37.7 SpatialFlow [17] DCN-101 [24] v 47.3 3.5 37.9 SOGNet [14] DCN-101 [24] v 47.8 - - # Bottom-up panoptic segmentation methods DeeperLab [93] Xception-71 34.3 37.5 29.6 SSAP [29] ResNet-101 v 36.9 40.1 32.0 Panoptic-DeepLab [20] Xception-71 v 41.4 45.1 35.9 Axial-DeepLab-S Axial-ResNet-S 42.2 46.5 Axial-DeepLab-M Axial-ResNet-M 43.2 48.1 Axial-DeepLab-L Axial-ResNet-L 43.6 48.9 Axial-DeepLab-L Axial-ResNet-L v 44.2 49.2 gaol se PQ (thing) Pa —— PQ 1 5 59 -0- PQ (stuff) ° g 2 220 g #10 4 Pye e on? 0. 0.0 05 10 15 2.0 25 3.0 3.5 40 Testing Resolution Ratio Fig. 4. Scale stress test on COCO val set. Axial-DeepLab gains the most when tested on extreme resolutions. On the x-axis, ratio 4.0 means inference with resolution 4097×4097 put to different resolutions). Fig. 4 summarizes our relative improvements over Panoptic-DeepLab on PQ, PQ (thing) and PQ (stuff). When tested on huge im- ages, Axial-DeepLab shows large gain (30%), demonstrating that it encodes long range relations better than convolutions. Besides, Axial-DeepLab improves 40% on small images, showing that axial-attention is more robust to scale variations. # 4.3 Mapillary Vistas We evaluate our Axial-DeepLab on the large-scale Mapillary Vistas dataset [65]. We only report validation set results, since the test server is not available. 12 H. Wang et al. Table 4. Mapillary Vistas validation set. MS: Multi-scale inputs # Method # MS Params M-Adds PQ PQTh PQSt AP mIoU # Top-down panoptic segmentation methods TASCNet [52] 32.6 31. 18.5 - TASCNet [52] v 34.3 34, 20.4 - AdaptIS [80] 35.9 312 - Seamless [71] 37.7 33. 16.4 50.4 Bottom-up panoptic segmentation methods DeeperLab [93] 32.0 Panoptic-DeepLab (Xception-71 [21,72]) [20] 46.7M 1.247 |37.7 : Panoptic-DeepLab (Xception-71 [21,72]) [20]] ¥ | 46.7M 31.35T |40.3. ¢ Panoptic-DeepLab (HRNet-W48 [86]) [20] v¥ | 71.7M 58.477 |39.3 Panoptic-DeepLab (Auto-XL++ [60]) [20] | ¥ | 72.2M_ 60.55T |40.3 32.0 v¥ | 71.7M 58.477 |39.3 ¥ | 72.2M_ 60.55T |40.3 - 55.3 47.4 14.9 55.4 49.3 17.2 56.8 17.2 55.4 16.9 57.6 - # Axial-DeepLab-L Axial-DeepLab-L 44.9M 1.55T |40.1 32.7 ¥ |44.9M 39.35T |41.1 33.4 49.8 16.7 57.6 51.3 17.2 58.4 Val set: As shown in Tab. 4, our Axial-DeepLab-L outperforms all the state- of-the-art methods in both single-scale and multi-scale cases. Our single-scale Axial-DeepLab-L performs 2.4% PQ better than the previous best single-scale Panoptic-DeepLab (X-71) [20]. 
In multi-scale setting, our lightweight Axial- DeepLab-L performs better than Panoptic-DeepLab (Auto-DeepLab-XL++), not only on panoptic segmentation (0.8% PQ) and instance segmentation (0.3% AP), but also on semantic segmentation (0.8% mIoU), the task that Auto- DeepLab [60] was searched for. Additionally, to the best of our knowledge, our Axial-DeepLab-L attains the best single-model semantic segmentation result. # 4.4 Cityscapes Val set: In Tab. 5 (a), we report our Cityscapes validation set results. With- out using extra data (i.e., only Cityscapes fine annotation), our Axial-DeepLab achieves 65.1% PQ, which is 1% better than the current best bottom-up Panoptic- DeepLab [20] and 3.1% better than proposal-based AdaptIS [80]. When using extra data (e.g., Mapillary Vistas [65]), our multi-scale Axial-DeepLab-XL at- tains 68.5% PQ, 1.5% better than Panoptic-DeepLab [20] and 3.5% better than Seamless [71]. Our instance segmentation and semantic segmentation results are respectively 1.7% and 1.5% better than Panoptic-DeepLab [20]. Test set: Tab. 5 (b) shows our test set results. Without extra data, Axial- DeepLab-XL attains 62.8% PQ, setting a new state-of-the-art result. Our model further achieves 66.6% PQ, 39.6% AP, and 84.1% mIoU with Mapillary Vistas pretraining. Note that Panoptic-DeepLab [20] adopts the trick of output stride 8 during inference on test set, making their M-Adds comparable to our XL models. # 4.5 Ablation Studies We perform ablation studies on Cityscapes validation set. # Axial-DeepLab 13 Table 5. Cityscapes val set and test set. MS: Multi-scale inputs. C: Cityscapes coarse annotation. V: Cityscapes video. MV: Mapillary Vistas (a) Cityscapes set (b) Cityscapes test set Method [Extra Data|MS|PQ AP mIoU Method Extra PQ AP mloU AdaptIS [80] Y¥ |62.0 36.3 79.2 GFF-Net [54] - = 823 SSAP [9] Tiel aya Zhu et al. [101] C,V,MV] - - 83.5 Panoptic-DeepLab [20] 63.0 35.3 80.5 AdaptIS [80 - 325 - Panoptic-DeepLab [20] ¥ |64.1 38.5 81.5 UPSNet [91 coco | - 330 - 2] Kole - 364 - ar Depta mame mo FANE | coco | ae DeepLab-L Y |64.7 37.9 81.5 yarans : a“ ial-DeepLab-XL 64.4 36.7 80.6 SSAP [29] 58.9 32.7 - Axial-DeepLab-XL Y 5.139.0 81.1 Li . [53 610 - - ic-DeepLab [20] 62.3 34.6 79.4 SpatialFlow [17] COCO |v 625 - = TAscNet [52] coco |6u7 - - Seamless [71] MV 65.0 - 80.7 Seamless [71] mv. le6- Panoptic-DeepLab [20]] MV 65.3 38.8 82.5 Li et al. [53] coco Panoptic-DeepLab [20]] MV — | ¥ |67.0 42.5 83.1 Panoptic-DeepLab [20]] | MV Axial-DeepLab-L MV 66.5 40.2 83.2 Axial-DeepLab-L 62.7 33.3 79.5 MV |v |67.7 42.9 83.8 Axial-DeepLab-XL 62.8 34.0 79.9 MV 67.8 41.9 84.2 Axial-DeepLab-L MV — [65.6 38.1 83.1 Axial-DeepLab-XL MV |v |68.544.2 84.6 Axial-DeepLab-XL MV [66.6 39.6 84.1 Table 6. Ablating self-attention variants on Cityscapes val set. ASPP: Atrous spatial pyramid pooling. PS: Our position-sensitive self-attention Backbone ASPP PS | Params M-Adds| PQ AP mloU ResNet-50 [32] (our impl.) ResNet-50 [32] (our impl.) v Attention [68] (our impl.) Attention [68] (our impl.) v 24.8M 374.8B | 58.1 30.0 73.3 30.0M 390.0B | 59.8 32.6 77.8 17.3M 317.7B | 58.7 31.9 75.8 22.5M 332.9B | 60.9 30.0 78.2 PS-Attention PS-Attention v 17.3M 326.7B | 59.9 32.2 76.3 22.5M 341.9B | 61.5 33.1 79.1 Axial-DeepLab-S 12.1M 220.8B | 62.6 34.9 80.5 Axial-DeepLab-M Axial-DeepLab-L Axial-DeepLab-XL 25.9M 419.6B | 63.1 35.6 = 80.3 44.9M 687.4B | 63.9 35.8 81.0 173.0M 2446.8B | 64.4 36.7 80.6 NSS] NL NN Importance of Position-Sensitivity and Axial-Attention: In Tab. 1, we experiment with attention models on ImageNet. 
In this ablation study, we transfer them to Cityscapes segmentation tasks. As shown in Tab. 6, all variants outperform ResNet-50 [32]. Position-sensitive attention performs better than previous self-attention [68], which aligns with ImageNet results in Tab. 1. How- ever, employing axial-attention, which is on-par with position-sensitive attention on ImageNet, gives more than 1% boosts on all three segmentation tasks (in PQ, AP, and mIoU), without ASPP, and with fewer parameters and M-Adds, suggest- ing that the ability to encode long range context of axial-attention significantly improves the performance on segmentation tasks with large input images. 14 H. Wang et al. Table 7. Varying axial-attention span on Cityscapes val set Backbone Span Params M-Adds PQ AP mIoU ResNet-101 - 43.8M 530.0B 59.9 31.9 74.6 Axial-ResNet-L Axial-ResNet-L Axial-ResNet-L Axial-ResNet-L Axial-ResNet-L 5 × 5 9 × 9 17 × 17 33 × 33 65 × 65 44.9M 44.9M 44.9M 44.9M 44.9M 617.4B 622.1B 631.5B 650.2B 687.4B 59.1 61.2 62.8 63.8 64.2 31.3 31.1 34.0 35.9 36.3 74.5 77.6 79.5 80.2 80.6 Importance of Axial-Attention Span: In Tab. 7, we vary the span m (i.e., spatial extent of local regions in an axial block), without ASPP. We observe that a larger span consistently improves the performance at marginal costs. # 5 Conclusion and Discussion In this work, we have shown the effectiveness of proposed position-sensitive axial- attention on image classification and segmentation tasks. On ImageNet, our Axial-ResNet, formed by stacking axial-attention blocks, achieves state-of-the- art results among stand-alone self-attention models. We further convert Axial- ResNet to Axial-DeepLab for bottom-up segmentation tasks, and also show state-of-the-art performance on several benchmarks, including COCO, Mapil- lary Vistas, and Cityscapes. We hope our promising results could establish that axial-attention is an effective building block for modern computer vision models. Our method bears a similarity to decoupled convolution [44], which factorizes a depthwise convolution [78,36,21] to a column convolution and a row convolu- tion. This operation could also theoretically achieve a large receptive field, but its convolutional template matching nature limits the capacity of modeling multi- scale interactions. Another related method is deformable convolution [24,100,28], where each point attends to a few points dynamically on an image. However, deformable convolution does not make use of key-dependent positional bias or content-based relation. In addition, axial-attention propagates information densely, and more efficiently along the height- and width-axis sequentially. Although our axial-attention model saves M-Adds, it runs slower than con- volutional counterparts, as also observed by [68]. This is due to the lack of specialized kernels on various accelerators for the time being. This might well be improved if the community considers axial-attention as a plausible direction. # Acknowledgments We thank Niki Parmar for discussion and support; Ashish Vaswani, Xuhui Jia, Raviteja Vemulapalli, Zhuoran Shen for their insightful comments and sugges- tions; Maxwell Collins and Blake Hechtman for technical support. This work is supported by Google Faculty Research Award and NSF 1763705. Axial-DeepLab 15 Table 8. 
Runtime of Axial-ResNet-L on a 224×224 image Model Our Profile (ms) [7] (ms) Axial-ResNet-L 16.54 - Stand-Alone-L [68] Xception-71 [21,16] ResNet-101 [32] ResNet-152 [32] ResNeXt-101 (32x4d) [90] SE-ResNet-101 [39] SE-ResNeXt-101 (32x4d) [39] DenseNet-201 (k=32) [41] 18.05 24.85 10.08 14.43 - - - - - - 8.9 14.31 17.05 15.10 24.96 17.15 # Appendix A Runtime In this section, we profile our Conv-Stem Axial-ResNet-L in a common setting: 224x224 feed-forward with batch size 1, on a V100 GPU, averaged over 5 runs. The time includes input standardization, and the last projection to 1000 logits. Our model takes 16.54 ms. For comparison, we list our TensorFlow runs of some popular models at hand (with comparable flops). To provide more context, we take entries from [7] for reference (A Titan X Pascal is used in [7], but the PyTorch code is more optimized). Our runtime is roughly at the same level of ResNeXt-101 (32x4d), SE-ResNet-101, ResNet-152, and DenseNet-201 (k=32). Note that we directly benchmark with our code optimized for TPU execution, with channels being the last dimension. Empirically, the generated graph involves transposing between NCHW and NHWC, before and after almost every conv2d operation. (This effect also puts Xception-71 at a disadvantage because of its separable conv design.) Further optimizing this could lead to faster inference. We observe that our Conv-Stem Axial-ResNet-L runs faster than Conv-Stem Stand-Alone-L [68], although we split one layer into two. This is because our axial-attention makes better use of existing kernels: – The width-axis attention is parallelizable over height-axis, i.e. this is a large batch of 1d row operations (the batch size is the height of the input). – Axial attention avoids extracting 2d memory blocks with pads, splits and concatenations, which are not efficient on accelerators. # Appendix B Axial-Decoder Axial-DeepLab employs dual convolutional decoders [20]. In this section, we explore a setting with a single axial-decoder instead. In the axial-decoder module, we apply one axial-attention block at each upsampling stage. In Fig. 5, we show an example axial-decoder in Axial-DeepLab-L from output stride 8 to output stride 4. We apply three such blocks, analogous to the three 5×5 convolutions in Panoptic-DeepLab [20]. 16 H. Wang et al. elke atetataaetatatatal lala aa aatatal ~, 1 | y z\ ' x i sco J Up | concat +! concat jt] Com” 1x1 sample [1 y >I )* | Axt 1 1 HW og HXWX128 | (HxWx16)x8 | (HxW x16)x8 | HxWx256 oxox 2°2 Multi-Head Attention W128! mylti-Head Attention W128 \___Height-Axis ' Width-Axis ; | Conv 4] Up 1x1 sample HOW HXWX256 HOW +X X256 Conv HOW ose 2*7 a) steal HXWXx256 2*2 HXWXx256 Encoder Feature HxWx256 Fig. 5. An axial-decoder block. We augment an axial-attention block with up- samplings, and encoder features Table 9. Ablating output strides and decoder types on Cityscapes val set. ASPP: Atrous spatial pyramid pooling. OS: Output stride (i.e., the ratio of image resolution to final feature resolution in backbone). AD: Use axial-decoder in Axial-DeepLab Backbone ASPP | OS | AD | Params M-Adds | PQ AP mloU Xception-71 v |i6| | 46.7M 547.7B | 63.2 35.0 80.2 Axial-ResNet-L 16 44.9M 687.4B | 63.9 35.8 81.0 Axial-ResNet-L 32 45.2M 525.2B | 63.9 36.3 80.9 Axial-ResNet-L 16 | v 45.4M 722.7B | 63.7 36.9 80.7 Axial-ResNet-L 32 | vo 45.9M 577.8B | 64.0 37.1 81.0 Importance of Output Stride and Axial-Decoder: In Tab. 
9, we ex- periment with the effect of output stride and axial-decoder (i.e., replacing dual decoders with axial-attention blocks). As shown in the table, our models are robust to output stride, and using axial-decoder is able to yield similar results. Our simple axial-decoder design works as well as dual convolutional decoders. # Appendix C COCO Visualization In Fig. 6, we visualize some panoptic segmentation results on COCO val set. Our Axial-DeepLab-L demonstrates robustness to occlusion, compared with Panoptic-DeepLab (Xception-71). In Fig. 7 and Fig. 8, we visualize the attention maps of our Axial-DeepLab-L on COCO val set. We visualize a low level block (stage 3 block 2) and a high level block (stage 4 block 3), which are respectively the first block and the last block with resolution 65×65, in the setting of output stride 16. We notice that in our multi-head axial-attention, some heads learn to focus on local details while some others focus on long range context. Additionally, we find that some heads are able to capture positional information and some others learn to correlate with semantic concepts # Axial-DeepLab Original Image Axial-DeepLab Panoptic-DeepLab Ground Truth Original Image # Axial-DeepLab # Panoptic-DeepLab Ground Truth Fig. 6. Visualization on COCO val set. Axial-DeepLab shows robustness to occlusion. In row 1 and row 4, Axial-DeepLab captures the occluded left leg and the remote control cable respectively, which are not even present in ground truth labels. In the last row, Axial-DeepLab distinguishes one person occluding another correctly, whereas the ground truth treats them as one instance 17 18 # H. Wang et al. Original Image Panoptic Prediction column head 1 column head 2 column head 3 column head 4 column head 5 column head 6 column head 7 column head 8 row head 1 row head 2 row head 3 row head 4 row head 5 row head 6 row head 7 row head 8 Fig. 7. Attention maps in block 2 of stage 3. We take a row of pixels, and visualize their column (height-axis) attention in all 8 heads. Then, we take a column, and visualize their row attention. Blue pixels are queries that we take, and red pixels indicate the corresponding attention weights. We notice that column head 1 corresponds to human heads, while column head 4 correlates with the field only. Row head 6 focuses on relatively local regions whereas column head 5 pools all over the whole image Axial-DeepLab 19 Original Image Panoptic Prediction column head 1 column head 2 column head 3 column head 4 column head 5 column head 6 column head 7 column head 8 row head 1 row head 2 row head 3 row head 4 row head 5 row head 6 row head 7 row head 8 Fig. 8. Attention maps in block 3 of stage 4. They focus more on long range context than those in Fig. 7, although all of them have a global receptive field 20 # H. Wang et al. 20.0 : @m@m™ Axial-DeepLab-S 175 @mm Panoptic-DeepLab @@m™ Axial-DeepLab-L Training loss PoP oe o NY eo twoio Pl u 5.0 2.5 0.0 Heatmap loss Offset loss Semantic loss Fig. 9. Training loss on COCO. Equipped with position-sensitive axial-attention, our Axial-DeepLab fits data distribution better than Panoptic-DeepLab [20], especially on the task of predicting the offset to the object center, which requires precise and long range positional information In Fig. 9, we compare Axial-DeepLab with Panoptic-DeepLab [20], in terms of the three training loss functions, defined in Panoptic-DeepLab [20]. We observe that Axial-DeepLab is able to fit data better, especially on the offset prediction task. 
This also demonstrates the effectiveness of our position-sensitive attention design, and the long range modeling ability of axial-attention. # Appendix D Raw Data In companion to Fig. 3 of the main paper where we compare parameters and M- Adds against accuracy on ImageNet classification, we also show the performance of our models in Tab. 10. In companion to Fig. 4 of the main paper where we demonstrate the relative improvements of Axial-DeepLab-L over Panoptic-DeepLab (Xception-71) in our scale stress test on COCO, we also show the raw performance of both models in Fig. 10. Axial-DeepLab 21 Table 10. ImageNet validation set results. Width: the width multiplier that scales the models up. Full: Stand-alone self-attention models without spatial convolutions Method Width Full Params M-Adds Top-1 Conv-Stem + PS-Attention 0.5 5.1M 1.2B 75.5 Conv-Stem + PS-Attention 0.75 10.5M 2.3B 77.4 Conv-Stem + PS-Attention 1.0 18.0M 3.7B 78.1 Conv-Stem + PS-Attention 1.25 27.5M 5.6B 78.5 Conv-Stem + PS-Attention 1.5 39.0M 7.8B 79.0 Conv-Stem + Axial-Attention 0.375 7.4M 1.8B 76.4 Conv-Stem + Axial-Attention 0.5 12.4M 2.8B 77.5 Conv-Stem + Axial-Attention 0.75 26.4M 5.7B 78.6 Conv-Stem + Axial-Attention 1.0. 45.6M 9.6B 79.0 Full Axial-Attention 0.5 v 12.5M 3.3B 78.1 Full Axial-Attention 0.75 v 26.5M 6.8B 79.2 Full Axial-Attention 1.0 v 45.8M 11.6B 79.3 —e— Axial-DeepLab-L 40 +B. Panoptic-DeepLab 35 Ss 9 30 254} : “a a *e 20 wey ‘a 00 05 10 15 20 2.5 30 35 40 Testing Resolution Ratio Fig. 10. Scale stress test on COCO val set # References 1. Abadi, M., Barham, P., Chen, J., Chen, Z., Davis, A., Dean, J., Devin, M., Ghe- mawat, S., Irving, G., Isard, M., Kudlur, M., Levenberg, J., Monga, R., Moore, S., Murray, D.G., Steiner, B., Tucker, P., Vasudevan, V., Warden, P., Wicke, M., Yu, Y., Zheng, X.: Tensorflow: A system for large-scale machine learning. In: Proceedings of the 12th USENIX Conference on Operating Systems Design and Implementation (2016) 8 2. Ackley, D.H., Hinton, G.E., Sejnowski, T.J.: A learning algorithm for boltzmann machines. Cognitive science 9(1), 147–169 (1985) 1 3. Bahdanau, D., Cho, K., Bengio, Y.: Neural machine translation by jointly learning to align and translate. arXiv:1409.0473 (2014) 3 22 H. Wang et al. 4. Bai, M., Urtasun, R.: Deep watershed transform for instance segmentation. In: CVPR (2017) 3 5. Ballard, D.H.: Generalizing the hough transform to detect arbitrary shapes. Pat- tern Recognition (1981) 3 6. Bello, I., Zoph, B., Vaswani, A., Shlens, J., Le, Q.V.: Attention augmented con- volutional networks. In: ICCV (2019) 2, 4 7. Bianco, S., Cadene, R., Celona, L., Napoletano, P.: Benchmark analysis of repre- sentative deep neural network architectures. IEEE Access 6, 64270–64277 (2018) 15 8. Bonde, U., Alcantarilla, P.F., Leutenegger, S.: Towards bounding-box free panop- tic segmentation. arXiv:2002.07705 (2020) 3 9. Brock, A., Donahue, J., Simonyan, K.: Large scale gan training for high fidelity natural image synthesis. In: ICLR (2019) 4 10. Buades, A., Coll, B., Morel, J.M.: A non-local algorithm for image denoising. In: CVPR (2005) 3, 4 11. Chan, W., Jaitly, N., Le, Q., Vinyals, O.: Listen, attend and spell: A neural net- work for large vocabulary conversational speech recognition. In: ICASSP (2016) 2 12. Chen, L.C., Collins, M., Zhu, Y., Papandreou, G., Zoph, B., Schroff, F., Adam, H., Shlens, J.: Searching for efficient multi-scale architectures for dense image prediction. In: NeurIPS (2018) 2 13. 
Chen, L.C., Papandreou, G., Kokkinos, I., Murphy, K., Yuille, A.L.: Semantic image segmentation with deep convolutional nets and fully connected crfs. In: ICLR (2015) 2, 3, 7 14. Chen, L.C., Papandreou, G., Kokkinos, I., Murphy, K., Yuille, A.L.: Deeplab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected crfs. IEEE TPAMI (2017) 3, 7 15. Chen, L.C., Papandreou, G., Schroff, F., Adam, H.: Rethinking atrous convolution for semantic image segmentation. arXiv:1706.05587 (2017) 3, 7 16. Chen, L.C., Zhu, Y., Papandreou, G., Schroff, F., Adam, H.: Encoder-decoder with atrous separable convolution for semantic image segmentation. In: ECCV (2018) 3, 15 17. Chen, Q., Cheng, A., He, X., Wang, P., Cheng, J.: Spatialflow: Bridging all tasks for panoptic segmentation. arXiv:1910.08787 (2019) 11, 13 18. Chen, Y., Kalantidis, Y., Li, J., Yan, S., Feng, J.: Aˆ 2-nets: Double attention networks. In: NeurIPS (2018) 4 19. Cheng, B., Collins, M.D., Zhu, Y., Liu, T., Huang, T.S., Adam, H., Chen, L.C.: Panoptic-deeplab. In: ICCV COCO + Mapillary Joint Recognition Challenge Workshop (2019) 2 20. Cheng, B., Collins, M.D., Zhu, Y., Liu, T., Huang, T.S., Adam, H., Chen, L.C.: Panoptic-deeplab: A simple, strong, and fast baseline for bottom-up panoptic segmentation. In: CVPR (2020) 2, 3, 7, 8, 10, 11, 12, 13, 15, 20 21. Chollet, F.: Xception: Deep learning with depthwise separable convolutions. In: CVPR (2017) 12, 14, 15 22. Chorowski, J.K., Bahdanau, D., Serdyuk, D., Cho, K., Bengio, Y.: Attention- based models for speech recognition. In: NeurIPS (2015) 2 23. Cordts, M., Omran, M., Ramos, S., Rehfeld, T., Enzweiler, M., Benenson, R., Franke, U., Roth, S., Schiele, B.: The cityscapes dataset for semantic urban scene understanding. In: CVPR (2016) 2, 8 24. Dai, J., Qi, H., Xiong, Y., Li, Y., Zhang, G., Hu, H., Wei, Y.: Deformable convo- lutional networks. In: ICCV (2017) 11, 14 Axial-DeepLab 23 25. Dai, Z., Yang, Z., Yang, Y., Carbonell, J.G., Le, Q., Salakhutdinov, R.: Transformer-xl: Attentive language models beyond a fixed-length context. In: ACL (2019) 3 26. Devlin, J., Chang, M.W., Lee, K., Toutanova, K.: Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv:1810.04805 (2018) 3 27. Fu, J., Liu, J., Tian, H., Li, Y., Bao, Y., Fang, Z., Lu, H.: Dual attention network for scene segmentation. In: CVPR (2019) 4 28. Gao, H., Zhu, X., Lin, S., Dai, J.: Deformable kernels: Adapting effective receptive fields for object deformation. arXiv:1910.02940 (2019) 14 29. Gao, N., Shan, Y., Wang, Y., Zhao, X., Yu, Y., Yang, M., Huang, K.: Ssap: Single- shot instance segmentation with affinity pyramid. In: ICCV (2019) 3, 9, 10, 11, 13 30. Goyal, P., Doll´ar, P., Girshick, R., Noordhuis, P., Wesolowski, L., Kyrola, A., Tulloch, A., Jia, Y., He, K.: Accurate, large minibatch sgd: Training imagenet in 1 hour. arXiv:1706.02677 (2017) 8 31. He, K., Gkioxari, G., Doll´ar, P., Girshick, R.: Mask r-cnn. In: ICCV (2017) 3 32. He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: CVPR (2016) 2, 7, 8, 9, 10, 13, 15 33. Ho, J., Kalchbrenner, N., Weissenborn, D., Salimans, T.: Axial attention in mul- tidimensional transformers. arXiv:1912.12180 (2019) 2, 4, 6 34. Holschneider, M., Kronland-Martinet, R., Morlet, J., Tchamitchian, P.: A real- time algorithm for signal analysis with the help of the wavelet transform. In: Wavelets, pp. 286–297. Springer (1990) 2 35. 
Howard, A., Sandler, M., Chu, G., Chen, L.C., Chen, B., Tan, M., Wang, W., Zhu, Y., Pang, R., Vasudevan, V., et al.: Searching for mobilenetv3. In: ICCV (2019) 8 36. Howard, A.G., Zhu, M., Chen, B., Kalenichenko, D., Wang, W., Weyand, T., Andreetto, M., Adam, H.: Mobilenets: Efficient convolutional neural networks for mobile vision applications. arXiv:1704.04861 (2017) 8, 14 37. Hu, H., Gu, J., Zhang, Z., Dai, J., Wei, Y.: Relation networks for object detection. In: CVPR (2018) 2 38. Hu, H., Zhang, Z., Xie, Z., Lin, S.: Local relation networks for image recognition. In: ICCV (2019) 2, 4, 9 39. Hu, J., Shen, L., Sun, G.: Squeeze-and-excitation networks. In: CVPR (2018) 15 40. Huang, C.A., Vaswani, A., Uszkoreit, J., Simon, I., Hawthorne, C., Shazeer, N., Dai, A.M., Hoffman, M.D., Dinculescu, M., Eck, D.: Music transformer: Gener- ating music with long-term structure. In: ICLR (2019) 3 41. Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: CVPR. pp. 4700–4708 (2017) 15 42. Huang, Z., Wang, X., Huang, L., Huang, C., Wei, Y., Liu, W.: Ccnet: Criss-cross attention for semantic segmentation. In: ICCV (2019) 2, 4, 6 43. Ioffe, S., Szegedy, C.: Batch normalization: accelerating deep network training by reducing internal covariate shift. In: ICML (2015) 8 44. Jaderberg, M., Vedaldi, A., Zisserman, A.: Speeding up convolutional neural net- works with low rank expansions. In: BMVC (2014) 14 45. Kendall, A., Gal, Y., Cipolla, R.: Multi-task learning using uncertainty to weigh losses for scene geometry and semantics. In: CVPR (2018) 3 46. Keuper, M., Levinkov, E., Bonneel, N., Lavou´e, G., Brox, T., Andres, B.: Efficient decomposition of image and mesh graphs by lifted multicuts. In: ICCV (2015) 3 24 H. Wang et al. 47. Kirillov, A., Girshick, R., He, K., Doll´ar, P.: Panoptic feature pyramid networks. In: CVPR (2019) 3, 11 48. Kirillov, A., He, K., Girshick, R., Rother, C., Doll´ar, P.: Panoptic segmentation. In: CVPR (2019) 2, 8 49. Krizhevsky, A., Sutskever, I., Hinton, G.E.: Imagenet classification with deep convolutional neural networks. In: NeurIPS (2012) 1 50. LeCun, Y., Bottou, L., Bengio, Y., Haffner, P.: Gradient-based learning applied to document recognition. Proceedings of the IEEE 86(11), 2278–2324 (1998) 1 51. Leibe, B., Leonardis, A., Schiele, B.: Combined object categorization and seg- mentation with an implicit shape model. In: Workshop on statistical learning in computer vision, ECCV (2004) 3 52. Li, J., Raventos, A., Bhargava, A., Tagawa, T., Gaidon, A.: Learning to fuse things and stuff. arXiv:1812.01192 (2018) 3, 11, 12, 13 53. Li, Q., Qi, X., Torr, P.H.: Unifying training and inference for panoptic segmenta- tion. arXiv:2001.04982 (2020) 3, 11, 13 54. Li, X., Zhao, H., Han, L., Tong, Y., Yang, K.: Gff: Gated fully fusion for semantic segmentation. arXiv:1904.01803 (2019) 13 55. Li, Y., Chen, X., Zhu, Z., Xie, L., Huang, G., Du, D., Wang, X.: Attention-guided unified network for panoptic segmentation. In: CVPR (2019) 3, 11 56. Li, Y., Jin, X., Mei, J., Lian, X., Yang, L., Xie, C., Yu, Q., Zhou, Y., Bai, S., Yuille, A.: Neural architecture search for lightweight non-local networks. In: CVPR (2020) 3, 4 57. Liang, J., Homayounfar, N., Ma, W.C., Xiong, Y., Hu, R., Urtasun, R.: Poly- transform: Deep polygon transformer for instance segmentation. arXiv:1912.02801 (2019) 13 58. Lin, T.Y., Doll´ar, P., Girshick, R., He, K., Hariharan, B., Belongie, S.: Feature pyramid networks for object detection. In: CVPR (2017) 3 59. 
Lin, T.Y., Maire, M., Belongie, S., Hays, J., Perona, P., Ramanan, D., Doll´ar, P., Zitnick, C.L.: Microsoft coco: Common objects in context. In: ECCV (2014) 2, 8, 9 60. Liu, C., Chen, L.C., Schroff, F., Adam, H., Hua, W., Yuille, A., Fei-Fei, L.: Auto- deeplab: Hierarchical neural architecture search for semantic image segmentation. In: CVPR (2019) 2, 12 61. Liu, L., Jiang, H., He, P., Chen, W., Liu, X., Gao, J., Han, J.: On the variance of the adaptive learning rate and beyond. In: ICLR (2020) 8 62. Liu, S., Qi, L., Qin, H., Shi, J., Jia, J.: Path aggregation network for instance segmentation. In: CVPR (2018) 13 63. Liu, Y., Yang, S., Li, B., Zhou, W., Xu, J., Li, H., Lu, Y.: Affinity derivation and graph merge for instance segmentation. In: ECCV (2018) 3 64. Liu1, H., Peng, C., Yu, C., Wang, J., Liu, X., Yu, G., Jiang, W.: An end-to-end network for panoptic segmentation. In: CVPR (2019) 3 65. Neuhold, G., Ollmann, T., Rota Bulo, S., Kontschieder, P.: The mapillary vistas dataset for semantic understanding of street scenes. In: ICCV (2017) 2, 8, 11, 12 66. Neven, D., Brabandere, B.D., Proesmans, M., Gool, L.V.: Instance segmentation by jointly optimizing spatial embeddings and clustering bandwidth. In: CVPR (2019) 3 67. Papandreou, G., Kokkinos, I., Savalle, P.A.: Modeling local and global defor- mations in deep learning: Epitomic convolution, multiple instance learning, and sliding window detection. In: CVPR (2015) 2 Axial-DeepLab 25 68. Parmar, N., Ramachandran, P., Vaswani, A., Bello, I., Levskaya, A., Shlens, J.: Stand-alone self-attention in vision models. In: NeurIPS (2019) 2, 4, 5, 6, 7, 8, 9, 10, 13, 14, 15 69. Parmar, N., Vaswani, A., Uszkoreit, J., Kaiser, L., Shazeer, N., Ku, A., Tran, D.: mage transformer. In: ICML (2018) 3 70. Peng, C., Zhang, X., Yu, G., Luo, G., Sun, J.: Large kernel matters–improve semantic segmentation by global convolutional network. In: CVPR (2017) 2 71. Porzi, L., Bul`o, S.R., Colovic, A., Kontschieder, P.: Seamless scene segmentation. In: CVPR (2019) 3, 12, 13 72. Qi, H., Zhang, Z., Xiao, B., Hu, H., Cheng, B., Wei, Y., Dai, J.: Deformable convolutional networks – coco detection and segmentation challenge 2017 entry. ICCV COCO Challenge Workshop (2017) 12 73. Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M.S., Berg, A.C., Fei-Fei, L.: Imagenet large scale visual recognition challenge. IJCV 115, 211–252 (2015) 2, 8 74. Sandler, M., Howard, A., Zhu, M., Zhmoginov, A., Chen, L.C.: Mobilenetv2: Inverted residuals and linear bottlenecks. In: CVPR (2018) 8 75. Shaw, P., Uszkoreit, J., Vaswani, A.: Self-attention with relative position repre- sentations. In: NAACL (2018) 3 76. Shen, Z., Zhang, M., Zhao, H., Yi, S., Li, H.: Efficient attention: Attention with linear complexities. arXiv:1812.01243 (2018) 4 77. Shensa, M.J.: The discrete wavelet transform: wedding the a trous and mallat algorithms. Signal Processing, IEEE Transactions on 40(10), 2464–2482 (1992) 2 78. Sifre, L.: Rigid-motion scattering for image classification. PhD thesis (2014) 14 79. Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv:1409.1556 (2014) 8 80. Sofiiuk, K., Barinova, O., Konushin, A.: Adaptis: Adaptive instance selection network. In: ICCV (2019) 3, 11, 12, 13 81. Szegedy, C., Vanhoucke, V., Ioffe, S., Shlens, J., Wojna, Z.: Rethinking the incep- tion architecture for computer vision. In: CVPR (2016) 8 82. 
Uhrig, J., Rehder, E., Fr¨ohlich, B., Franke, U., Brox, T.: Box2pix: Single-shot instance segmentation by assigning pixels to object boxes. In: IEEE Intelligent Vehicles Symposium (IV) (2018) 3 3. Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, L., Polosukhin, I.: Attention is all you need. In: NeurIPS (2017) 2, 3 . Vincent, L., Soille, P.: Watersheds in digital spaces: an efficient algorithm based on immersion simulations. IEEE TPAMI (1991) 3 85. Wang, H., Kembhavi, A., Farhadi, A., Yuille, A.L., Rastegari, M.: Elastic: im- proving cnns with dynamic scaling policies. In: CVPR (2019) 2 86. Wang, J., Sun, K., Cheng, T., Jiang, B., Deng, C., Zhao, Y., Liu, D., Mu, Y., Tan, M., Wang, X., Liu, W., Xiao, B.: Deep high-resolution representation learning for visual recognition. arXiv:1908.07919 (2019) 12 87. Wang, X., Girshick, R., Gupta, A., He, K.: Non-local neural networks. In: CVPR (2018) 2, 3, 4 88. Wu, Y., Schuster, M., Chen, Z., Le, Q.V., Norouzi, M., Macherey, W., Krikun, M., Cao, Y., Gao, Q., Macherey, K., et al.: Google’s neural machine translation sys- tem: Bridging the gap between human and machine translation. arXiv:1609.08144 (2016) 2 89. Xie, C., Wu, Y., Maaten, L.v.d., Yuille, A.L., He, K.: Feature denoising for im- proving adversarial robustness. In: CVPR (2019) 2, 4 26 H. Wang et al. 90. Xie, S., Girshick, R., Doll´ar, P., Tu, Z., He, K.: Aggregated residual transforma- tions for deep neural networks. In: CVPR. pp. 1492–1500 (2017) 15 91. Xiong, Y., Liao, R., Zhao, H., Hu, R., Bai, M., Yumer, E., Urtasun, R.: Upsnet: A unified panoptic segmentation network. In: CVPR (2019) 3, 11, 13 92. Xu, K., Ba, J., Kiros, R., Cho, K., Courville, A., Salakhudinov, R., Zemel, R., Bengio, Y.: Show, attend and tell: Neural image caption generation with visual attention. In: ICML (2015) 2 93. Yang, T.J., Collins, M.D., Zhu, Y., Hwang, J.J., Liu, T., Zhang, X., Sze, V., Pa- pandreou, G., Chen, L.C.: Deeperlab: Single-shot image parser. arXiv:1902.05093 (2019) 3, 8, 9, 10, 11, 12 94. Yang, Y., Li, H., Li, X., Zhao, Q., Wu, J., Lin, Z.: Sognet: Scene overlap graph network for panoptic segmentation. arXiv:1911.07527 (2019) 11 95. Zhang, H., Goodfellow, I., Metaxas, D., Odena, A.: Self-attention generative ad- versarial networks. arXiv:1805.08318 (2018) 4 96. Zhang, M., Lucas, J., Ba, J., Hinton, G.E.: Lookahead optimizer: k steps forward, 1 step back. In: NeurIPS (2019) 8 97. Zhang, R.: Making convolutional networks shift-invariant again. In: ICML (2019) 1 98. Zhao, H., Shi, J., Qi, X., Wang, X., Jia, J.: Pyramid scene parsing network. In: CVPR (2017) 2 99. Zhu, X., Cheng, D., Zhang, Z., Lin, S., Dai, J.: An empirical study of spatial attention mechanisms in deep networks. In: ICCV. pp. 6688–6697 (2019) 4 100. Zhu, X., Hu, H., Lin, S., Dai, J.: Deformable convnets v2: More deformable, better results. In: CVPR (2019) 14 101. Zhu, Y., Sapra, K., Reda, F.A., Shih, K.J., Newsam, S., Tao, A., Catanzaro, B.: Improving semantic segmentation via video propagation and label relaxation. In: CVPR (2019) 13 102. Zhu, Z., Xu, M., Bai, S., Huang, T., Bai, X.: Asymmetric non-local neural net- works for semantic segmentation. In: CVPR (2019) 4 103. Zoph, B., Le, Q.V.: Neural architecture search with reinforcement learning. In: ICLR (2017) 2
{ "id": "1812.01243" }
2003.06713
Document Ranking with a Pretrained Sequence-to-Sequence Model
This work proposes a novel adaptation of a pretrained sequence-to-sequence model to the task of document ranking. Our approach is fundamentally different from a commonly-adopted classification-based formulation of ranking, based on encoder-only pretrained transformer architectures such as BERT. We show how a sequence-to-sequence model can be trained to generate relevance labels as "target words", and how the underlying logits of these target words can be interpreted as relevance probabilities for ranking. On the popular MS MARCO passage ranking task, experimental results show that our approach is at least on par with previous classification-based models and can surpass them with larger, more-recent models. On the test collection from the TREC 2004 Robust Track, we demonstrate a zero-shot transfer-based approach that outperforms previous state-of-the-art models requiring in-dataset cross-validation. Furthermore, we find that our approach significantly outperforms an encoder-only model in a data-poor regime (i.e., with few training examples). We investigate this observation further by varying target words to probe the model's use of latent knowledge.
http://arxiv.org/pdf/2003.06713
Rodrigo Nogueira, Zhiying Jiang, Jimmy Lin
cs.IR, cs.LG
null
null
cs.IR
20200314
20200314
# Document Ranking with a Pretrained Sequence-to-Sequence Model

Rodrigo Nogueira,∗ Zhiying Jiang,∗ and Jimmy Lin
David R. Cheriton School of Computer Science, University of Waterloo
∗ Equal contribution.

# Abstract

This work proposes a novel adaptation of a pretrained sequence-to-sequence model to the task of document ranking. Our approach is fundamentally different from a commonly-adopted classification-based formulation of ranking, based on encoder-only pretrained transformer architectures such as BERT. We show how a sequence-to-sequence model can be trained to generate relevance labels as “target words”, and how the underlying logits of these target words can be interpreted as relevance probabilities for ranking. On the popular MS MARCO passage ranking task, experimental results show that our approach is at least on par with previous classification-based models and can surpass them with larger, more-recent models. On the test collection from the TREC 2004 Robust Track, we demonstrate a zero-shot transfer-based approach that outperforms previous state-of-the-art models requiring in-dataset cross-validation. Furthermore, we find that our approach significantly outperforms an encoder-only model in a data-poor regime (i.e., with few training examples). We investigate this observation further by varying target words to probe the model’s use of latent knowledge.

# 1 Introduction

A simple, straightforward formulation of ranking is to convert the task into a classification problem, and then sort the candidate items to be ranked based on the probability that each item belongs to the desired class. Applied to the document ranking problem in information retrieval—where given a query, the system’s task is to return a ranked list of documents from a large corpus that maximizes some ranking metric such as average precision or nDCG—the simplest formulation is to deploy a classifier that estimates the probability each document belongs to the “relevant” class, and then sort all the candidates by these estimates.

Deep transformer models pretrained with language modeling objectives, exemplified by BERT [3], have proven highly effective in a variety of classification and sequence labeling tasks in NLP. Nogueira and Cho [9] were the first to demonstrate its effectiveness in ranking tasks. Since it is impractical to apply inference to every document in a corpus with respect to a query, these techniques are typically applied to rerank a list of candidates. In a typical end-to-end system, these candidates are from the results of a keyword search based on a “classic” IR scoring function such as BM25 [15]. This gives rise to the standard multi-stage pipeline architecture of keyword retrieval followed by reranking using one or more machine learning models [1, 10].

The contribution of this work is to adapt a pretrained sequence-to-sequence model (in our case, T5 [14]) to the task of document reranking. To our knowledge, this is a novel use of this class of models that has not been previously described in the literature. In a data-rich regime, with lots of training examples, our method can outperform a pure classification-based encoder-only approach. However, the sequence-to-sequence model appears to be far more data-efficient: our approach shines in a data-poor regime and significantly outperforms BERT with limited training examples.
The main advantage of our approach, we believe, is that by “connecting” fine-tuned latent representations of relevance to related output “target words”, we can exploit the model’s latent knowledge (e.g., of semantics, linguistic relations, etc.) that has been honed through pretraining. We describe probing experiments that attempt to verify our intuitions by deliberately altering the target words to capture different aspects of “semantic relatedness”. # 2 Method Our reranking method is based on T5 [14], which is a sequence-to-sequence model that uses a similar masked language modeling objective as BERT to pretrain its encoder–decoder architecture. In this model, all target tasks are cast as sequence-to-sequence tasks. For our task, the input sequence is: Query: q Document: d Relevant: (1) where q and d are the query and document texts, respectively. The model is fine-tuned to produce the words “true” or “false” depending on whether the document is relevant or not to the query. That is, “true” and “false” are the “target words” (i.e., ground truth predictions in the sequence-to-sequence transformation). At inference time, to compute probabilities for each query–document pair (in a reranking setting), we apply a softmax only on the logits of the “true” and “false” tokens. Hence, we rerank the documents according to the probabilities assigned to the “true” token. We arrived at this particular approach after some trial and error. Other approaches, for example, reranking documents according to the logit of the “true” token or using logits of all tokens to compute the softmax, were not effective, i.e., the retrieval metrics were close to zero. Note that T5 tokenizes sequences using the SentencePiece model [7], which might split a word into subwords. We choose target words (“true” and “false”) that are represented as single tokens; thus each class is represented by a single logit. In the case where target words are split in multiple subwords, we would need a method to aggregate their logits into a single score; it is best to avoid this complexity in the design of the reranking setup. # 3 Experimental Setup # 3.1 Datasets We use the following datasets in our experiments: MS MARCO passage [2] is a passage ranking dataset with 8.8M passages obtained from the top 10 results retrieved from the Bing search engine (from 1M queries). Note that for terminological consistency, we refer to each “unit” in the corpus as a document, even though they are in reality paragraph-length passages. The training set contains approximately 500k pairs of query and relevant documents. Each query has one relevant passage, on average. Non-relevant documents for training are also provided as part of the training dataset. The development and test sets contain approximately 6,900 queries each, but relevance labels are only publicly available for the development set. We have not (yet) submitted our runs to the official MS MARCO leaderboard because our primary goal in this work is to conduct initial comparisons between T5 and BERT-based models. As a matter of good experimental practice, we limit official submissions as to not “probe” the unseen test set unnecessarily. After sufficient model refinement, we will proceed with official submissions to verify the quality of our models. Robust04 [16] represents the test collection from the TREC 2004 Robust Track. It comprises 250 queries, with relevance judgments on a collection of 528K documents (TREC Disks 4 and 5), whose average length is 2,800 characters or 460 words. 
We use the topic “titles” (short keyword phrases, much like the input to a search engine) as queries to our bag-of-words retrieval methods (see Sec- tion 3.3) and the topic “descriptions” (sentence-length statements of information needs) as input to our sequence-to-sequence models. These topic descriptions are more similar to MS MARCO’s natural language questions, and others have found that using them improves the effectiveness of pre- 2 trained reranking models [12]. We do not train our models on this dataset, and use all its queries and relevance judgments as a held-out test set; thus, our evaluation adopts a zero-shot transfer setting. # 3.2 Training and Inference We fine-tune our T5 models (base, large, and 3B) with a constant learning rate of 10−3 for 100k iterations with class-balanced batches of size 128. To simplify our training procedure (and related hyperparameters) as well as to eliminate the need for convergence checks, we simply trained for a fixed number of iterations, selected based on the computational demands of our largest model and the (self-allotted) time for running experiments. We report results using the model state at the final checkpoint. This procedure is consistent with the advice of Kaplan et al. [6] and recommendations by Dodge et al. [4], since we quantify effectiveness for a particular computational budget. We did not experiment with T5-11B due to its computational cost. We use a maximum of 512 input tokens and one output token. In the MS MARCO passage dataset, none of the inputs have to be truncated when using this length. We use Google’s TPU v3s to train and run inference. Training T5 base, large, and 3B take approximately 12, 48, and 160 hours overall, respectively, on a single TPU. We use greedy decoding during inference. Since we only use the logits of one decoding step, beam search or top-k random sampling [5] would give the same results as greedy decoding. Because Robust04 contains full-length documents, it is not feasible to directly apply our method to the entire text at once due to the length restrictions of the model. To address this issue, we first segment each document into passages by applying a sliding window of 10 sentences with a stride of 5. We then obtain a relevance probability for each passage by classifying it independently. We select the highest probability among these passages as the relevance probability of the document. # 3.3 Baselines We compare our method against the following baselines: BM25: For baseline bag-of-words retrieval, we use the BM25 implementation in the Anserini open- source IR toolkit [17],2 which is based on Lucene. We adopt all the default settings. At inference time, we retrieve the top 1000 documents per query. BM25+RM3: To examine the effects of query expansion, we applied the BM25+RM3 model as described in Yang et al. [18], where it is shown to be a competitive baseline for (pre-BERT) neural ranking models. We use the implementation in Anserini, with all default settings. BM25+BERT-large: We additionally compare our method against the BERT-large condition from Nogueira et al. [10], which is a two-stage pipeline with bag-of-words retrieval (BM25) followed by a BERT reranker. Architecturally, it is the same as our method, the only difference being BERT vs. T5 as the reranking model. Nogueira et al. 
[10] can be characterized as the baseline of the best methods from the official MS MARCO passage leaderboard; all higher-ranked submissions can be described as improvements upon this basic approach, and thus it represents a fair yet competitive comparison point. Note that we did not apply reranking on top of BM25+RM3 because RM3 is known to reduce effectiveness when evaluated using these relevance judgments [11]. # 4 Results and Analysis # 4.1 Main Results Main results on the development set of the MS MARCO passage retrieval task are shown in Table 1, comparing BERT-large [9] and T5 models of different sizes. Results in bold are significantly better (p < 0.01) than BERT-large, based on the Student’s paired t-test. Note that the training of T5-3B did not appear to have converged yet, even after exhausting our computational budget (see Section 3.2). In other words, we suspect that T5-3B remains under-trained at the checkpoint we used for evalu- ation, and it is likely that effectiveness would continue to rise given more computational resources. 2http://anserini.io/ 3 MRR@10 .184 .372 .363 .383 .382 BM25 + BERT-large [10] + T5-base + T5-large + T5-3B Table 1: Results on the development set of MS MARCO passage. CEDR [8] Birch [19] .538 .532 BM25 + T5-base + T5-large + T5-3B .253 .314 .296 .364 .363 .425 .416 .506 .424 .510 .499 .596 BM25 + RM3 + T5-base + T5-large + T5-3B .290 .320 .304 .384 .382 .424 .415 .510 .441 .503 .495 .601 Table 2: Results on Robust04. The T5 models are trained only on MS MARCO passage data and thus represent zero-shot transfer. Larger models outperforming smaller ones is an expected trend, and with T5-11B we might observe even higher MRR@10; unfortunately, we were not able to run these experiments due to their high computational costs. Results on Robust04 are shown in Table 2, where we apply our T5 reranker on top of retrieval results from BM25 and BM25+RM3 (see Section 3.2). Figures in bold for T5-3B indicate that those results are significantly better (p < 0.05) than T5-large, T5-base, and the corresponding baseline (BM25 or BM25+RM3), based on the Student’s paired t-test with Bonferroni correction. We compare our model with CEDR [8] and Birch [19], two BERT-based state-of-the-art models. Note that the CEDR results are from training on the Robust04 data (via cross-validation) and Birch uses Robust04 for tuning weighting parameters. In contrast, we apply inference directly using our model trained on the MS MARCO passage data; Robust04 relevance judgments were only used as a test set, which makes our results zero-shot. To our knowledge, our T5-3B model produces the highest known scores reported on Robust04. As expected, effectiveness increases with larger models, but in all cases T5 is able to improve over both a bag-of-words as well as a query expansion baseline. We explain the odd finding that T5-large performs worse than T5-base as follows: based on our training procedure, we simply ran a fixed number of iterations and then evaluated using the final checkpoint (see Section 3.2). This has the advantage of not requiring validation data. In the case of T5-large and T5-base, effectiveness does not monotonically increase in the out-of-domain dataset (Robust04), and thus the results reported in the table capture the somewhat arbitrary model state at the final checkpoint (where effectiveness may still be fluctuating within a rather large range). For T5-3B, in contrast, effectiveness is far more stable across model checkpoints. 
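As a side note on the Robust04 inference procedure of Section 3.2, the document-level score can be sketched as follows; the sentence splitter and the passage-level `score_fn` (for example the T5 scoring sketch given earlier) are assumed to be provided.

```python
def document_score(query, sentences, score_fn, window=10, stride=5):
    """Max passage relevance over a sliding window of sentences (Section 3.2)."""
    # Build overlapping passages of `window` sentences, moving `stride` sentences at a time.
    n = max(len(sentences) - window, 0) + 1
    passages = [" ".join(sentences[i:i + window]) for i in range(0, n, stride)]
    # The document's relevance probability is the highest passage probability.
    return max(score_fn(query, p) for p in passages)
```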
We leave for future work a more detailed examination of these model differences. Although there are better ways of selecting model checkpoints that could lead to even higher scores, these techniques generally require cross-validation, which increases the danger of overfitting while abandoning the current (highly-desirable) zero-shot learning setup. # 4.2 Effect of Model Size and Training Data Results from the MS MARCO passage ranking task (Table 1) represent a direct comparison between BERT and T5 models since the retrieval pipeline is otherwise the same. For Robust04 (Table 2), we 4 2k samples .127 ±.058 .238 ±.025 20k samples .201 ±.012 .261 ±.011 BM25 + BERT-base BM25 + T5-base Table 3: Comparisons between T5 and BERT trained with different numbers of training instances. Results report means and 95% confidence intervals over five trials. adopt a different architecture than CEDR and Birch, but effectiveness clearly improves as the size of the T5 model increases. Therefore, while our T5-based approach achieves better results, it is entirely possible that the improvements are due to simply having a bigger model, as opposed to any intrinsic advantages over a classification-based approach. Since we do not have pretrained T5 and BERT models of comparable sizes, it is difficult to conduct a fair empirical comparison. In a Another interesting dimension of size is, of course, the amount of training data available. data-poor regime with only a modest amount of training data, it appears that T5 can learn far more effectively than BERT. To demonstrate this, we fine-tuned BERT-base and T5-base with either 1k (or 10k) positive and 1k (or 10k) negative instances sampled from the full MS MARCO passage dataset. These two “base” models were selected due to their more modest computational demands for fine-tuning. We trained them using a batch size of 32 for three epochs. For BERT, we used a learning rate 10−6 and no warm-up step. For T5, we used a learning rate of 10−3. For each condition (2k or 20k samples in total), we repeated the experiment five times, drawing different samples each time. The results are reported in Table 3, with means and 95% confidence intervals. As expected, effectiveness significantly improves as we fine-tune the models with more data. We see clearly that with the same amount of limited training data, T5 is significantly more effective than BM25. In fact, with only 1k positive and 1k negative training instances, BERT performs worse than the BM25 baseline. With 20k training instances in total, BERT is able to modestly improve upon BM25, but remains six points behind T5 fine-tuned on the same amount of data. Interestingly, T5 is able to achieve roughly 45% of the possible gain in effectiveness over the BM25 baseline with only 4% of the training data. # 4.3 Target Word Probing Experiments Our experimental results immediately raise two questions: 1. Why is our approach more data-efficient than BERT? That is, why does T5 significantly outper- form BERT when fine-tuned with few training examples? 2. How is our approach fundamentally different from classification, given that the softmax in our case reduces the model down to a binary decision? That is, asking the model to decide between two output tokens seems no different from relevance classification. We believe these two issues are closely related. 
Specifically addressing the second question: At a high level, both neural models are learning latent representations important to the task at hand (in this case, relevance classification), starting with a pretrained model, and then mapping these latent representations into task-specific decisions. Thus, end-to-end task performance depends on a com- bination of the knowledge imparted via pretraining (already present at the start) and the knowledge gained via fine-tuning on task-specific data. In the classification-based approach using BERT, the end-to-end model relies on a single fully-connected layer to map the latent representation (i.e., from the [CLS] token) into this binary decision. While the approach can exploit pretrained knowledge when fine-tuning the latent representations, the final mapping (i.e., the fully-connected layer) needs to be, essentially, learned from scratch (since it is randomly initialized). In contrast, T5 can exploit both pretrained knowledge and knowledge gleaned from fine-tuning in learning the proper task-specific latent representations as well as the mapping to relevance decisions. Unlike the fully-connected layer in the classification-based approach, T5 can exploit the part of the network used for producing output. Embedded in that neural machinery is latent knowledge about semantics, linguistic relations, and other features that are necessary to generate fluent text. In other words, T5 has access to an additional source of knowledge that BERT does not. 5 Training Size Target Token Positive Negative 20k .261±.011 .235±.021 .216±.013 .234±.005 .212±.001 .151±.046 Type Baseline 2k .238±.025 .222±.014 .205±.032 .217±.024 .202±.021 .163±.027 false true true cold orange orange _de false hot apple hot _ab Reverse Antonyms Related Words Unrelated Words Subwords all .355 .340 .353 .358 .351 .348 Table 4: Results on the development set of the MS MARCO passage dataset comparing different target word manipulations. This explanation, we believe, also answers the first question. With plenty of training data, BERT has no trouble learning the final fully-connected layer (mapping latent representations to decisions), even from scratch (i.e., random initialization). However, faced with few training examples, BERT still must learn the classification layer, but without any benefit from pretraining—and our experiments above (see Table 3) show that it is unable to adequately do so. In contrast, in a low-data regime, T5 can “fall back” on pretrained neural machinery used for generating fluent textual output. In other words, our experiments suggest that the pretraining objective used in T5 can transfer well to generating relevance labels. To turn our intuition into a testable hypothesis, we can vary the target words used as the prediction targets and manipulate their “linguistic relatedness”—to deliberately “disrupt” linguistic knowledge that may be captured in the models. As Puri and Catanzaro [13] have shown, the choice of target words impacts effectiveness. Recall that in our baseline, “true” indicates a relevant document and “false”, a non-relevant document. We tried the following contrastive variants: • “Reverse”. We swap the target words; that is, “false” indicates a relevant document and “true”, a non-relevant document. If the model is indeed exploiting latent knowledge about linguistic relations, then forcing the model to make opposite associations on the same polarity scale should lower effectiveness with respect to the baseline. • “Antonyms”. 
We map a relevant document to “hot” and a non-relevant document to “cold”. This preserves the use of adjectives at opposite ends of a polarity scale, but a scale that is completely unrelated to relevance. If the model were exploiting latent knowledge, we would expect effective- ness to be lower than the baseline. • “Related Words”. We map a relevant document to “apple” and a non-relevant document to a related word “orange”. These words are semantically related, but do not present a polarity contrast as before. We would expect effectiveness to be lower than the baseline. • “Unrelated Words”. We map a relevant document to “hot” and a non-relevant document to a completely unrelated word “orange”. Thus, we force the model to build an arbitrary semantic mapping. We would expect effectiveness to be lower than the baseline and also lower than using related words. • “Subwords”. We map a relevant document to the subword “_ab” and a non-relevant document to the subword “_de” (note that we carefully select single tokens after tokenization by SentencePiece to avoid the need to combine multiple logits). Here, we have removed all “semantics” from the input-to-output mapping. We would expect effectiveness to be lower than the baseline and the above conditions. Using these target word configurations, we conducted experiments on T5-base with either 1k (or 10k) positive and 1k (or 10k) negative instances sampled from the full MS MARCO passage dataset, same as in Section 4.2. Once again, for each of the conditions, we repeated the experiment five times, drawing different samples every time. For reference, we also fine-tuned with all available data. We note that the effectiveness of T5-base is different from the one reported in Table 1 because we used slightly different hyperparameters which were more computationally efficient: here, we trained for 40k steps using a batch of size 256. Experimental results are shown in Table 4, with means and 95% confidence intervals. 6 There does not appear to be an obvious pattern when fine-tuning with all available training data, although the largest observed difference is between “baseline” and “reverse”. This does appear con- sistent with our hypothesis that with sufficient training data, T5 is able to learn arbitrary mappings between document relevance and target words. In the data-poor regime, the results are also con- sistent with our hypotheses. With both 2k and 20k total samples, the baseline mapping achieves the highest effectiveness. In the 2k condition, the confidence intervals computed from different samples mostly overlap (with the exception of subwords), so we do not have the benefit of greater certainty that comes with statistical significance. On the 20k condition, our target word manipula- tions all significantly reduce effectiveness. We note that the 95% confidence intervals are smaller with more data, which illustrates the greater instability in effectiveness when training on smaller datasets (which is expected). It is clear that the T5 model is taking advantage of latent semantic or linguistic knowledge in pre- dicting relevance. In both the 2k and 20k settings, the subwords condition performs worse than the BM25 baseline (and the 20k score is actually lower than the 2k score). In this condition, T5 exhibits difficulty in achieving any predictive power at all. 
There are at least two potential factors at play: we are removing all semantic associations, as the subwords are meaningless token fragments, and furthermore, we are forcing the model to produce tokens in an order (and context) that it has not encountered during pretraining. We are unable to tease apart the effects currently, but either expla- nation is consistent with our intuitions. For all other target word manipulations, we are at least able to beat the BM25 baseline. Finally, our experiments are inconclusive regarding the importance of having a polarity scale in the low-data regime. Quite clearly, reversing “true” and “false” has a large impact, but T5 is more effective learning targets that are semantically related but do not present a polarity contrast (“apple” and ”orange”) than targets that encode an unrelated polarity contrast (“hot” and “cold”). # 5 Conclusion The main contribution of this paper is to introduce a novel generation-based approach to the doc- ument ranking task using pretrained sequence-to-sequence models. Our models outperform a classification-based approach, especially in the data-poor regime with limited training data. We attempt to explain these observations in terms of hypotheses about the knowledge that a model gains from pretraining vs. fine-tuning on task-specific data. These hypotheses are operationalized into target word probing experiments, where we demonstrate that the model is indeed exploiting knowl- edge from its ability to generate fluent natural language text. Exactly how remains an open research question and the focus of our ongoing work. # 6 Acknowledgments This research was supported in part by the Canada First Research Excellence Fund and the Natural Sciences and Engineering Research Council (NSERC) of Canada. In addition, we would like to thank Google Cloud for credits to support this work. # References [1] N. Asadi and J. Lin. Effectiveness/efficiency tradeoffs for candidate generation in multi-stage retrieval architectures. In Proceedings of the 36th Annual International ACM SIGIR Confer- ence on Research and Development in Information Retrieval (SIGIR 2013), pages 997–1000, 2013. [2] P. Bajaj, D. Campos, N. Craswell, L. Deng, J. Gao, X. Liu, R. Majumder, A. McNamara, B. Mitra, T. Nguyen, M. Rosenberg, X. Song, A. Stoica, S. Tiwary, and T. Wang. MS MARCO: A human generated MAchine Reading COmprehension dataset. arXiv:1611.09268, 2016. [3] J. Devlin, M.-W. Chang, K. Lee, and K. Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, Volume 1 (Long and Short Papers), pages 4171–4186, June 2019. 7 [4] J. Dodge, S. Gururangan, D. Card, R. Schwartz, and N. A. Smith. Show your work: Improved reporting of experimental results. arXiv:1909.03004, 2019. [5] A. Fan, M. Lewis, and Y. Dauphin. Hierarchical neural story generation. arXiv:1805.04833, 2018. [6] J. Kaplan, S. McCandlish, T. Henighan, T. B. Brown, B. Chess, R. Child, S. Gray, A. Radford, J. Wu, and D. Amodei. Scaling laws for neural language models. arXiv:2001.08361, 2020. [7] T. Kudo and J. Richardson. SentencePiece: A simple and language independent subword tokenizer and detokenizer for neural text processing. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 66–71, 2018. [8] S. MacAvaney, A. Yates, A. Cohan, and N. Goharian. 
CEDR: Contextualized embeddings for document ranking. In Proceedings of the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 1101–1104, 2019. [9] R. Nogueira and K. Cho. Passage re-ranking with BERT. arXiv:1901.04085, 2019. [10] R. Nogueira, W. Yang, K. Cho, and J. Lin. Multi-stage document ranking with BERT. arXiv:1910.14424, 2019. [11] R. Nogueira, W. Yang, J. Lin, and K. Cho. Document expansion by query prediction. arXiv:1904.08375, 2019. [12] R. Padaki, Z. Dai, and J. Callan. Rethinking query expansion for BERT reranking. In Proceed- ings of the 42nd European Conference on Information Retrieval (ECIR 2020). [13] R. Puri and B. Catanzaro. Zero-shot text classification with generative language models. arXiv:1912.10165, 2019. [14] C. Raffel, N. Shazeer, A. Roberts, K. Lee, S. Narang, M. Matena, Y. Zhou, W. Li, and P. J. Liu. Exploring the limits of transfer learning with a unified text-to-text transformer. arXiv:1910.10683, 2019. [15] S. E. Robertson, S. Walker, S. Jones, M. Hancock-Beaulieu, and M. Gatford. Okapi at TREC-3. In Proceedings of the 3rd Text REtrieval Conference (TREC-3), pages 109–126, 1994. [16] E. M. Voorhees. Overview of the TREC 2004 Robust Track. In Proceedings of the Thirteenth Text REtrieval Conference (TREC 2004), pages 52–69, 2004. [17] P. Yang, H. Fang, and J. Lin. Anserini: Enabling the use of Lucene for information retrieval research. In Proceedings of the 40th International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 1253–1256, 2017. [18] W. Yang, K. Lu, P. Yang, and J. Lin. Critically examining the “neural hype” weak baselines and the additivity of effectiveness gains from neural ranking models. In Proceedings of the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 1129–1132, 2019. [19] Z. A. Yilmaz, W. Yang, H. Zhang, and J. Lin. Cross-domain modeling of sentence-level evi- dence for document retrieval. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3481–3487, 2019. 8
{ "id": "2001.08361" }
2003.05856
Online Fast Adaptation and Knowledge Accumulation: a New Approach to Continual Learning
Continual learning studies agents that learn from streams of tasks without forgetting previous ones while adapting to new ones. Two recent continual-learning scenarios have opened new avenues of research. In meta-continual learning, the model is pre-trained to minimize catastrophic forgetting of previous tasks. In continual-meta learning, the aim is to train agents for faster remembering of previous tasks through adaptation. In their original formulations, both methods have limitations. We stand on their shoulders to propose a more general scenario, OSAKA, where an agent must quickly solve new (out-of-distribution) tasks, while also requiring fast remembering. We show that current continual learning, meta-learning, meta-continual learning, and continual-meta learning techniques fail in this new scenario. We propose Continual-MAML, an online extension of the popular MAML algorithm as a strong baseline for this scenario. We empirically show that Continual-MAML is better suited to the new scenario than the aforementioned methodologies, as well as standard continual learning and meta-learning approaches.
http://arxiv.org/pdf/2003.05856
Massimo Caccia, Pau Rodriguez, Oleksiy Ostapenko, Fabrice Normandin, Min Lin, Lucas Caccia, Issam Laradji, Irina Rish, Alexandre Lacoste, David Vazquez, Laurent Charlin
cs.AI, cs.LG
null
NeurIPS 2020
cs.AI
20200312
20210120
1 2 0 2 n a J 0 2 ] I A . s c [ 3 v 6 5 8 5 0 . 3 0 0 2 : v i X r a # Online Fast Adaptation and Knowledge Accumulation (OSAKA): a New Approach to Continual Learning Massimo Caccia123 Pau Rodríguez2 Oleksiy Ostapenko13 Fabrice Normandin13 Min Lin13 Lucas Caccia145 Irina Rish137 Alexandre Lacoste2 David Vazquez2 Laurent Charlin167 Issam Laradji2 1Mila - Quebec AI Institute, 2ElementAI, 3Université de Montréal, 4Facebook AI Research 5McGill University, 6HEC Montréal, 7Canada CIFAR AI Chair # Abstract Continual learning agents experience a stream of (related) tasks. The main chal- lenge is that the agent must not forget previous tasks and also adapt to novel tasks in the stream. We are interested in the intersection of two recent continual-learning scenarios. In meta-continual learning, the model is pre-trained using meta-learning to minimize catastrophic forgetting of previous tasks. In continual-meta learning, the aim is to train agents for faster remembering of previous tasks through adap- tation. In their original formulations, both methods have limitations. We stand on their shoulders to propose a more general scenario, OSAKA, where an agent must quickly solve new (out-of-distribution) tasks, while also requiring fast remem- bering. We show that current continual learning, meta-learning, meta-continual learning, and continual-meta learning techniques fail in this new scenario. We propose Continual-MAML, an online extension of the popular MAML algorithm as a strong baseline for this scenario. We show in an empirical study that Continual- MAML is better suited to the new scenario than the aforementioned methodologies including standard continual learning and meta-learning approaches. # Introduction A common assumption in supervised machine learning is that the data is independently and identically distributed (i.i.d.). This assumption is violated in many practical applications handling non-stationary data distributions, including robotics, autonomous driving, conversational agents, and other real-time applications. Over the last few years, several methodologies study learning from non-i.i.d. data. We focus on continual learning (CL), where the goal is to learn incrementally from a non-stationary data sequence involving different datasets or tasks, while not forgetting previously acquired knowledge, a problem known as catastrophic forgetting [48]. We draw inspiration from autonomous systems deployed in environments that might differ from the ones they were (pre-)trained on. For instance, a robot pre-trained in a factory and deployed in homes where it will need to adapt to new domains and even solve new tasks. Or a virtual assistant can be pre-trained on historical data and then adapt to its user’s needs and preferences once deployed. Further motivating applications exist in time-series forecasting including market prediction, game playing, autonomous customer service, recommendation systems, and autonomous driving. These systems must adapt online to maximize their cumulative rewards [31, 32]. As a step in that direction, we propose a task-incremental scenario (OSAKA) where previous tasks reoccur and new tasks appear. corresponding author: [email protected] 34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada. We measure the cumulative accuracy of models instead of the (more common) final accuracy to evaluate how quickly models and algorithms adapt to new tasks and remember previous ones. 
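To make the evaluation protocol concrete, the sketch below contrasts the online cumulative accuracy used here with the final accuracy usually reported in CL; `model.predict` and `model.update` are placeholders for whatever learner is being evaluated.

```python
def online_cumulative_accuracy(model, stream):
    """OSAKA-style evaluation: score each prediction *before* the model sees the label."""
    correct, total = 0, 0
    for x_t, y_t in stream:          # stream of (input, label) pairs, possibly non-stationary
        y_hat = model.predict(x_t)   # prediction made with the current parameters
        correct += int(y_hat == y_t)
        total += 1
        model.update(x_t, y_t)       # the learner may now adapt (and partially forget)
    return correct / total

# Standard CL instead trains on the whole stream first and reports the average *final*
# accuracy on held-out sets of every task, which ignores how quickly the model adapted.
```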
Background Task-incremental classification is a common supervised CL scenario where classifi- cation datasets are presented to an online learner sequentially, one task at a time. For each task Tt at iteration t, the data are sampled i.i.d. from their corresponding distribution Pt(x, y). In the task- incremental scenario models are evaluated by their average final performance across all tasks—after being trained on all tasks sequentially. Several families of recent CL approaches use this setting, including regularization methods [36], data replay [73], and dynamic architectures [66] (see Lange et al. [39] and Parisi et al. [56] for comprehensive overviews). More recent approaches propose relaxing some constraints associated with task-incremental CL by combining CL and meta-learning. Continual-meta learning (CML) focuses on fast remembering or how quickly the model recovers its original performance on past tasks [25]. Meta-continual learning (MCL) uses meta-learning to learn not to forget [29]. In this paper, we further extend the task-incremental setting and show empirical benefits compared to CML and MCL (see Section 6). OSAKA We propose a more flexible and general scenario inspired by a pre-trained agent that must keep on learning new tasks after deployment. In this scenario, we are interested in the cumulative performance of the agent throughout its lifetime [31, 32]. (Standard CL reports the final performance of the agent on all tasks at the end of its life.) To succeed in this scenario, agents need the ability to learn new tasks as well as quickly remember old ones. We name our CL setting Online faSt Adaptation and Knowledge Accumulation (OSAKA). The main characteristics of OSAKA are that at deployment or CL time: 1) task shifts are sampled stochastically, 2) the task boundaries are unknown (task-agnostic setting), 3) the target distribution is context- dependent, 4) multiple levels of non-stationarity are used, and 5) tasks can be revisited. Furthermore, our evaluation of CL performance is different from the one commonly used in CL. We report the cumulative or online average performance instead of the final performance on all seen tasks. Existing CL methods are not well-suited to OSAKA. Methods such as EWC [36], progressive networks [66] or MCL [29] require task boundaries. In contrast, task-agnostic methods (e.g. [1, 85, 25]) optimize for the final performance of the model and so resort to mechanisms that attempt to eliminate catastrophic forgetting. The extra computations resulting from the mechanisms hinder online performance and unnecessarily increase the computational footprint of the algorithms. To address the challenges of OSAKA, we propose Continual-MAML, a baseline inspired by the meta-learning approach of MAML [16]. Continual-MAML is pre-trained via meta-learning. When deployed, Continual-MAML adapts the learned parameter initialization to solve new tasks. When a change in the distribution is detected, new knowledge is added into the learned initialization. As a result, Continual-MAML is more efficient and robust to distribution changes since it does not require computationally expensive optimizers like BGD [85] or replay methods used in prior work [10, 70]. Using our OSAKA scenario, we compare the performance of Continual-MAML to recent and popular approaches from continual learning, meta-learning, and continual-meta learning. 
Across several datasets, we observe that Continual-MAML is better suited to OSAKA than prior methods from the aforementioned fields and thus provides an initial strong baseline. To summarize, our contributions include: (1) OSAKA, a new CL setting which is more flexible and general than previous ones. Related, we also propose a unifying scenario for discussing meta- and continual learning scenarios (Table 1); (2) the Continual-MAML algorithm, a new baseline that addresses the challenges of the OSAKA setting; (3) extensive empirical evaluation of our proposed method; and (4) a codebase for researchers to test their methods in the OSAKA scenario.1 # 2 A unifying framework We introduce the concepts and accompanying notation that we will use to describe OSAKA in Section 3. These concepts provide a unifying framework—highlighted in Table 1—for expressing several important paradigms such as continual learning, meta-learning, and variants. A motivation for this framework (and so this section in our paper) is to clarify some confusion that arose from the # 1https://github.com/ElementAI/osaka 2 Data Distribution Model for Fast Weights Slow Weights Updates Evaluation Supervised Learning S.Q~C fo = A(S) - L( fo, Q) Mw pM . Vallfo,. Qi , + Meta-learning (hee Te fo, = Ao(Ss) oF (So, Qs) Thy L(Ag(S:), Q2) Continual Learning Sir. Qur ~ Cit fe = CL(Si.r) _ DY, L( fe, Qe) uy ; MetaContinual Leaning {Cunha ~ Wo, = Chg(Sian) Ve Re LQ) OM 3, c(Ag(Si7), Qi) Continual-meta learning Sir, Qur ~ Cur fo, = Ag(Se-1) VoL Soy, Se) YO, L(Ao (St), Qe) OSAKA Qur ~ Cur for = Ag(Qu-1) Vel (foys Qt) DX, L(fo,, Qt) Table 1: A unifying framework for different machine learning settings. Data sampling, fast weights computation and slow weights updates as well as evaluation protocol are presented with meta-learning terminology, i.e., the support set S and query set Q. For readability, we omit OSAKA pre-training. recent interrelation of meta-learning and continual learning. Our main contribution, OSAKA, can be understood even if the reader chooses to skip this section. We begin by assuming a hidden context variable C that determines the data distribution, e.g., the user’s mood in a recommender system or an opponent’s strategy in game playing. In some fields, contexts are referred to as tasks. In the rest of the paper, we will use both terms interchangeably. We use W to denote a finite set of all possible contexts. Given C, data can be sampled i.i.d. from p(X|C). Different learning paradigms can be described by specializing the distribution P (C). For example, in the classical setting data are sampled i.i.d. from p(X|C)P (C) where C could represent the set of classes to be discriminated. We use terminology from meta-learning and introduce a support set S and a query set Q to denote the meta-training and meta-test sets [80], respectively. These sets are usually composed of n i.i.d. samples Xi = (xi, yi), generated conditionally from the context Ci. In some paradigms, including supervised learning, the target distribution is fixed, i.e. p(y|x) = p(y|x, C). We refer to the setting where the equality does not hold as having context-dependent targets. We define a learning algorithm A as a functional taking S as input and returning a predictor fθ, with θ parameters describing the behavior of the predictor, i.e. fθ = A(S). We also define a loss function L(fθ, Q) to evaluate the predictor fθ on the query set Q. In meta-learning, C represents a task descriptor or task label, and both meta-training and meta- testing sets are sampled i.i.d. 
from p(X|C). E.g., in N -shot classification, the task descriptor specifies the N classes which have to be discriminated. Targets are context-dependent in this learning paradigm. Here, we focus only on the meta-learning methods that rely on episodic training. A meta-learning algorithm A, adapts its behavior by learning the parameters ¢. It samples M i.i.d. pairs of S and Q from a distribution over contexts W™: {C;}44, ~ W™ and (S;,Qi) ~ Xi | Ci. Assuming that the learning process is differentiable, the parameters ¢@ are learned using the gradient from the query set, V4£(Ag(S;), Qi). Concretely, ¢ is first learned on the sets (S;,@Q;), where i< N < M and the final evaluation of the algorithm is ey L(Ag(Si), Qi)- In task-incremental continual learning, the data distribution is non-stationary, and various CL scenarios arise from specific assumptions about this non-stationarity. Here we assume that data non-stationarity is caused by a hidden process {C;}7_,, where C; is the context at time t. C’ in continual learning can be the task label, e.g., in Permuted MNIST, disjoint MNIST/CIFAR 10 [36]. It could also be the class label in the class-incremental setting [61]. Both frameworks have a fixed target distribution. {C;,}7_, is usually assumed to be an ordered list of the tasks/classes. Continual learning algorithms work with a sequence of support sets, Sj.7, and a sequence of query sets, Q1.7, obtained from a sequence of contexts, C).. A continual learning algorithm CL transforms Si.r into a predictor fg, i.e. fg = CL(Sj.7). The main difference with a conventional algorithm A is that the support set is observed sequentially and cannot be fully stored in memory. The evaluation is then performed independently on each Q, (obtained in the same context as S;): an L( fo, Q:)- In App. A we explain how the recent meta-continual learning and continual-meta learning settings fit into the unifying framework. 3 # 3 Online FaSt Adaptation and Knowledge Accumulation (OSAKA) We propose OSAKA, a new continual-learning scenario that lifts some of constraints of current task- incremental approaches [36, 29, 2]. OSAKA is aligned with the use case of deploying a pre-trained agent in the real world, where it is crucial for the agent to adapt quickly to new situations and even to learn new concepts when needed. In particular, OSAKA proposes a scenario for evaluating such continually-learning agents. To materialize such an evaluation OSAKA combines different ideas: 1) agents start in a pre-training stage before continual-learning starts; 2) it provides a mechanism for proposing both old and new tasks to agents where the task boundaries remain unobserved to them; 3) it evaluates agents using their cumulative performance (e.g. accuracy) to measure their capacity to adapt to new tasks. This evaluation implicitly allows agents to forget which may enable faster and more efficient adaptation. For instance, partially forgetting an infrequent task allows the agent to re-allocate modeling capacity to tasks that are encountered more frequently. We now describe OSAKA using the procedural view of Alg. 1. OSAKA proposes a two-stage approach where an agent θ0 starts in a pre-training phase (Alg. 1, L4–L8) and then moves to a deployment phase (Alg. 1, L10–L16) also known as continual-learning time. 
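A minimal Python sketch of the deployment phase of Alg. 1 (shown next); the `contexts` list, `sample_task` helper, and learner interface are placeholders, and only the α-sticky context switching and the incur-loss-before-update protocol are meant to be faithful.

```python
import random

def osaka_deployment(model, contexts, sample_task, alpha, T, loss_fn):
    """Continual-learning phase of Alg. 1 (pre-training is assumed to be done)."""
    c = random.choice(contexts)
    total_loss = 0.0
    for t in range(T):
        # Stay in the current context with probability alpha, otherwise switch.
        if random.random() > alpha:
            c = random.choice([k for k in contexts if k != c])
        x_t, y_t = sample_task(c)                          # data from the (hidden) current context
        total_loss += loss_fn(model.predict(x_t), y_t)     # incur the loss first ...
        model.update(x_t, y_t)                             # ... then update at the agent's discretion
    return total_loss / T
```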
Algorithm 1: OSAKA
1  Require: P(Cpre), P(Ccl): distributions of contexts
2  Require: α: non-stationarity level
3  Initialize: θ0: model
4  while pre-training do
5      Sample a context C ~ P(Cpre)
6      Sample data from context x, y ~ p(x, y|C)
7      Update θ0 with x, y
8  end while
9  while continually learning do
10     Sample current context Ct ~ P(Ccl|Ct−1; α)
11     Sample data from context xt, yt ~ p(x, y|Ct)
12     Incur loss L(θt−1(xt), yt)
13     Update θt with xt, yt at discretion
14     t ← t + 1
15 end while

Algorithm 2: Continual-MAML at CL time
14 Require: η, γ, λ: learning rate, hyperparameters
15 while continually learning do
16     Ct ~ P(Ccl|Ct−1)
17     xt, yt ~ p(x, y|Ct)
18     Incur loss L(fθt−1(xt), yt)
19     θ ← φ − φη ∇φ L(fφ(xt), yt)
20     if L(fθt−1(xt), yt) − L(fθ(xt), yt) < γ then
21         θt ← θt−1 − φη ∇θ L(fθt−1(xt), yt)
22     else
23         m ← η gλ(L(fθt−1(xt−1), yt−1))
24         φ ← φ − m ∇φ L(fθt−2(xt−1), yt−1)
25         θt ← φ − φη ∇φ L(fφ(xt), yt)
26     t ← t + 1
27 end while

Pre-training (Alg. 1 L4–L8). In many current settings [36, 25], the agent begins learning from randomly-initialized parameters. However, in many scenarios, it is unrealistic to deploy an agent without any world knowledge [43, 45], in part, since real-life non-i.i.d. training is difficult to learn. Further, in many domains, ample pre-training data can be leveraged.

Continual-learning time (Alg. 1 L9–L15) After pre-training, a stream of continual learning tasks evaluates the model. Each iteration t in the stream relies on a context Ct which determines the current task (xt, yt). The contexts follow a Markov process {Ct}T t=1 with transition probabilities P (Ct|Ct−1; α) (Alg. 1, L10). The context is at the heart of OSAKA and its process controls the level of stationarity of the continual-learning stage and it enables both revisiting tasks and out-of-distribution ones as well as context-dependent targets. We discuss these features below.

Controllable non-stationarity. OSAKA provides control, through a hyperparameter, over the level of non-stationarity of the Markov chain. A stream is α-locally-stationary when P (Ct = c|Ct−1 = c) = α. Namely, the data distribution is stationary within a local-time window, i.e., over a certain amount of timesteps. Control over α enables exploring environments with different levels of non-stationarity to test algorithmic robustness. Similar to the few-shot learning literature [80, 60, 54, 64], the transitions of the context variables in OSAKA are not structured, i.e., the context transition matrix that encodes the probability of transitioning from context i to context j has α on the diagonal and (1 − α)/(|C| − 1) everywhere else. For that reason, modelling the evolution of the context variables is not essential. Further, in OSAKA the environment provides enough feedback to the agents for re-adaptation via the targets
Current settings that permit pre-training then continually learn tasks sampled from the same data distribution [29, 6]. In contrast, in OSAKA the model has to learn online tasks sampled from new distributions not encountered at pre-training (see Section 6.1 for details). This setting is more realistic since an agent will encounter unexpected situations in real life requiring the algorithm to update its representations. Context-dependent targets. In standard CL, pt(x) shifts over time, but the target distribution p(y|x) is fixed. However, drift in the target distribution is common in multiple applications and is studied extensively in online learning as real concept drift [18]. Extending [25], OSAKA allows for context-dependent targets (Alg. 1, L11) making it more flexible and more aligned with our use-cases (see Sec. 1). In OSAKA the target distribution is p(y|x, Ct). The context variable in OSAKA is generic but it is motivated by real-world domains. For example, the context variable could be the strategy of an opponent in a game [71, 50, 79, 53], regimes in time-series forecasting [58, 20, 9], the mood of a user when navigating a content platform in recommender systems [26, 74] or any unobserved variable in RL, e.g., in partially observable Markov decision processes (POMDPs) [33] or in hidden-mode Markov decision processes [12]. In all these examples, like in OSAKA, the targets change over time based on a context. Task agnostic. In OSAKA the agent does not observe the task boundaries or context shifts, and it must infer the current task or context Ct. This is called task-agnostic (or task-free) CL [3, 4, 85, 25, 43] and is motivated by real-world scenarios where signals explicitly indicating a shift may not exist. Online Evaluation (Alg. 1 L12). Current settings reward methods that retain their performance on all previously seen tasks. This is an unrealistic constraint, particularly under limited computational resources [31, 32]. Instead of measuring the final performance of the model, OSAKA measures the online cumulative performance which better suits non-stationary environments. Models are evaluated in an online fashion using the sum of the losses across all timesteps an L(fo., Qt) where £ can be any loss (Alg. 1, L12). This is as opposed to reporting only the final accuracy—for example an L( for, Qt) [36, 61, 10, 11, 29]. Similar to the final accuracy, the online cumulative accuracy measures both plasticity and stability. Specifically, plasticity is evaluated when the algorithm encounters OoD tasks requiring additional learning. Models with higher stability can recover past performance faster and thus enjoy higher online cumulative performance. The cumulative accuracy is also similar to evaluating the (undiscounted) sum of rewards in reinforcement learning or the regret [7] in online learning albeit without the need to compute the performance of an optimal model. We instantiate OSAKA for image classification tasks (see Sec. 6), similarly to the majority of CL benchmarks. Some motivations for our proposed experimental setting are drawn, however, from a reinforcement learning (RL) scenario. In fact, we could adapt OSAKA to RL. We could replace the image classification tasks by tasks from multi-task RL benchmarks (e.g., such as different robotic tasks [84, 28]). We could also use a standard RL benchmark, e.g, from a model-based control environment [77], and create different contexts by changing some of the environment variables, e.g. the gravity. 
Once tasks or contexts are defined, we could group them in such a way that the CL-time tasks are OoD with respect to pre-training tasks. Increasingly more complex tasks could also be introduced at CL time to mimic a curriculum-learning scenario. Finally, to control for different levels of non-stationarity, we could adjust the time allocation or the number of episodes in each context/task. # 4 Continual-MAML We propose Continual-MAML (see Fig. 1), a CL baseline based on MAML [16] that can cope with the challenges of OSAKA. Continual-MAML (see Alg. 2 or its complete version Alg. 3 in App. B) consists of two stages: pre-training and continual learning. 5 The pre-training phase consists of MAML. That is, meta-learning model parameters such that a small number of gradient steps on a small new task will produce good generalization performance on that task (Alg. 3, L6–13). Specifically, the model adapts its initial weights φ to multiple tasks in the inner loop, obtaining θ. Then it updates the initialization φ in the outer loop. Note that the inner loop learning rate is meta-learned (φη in Alg. 3, L10). At CL time (Alg. 2), the inner loop optimization adapts the model to the current task. Specifically, the model uses current data Xt, Yt to obtain fast weights θt (Alg. 2, L21). Assuming that the data is locally stationary, it makes a prediction on the following data Xt+1 and incurs a loss (Alg. 2, L18). In the case of a sudden distribution shift, the model will fail at its first prediction because its fast weights θt are not suited for the new task yet, but it will have recovered by the next. The recovery is achieved by learning new fast weights θt+1 once the algorithm gets feedback on its prediction (Alg. 2, L25). Note that for some real-life applications, this feedback could be delayed [34]. Finally, to accumulate new knowledge, we further update the meta parameters φ on the incoming data as well (Alg. 2, L24). We also propose two features to improve Continual-MAML’s performance. First, the algorithm must update its knowledge only when it is solving an OoD task. Accordingly, we introduce a hyperparameter λ that controls the behavior of the algorithm between never training on the incom- ing data at CL time to always training (MAML and C-MAML in Section 6). Specifically, when L(fθt−1 (Xt), Yt) > λ, new knowledge is incorporated through outer loop optimization of the learned initialization. This mechanism is exemplified in Figure 1. To obtain a smoother interpolation between behaviors, we opted for a soft relaxation of the mechanism (Alg. 2, L23) where gλ : R → (0, 1). We call this first feature update modulation (UM). Second, to further leverage the local stationarity of OSAKA, we introduced a mechanism that keeps fine-tuning the fast weights θ (Alg. 2, L21) until a context shift or task boundary is detected. The simple yet effective context shift detection mechanism works by monitoring the difference in loss with respect to the previous task and is controlled by a hyperparameter γ (Alg. 2, L20). We call this second feature prolonged adaptation phase (PAP). In practice, we use a buffer to accumulate data whilst no task boundary is detected such that we can update the slow weights φ with more examples once it’s detected (see Alg. 3 in App. B). One can think of the update after the task boundary detection as a knowledge consolidation phase. An ablation of both mechanisms and an hyperparameter sensitivity analysis are provided in Section 6.3 and Appendix F.2, respectively. 
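A simplified sketch of the CL-time behaviour just described follows; it is not the reference implementation: the gate gλ is one arbitrary sigmoid choice, plain SGD stands in for the actual optimizer, and, for brevity, the slow weights are consolidated on the current batch rather than on the buffered data from the task that just ended.

```python
import copy
import torch

def g_lambda(loss_value, lam):
    """One possible soft gate in (0, 1): larger (more surprising) losses give larger updates."""
    return torch.sigmoid(torch.tensor(loss_value - lam)).item()

def continual_maml_step(phi_model, theta_model, batch, prev_loss, lr, gamma, lam, loss_fn):
    """One CL-time step: incur the loss with the fast weights, detect a boundary, adapt."""
    x, y = batch
    incurred = loss_fn(theta_model(x), y)            # loss is incurred before any update

    # Boundary detection: a large jump w.r.t. the previous loss signals a context shift.
    if prev_loss is not None and incurred.item() - prev_loss > gamma:
        # Consolidate knowledge into the slow weights phi, modulated by g_lambda.
        m = lr * g_lambda(incurred.item(), lam)
        phi_model.zero_grad()
        loss_fn(phi_model(x), y).backward()
        with torch.no_grad():
            for p in phi_model.parameters():
                p -= m * p.grad
        theta_model = copy.deepcopy(phi_model)       # restart adaptation from the initialization

    # Prolonged adaptation: keep fine-tuning the fast weights on the current task.
    theta_model.zero_grad()
    loss_fn(theta_model(x), y).backward()
    with torch.no_grad():
        for p in theta_model.parameters():
            p -= lr * p.grad
    return incurred.item(), theta_model
```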
As a result, different from previous CL literature, the proposed algorithm benefits from fast adaptation, dynamic representations, task boundary detection, and computational efficiency, as we describe next.

Fast Adaptation During pre-training, Continual-MAML learns a weight initialization that adapts fast to new tasks. This is different from CL methods that focus on incorporating as much knowledge as possible into one representation that has to maximize performance in a multi-task regime.

Dynamic representations In OSAKA, significant distribution shifts occur periodically. As shown in Section 6, models that require a fixed representation would fail to adapt. Instead, Continual-MAML, equipped with UM, detects OoD data and then learns new knowledge using outer-loop optimization.

Computational efficiency As described by Farquhar and Gal [15], CL agents should operate under restricted computational resources since remembering becomes trivial in the infinite-resource setting.

Figure 1: Continual-MAML first pre-trains with MAML, obtaining φ. At continual-learning time, the model adapts φ to new distributions. The algorithm retrains its slow weights φ when it detects an OoD task to add new knowledge to the model. (Figure is adapted from Figure 1 in Finn et al. [16].)
Statistical significance is assessed using a 95% confidence interval and highlighted in bold. Further experimental details are provided in Appendix E. We now introduce our three datasets. A few examples from each are shown in Appendix D. Omniglot / MNIST / FashionMNIST In this study, we pre-train models on the first 1,000 classes of Omniglot [38]. At CL time, the models are exposed to the full Onniglot dataset, and two out-of- distribution datasets: MNIST [40] and FashionMNIST [82]. Concerning the reported performance, MNIST is a simpler dataset than Omniglot, and FashionMNIST is the hardest. During CL time, the tasks switch with probability 1−α. For this study, we sample 10-way 1-shot classification tasks. Synbols In this study, models are pre-trained to classify characters from different alphabets on randomized backgrounds [37]. Tasks consist of 4 different symbols with 4 examples per symbol. During CL time, the model is exposed to a new alphabet. Further, the model will have to solve the OoD task of font classification, where the input distribution does not change, only its mapping to the output space. The font classification task consists of 4 different fonts with 4 symbols per font. Tiered-ImageNet Like Omniglot, Tiered-ImageNet [62] groups classes into super-categories corresponding to higher-level nodes in the ImageNet [14] hierarchy (we use 20/6/8 disjoint sets for training/validation/testing nodes). We use these higher-level splits to simulate a shift of distribution. We follow the original splits, where the test set contains data that is out of the training and validation distributions. Thus, we use their training set for pre-training, and introduce their validation and test sets at CL time. We refer to them as train, test and OoD in Table 3, respectively. Since only one of the two introduced sets is OoD, we increase its probability of being sampled to 0.5, in accordance to the previous benchmarks. This experiment uses 20,000 steps (twice as the others). 7 Omniglot ContinuakMAML MAML —_ MetaBGD ANIL, MetaCOG BGD 2k 4k é k Omniglot MNIST ContinuakMAML MAML —_ MetaBGD Continual-MAML MAML = MetaBGD. ANIL, MetaCOG BGD ANIL MetaCOG BGD FashionMNIST Continual-MAML MAML — MetaBGD ANIL MetaCOG = BGD 2k 4k é k 2 8k MNIST Continual-MAML MAML = MetaBGD. ANIL MetaCOG BGD 2 8k FashionMNIST Continual-MAML MAML — MetaBGD ANIL MetaCOG = BGD Figure 2: Omniglot / MNIST / FashionMNIST experiment in the α = 0.90 regime. Methods are allowed pre-training on Omniglot before deployment on a stream of Omniglot, MNIST and FashionMNIST tasks. We report the online performance (not cumulative) at each time-step with averaged over 20 runs, as well as standard error. Online ADAM and Fine tuning lie below of the graph. Continual-MAML is the only method with enough plasticity to increase its performance on new tasks, i.e. from MNIST and FashionMNIST, whilst simultaneously being stable enough remember the pretraining tasks, i.e. from Omniglot. # 6.2 Baselines Appendix D compares the main features of the baselines we benchmark in the OSAKA setting. For meta-learning methods, ADAM [35] and SGD are used for the outer and inner updates, respectively. Online ADAM and Fine tuning. We use ADAM without and with pre-training as a lower bounds. BGD [85]. Bayesian Gradient Descent (BGD) is a continual learning algorithm that models the distribution of the parameter vector φ with a factorized Gaussian. Similarly to [25] we apply BGD during the continual learning phase. 
More details about this baseline are provided in Appendix G.1. MAML [16]. MAML consists of a pre-training stage and a fine-tuning stage. During pre-training, the model learns a general representation that is common between the tasks. In the fine-tuning stage, the model fine-tunes its layers to adapt to a new task. ANIL [59]. ANIL differs from MAML only in the fine-tuning stage. Instead of adapting all the network layers, ANIL adapts only the network’s head towards the new task. The goal of this baseline is to show the problem with static representations in the continual learning setup. Therefore, ANIL is representative of meta-continual learning. MetaBGD and MetaCOG [25]. MetaBGD performs CML using MAML and BGD to alleviate catastrophic forgetting. MetaCOG introduces a per-parameter mask learned in the inner loop. # 6.3 Experimental results For all benchmarks, we report results on two α-locally-stationary environments. The first bench- mark’s results show online accuracy as function of timesteps in Figure 2 (full results are found in Appendix F.1). For Synbols and Tiered-Imagenet, the average accuracies over time are reported in Tables 2 and 3, respectively. For both regimes, the first column is the average performance over all predictions. The second, third and fourth columns show the performance on the three different set- tings. The prefix PRE. stands for pretraining. Algorithms perform better in the more locally-stationary regime (α = 0.98) because they spend more time in each task before switching. Fast adaptation We found fast adaptation (or meta-learning) to be the most critical feature for models to perform well in OSAKA, as highlighted by the performance gap between Online ADAM and Continual-MAML (up to +33% in Synbols α = 0.90). This gain comes from two advantages: quickly changing weights after a task/context switch, having slow (φ), and fast (θ) weights, which alleviate catastrophic forgetting. Dynamic representations Next, models need the ability to adapt the embedding space to correctly classify the OoD data. The Synbols font classification task highlights that learning a new mapping from the same inputs to a new output space is challenging when the embedded space is static. 8 α = 0.98 α = 0.90 MODEL TOTAL PREV. ALPH. NEW ALPH. FONT CLASS. TOTAL PREV. ALPH. NEW ALPH. FONT CLASS. ONLINE ADAM FINE TUNING MAML [16] ANIL [59] BGD [85] METACOG [25] METABGD [25] 59.6 ±1.5 63.7 ±2.3 64.0 ±2.0 69.6 ±2.1 71.2 ±2.8 90.3 ±0.8 69.4 ±1.9 91.3 ±0.8 68.3 ±1.4 73.6 ±2.3 68.3 ±1.7 73.6 ±1.7 72.5 ±1.6 77.8 ±1.8 59.5 ±3.7 63.0 ±3.6 65.7 ±1.5 59.0 ±1.6 69.7 ±3.0 69.6 ±2.8 73.6 ±1.7 50.7 ±2.9 52.9 ±2.8 37.9 ±1.1 33.2 ±1.0 56.1 ±3.5 56.8 ±2.8 58.8 ±3.5 27.5 ±0.8 28.3 ±1.1 26.6 ±1.8 27.0 ±2.4 69.3 ±0.9 86.3 ±0.5 88.4 ±0.4 70.2 ±0.8 33.9 ±1.3 36.7 ±1.6 34.6 ±1.3 37.1 ±1.8 60.3 ±0.4 65.8 ±0.7 26.3 ±0.9 26.2 ±1.5 64.3 ±0.7 68.7 ±0.6 32.0 ±1.9 33.5 ±2.4 62.2 ±1.4 26.9 ±0.7 26.1 ±1.2 40.4 ±0.7 35.1 ±0.5 30.3 ±0.9 30.5 ±1.0 47.8 ±1.4 C-MAML C-MAML + PRE. C-MAML + PRE. + UM C-MAML + PRE. + UM+ PAP 86.3 ±0.8 74.4 ±1.4 79.4 ±1.1 78.4 ±1.0 86.6 ±1.0 74.8 ±4.0 81.6 ±6.2 93.4 ±0.6 76.3 ±2.6 78.2 ±1.4 75.5 ±4.5 86.7 ±1.8 61.6 ±3.1 60.9 ±2.6 59.5 ±3.2 72.0 ±2.4 61.2 ±2.5 66.5 ±3.1 73.3 ±1.2 82.0 ±1.1 72.8 ±0.9 81.4 ±1.2 76.3 ±0.8 84.9 ±0.7 62.9 ±2.8 75.0 ±1.6 74.4 ±1.3 76.4 ±1.5 49.3 ±1.7 53.8 ±1.5 54.4 ±1.6 58.5 ±1.4 Table 2: Online cumulative accuracy for the Synbols experiments. Methods are allowed character classification pre-training on an alphabet. 
Then, they are deployed on a stream of tasks sampled from the pre-training alphabet and a new alphabet, as well as a font classification tasks on the pre-training alphabet. Continual-MAML + pre. outperforms all others methods in total cumulative accuracy and the PAP further increases performance. α = 0.98 α = 0.90 MODEL TOTAL TRAIN TEST OOD TOTAL TRAIN TEST OOD 44.5 ±1.7 ONLINE ADAM 44.6 ±1.5 FINE TUNING 59.3 ±1.2 MAML [16] 62.4 ±0.7 ANIL [59] 54.8 ±0.8 BGD [85] 55.2 ±0.7 METACOG [25] 55.9 ±0.6 METABGD [25] 61.4 ±0.5 C-MAML 59.1 ±0.9 C-MAML + PRE. 66.7 ±0.9 C-MAML + PRE. + UM C-MAML + PRE. + UM + PAP 69.1 ±0.7 43.9 ±2.1 43.8 ±2.8 61.4 ±1.9 65.7 ±0.8 53.8 ±1.0 54.1 ±1.1 55.7 ±0.9 59.5 ±1.4 57.4 ±1.2 65.7 ±1.7 68.7 ±0.9 44.6 ±2.2 44.1 ±2.1 61.0 ±1.8 64.8 ±1.3 54.6 ±1.9 55.8 ±1.6 54.1 ±1.4 61.2 ±1.3 58.4 ±1.8 66.2 ±1.6 69.3 ±1.0 44.6 ±2.1 45.2 ±1.8 57.3 ±1.0 59.5 ±0.9 55.3 ±1.0 55.4 ±1.0 56.8 ±0.9 62.4 ±0.9 60.1 ±1.2 67.4 ±0.9 69.1 ±1.2 22.7 ±0.2 22.6 ±0.2 60.4 ±0.4 58.1 ±0.5 27.7 ±0.7 24.5 ±0.2 46.8 ±0.8 53.7 ±0.3 57.8 ±0.7 59.7 ±0.3 53.4 ±6.4 22.7 ±0.4 22.5 ±0.3 63.2 ±0.7 61.0 ±0.8 27.4 ±0.7 23.9 ±0.4 45.8 ±1.1 52.0 ±0.6 56.3 ±0.7 59.1 ±0.8 53.5 ±6.1 22.6 ±0.4 22.7 ±0.4 62.6 ±0.5 59.7 ±0.7 27.7 ±0.8 24.0 ±0.3 46.8 ±1.0 53.0 ±0.6 57.7 ±0.9 59.7 ±0.6 53.7 ±6.2 22.7 ±0.3 22.6 ±0.3 58.0 ±0.3 55.8 ±0.4 27.8 ±0.8 25.1 ±0.3 47.3 ±0.9 54.9 ±0.5 58.6 ±0.7 59.9 ±0.4 53.2 ±6.6 Table 3: Online cumulative accuracy for the Tiered Imagenet experiment (see Sec. 6.1 for the experimental details). For this experiment, Continual-MAML outperforms others methods in the more non-stationary regime (α = 0.98). However, in the less-nonstationary one, MAML achieves better results due to its higher stability. Additionally, the UM mechanism consistently improved Continual-MAML’s performance. Namely, the dynamic representations of Continual-MAML offer a 23.7% and a 28.4% improvement in α = 0.98 compared to MAML and ANIL. This behavior is demonstrated in Figure 2 were these two baselines do not improve their performances over time, which is precisely the goal of CL. Thus, these results demonstrates the inapplicability of current MCL to real scenarios. Although MCL can continually learn new tasks without forgetting, its static embedded space will prevent it from learning tasks lying outside of the pre-training data distribution. Computational efficiency Moreover, adding BGD to slow-down forgetting hinder the acquisition of new knowledge. Removing this feature, e.g. from MetaBGD to Continual-MAML, increases the performance in five out of six experiments and diminishes the computation cost by 80%. Update modulation We now analyse, via ablations, the mechanisms we added to Continual- MAML for further improvements. Modulating the updates improved the performance in Omniglot and Tiered experiments but decreased it in Synbols’ (C-MAML + PRE. vs. C-MAML + PRE. + UM, resulting in an average increase of 1.7%. In Appendix F.2, we show how this mechanism interpolates C-MAML + UM’s behavior betweeen MAML and C-MAML. Prolonged adaptation phase Finally, our PAP enabled by the task boundary detection mechanism helps achieve impressive gains in the locally more stationary regime (+11.5% and 2.4% in Synbols and Tiered-ImageNet, respectively). In the other regime (α = 0.90), the shorter task sequences limits the room for improvements and the results are inconclusive. An hyperparameter sensitivity analysis on γ (see App. F.2) in terms of precision and recall for boundary detection accuracy shows that difference in loss magnitudes (see Alg. 
2 L20) is a good signal for detecting context shifts. 9 # 7 Conclusions We propose OSAKA a new approach to continual learning that focuses on online adaptation, faster remembering and is aligned to real-life applications. This framework is task agnostic, allows context- conditioned targets and task revisiting. Furthermore, it allows pre-training, and introduces OoD tasks at continual-learning time. We show that the proposed setting is challenging for current methods that were not designed for OSAKA. We introduce Continual-MAML, an initial baseline that addresses the challenges of OSAKA and we empirically demonstrate its effectiveness. # Broader Impact Our work proposes a more-realistic (synthetic) continual-learning environment. This research could help accelerate the deployment of CL algorithms into applications such as autonomous driving, recommendation systems, information extraction, anomaly detection, and others. A domain often associated with continual learning is health care. In health care, patient data is (usually) very sensitive, CL algorithms can be the solution to accumulating knowledge from different hospitals: they can be trained continually across hospitals without the data ever leaving the premise. Possible negative impacts: Our framework enables previous tasks to be forgotten at different rates. If data is patient-level data and different tasks relate to different subset of patients, then it means that the system’s performance on past patients could vary. A diagnostic system, for example, could forget how to properly diagnose a patient from a previous population whilst learning about a new one. Further research, possibly at the intersection of continual learning and fairness, is needed before the safe deployment of these algorithms. Possible positive impacts: The aforementioned negative impact may also be its greatest asset for having a positive impact. Returning to our example, practitioners could understand to what extent a diagnostic system forgets previous diagnostics. They could then use and develop OSAKA to calibrate their algorithms to match their desiderata (e.g., by choosing when the negative consequence of forgetting may outweight the benefits of additional training data). # Acknowledgments and Disclosure of Funding Laurent Charlin is supported through a CIFAR AI Chair and grants from NSERC, CIFAR, IVADO, Samsung, and Google. Massimo Caccia is also supported through a MITACS grant. We would like to thank Grace Abuhamad for an helpful discussion on broader impacts. # References [1] Aljundi, R., Babiloni, F., Elhoseiny, M., Rohrbach, M., and Tuytelaars, T. (2017). Memory aware synapses: Learning what (not) to forget. CoRR, abs/1711.09601. [2] Aljundi, R., Caccia, L., Belilovsky, E., Caccia, M., Lin, M., Charlin, L., and Tuytelaars, T. (2019a). Online continual learning with maximal interfered retrieval. In Advances in Neural Information Processing Systems 32, pages 11849–11860. Curran Associates, Inc. [3] Aljundi, R., Kelchtermans, K., and Tuytelaars, T. (2019b). Task-free continual learning. 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 11246–11255. [4] Aljundi, R., Lin, M., Goujaud, B., and Bengio, Y. (2019c). Gradient based sample selection for online continual learning. In Advances in Neural Information Processing Systems (NeurIPS). [5] Antoniou, A., Patacchiola, M., Ochal, M., and Storkey, A. (2020). Defining benchmarks for continual few-shot learning. arXiv preprint arXiv:2004.11967. 
[6] Beaulieu, S., Frati, L., Miconi, T., Lehman, J., Stanley, K. O., Clune, J., and Cheney, N. (2020). Learning to continually learn. arXiv preprint arXiv:2002.09571. [7] Berry, D. A. and Fristedt, B. (1985). Bandit problems: sequential allocation of experiments (Monographs on statistics and applied probability). Springer. 10 [8] Caccia, L., Belilovsky, E., Caccia, M., and Pineau, J. (2019). Online learned continual compres- sion with adaptive quantization modules. arXiv, pages arXiv–1911. [9] Caccia, M. and Rémillard, B. (2018). Option pricing and hedging for discrete time autoregres- sive hidden markov model. In Proceedings of the Innovations in Insurance, Risk- and Asset Management Conference. Springer Proceeding in Mathematics and Statistics. [10] Chaudhry, A., Dokania, P. K., Ajanthan, T., and Torr, P. H. (2018). Riemannian walk for incremental learning: Understanding forgetting and intransigence. In European Conference on Computer Vision (ECCV). [11] Chaudhry, A., Ranzato, M., Rohrbach, M., and Elhoseiny, M. (2019). Efficient lifelong learning with A-GEM. In International Conference of Learning Representations (ICLR). [12] Choi, S. P., Yeung, D.-Y., and Zhang, N. L. (2000). Hidden-mode markov decision processes for nonstationary sequential decision making. In Sequence Learning, pages 264–287. Springer. [13] De Lange, M., Aljundi, R., Masana, M., Parisot, S., Jia, X., Leonardis, A., Slabaugh, G., and Tuytelaars, T. (2019). Continual learning: A comparative study on how to defy forgetting in classification tasks. arXiv preprint arXiv:1909.08383, 2(6). [14] Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., and Fei-Fei, L. (2009). Imagenet: A large-scale hierarchical image database. In Computer Vision and Pattern Recognition (CVPR). [15] Farquhar, S. and Gal, Y. (2018). Towards robust evaluations of continual learning. arXiv preprint arXiv:1805.09733. [16] Finn, C., Abbeel, P., and Levine, S. (2017). Model-agnostic meta-learning for fast adaptation of deep networks. In International Conference on Machine Learning (ICML). [17] Finn, C., Rajeswaran, A., Kakade, S., and Levine, S. (2019). Online meta-learning. International Conference on Machine Learning (ICML). In [18] Gama, J., Žliobait˙e, I., Bifet, A., Pechenizkiy, M., and Bouchachia, A. (2014). A survey on concept drift adaptation. ACM computing surveys (CSUR), 46(4):1–37. [19] Garnelo, M., Rosenbaum, D., Maddison, C. J., Ramalho, T., Saxton, D., Shanahan, M., Teh, Y. W., Rezende, D. J., and Eslami, S. (2018). Conditional neural processes. arXiv preprint arXiv:1807.01613. [20] Ghahramani, Z. (2001). An introduction to hidden markov models and bayesian networks. In Hidden Markov models: applications in computer vision, pages 9–41. World Scientific. [21] Goodfellow, I. J., Mirza, M., Xiao, D., Courville, A., and Bengio, Y. (2013). An Empirical Investigation of Catastrophic Forgetting in Gradient-Based Neural Networks. ArXiv e-prints. [22] Graves, A. (2011). Practical variational inference for neural networks. In Advances in neural information processing systems (NIPS). [23] Hannan, J. (1957). Approximation to bayes risk in repeated play. Contributions to the Theory of Games. [24] Harrison, J., Sharma, A., Finn, C., and Pavone, M. (2019). Continuous meta-learning without tasks. ArXiv, abs/1912.08866. [25] He, X., Sygnowski, J., Galashov, A., Rusu, A. A., Teh, Y. W., and Pascanu, R. (2019). Task agnostic continual learning via meta learning. ArXiv, abs/1906.05201. [26] Hidasi, B., Karatzoglou, A., Baltrunas, L., and Tikk, D. (2015). 
Session-based recommendations with recurrent neural networks. arXiv preprint arXiv:1511.06939. [27] Isele, D. and Cosgun, A. (2018). Selective experience replay for lifelong learning. In AAAI conference on artificial intelligence. 11 [28] James, S. W., Ma, Z., Arrojo, D. R., and Davison, A. J. (2020). Rlbench: The robot learning benchmark & learning environment. IEEE Robotics and Automation Letters, 5:3019–3026. [29] Javed, K. and White, M. (2019). Meta-learning representations for continual learning. In Advances in Neural Information Processing Systems (NeurIPS). [30] Jerfel, G., Grant, E., Griffiths, T., and Heller, K. A. (2019). Reconciling meta-learning and continual learning with online mixtures of tasks. In Advances in Neural Information Processing Systems (NeurIPS). [31] Kaelbling, L. P. (1991). Foundations of learning in autonomous agents. Robotics and Au- tonomous Systems, 8(1-2):131–144. [32] Kaelbling, L. P. (1993). Learning in embedded systems. A Bradford Book. [33] Kaelbling, L. P., Littman, M. L., and Cassandra, A. R. (1998). Planning and acting in partially observable stochastic domains. Artificial intelligence, 101(1-2):99–134. [34] Kaelbling, L. P., Littman, M. L., and Moore, A. W. (1996). Reinforcement learning: A survey. Journal of artificial intelligence research, 4:237–285. [35] Kingma, D. P. and Ba, J. (2014). Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. [36] Kirkpatrick, J., Pascanu, R., Rabinowitz, N., Veness, J., Desjardins, G., Rusu, A. A., Milan, K., Quan, J., Ramalho, T., Grabska-Barwinska, A., et al. (2017). Overcoming catastrophic forgetting in neural networks. Proceedings of the national academy of sciences, 114(13):3521–3526. [37] Lacoste, A., Rodríguez, P., Branchaud-Charron, F., Atighehchian, P., Caccia, M., Laradji, I., Drouin, A., Craddock, M., Charlin, L., and Vázquez, D. (2020). Synbols: Probing learning algorithms with synthetic datasets. In NeurIPS 2020. [38] Lake, B. M., Salakhutdinov, R., and Tenenbaum, J. B. (2015). Human-level concept learning through probabilistic program induction. Science, 350(6266):1332–1338. [39] Lange, M. D., Aljundi, R., Masana, M., Parisot, S., Jia, X., Leonardis, A., Slabaugh, G., and Tuytelaars, T. (2019). Continual learning: A comparative study on how to defy forgetting in classification tasks. [40] LeCun, Y. and Cortes, C. (2010). MNIST handwritten digit http://yann.lecun.com/exdb/mnist/. database. [41] Lee, J., Yoon, J., Yang, E., and Hwang, S. J. (2017). Lifelong learning with dynamically expandable networks. CoRR, abs/1708.01547. [42] Lesort, T., Caselles-Dupré, H., Garcia-Ortiz, M., Goudou, J.-F., and Filliat, D. (2019a). Genera- tive Models from the perspective of Continual Learning. In International Joint Conference on Neural Networks (IJCNN). [43] Lesort, T., Lomonaco, V., Stoian, A., Maltoni, D., Filliat, D., and Díaz-Rodríguez, N. (2019b). Continual learning for robotics. ArXiv, abs/1907.00182. [44] Lesort, T., Stoian, A., and Filliat, D. (2019c). Regularization shortcomings for continual learning. ArXiv, abs/1912.03049. [45] Lomonaco, V., Maltoni, D., and Pellegrini, L. (2019). Fine-grained continual learning. arXiv preprint arXiv:1907.03799. [46] Lopez-Paz, D. and Ranzato, M. (2017). Gradient episodic memory for continual learning. In Advances in Neural Information Processing Systems (NIPS). [47] Luo, Y., Huang, Z., Zhang, Z., Wang, Z., Baktashmotlagh, M., and Yang, Y. (2019). Learning from the past: Continual meta-learning via bayesian graph modeling. 12 [48] McCloskey, M. 
and Cohen, N. J. (1989). Catastrophic interference in connectionist networks: The sequential learning problem. In Psychology of learning and motivation, volume 24, pages 109–165. Elsevier. [49] Monahan, G. E. (1982). State of the art—a survey of partially observable markov decision processes: theory, models, and algorithms. Management science, 28(1):1–16. [50] Moravˇcík, M., Schmid, M., Burch, N., Lis`y, V., Morrill, D., Bard, N., Davis, T., Waugh, K., Johanson, M., and Bowling, M. (2017). Deepstack: Expert-level artificial intelligence in heads-up no-limit poker. Science, 356(6337):508–513. [51] Mundt, M., Hong, Y. W., Pliushch, I., and Ramesh, V. (2020). A wholistic view of continual learning with deep neural networks: Forgotten lessons and the bridge to active and open world learning. ArXiv, abs/2009.01797. [52] Nguyen, C. V., Li, Y., Bui, T. D., and Turner, R. E. (2018). Variational continual learning. In International Conference on Learning Representations (ICLR). [53] OpenAI (2018). Openai five. https://blog.openai.com/openai-five/. [54] Oreshkin, B., López, P. R., and Lacoste, A. (2018). Tadam: Task dependent adaptive metric for improved few-shot learning. In Advances in Neural Information Processing Systems, pages 721–731. [55] Ostapenko, O., Puscas, M. M., Klein, T., Jähnichen, P., and Nabi, M. (2019). Learning to remember: A synaptic plasticity driven framework for continual learning. CoRR, abs/1904.03137. [56] Parisi, G. I., Kemker, R., Part, J. L., Kanan, C., and Wermter, S. (2019). Continual lifelong learning with neural networks: A review. Neural Networks, 113:54 – 71. [57] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al. (2019). Pytorch: An imperative style, high-performance deep learning library. In Advances in Neural Information Processing Systems (NeurIPS). [58] Rabiner, L. R. (1989). A tutorial on hidden markov models and selected applications in speech recognition. Proceedings of the IEEE, 77(2):257–286. [59] Raghu, A., Raghu, M., Bengio, S., and Vinyals, O. (2019). Rapid learning or feature reuse? towards understanding the effectiveness of maml. arXiv preprint arXiv:1909.09157. [60] Ravi, S. and Larochelle, H. (2016). Optimization as a model for few-shot learning. ICLR. [61] Rebuffi, S.-A., Kolesnikov, A., Sperl, G., and Lampert, C. H. (2017). icarl: Incremental classifier and representation learning. In Computer Vision and Pattern Recognition (CVPR). [62] Ren, M., Triantafillou, E., Ravi, S., Snell, J., Swersky, K., Tenenbaum, J. B., Larochelle, H., and Zemel, R. S. (2018). Meta-learning for semi-supervised few-shot classification. arXiv preprint arXiv:1803.00676. [63] Riemer, M., Cases, I., Ajemian, R., Liu, M., Rish, I., Tu, Y., and Tesauro, G. (2018). Learning to learn without forgetting by maximizing transfer and minimizing interference. arXiv preprint arXiv:1810.11910. [64] Rodríguez, P., Laradji, I., Drouin, A., and Lacoste, A. (2020). Embedding propagation: Smoother manifold for few-shot classification. In Proceedings of the European Conference on Computer Vision (ECCV). [65] Rolnick, D., Ahuja, A., Schwarz, J., Lillicrap, T., and Wayne, G. (2019). Experience replay for continual learning. In Advances in Neural Information Processing Systems. [66] Rusu, A. A., Rabinowitz, N. C., Desjardins, G., Soyer, H., Kirkpatrick, J., Kavukcuoglu, K., Pas- canu, R., and Hadsell, R. (2016). Progressive neural networks. arXiv preprint arXiv:1606.04671. [67] Schmidhuber, J. (1987). 
Evolutionary principles in self-referential learning, or on learning how to learn: the meta-meta-... hook. PhD thesis, Technische Universität München. 13 [68] Schwarz, J., Luketina, J., Czarnecki, W. M., Grabska-Barwinska, A., Teh, Y. W., Pascanu, R., and Hadsell, R. (2018). Progress & compress: A scalable framework for continual learning. arXiv preprint arXiv:1805.06370. [69] Serrà, J., Surís, D., Miron, M., and Karatzoglou, A. (2018). Overcoming catastrophic forgetting with hard attention to the task. CoRR, abs/1801.01423. [70] Shin, H., Lee, J. K., Kim, J., and Kim, J. (2017). Continual learning with deep generative replay. In Advances in Neural Information Processing Systems. [71] Silver, D., Huang, A., Maddison, C. J., Guez, A., Sifre, L., Van Den Driessche, G., Schrittwieser, J., Antonoglou, I., Panneershelvam, V., Lanctot, M., et al. (2016). Mastering the game of go with deep neural networks and tree search. nature, 529(7587):484–489. [72] Snell, J., Swersky, K., and Zemel, R. (2017). Prototypical networks for few-shot learning. In Advances in neural information processing systems, pages 4077–4087. [73] Soltoggio, A. (2015). Short-term plasticity as cause–effect hypothesis testing in distal reward learning. Biological cybernetics, 109(1):75–94. [74] Song, W., Xiao, Z., Wang, Y., Charlin, L., Zhang, M., and Tang, J. (2019). Session-based social recommendation via dynamic graph attention networks. In Proceedings of the Twelfth ACM International Conference on Web Search and Data Mining, pages 555–563. [75] Sung, F., Yang, Y., Zhang, L., Xiang, T., Torr, P. H., and Hospedales, T. M. (2018). Learning to compare: Relation network for few-shot learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1199–1208. [76] Thrun, S. and Mitchell, T. M. (1995). Lifelong robot learning. Robotics and autonomous systems, 15(1-2):25–46. [77] Todorov, E., Erez, T., and Tassa, Y. (2012). Mujoco: A physics engine for model-based control. In 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems, pages 5026–5033. IEEE. [78] van de Ven, G. M. and Tolias, A. S. (2019). Three scenarios for continual learning. arXiv preprint arXiv:1904.07734. [79] Vinyals, O., Babuschkin, I., Czarnecki, W. M., Mathieu, M., Dudzik, A., Chung, J., Choi, D. H., Powell, R., Ewalds, T., Georgiev, P., et al. (2019). Grandmaster level in starcraft ii using multi-agent reinforcement learning. Nature, 575(7782):350–354. [80] Vinyals, O., Blundell, C., Lillicrap, T., Wierstra, D., et al. (2016). Matching networks for one shot learning. In Advances in neural information processing systems, pages 3630–3638. [81] Vuorio, R., Cho, D.-Y., Kim, D., and Kim, J. (2018). Meta continual learning. arXiv preprint arXiv:1806.06928. [82] Xiao, H., Rasul, K., and Vollgraf, R. (2017). Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms. arXiv preprint arXiv:1708.07747. [83] Xu, J. and Zhu, Z. (2018). Reinforced continual learning. In Advances in Neural Information Processing Systems (NIPS). [84] Yu, T., Quillen, D., He, Z., Julian, R. R., Hausman, K., Finn, C., and Levine, S. (2019). Meta-world: A benchmark and evaluation for multi-task and meta reinforcement learning. In CoRL. [85] Zeno, C., Golan, I., Hoffer, E., and Soudry, D. (2018). Task agnostic continual learning using online variational bayes. 
14 # A A unifying framework Data Distribution Model for Fast Weights Slow Weights Updates Evaluation Supervised Learning S.QxC fo = A(S) — L(fo,Q) ota {ci}, ~w™ _ : VoL(fo;, Qi) uM . Meta-learning SO Ci fo, = Ag(Si) View My L(Aa (Si), Qi) Continual Learning Sir. Qur ~ Crt fo = CL(S1-7) — D L(fo, Qe) {Cir} ~ Ww" Sir. Qivr ~ Cir Continual-meta learning Sir. Qur ~ Crt fo, = Ag (Si—1) Vell for: Se) LD, L(Ae (Si), Qe) fo, = Cho (Siar) Ve Ue LF Qe) OM cA (Sits), Qi) Meta-Continual Learning Vien OSAKA Qur~ Cur fo, = Ag(Qi-1) Vel( for, Qt) DX, L( fers Qt) Table 4: A unifying framework for different machine learning settings. Data sampling, fast weights computation and slow weights updates as well as evaluation protocol are presented with meta-learning terminology, i.e., the support set S and query set Q. For readability, we omit OSAKA pre-training. Meta-continual learning combines meta-learning and continual learning. A collection of M se- quences of contexts is sampled i.id. from a distribution over sequences of contexts, W™, ice., {Cir}, ~ WM and Siter.Qiair ~ Xiar | Cizsr- Next, the continual learning algorithm, CLg, can be learned using the gradient Vg >>, £(CL4(Si,1:7), Qi), fori < N < M and eval- uated on the remaining sets yy Y, £(CLe(Si1:r), Qi). As in continual learning, the target distribution is fixed. Continual-meta learning considers a sequence of datasets Sy.7,Qi:-r ~ Cy.r. At training or continual-learning time, S},7 is both used as a support and query set: S; is used as the query set and S;_1 as the support. Predictions at time t are made using fg, = Ag(Q:—1). Since local stationarity is assumed, the model always fails on its first prediction when the task switches. Next, using l, = L(fo,,S:), the learning of ¢ is performed using gradient descent of Vgl;. The evaluation is performed at the end of the sequence where Ay recomputes fast weights using the previous supports and is tested on the query set, i.e., )>, £(Ay(S:), Q:). Similar to meta-learning, continual-meta learning allows for context-dependent targets. 15 # B Algorithms Algorithm 3: Continual-MAML 1 Require: P(Cyre), P(Cc1): distributions of contexts (or tasks) A: threshold and regularization hyperparameters step size hyperparameter @: Meta and fast adaptation parameters 5 Initialize: 1: learnable inner loop learning rate 6 Initialize: B buffer of incoming data 7 while pre-training 8 Sample batch of contexts (or tasks) {C;}#., ~ P(Cyre) 9 foreach C; do 10 Sample data from context #;, yi ~ P(a, y|Ci) uw O,-¢- bnV ol (fa(xil: k}). 
yal: kl) 12 end 1B 6— b6-V5d; L(fo,(ailk :]), yilk :]) 14 end 15 Initialize: current parameters 09 <~ ¢ 16 while continually learning 17 Sample current context C, ~ P(Cy|C;_1) 18 Sample data from context #, y; ~ P(C;) 19 Incur loss L( fo,_, (ae), Ye) 20 Virtual model 6; — ¢ — bnV o£ (fo(a2), yi) 2 if L( fo, 1 (#4), Ye) - L( fg, (as), Â¥e) <7 22 # No context shift detected 23 Further fine tune the fast parameters O — Oa — onVoL (fa, ,(@1), 2) 24 Add (a4, yz) to buffer B 25 else 26 # Task boundary detected 27 Sample training data from buffer ain, Yurain ~ B 28 Fast adaptation 0 ~ } — by VgL(fo(auain): Yurin) 29 sample test data from buffer Xtest, Yrest ~ B 30 Modulated learning rate 7, <— nga(L(fo (aes) Yes) ) 31 Update Meta parameters @ — ¢ — mV sL(fo (atest), Yrest) 32 Reset buffer B 33 Reset fast parameters 0, + ¢ — bnVol( fa (a), yr) 34 tet+1 35 end 16 Algorithm 4: Continual-MAML w/o Prolonged Adaptation Phase 1 Require: P (Cpre), P (Ccl): distributions of contexts (or tasks) 2 Require: γ, λ: threshold hyperparameters 3 Require: η: step size hyperparameter 4 Initialize: φ, θ: Meta and fast adaptation parameters 5 while pre-training 6 step size hyperparameter , 0: Meta and fast adaptation parameters 5 while pre-training 6 Sample batch of contexts (or tasks) {C;}., ~ P(Cpre) 7 foreach C; do 8 Sample data from context @;, y; ~ P(C;) 9 91 — b ~ by VoL (fo(wil: k]), wil: k]) 10 end u | ¢@b-nVed;L(fo.(wilk :]), yilk :]) 12 end 13 Initialize: current parameters 09 <~ ¢ 14 while continually learning 15 Sample current context C, ~ P(Cxy|C;-1) 16 Sample data from context @,, y, ~ P(#, y|C;) 17 Incur loss L( fo,_, (ae), Ye) 18 Reset fast parameters 0; <— ¢ — bnV ol (fala, yr) 19 if L( fo, 1(@1), Ye) - L( fo, (ae), Ye) <7 20 # No task boundary detected 2 Modulated learning rate 7, <— nga(L(Fo, 1 (@.),»:)) n 6— b—mVoLl\ fo, (a), ve) 23 tet+1 24 end 17 # C Related Work Our method intersects the topics of continual learning, meta learning, continual-meta learning, and meta-continual learning. For each of these topics, we describe the related work and current state-of-the-art methods. Continual learning. Given a non-stationary data stream, standard learning methods such as stochastic gradient descent (SGD) are prone to catastrophic forgetting as the network weights adapted to the most recent task quickly cannot perform the previous ones anymore. Many continual learning approaches have been proposed in recent years, which can be roughly clustered into: (1) replay-based methods, (2) regularization-based methods, and (3) parameter-isolation methods. Replay-based methods store representative samples from the past, either in their original form (e.g., rehearsal methods [61, 27, 65, 2], constrained optimization based on those samples [46]), or in a compressed form, e.g., via a generative model [2, 8, 55, 42]. However, those methods require additional storage, which may need to keep increasing when the task sequence is longer. Regularization-based or prior-based approaches [36, 52, 85] prevent significant changes to the parameters that are important for previous tasks. Most prior-based methods rely on task boundaries. However, they fail to prevent forgetting with long task sequences or when the task label is not given at test time [15, 44]. The third family, parameter isolation or dynamic architecture methods, attempts to prevent forgetting by using different subsets of parameters for fitting different tasks. This is done either by freezing the old network [83, 69] or growing new parts of the network [41, 68]. 
Dynamic architecture methods, however, usually assume that the task label is given a test time, which reduces their applicability in real-life settings. For more details, recent continual learning surveys have been proposed [13, 51]. Meta learning. Learning-to-learn methods are trained to infer an algorithm that adapts to new tasks [67]. Meta learning has become central for few-shot classification [60, 80, 54]. A commonly used meta-learning algorithm is MAML [16], which optimizes the initial parameters of a network such that adapting to a new task requires few gradient steps. ANIL [59] is another variation of meta learning that requires only adapting the network’s output layer or head to the new tasks. These algorithms leverage gradient descent to learn a feature representation that is common among various tasks, but they are not suitable when the new tasks have a drastic distribution shift from the existing tasks. Despite the limitations of meta-learning methods, they can be adapted to address the challenges of continual learning, as we will describe below. Meta-continual learning. Since non-stationary data distributions breaks the i.i.d assumption for SGD, it is natural to consider continual learning as an optimization problem where the learning rule learns with non-stationary data. Therefore, some recent works focus on learning a non-forgetting learning rule with meta learning, i.e., meta-continual learning. In Javed and White [29], the model is separated into a representation learning network and a prediction learning network. The representation learning network is meta learned so that the prediction learning part can be safely updated with SGD without forgetting. In Vuorio et al. [81], a gradient-based meta-continual learning is proposed. The update is computed from a parametric combination of the gradient of the current and previous task. This parametric combination is trained with a meta objective that prevents forgetting. These approaches are all limited by the fundamental assumption of meta learning that the distribution of the meta testing set matches that of the meta training set. Thus it is not guaranteed that the meta- learned representation or update rule is free of catastrophic forgetting when OoD data is encountered in the future. Despite that, meta-continual learning is actively researched [63, 6]. Continual-meta learning. Recently, several methods emerged that address the continual-meta learning setup. FTML [17] extends the MAML algorithm to the online learning setting by incorporat- ing the follow the leader (FTL) algorithm [23]. FTL provides an O(log T ) regret guarantee and has shown good performance on a variety of datasets. Dirchlet-based meta learning (DBML) [30] uses a Dirchlet mixture model to infer the task identities sequentially. More relevant to our work, MetaBGD [25] addresses the problem of fast remembering when the task segmentation is unavailable. MOCA [24] extends meta-learning methods with a differentiable Bayesian change-point detection scheme to identify whether a task has changed. Continual-meta learning is now an active research field [47, 5]. 18 # C.1 Contrasting OSAKA and MOCA’s framework In this section, we contrast OSAKA with the recently introduced framework showcasing meta-learning via online changepoint analysis (MOCA) [24]. We are incentivized to discuss these differences because both frameworks can appear similar. Specifically, OSAKA and MOCA’s framework represent the tasks or contexts as a hidden Markov chain. 
However, both settings are fundamentally different and the similarities are superficial. We now highlight their core differences. Context-dependent targets In most CL scenarios including in the MOCA’s framework, the joint distribution pt(x, y) changes through time via the input distribution pt(x). The target distribution p(y|x) however is fixed (i.e., pt(y|x) = p(y|x)). In other words, in standard incremental CL, new labels still appear even though pt(y|x) is fixed: they appear via pt(x) moving its probability mass to new classes. OSAKA is more general because it allows for drift in the target distribution pt(y|x) as well. This is achieved through the latent context variable C as detailed in Section 3. In other words, pt(y|x) = p(y|x, ct). This is a common scenarios in partially-observable environments [49, 19] or more generally to any case where a prediction depends on the context, e.g. time-series prediction. Out-of-distribution tasks Similar to Javed and White [29], Beaulieu et al. [6], MOCA’s framework allows for pre-training. However, all those frameworks test their models on similar data at CL time, i.e., new classes from the same dataset. They thus make strong assumptions about the data distribution that the CL agent will be exposed to at deployment time. This assumption can limite the real-world applicability of current methods. In OSAKA, pre-training is also allowed. However, at CL time, the model will be tested on OoD data distribution w.r.t the pre-training one (see Section 3. OSAKA thus helps us analyze robustness of algorithms to data distribution(s) outside of the pre-training one. It is thus more aligned with real-life cases of CL. Expanding set of labels In MOCA’s framework, all classes are known a priori (see Section B2 in Harrison et al. [24]). They do not allow for an expanding set of labels over time, which is a central idea in CL [36, 46, 61, 15, 2, 4, 70, 29, 11]. MOCA’s framework is closer to domain-incremental learning [78], i.e., classes are fixed but new variations can appear within them. Similarly to standard CL, OSAKA allows for an expending set of labels. Thus, algorithms’ capacity to incrementally learn new concepts is studied in OSAKA. To conclude, the main contribution of Harrison et al. [24] is a new algorithm: MOCA. In contrast, OSAKA is a new CL evaluation framework aiming to push CL beyond its current limits. We acknowledge that changepoint detection is important for continual learning and refer the readers to [24] for a review of the changepoint detection literature. 19 # D Datasets and Baselines Pretraining Time Continual Learning Time FashionMNIST Continual Learning Time = Symbols Sybois Fonts New aiphabet Pretraining Time Continual Learning Time Ae ins le fo SLX OS ee by laf ce ae 1 S SB Re Bs Bed aos Imager erry Pretraining Time Continual Learning Time FashionMNIST Continual Learning Time = Symbols Sybois Fonts New aiphabet Pretraining Time Continual Learning Time Ae ins le fo SLX OS ee by laf ce ae 1 S SB Re Bs Bed aos Imager erry Figure 3: We evaluate our setup on three different benchmarks, each one depicted in one row: Om- niglot/MNIST/FashionMNIST, Synbols, and Tiered-ImageNet. MODEL ONLINE ADAM FINE TUNING BGD [85] MAML [16] ANIL [59] METABGD [25] PRE-TRAIN CL TIME MAML ANIL MAML SGD BGD UM/PAP × × × √ × × × √ × × √ × × × × × × √ √ √ × × × √ × × × × √ × × × × N/A N/A × METACOG [25] CONTINUAL-MAML × √ × × √ √ × √ × × × √ Table 5: Baseline comparison. Columns 2–3 con- tain pre-training algorithms. 
Columns 4–7 show training algorithms at continual learning time. UM and PAP stand for update modulation and Pro- longed adaptation phase, respectively, and are ex- plained in Section 4. # E Experiment Details The procedure followed to perform the experiments in Section 6 is described next in detail. The code to reproduce the experiments is publicly available at https://github.com/ElementAI/osaka. For all experiments, we used a 4-layer convolutional neural network with 64 hidden units as commonly used in the few-shot literature [80, 72, 75, 64]. All the methods were implemented using the PyTorch library [57], run on a single 12GB GPU and 4 CPUs . # E.1 Hyperparameter search Hyperparameters were found by random search. During hyperaparmeter search, we allocated the same amount of trials for each method, i.e., each line in the reported Tables. We used Adam [35] for the outer-loop optimization and SGD in the inner (for meta-learning methods). For each trial, we sampled uniformly a method and then sampled hyperparameters uniformly according to the search space defined in Table 7. Each for each hyperparameter trial, we ran two continual learning episodes with different seeds. The seeding impacts the neural net initialization as well as what data stream the algorithm will be exposed to. Whenever the first ran didn’t return a cumulative accuracy better than random, we omitted the second run. We allocated equal amount of trials to both non-stationary levels α ∈ {0.90, 0.98}. We dedicated a fix amount of compute for each benchmarks and further provide specific details in the rest of this section. Omniglot / MNIST / FashionMNIST For this benchmark, we allocated a total of 12.5 days of compute. This allowed for 935 trials of which 381 were better than random. Synbols For this benchmark, we allocated a total of 19.5 days of compute. This allowed for 1,309 trials of which 340 were better than random. Tiered-Imagenet For this benchmark, we allocated a total of 62 days of compute. We only ran 1 seed per trials which allowed for 934 trials. For all benchmarks, concerning the runtime per trials, because BGD requires 5 times more compute than SGD, the BGD baseline took approximately five time longer to run than Online ADAM. Similarly, MetaBGD took approximately 5 time longer to run than C-MAML. Moreover, methods with meta-learning took approximately 5 times longer than methods without. 20 We add the following clarification: we do not need a validation set in OSAKA, as there is no training error. Specifically, in the CL episodes, algorithms always make prediction on held-out data. As for the evaluation runs, the best sets of hyperparameters are used to evaluate the methods on 20 new runs. The algorithms are thus exposed to 20 new CL episodes. For clarification, we do not use the best models found in the hyperparameter-search: we only use the hyperparameters to train and evaluate new models. MODEL ONLINE ADAM FINE TUNING MAML [16] ANIL [59] BGD [85] METABGD [25] METACOG [25] CONTINUAL-MAML CONTINUAL-MAML + PRE. CONTINUAL-MAML + UM CONTINUAL-MAML + PAP η BATCH SIZE INNER-STEP SIZE INNER ITERS √ √ √ √ √ √ √ √ √ √ √ × √ √ √ × × √ √ × × √ √ × √ √ × √ √ √ √ √ √ × √ √ √ √ √ √ × √ × × FIRST ORDER MC SAMPLES β σ γ λ × × √ √ × √ √ √ √ √ √ × × × × √ √ √ × × × × × × × × × × × × × × × × × × × × √ √ × × √ √ × × √ √ × × × × × × × × × × × × × √ × × × √ Table 6: Method’s hyperparameters. η is the step-size or outer-step size for meta-learning methods. Batch size is only needed for methods with pre-training. 
For methods using meta-learning, we searched the inner-step size, the number of inner iterations (inner iters) and the use of the first order approximation of MAML. BGD related hyperparameters, i.e., MC samples, β and σ are explained in Appendix G.1. γ and λ are specific of Continual-MAML and operate the update modulation and prolonged adaptation phase mechanisms, respectively. For readability, we omitted 2 hyperparameters related to MetaCOG and refer to the codebase for completeness. η Batch size Inner-step size 0.0005 0.001 0.005 0.01 0.05 0.1 0.5 Inner iters First Order MC Samples β σ γ λ Table 7: Hyperparameter search space. 21 # F Extra Results In this section, we provided further results as well as more details about baselines. # F.1 Omniglot / MNIST / FashionMNIST In Table 8, we report the full results for the Omniglot / MNIST / FashionMNIST experiment. Contrary to the other experiments, we found that C-MAML pre-training didn’t improve results. We thus focus the ablation on C-MAML instead of C-MAML + Pre. TOTAL α = 0.98 OMNIGLOT MNIST FASHION TOTAL α = 0.90 OMNIGLOT MNIST 73.9 ±2.2 72.7 ±1.7 84.5 ±1.7 75.3 ±2.0 87.8 ±1.3 88.0 ±1.0 91.1 ±2.6 89.5 ±0.7 92.2 ±0.5 92.8 ±0.6 81.7 ±2.3 80.8 ±2.0 97.3 ±0.3 95.1 ±0.6 95.1 ±0.5 95.2 ±0.5 96.8 ±1.5 95.4 ±0.4 97.1 ±0.3 97.8 ±0.2 70.0 ±3.6 68.7 ±2.8 80.4 ±0.3 58.7 ±2.9 86.9 ±1.1 87.1 ±1.5 92.5 ±1.9 91.1 ±0.9 94.1 ±0.8 93.9 ±0.8 62.3 ±2.5 59.6 ±3.1 63.5 ±0.3 49.7 ±0.3 74.4 ±1.1 74.3 ±1.5 77.8 ±3.8 76.6 ±1.3 80.5 ±1.4 79.9 ±0.7 23.8 ±1.2 22.1 ±1.1 75.5 ±0.7 69.1 ±0.8 63.4 ±0.9 63.6 ±0.9 74.8 ±1.1 82.6 ±0.4 84.5 ±0.4 83.3 ±0.4 26.6 ±2.0 25.5 ±1.5 88.8 ±0.4 88.3 ±0.5 72.8 ±1.2 73.5 ±1.3 83.1 ±1.0 87.8 ±0.4 88.6 ±0.5 89.0 ±0.5 20.0 ±1.4 18.1 ±1.9 68.1 ±0.5 52.4 ±0.6 55.9 ±2.2 55.9 ±1.8 71.7 ±1.5 84.6 ±1.0 86.2 ±0.6 84.5 ±0.7 FASHION 22.1 ±1.3 19.2 ±1.6 56.2 ±0.4 47.6 ±0.9 51.7 ±1.3 51.7 ±1.4 61.5 ±1.2 70.3 ±0.7 74.2 ±0.8 71.1 ±0.7 # Table 8: Omniglot / MNIST / FashionMNIST experiment # F.2 Hyperparameter Sensitivity Analysis In this section, we analyze the update modulation (UM) and prolonged adaptation phase (PAP) mechanisms we introduce in C-MAML. Their respective hyperparameters are λ and γ. We perform the analysis on Synbols for the following reasons: (i) It is harder to solve than the Omniglot benchmark; (ii) Models train faster than Tiered-Imagenet; (iii) It is the only benchmark with an OoD task in which the pre-training data is bestowed a new semantic meaning, i.e., the font classification task. We analyze the higher non-stationarity setting of α = 0.98. setting. This setting puts emphasis on challenging the fundamental i.i.d assumption that CL is interested in solving. Update Modulation We analyze the effect of λ parameterizing gλ : R → (0, 1). We use gλ to modulate the learning rate proportionally to the loss (see Alg. 2, L23). λ provides a smooth interpolation between the behavior of MAML and Continual-MAML. When λ = 0, Continual-MAML + UM collapses to MAML. When λ = inf, Continual-MAML + UM collapses to Continual-MAML. In Figure 4, we show the effect of λ on the online cumulative accuracy (same metric as reported elsewhere) which we obtained from our hyperparameter search. Interestingly, all values of λ consistently increased the performance of Continual-MAML + UM with respect to MAML and Continual-MAML. This increase is due to two reasons. First, MAML (λ = 0) cannot accumulate knowledge about the OoD tasks. 
Second, Continual-MAML (or λ = inf) overfits its slow parameters φ to the current tasks, interfering with previous knowledge too aggressively. # Prolonged Adaptation Phase To enable PAP, we need a mechanism to dectect the task boundary (or the context shifts). We propose a simple yet effective context shift detection mechanism which monitors the difference in loss with respect to the previous task and is controlled by a hyperparameter γ (Alg. 2, L20). Setting γ to high values will increase precision but reduce recall, and vice-versa. In Figure 5 we report precision and recall with respect to multiple values of γ. We can see that, when tuned appropriately, this mechanism can achieve near-perfect F1 scores, as highlighted by the trials near the top right corner. The effectiveness of PAP is shown in Figure 6. Specifically, we show that, across all values of γ, PAP increases the average performance of Continual-MAML. Again, the proposed mechanism is robust to its hyperparameter. 22 o © o Oo S BR 0 0.25 0.5 1 2 3 5 inf. (MAML) (C-MAML) # Accuracy Figure 4: Update modulation (UM) analysis. The proposed mechanism is robust to its hyperparam- eter λ and consistently increases average and maximum performance 2 2 a a e os § os § é 8 e 8 ° e 5 ©. 5 0.6 e@ 06 3 e 3 e & & 0.4 3 0.4 02 02 bd ° final_recall a8 final_recall_avg 0.0 0.0 0.0 02 0.4 06 08 1.0 0.0 02 0.4 06 08 1.0 2 a e os § 8 e 5 ©. 0.6 e@ 3 e & 0.4 3 02 bd final_recall a8 0.0 0.0 02 0.4 06 08 1.0 2 a os § é 8 ° e 5 06 3 e & 0.4 02 ° final_recall_avg 0.0 0.0 02 0.4 06 08 1.0 Figure 5: Precision (y-axis) and Recall (x-axis) for task boundary detection as a function of γ (color). Left: all trials are plotted, Right: trials are grouped by γ and the average is reported 23 o © ° N o fo) Accuracy id u 0.4 C-MAML C-MAML + PAP Figure 6: Prolonged adaptation phase (PAP) analysis. The proposed mechanism increases average and maximum performance. 24 # G Extra Notes # G.1 Bayesian Gradient Descent Bayesian Gradient Descent (BGD) is a continual learning algorithm that models the distribution of the parameter vector ¢ by a factorized Gaussian. Similarly to [25] we apply BGD during the continual learning phase. BGD models a the distribution of the parameter vector ¢ by a factorized Gaussian 4(¢) = []; N(¢i|i, 02). Essential motivation behind BGD is that o models the uncertainty of the estimation of the parameter ¢. Hence parameters with higher uncertainty should be allowed to change faster than the parameters with lower o, which are more important for preserving knowledge learned so far. BGD leverages variational Bayes techniques [22] and introduces an explicit closed-form update rule for the parameters 4; and o;: IL (for (Xt), Yi) bi =H; — Bo? E,( ), Oo OL( fo,_.(X1),Y Oi vat n (Soe. Pt t) ) eg)- 1 IL ( fo, (Xt), Ye) g7iEe, | ei, 06% where the expectations are approximated using Monte Carlo sampling and the re-parametrization trick is used as bj = pi + o1€:, €; ~ N(0, 1). 25 # H Q&A Here you can find reviewers questions and concerns and our answers that we couldn’t address in the main part of the paper due to space limitation. Pre-training limits the generality of OSAKA and adds computational needs. We disagree. OS- AKA aligns with the deployment of CL systems in real life (Sec. 1 & 3) and it would be more realistic to deploy an agent with some knowledge of the world. Nevertheless, pre-training is not mandatory, although prescribed, and we have a baseline that does use it (C-MAML). 
Furthermore, it is currently more computationally efficient to learn on i.i.d. data at pre-training than on non-stationary data at CL time and pre-training is a one-time cost compared to CL which is a recurring one. Why putting features of different frameworks together is useful for continual learning evalua- tion? We unified and extended these features to create a more realistic setting than the ones studied in the previous literature. Other frameworks study some of the features in silos but when methods are tested in less realistic settings some methods perform better than they should [12]. See Sec. 6.3 (under dynamic representations) for such an example. I think it is strange that MAML performs better in the 0.90 setting. The reviewer’s intuition is right. However, C-MAML needs to predict correctly the context switches otherwise it will get mixed gradients from different tasks. Thus, α = 0.90 can be more challenging for methods with dynamic representations when the OoD tasks are not too far from the pre-training ones, as in the Tiered-Imagenet experiment. Without task revisits, does φ stop being suitable for few-shot learning? It stays suitable because it is still trained with the MAML loss, which optimizes for few-shot learning. 26
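As a complement to the pseudocode of Algorithms 3–4 in Appendix B, the following is a compact, hedged Python sketch of the continual-learning loop of Continual-MAML: fast updates of θ while no boundary is detected, a loss-difference test against the threshold γ, and, on a detected context shift, a meta-update of the slow weights φ modulated by g_λ before resetting the buffer. The linear model, the specific saturating form chosen for g_λ, the first-order meta-step, and the reuse of the buffer as both support and query set are simplifying assumptions for illustration, not the authors' implementation.

```python
import math
import numpy as np

def loss_and_grad(w, x, y):
    """Squared error of a linear model: a stand-in for the task loss L(f_w(x), y) and its gradient."""
    err = x @ w - y
    return float(np.mean(err ** 2)), 2.0 * x.T @ err / len(y)

def g_lambda(loss, lam=1.0):
    """Assumed form of the update-modulation function g_lambda: R -> (0, 1); the paper only
    specifies its signature, so a saturating map of the incurred loss is used here."""
    return 1.0 - math.exp(-lam * loss)

def continual_maml(phi, stream, outer_lr=0.1, inner_lr=0.1, gamma=1.0, lam=1.0):
    """One continual-learning episode; `stream` yields (x_t, y_t) batches."""
    theta = phi.copy()                               # fast weights start at the slow weights
    buf_x, buf_y = [], []
    for x, y in stream:
        incurred, _ = loss_and_grad(theta, x, y)     # loss incurred with the current fast weights
        _, g_phi = loss_and_grad(phi, x, y)
        virtual = phi - inner_lr * g_phi             # "virtual" model: one adaptation step from phi
        virtual_loss, _ = loss_and_grad(virtual, x, y)
        if incurred - virtual_loss < gamma:
            # No context shift detected: keep fine-tuning theta and grow the buffer (PAP).
            _, g = loss_and_grad(theta, x, y)
            theta = theta - inner_lr * g
            buf_x.append(x)
            buf_y.append(y)
        else:
            # Context shift detected: meta-update the slow weights on the buffered task.
            if buf_x:
                bx, by = np.concatenate(buf_x), np.concatenate(buf_y)
                # The full algorithm splits the buffer into support/query sets; it is reused for both here.
                _, g_sup = loss_and_grad(phi, bx, by)
                adapted = phi - inner_lr * g_sup     # fast adaptation on the buffered task
                q_loss, g_q = loss_and_grad(adapted, bx, by)
                phi = phi - outer_lr * g_lambda(q_loss, lam) * g_q   # first-order meta-step, modulated by g_lambda
            _, g_new = loss_and_grad(phi, x, y)
            theta = phi - inner_lr * g_new           # reset the fast weights from the (possibly updated) phi
            buf_x, buf_y = [x], [y]                  # restart the buffer with the new batch
    return phi, theta

# Toy usage: a synthetic regression stream with a single context shift halfway through.
rng = np.random.default_rng(0)
def toy_stream(n_steps=20, dim=3):
    w_true = rng.normal(size=dim)
    for t in range(n_steps):
        if t == n_steps // 2:
            w_true = rng.normal(size=dim)
        x = rng.normal(size=(8, dim))
        yield x, x @ w_true

phi, theta = continual_maml(np.zeros(3), toy_stream())
print(phi)
```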
{ "id": "1810.11910" }
2003.05997
Efficient Content-Based Sparse Attention with Routing Transformers
Self-attention has recently been adopted for a wide range of sequence modeling problems. Despite its effectiveness, self-attention suffers from quadratic compute and memory requirements with respect to sequence length. Successful approaches to reduce this complexity focused on attending to local sliding windows or a small set of locations independent of content. Our work proposes to learn dynamic sparse attention patterns that avoid allocating computation and memory to attend to content unrelated to the query of interest. This work builds upon two lines of research: it combines the modeling flexibility of prior work on content-based sparse attention with the efficiency gains from approaches based on local, temporal sparse attention. Our model, the Routing Transformer, endows self-attention with a sparse routing module based on online k-means while reducing the overall complexity of attention to $O\left(n^{1.5}d\right)$ from $O\left(n^2d\right)$ for sequence length $n$ and hidden dimension $d$. We show that our model outperforms comparable sparse attention models on language modeling on Wikitext-103 (15.8 vs 18.3 perplexity) as well as on image generation on ImageNet-64 (3.43 vs 3.44 bits/dim) while using fewer self-attention layers. Additionally, we set a new state-of-the-art on the newly released PG-19 data-set, obtaining a test perplexity of 33.2 with a 22 layer Routing Transformer model trained on sequences of length 8192.
http://arxiv.org/pdf/2003.05997
Aurko Roy, Mohammad Saffar, Ashish Vaswani, David Grangier
cs.LG, eess.AS, stat.ML
TACL 2020; pre-MIT Press publication version; v5 has a random attention baseline
null
cs.LG
20200312
20201024
0 2 0 2 t c O 4 2 ] G L . s c [ 5 v 7 9 9 5 0 . 3 0 0 2 : v i X r a # Efficient Content-Based Sparse Attention with Routing Transformers Aurko Roy and Mohammad Saffar and Ashish Vaswani and David Grangier Google Research {aurkor, msaffar, avaswani, grangier}@google.com # Abstract Self-attention has recently been adopted for a wide range of sequence modeling problems. Despite its effectiveness, self-attention suf- fers from quadratic compute and memory requirements with respect to sequence length. Successful approaches to reduce this complex- ity focused on attending to local sliding win- dows or a small set of locations independent of content. Our work proposes to learn dy- namic sparse attention patterns that avoid allocating computation and memory to at- tend to content unrelated to the query of interest. This work builds upon two lines of research: it combines the modeling flexibility of prior work on content-based sparse atten- tion with the efficiency gains from approaches based on local, temporal sparse attention. Our model, the Routing Transformer, endows self- attention with a sparse routing module based on online k-means while reducing the over- all complexity of attention to O(n1.5d) from O(n2d) for sequence length n and hidden di- mension d. We show that our model outper- forms comparable sparse attention models on language modeling on Wikitext-103 (15.8 vs 18.3 perplexity), as well as on image genera- tion on ImageNet-64 (3.43 vs 3.44 bits/dim) while using fewer self-attention layers. Ad- ditionally, we set a new state-of-the-art on the newly released PG-19 data-set, obtaining a test perplexity of 33.2 with a 22 layer Rout- ing Transformer model trained on sequences of length 8192. We open-source the code for Routing Transformer in Tensorflow. ∗ # 1 Introduction Generative models of sequences have witnessed rapid progress driven by the application of attention to neural networks. In particular, Bahdanau et al. (2015); Cho et al. (2014); Vaswani et al. (2017) relied on attention to drastically improve the state-of-the art in machine translation. Subsequent research (Radford et al., 2018; Devlin et al., 2019; Liu et al., 2019; Yang et al., 2019) demonstrated the power of # ∗https://github.com/google-research/ google-research/tree/master/routing_transformer self-attention in learning powerful representations of language to address several natural language processing tasks. Self-attention also brought im- pressive progress for generative modeling outside of language, e.g. image (Parmar et al., 2018; Menick and Kalchbrenner, 2018; Child et al., 2019) and music generation (Huang et al., 2018; Child et al., 2019). Self-attention operates over sequences in a step- wise manner: at every time-step, attention assigns an attention weight to each previous input element (representation of past time-steps) and uses these weights to compute the representation of the current time-step as a weighted sum of the past input ele- ments (Vaswani et al., 2017). Self-attention (Shaw et al., 2018) is a particular case of attention (Bah- danau et al., 2015; Chorowski et al., 2015; Luong et al., 2015). commonly used in auto- regressive generative models. These models gener- ate observations step-by-step, modeling the prob- ability of the next symbol given the previously generated ones. At every time step, self-attentive generative models can directly focus on any part of the previous context. 
In contrast, recurrent neu- ral networks (RNNs) and convolutional neural net- works (CNNs) have direct interactions with only a local neighborhood of context around the current time step. This advantage however comes at a price: un- like recurrent networks or convolution networks, the time and space complexity of self-attention is quadratic in n, the length of the sequence. Specifi- cally, for every position i ≤ n, self-attention com- putes weights for its whole context of length i, which induces a complexity of P i≤n i = n(n − 1)/2. This makes it difficult to scale attention based models to modeling long sequences. However, long sequences are the norm in many domains, including music, image, speech, video generation and document level machine translation. Therefore, an important research direction is to investigate sparse and memory efficient forms of at- tention in order to scale to tasks with large sequence lengths. Previous work has proposed data indepen- dent or fixed sparsity patterns bounding temporal dependencies, such as local or strided attention. At each time step, the model attends only to a fixed number of time steps in the past (Child et al., 2019). Extensions to local attention have suggested learn- ing the length of the temporal sparsity for each attention module in the network (Sukhbaatar et al., 2019). These strategies draw their inspiration from RNNs and CNNs and bound their complexity by at- tending only to representations summarizing a local neighborhood of the current time step. Their at- tention matrices (matrices containing the attention weights for every pair of previous, current time- step) are natively sparse and require instantiating only non-zero entries. While these approaches have achieved good results, fixing the sparsity pattern of a content based mechanism such as self-attention can limit its ability to pool in information from large contexts. As an alternative to local attention, Correia et al. (2019) consider content-based sparsity, an approach allowing for arbitrary sparsity patterns. This formulation however does require instantiating a full dense attention matrix prior to sparsification through variants of L0-sparsity or sparsemax ap- proximations (Blondel et al., 2019). The present work builds upon these two lines of research and proposes to retain the modeling flexibility of content-based sparse attention while leveraging the efficiency of natively sparse attention matrices. Our formulation avoids sparsemax vari- ants and relies on clustering of attention instead. Each attention module considers a clustering of the space: the current time-step only attends to con- text belonging to the same cluster. In other words, the current time-step query is routed to a limited number of context elements through its cluster as- signment. This strategy draws inspiration from the application of spherical k-means clustering to the Maximum Inner Product Search (MIPS) problem. Our proposed model, Routing Transformer, com- bines our efficient clustering-based sparse atten- tion with classical local attention to reach excel- lent performance both for language and image generation. These results are obtained without the need to maintain attention matrices larger than batch length which is the case with the seg- ment level recurrence mechanism used in Dai et al. (2019); Sukhbaatar et al. (2019). We present ex- perimental results on language modeling (enwik-8, Wikitext-103 and PG-19) and unconditional image generation (CIFAR-10 and ImageNet-64). 
Routing Transformer sets new state-of-the-art while hav- ing comparable or fewer number of self-attention layers and heads, on Wikitext-103 (15.8 vs 18.3 perplexity), PG-19 (33.2 vs 33.6 perplexity), and on ImageNet-64 (3.43 vs 3.44 bits/dim). We also report competitive results on enwik-8 (0.99 vs 0.98 perplexity) and present ablations on CIFAR-10. # 2 Related Work Attention with Temporal Sparsity: Research on efficient attention neural models parallels the ad- vent of attention-based architectures. In the context of speech recognition, Jaitly et al. (2016) proposed the Neural Transducer which segments sequences in non-overlapping chunks and attention is performed in each chunk independently. Limiting attention to a fixed temporal context around the current pre- diction has also been explored in Chorowski et al. (2015), while ? dynamically segment the sequence into variable sized-chunks. Hierarchical attention strategies have also been explored: the model first considers which part of the inputs should be attended to before comput- ing full attention in a contiguous neighborhood of the selected area (Gregor et al., 2015; Xu et al., 2015; Luong et al., 2015). Later, hierarchical atten- tion has been simplified by Liu et al. (2018) that alternates coarse layers (attending to the whole se- quence at a lower temporal resolution) with local layers (attending to a neighborhood of the current prediction). This alternating strategy is also employed by Child et al. (2019), which introduces bounded and strided attention, i.e. attending to a fixed context in the past at a sub-sampled temporal resolution. This work formalizes such a strategy using a sparse attention formalism, showing how it relates to full attention with a specific sparsity pattern in the attention matrix. It shows that sparse attention is sufficient to get state-of-the-art results in model- ing long sequences over language modeling, image generation and music generation. Sukhbaatar et al. (2019) build upon this work and show that is it is possible to obtain further sparsity by letting the model learn the length of the temporal context for each attention module. This work also makes use of the attention cache introduced in Dai et al. (2019), a memory mechanism to train models over tempo- ral contexts which extend beyond the length of the training batches. Attention with Content-Based Sparsity: The above work mainly relies on two efficient ideas: attending to less elements by only considering a fixed bounded local context in the past, and at- tending to less elements by decreasing the temporal resolution of context. These ideas do not allow arbitrary sparsity patterns in attention matrices. Content-based sparse attention has been introduced to allow for richer patterns and more expressive models. Martins and Kreutzer (2017); Malaviya et al. (2018) propose to compute attention weights with variants of sparsemax. Correia et al. (2019) generalizes this approach to every layer in a Trans- former using entmax which allows for more efficient inference. This line of work allows for learning arbi- trary sparsity attention patterns from data, based on the content of the current query and past con- text. However, sparsity here cannot be leveraged to improve space and time complexity since sparse- max/entmax formulations require instantiating the full attention matrix prior to sparsification. This is a drawback compared to temporal sparsity ap- proaches. 
Our work is motivated by bridging this gap and allows for arbitrary sparsity patterns while avoiding having to instantiate non-zero entries of attention matrices. Contemporaneous to our work, Kitaev et al. (2020) proposed to use Locality Sensitive Hashing (LSH) using random hyper-planes to infer content based sparsity patterns for attention: tokens that fall into the same hash bucket, get to attend to each other. While similar in spirit to our approach, the approach of Kitaev et al. (2020) keeps the ran- domly initialized hyper-planes fixed throughout, while we use mini-batch spherical k-means to learn the space-partitioning centroids. The motivation in both approaches is to approximate Maximum Inner Product Search (MIPS) in the context of dot product attention, for which both LSH and spher- ical k-means have been used in literature. How- ever, typically spherical k-means is known to out- perform LSH for MIPS (see e.g. Auvolat et al. (2015)). This is borne out in the common task of Imagenet-64 generation, where Reformer gets around 3.65 bits/dim (Figure 3), while the Routing Transformer gets 3.43 bits/dim (see Table 4 for a comparison). beyond Atten- tion: Learning models with sparse representa- tions/activations for saving time and computation has been addressed in the past in various context. Previous work often refers to this goal as gating for conditional computation. Gating techniques relying on sampling and straight-through gradient estimators are common (Bengio et al., 2013; Eigen et al., 2013; Cho and Bengio, 2014). Conditional computation can also be addressed with rein- forcement learning (Denoyer and Gallinari, 2014; Indurthi et al., 2019). Memory augmented neural networks with sparse reads and writes have also been proposed in Rae et al. (2016) as a way to scale Neural Turing Machines (Graves et al., 2014). In the domain of language modeling, a related work is the sparsely gated Mixture-of-experts (MOE) (Shazeer et al., 2017) where sparsity is induced by experts and a trainable gating network controls the routing strategy to each sub-network. Another related work is Lample et al. (2019) who use product quantization based key-value lookups to replace the feed forward network in the Transformer. Our work differs from theirs in that we make use of dynamic key-value pairs to infer sparsity patterns, while their key-value pairs are the same across examples. # 3 Self-Attentive Auto-regressive Sequence Modeling Auto-regressive sequence models decompose the probability of a sequence x = (x1, . . . , xn) as p(x) = pθ(x1) n Y pθ(xi|x<i). i=2 (1) In neural models, the conditional distribution pθ(xi|x<i) is modeled by a neural network with learned parameters θ and these parameters are typ- ically learned to maximize the likelihood of the training data. In particular, Transformer architec- tures have shown to reach state-of-the-art accuracy in several domains, including language modeling (Vaswani et al., 2017; Radford et al., 2018), image generation (Parmar et al., 2018) and music gener- ation (Huang et al., 2018). Transformer models compose a series of attention modules. Each mod- ule refines the input representation by taking a weighted average of the representations from the previous modules. For every module, the input representation is a sequence of n vectors x = (x1, . . . , xn) from a continuous space of dimension d. Thus one may actually treat the input sequence as a n × d matrix X. A self-attention layer operates on this represen- tation. 
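As a minimal illustration of the chain-rule factorization above (Eq. 1), the sketch below sums per-step conditional log-probabilities to obtain the sequence log-likelihood. The `next_token_probs` placeholder and the vocabulary size are hypothetical stand-ins; a real model such as a Transformer would be plugged in at that point.

```python
# Sketch of the autoregressive factorization: log p(x) = sum_i log p(x_i | x_<i).
import numpy as np

VOCAB = 5  # hypothetical vocabulary size

def next_token_probs(prefix: list) -> np.ndarray:
    """Placeholder for p_theta(. | x_<i); here just a fixed uniform distribution."""
    return np.full(VOCAB, 1.0 / VOCAB)

def sequence_log_prob(x: list) -> float:
    """Sum the conditional log-probabilities of each symbol given its prefix."""
    return float(sum(np.log(next_token_probs(x[:i])[x[i]]) for i in range(len(x))))

if __name__ == "__main__":
    print(sequence_log_prob([0, 3, 2, 4]))  # equals 4 * log(1/5) for the uniform placeholder
```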
The layer first applies three linear projections,

Q = XW_Q, K = XW_K, V = XW_V,   (2)

where Q, K and V are referred to as queries, keys and values, while W_Q, W_K, W_V are learned projection matrices. The key and the query matrices determine the n × n attention matrix A = softmax(QK^T), where the softmax operator over matrices denotes that the softmax function has been applied to each row. In the case of self-attention for auto-regressive models, queries attend only over keys from previous time-steps, i.e.

A = softmax(ltr(QK^T)),   (3)

where ltr denotes the lower triangular operator. The attention matrix A may be interpreted as a matrix of weights in [0, 1] where A_ij denotes how much query position i at the next layer must pay attention to key position j at the previous layer. Given the attention matrix A, the next layer representation X′ is then computed simply as AV. In summary,

X′_i = Σ_{j<i} A_ij V_j.   (4)

In practice, Transformer (Vaswani et al., 2017) adds several extensions to this basic self-attention mechanism. In particular, the result X′ of performing self-attention is scaled by 1/√d. Moreover, each layer relies on multiple attention heads, i.e. each layer performs multiple projections onto triplets (queries, keys, values) and attention is performed for each head. The attention results from all heads are then concatenated. This strategy allows each head to specialize on different aspects of the input sequence. In addition, Transformer further processes the result of attention through a learnable non-linear transformation (multi-layer perceptron, mlp) followed by a residual connection and a normalization step, i.e.

X′ = layernorm(X′ + X),   (5)
X″ = layernorm(mlp(X′) + X′),   (6)

where layernorm denotes the parameterized normalization step from (Ba et al., 2016). A full Transformer model is therefore a chain of attention modules (Eq. 6) preceded by an embedding module (learnable representation for symbols and their positions) and followed by a logistic classification module (learnable linear classifier to predict the next symbol).

Our work is interested in the application of the Transformer to long sequences, a challenging problem since the space and time complexity of attention is quadratic in sequence length n. We describe various approaches to sparse attention including ours in the next section.

# 4 Efficient Content-Dependent Sparse Attention

Attention-based models can be problematic for long sequences. For a sequence of length n, the full attention matrix A, as introduced in Section 3, is n × n-dimensional and can be prohibitive to instantiate. This motivates sparse attention models, i.e. models relying on attention matrices which have a majority of zero entries. For each query, a sparse attention model defines a set of keys which can be attended to. In the following, we introduce the set S_i as the set of key positions that the query at position i can attend to, i.e.

X′_i = Σ_{j∈S_i} A_ij V_j.   (7)

The set of all such key positions defines a sparsity pattern S = {S_i | 1 ≤ i ≤ n} for the entire sequence. For example, classical causal self-attention can attend to every key prior to the current query, which translates to S_i = {j | j < i} for every i. Most previous work on attention sparsity defined such sets purely based on positions, independently of actual query and key vectors. For example, local attention (Luong et al., 2015) considers attending only to a k-long time window prior to the current query, S_i = {j | i − k ≤ j < i} for every i. The work of Child et al. (2019) proposes block sparse attention where half the heads perform local attention, and half the heads perform strided attention given by S_i = {j | i − j (mod k) = 0, j < i} for every i. The approach of Sukhbaatar et al. (2019) is also a variant of local attention where the cardinality of |S_i| is learned from data with an L1 penalty to trade off sparsity with modeling accuracy.

These local attention sparsity variants are effective in practice since correlation between observations naturally decreases with time for many problems. In our experiments, we actually find that local attention is a surprisingly strong baseline in both image generation and language modeling: e.g., a scaled up ImageTransformer (Parmar et al., 2018) gets 3.48 bits/dim compared to the 3.44 bits/dim reported in Child et al. (2019). Similarly, scaled up versions of Transformer with local attention and the relative positional encoding scheme of Shaw et al. (2018) are able to get 19.8 perplexity on Wikitext-103, 1.10 bits per byte on enwik-8 and 39.3 on PG-19, while Transformer-XL (Dai et al., 2019) gets 18.3, 0.99 and 36.3 respectively. From an efficiency perspective, local attention is also interesting since sparsity patterns are regular, contiguous in memory and known in advance.

In this work, however, we are interested in a more generic formulation of attention sparsity and would like the sparsity pattern to be informed by the data, i.e., S = f(x). This approach has several modeling advantages: it can accommodate data without a clear ordering over observations. For temporal data, it can also discover patterns with greater sparsity if some types of queries have a longer lasting effect on future observations than others. Content-based sparse attention should however be carefully implemented if we need to avoid instantiating full attention matrices at any point in time. For instance, Correia et al. (2019) infer sparsity from data but their formulation instantiates a full attention matrix before finding its sparse counterpart. The next section explains how a natively sparse approach can actually be devised, inspired by the Maximum Inner Product Search (MIPS) problem.

# 4.1 Routing Attention with Clustering

Our strategy follows the motivation we delineated in the previous section: we model sparse attention matrices with low rank sparsity patterns relying on k-means clustering. Our strategy first assigns queries and keys to clusters. Then only queries and keys from the same cluster are considered for attention.

Precisely, our model clusters both keys K and queries Q using mini-batch k-means clustering on the same set of centroid vectors µ = (µ_1, ..., µ_k) ∈ R^{k×d}. These centroid parameters are model parameters and are shared across sequences. They are learned online along with the rest of the parameters, as delineated in (Bottou and Bengio, 1995). Once cluster memberships for queries and keys are determined, we denote by µ(Q_i) ∈ µ the nearest centroid to Q_i and by µ(K_j) ∈ µ the nearest centroid to K_j. This allows us to define our sparse attention strategy as

X′_i = Σ_{j: K_j ∈ µ(Q_i), j < i} A_ij V_j.   (8)

In summary, queries are routed to keys belonging to the same cluster. To see the connection with Maximum Inner Product Search (MIPS), we recall the setting of the MIPS problem adapted to the case of dot-product attention.
In this problem we are given a large collection of vectors K = {K_1, ..., K_n} of size n in R^d and, for a given query Q_i ∈ R^d, we are interested in searching for a key K_j ∈ K which (approximately) maximizes the dot product Q_i^T K_j:

K_j = arg max_{x ∈ K} Q_i^T x.   (9)

The MIPS problem is useful in the dot product attention setting because the importance of a particular key K_j to a query Q_i is directly proportional to its dot product Q_i^T K_j. Thus, given a budget of items that a query Q_i can attend to, the optimal choice of keys K_j are the ones given by the MIPS objective in Equation 9. The motivation for using k-means clustering is the observation that the MIPS problem is equivalent to the Nearest Neighbor Search (NNS) problem when the norm of every element K_j ∈ K is constant.

Therefore, we work with queries and keys which are unit vectors, projecting them onto the unit ball immediately before the attention computation. In practice, instead of normalizing by the ℓ2 norm, we use Layer Normalization (Ba et al., 2016) with the scale and bias terms disabled. This has the benefit of projecting vectors in R^d to the d-ball and prevents their entries from becoming too small. These layer normalized keys and queries are also used subsequently for computing the dot product attention. Note that performing the k-means algorithm on unit vectors is equivalent to the spherical k-means algorithm. Projecting queries and keys to the unit ball implies that

‖Q_i − K_j‖^2 = ‖Q_i‖^2 + ‖K_j‖^2 − 2 Q_i^T K_j   (10)
             = 1 + 1 − 2 Q_i^T K_j               (11)
             = 2 − 2 Q_i^T K_j.                  (12)

Thus, if Q_i and K_j belong to the same cluster center, i.e., µ(Q_i) = µ(K_j) = µ, then it follows that there is some ε > 0 such that ‖Q_i − µ‖, ‖K_j − µ‖ < ε. This implies via the triangle inequality that

‖Q_i − K_j‖ ≤ ‖Q_i − µ‖ + ‖K_j − µ‖ < 2ε.   (13)

Thus, from Equation 12 it follows that Q_i^T K_j > 1 − 2ε^2. Therefore, when two time steps i > j are assigned the same cluster due to small ‖Q_i − µ‖ and ‖K_j − µ‖ distances, it also means that their attention weight Q_i^T K_j is high, i.e., K_j is an approximate solution to the MIPS objective of Equation 9 for query Q_i. This analysis shows that our clustering routing strategy preserves large attention weights as non-zero entries.

Since we route attention via spherical k-means clustering, we dub our model Routing Transformer. We give a detailed pseudo-code implementation for the routing attention computation in Algorithm 1. A visualization of the attention scheme and its comparison to local and strided attention is given in Figure 1. The computational complexity of this variant of sparse attention is O(nkd + n^2 d/k). Cluster assignments correspond to the first term, i.e. it compares n routing vectors to all k centroids in a space of size d. Query/key dot products correspond to the second term, i.e. assuming balanced clusters, each of the n queries is compared to the n/k keys in its cluster through a dot product of dimension d. Therefore the optimal choice of k is √n as in (Child et al., 2019), thereby reducing overall memory and computational cost to O(n^1.5 d) instead of O(n^2 d) (Vaswani et al., 2017).

In practice, we apply mini-batch k-means to train the cluster centroids. However, in order to infer balanced routing patterns, we define the sets S_i to be of equal size roughly n/k ∼ √n, i.e. for every centroid µ_i we sort tokens by distance to µ_i and cluster membership is determined by this threshold (top-k). This adds an additional O(n log n) term to the cost; however, note that this is eclipsed by the dominating term of O(n^1.5 d). This strategy is simple and efficient (a short sketch of this balanced routing step is given below).
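The following is a minimal NumPy illustration of the routing step just described, written under simplifying assumptions rather than reproducing the paper's TensorFlow implementation: queries and keys are projected onto the unit ball, each centroid keeps its top-w = n/k closest positions, attention is computed within each cluster, and the shared centroids are updated with an exponential moving average as described in the next paragraph. Causal masking and query/key sharing are omitted for brevity, and all sizes are toy values.

```python
# Sketch of balanced routing attention with spherical k-means style centroids.
import numpy as np

def unit_norm(x):
    return x / (np.linalg.norm(x, axis=-1, keepdims=True) + 1e-6)

def routing_attention(Q, K, V, mu, decay=0.999):
    n, d = Q.shape
    k = mu.shape[0]
    w = n // k                                   # attention window per cluster
    Qn, Kn = unit_norm(Q), unit_norm(K)

    q_scores = mu @ Qn.T                         # (k, n) centroid-query similarities
    k_scores = mu @ Kn.T                         # (k, n) centroid-key similarities
    q_idx = np.argsort(-q_scores, axis=1)[:, :w] # top-w queries per centroid (balanced)
    k_idx = np.argsort(-k_scores, axis=1)[:, :w] # top-w keys per centroid (balanced)

    X = np.zeros((n, d))
    for c in range(k):                           # attention restricted to each cluster
        Qc, Kc, Vc = Qn[q_idx[c]], Kn[k_idx[c]], V[k_idx[c]]
        A = Qc @ Kc.T / np.sqrt(d)
        A = np.exp(A - A.max(axis=1, keepdims=True))
        A = A / A.sum(axis=1, keepdims=True)
        X[q_idx[c]] += A @ Vc                    # scatter results back to query positions

    # Exponential moving average update of the shared centroids.
    q_assign = np.argmax(q_scores, axis=0)       # nearest centroid per query
    k_assign = np.argmax(k_scores, axis=0)       # nearest centroid per key
    new_mu = mu.copy()
    for c in range(k):
        new_mu[c] = (decay * mu[c]
                     + (1 - decay) / 2 * Qn[q_assign == c].sum(axis=0)
                     + (1 - decay) / 2 * Kn[k_assign == c].sum(axis=0))
    return X, new_mu

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, d, k = 64, 16, 8
    Q, K, V = rng.normal(size=(3, n, d))
    mu = unit_norm(rng.normal(size=(k, d)))
    X, mu = routing_attention(Q, K, V, mu)
    print(X.shape, mu.shape)                     # (64, 16) (8, 16)
```

The two loops are written for clarity; an efficient implementation instead batches all clusters into a single (k, w, w) attention computation, as in Algorithm 1 below.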
In particular, this top-k assignment guarantees that all clusters have the same size, which is extremely important for computational efficiency on parallel hardware like graphics cards. As a downside, this assignment does not guarantee that each point belongs to a single cluster. In the future, we want to investigate using balanced variants of k-means (Banerjee and Ghosh, 2004; Malinen and Fränti, 2014), which is not common in an online setting.

During training, we update each cluster centroid µ by an exponentially moving average of all the keys and queries assigned to it:

µ ← λµ + ((1 − λ)/2) Σ_{i: µ(Q_i)=µ} Q_i + ((1 − λ)/2) Σ_{j: µ(K_j)=µ} K_j,

where λ is a decay parameter which we usually set to 0.999. Additionally, we also exclude padding tokens from affecting the centroids.

There is an additional nuance regarding clustering queries and keys that comes into play when using causal attention (i.e. left to right masking), as is usually the case in language models. When grouping queries and keys belonging to a certain cluster centroid µ, we may get as members queries Q_i for keys K_j where time-step i ≤ j. This therefore requires an additional masking strategy in addition to the lower triangular mask used for causal attention. One solution that avoids having to use an additional mask is to simply share keys and queries. Empirically, we have found that this works at par or better than separate keys and queries together with an additional masking strategy in the causal attention setting. For encoder self-attention and encoder-decoder cross-attention, additional masking or sharing queries and keys is not necessary.

Figure 1: (a) Local attention, (b) Strided attention, (c) Routing attention. 2-D attention schemes for the Routing Transformer compared to local attention and strided attention of Child et al. (2019). The rows represent the outputs while the columns represent the inputs. For local and strided attention, the colored squares represent the elements every output row attends to. For attention routed as in Section 4.1, the different colors represent cluster memberships for the output token.

Algorithm 1 Routing Attention
Input: queries, keys and values Q, K, V ∈ R^{n×d}; centroids µ ∈ R^{k×d}; decay λ
1: if left to right mask then
2:   K ← Q
3: Q ← LayerNorm(Q)   ▷ normalize to unit ball; scale, bias disabled
4: K ← LayerNorm(K)   ▷ scale, bias disabled
5: Q_prod ← µQ^T   ▷ k × n
6: if not left to right mask then
7:   K_prod ← µK^T   ▷ k × n
8: w ← n/k   ▷ attention window
9: Q_idx ← top-k(Q_prod, w)   ▷ k × w
10: Q_idx ← sort(Q_idx)   ▷ sort to preserve order
11: K_idx ← Q_idx
12: if not left to right mask then
13:   K_idx ← top-k(K_prod, w)   ▷ k × w
14:   K_idx ← sort(K_idx)   ▷ sort to preserve order
15: Q′ ← gather(Q, Q_idx)   ▷ k × w × d
16: K′ ← gather(K, K_idx)   ▷ k × w × d
17: V′ ← gather(V, K_idx)   ▷ k × w × d
18: A ← Q′(K′)^T   ▷ k × w × w
19: if left to right mask then
20:   A ← ltr(A)   ▷ k × w × w
21: A ← softmax(A)
22: V′ ← einsum(kww, kwd → kwd, A, V′)
23: X ← scatter(K_idx, V′)
24: Q_m ← one-hot(arg max(Q_prod))   ▷ k × n
25: K_m ← one-hot(arg max(K_prod))   ▷ k × n
26: µ ← λµ + (1 − λ)Q_m Q/2 + (1 − λ)K_m K/2   ▷ update centroids
27: return X

# 5 Experiments

We evaluate our sparse attention model on various generative modeling tasks including text and image generation. The following sections report our results on CIFAR-10, Wikitext-103 (Merity et al., 2017), enwik-8 (Mahoney, 2011), ImageNet-64 as well as PG-19 (Rae et al., 2020).
We find that a scaled up version of local attention is a surprisingly strong baseline and that our Routing Transformer outperforms Transformer-XL (Dai et al., 2019) and the Sparse Transformer model of Child et al. (2019) on all tasks. On the recently released PG-19 data- set, we find that local attention again is a strong baseline, with a slightly worse performance com- pared to Transformer-XL (Dai et al., 2019). We also find that the Routing Transformer model out- performs both Transformer-XL (Dai et al., 2019) and Compressive Transformer (Rae et al., 2020), setting a new state-of-the-art result. In all our models except the one used for PG-19, we allocate half the heads to do local attention and the other half to route attention as in Equation 8. For all our experiments except for PG-19, we use the Adam optimizer (Kingma and Ba, 2015) with learning rate 2 × 10−4 with β1 = 0.9 and β2 = 0.98 following the learning rate schedule described in Vaswani et al. (2017). We train all models on 128 TPUv3 cores. The setup used for PG-19 is described in Section 5.5. # 5.1 CIFAR-10 CIFAR-10 is a widely used image data-set which consists of 60, 000 colored images of size 32 × 32. Since the sequence lengths in this case are rela- tively short (3072), we use this as a toy data-set to perform various ablations to tease apart the effect of various hyper-parameter choices on the model performance. We train 12 layer models with a total of 8 attention heads, and report a comparison of the effect of various hyper-parameter choices on the performance and speed on this data-set. In par- ticular, the following hyper-parameters are varied 1) the number of routing attention heads, 2) the number of routing attention layers and 3) the size of the attention window. For routing attention we use k = 6 while varying the attention window, to see the effect on speed and performance. All the CIFAR-10 models are trained with a batch size of 32 and for a total of 200, 000 steps. In addition, we also compare the Routing Transformer to a Random Transformer, where Kidx is randomly chosen rather than being drawn from nearest neighbor search. For a fair comparison, we take the best model from Ta- ble 1 with an attention window of 512 and replace all routing heads with random heads. We present the ablation results in Table 1 and discuss it in more detail in Section 6. # 5.2 Wikitext-103 Wikitext-103 (Merity et al., 2017) is a large public benchmark data-set for testing long term depen- dencies in word-level language models. It contains over 100 million tokens from 28K articles extracted from Wikipedia with an average of 3.6K tokens per article, which makes it a reference data-set to model long-term textual dependencies. We train a 10 layer Routing Transformer with 16 heads using the relative position encoding of Shaw et al. (2018) and with attention and ReLU dropout rate of 0.3 each. For routing attention as in Section 4.1 we choose k = 16 and attention window to be 256 during both training and evaluation. We describe our results in Table 2 and compare it to other re- cent work on sparse or recurrent attention such as Adaptive Inputs (Baevski and Auli, 2019) and TransformerXL (Dai et al., 2019) as well as a local attention with relative position encoding baseline (Huang et al., 2018). We find that local attention is a great inductive bias for sparse attention and is better than the adaptive methods proposed in Baevski and Auli (2019); Sukhbaatar et al. (2019). 
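For reference, the learning-rate schedule of Vaswani et al. (2017) mentioned in the setup above follows a linear warmup and then an inverse-square-root decay. The sketch below shows its usual form; the model dimension and warmup length are illustrative assumptions rather than values taken from this paper.

```python
# Hedged sketch of the warmup-then-inverse-square-root schedule from Vaswani et al. (2017).
def transformer_lr(step: int, d_model: int = 512, warmup: int = 4000) -> float:
    step = max(step, 1)
    return d_model ** -0.5 * min(step ** -0.5, step * warmup ** -1.5)

if __name__ == "__main__":
    for s in (1, 1000, 4000, 16000, 64000):
        print(s, round(transformer_lr(s), 6))  # rises linearly, peaks at warmup, then decays
```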
Moreover, our Routing Transformer model is able to get a test perplexity of 15.8 improving on the 18.3 obtained by TransformerXL (Dai et al., 2019) while having fewer self-attention layers, and without the need for segment level recurrence. # 5.3 enwik-8 The enwik-8 (Mahoney, 2011) is a data-set to benchmark text compression algorithms in the con- text of the Hutter prize. This data-set consists of the first 100M bytes of unprocessed Wikipedia. It is typically used to evaluate character-level language models. Similar to the prior work of Dai et al. (2019); Child et al. (2019) we use a sequence length n = 8192 and benchmark our results against vari- ous baselines including local attention. We train a 24 layer model with 8 attention heads with an atten- tion and ReLU dropout rate of 0.4 each and using the relative position encoding of Shaw et al. (2018). For routing attention as in Section 4.1 we set k = 32 and attention window 256. We report perplexity of 0.99 like TransformerXL and Sparse Transformer, slightly under 0.98 from Adaptive Transformer. # 5.4 ImageNet 64 × 64 In order to evaluate the ability of our model to cap- ture long term dependencies on a modality other than text, we report results on the ImageNet 64×64 data-set as used in Child et al. (2019). For auto- regressive image generation, this data-set consists of images of 64 × 64 × 3 bytes represented as long sequences of length 12, 288 presented in raster scan, red-green-blue order. We train a 24 layer model with 16 attention heads, with half the heads per- forming local attention, and the other half routing attention as in Section 3. For routing attention we set k = 8, attention window 2048, batch size 1 and train our model for roughly 70 epochs as in Child et al. (2019). We compare our model to a scaled- up ImageTransformer model with local attention (Parmar et al., 2018) and the SparseTransformer model of Child et al. (2019). We find that local attention (Parmar et al., 2018) is a strong baseline for image generation, obtaining 3.48 bits/dim when scaled up to 24 layers and 16 heads, compared to later work like Sub-scale Pixel Networks (SPN) (Menick and Kalchbrenner, 2018). Our Routing Transformer model achieves a perfor- mance of 3.425 bits/dim (see Table 4) compared to the previous state-of-the-art of 3.437 bits/dim (Child et al., 2019), thereby showing the advan- tage of the content based sparsity formulation of Section 4.1. # 5.5 PG-19 PG-19 is a new data-set released by Rae et al. (2020) which is larger and longer than previous language modeling data-sets. 
The data-set is created from ap- proximately 28, 000 Project Gutenberg books pub- lished before 1919, consisting of 1.9 billion tokens and comprises an average context size of roughly Model Routing heads Routing Layers Attention window Bits/dim Steps/sec Transformer Local Transformer Random Transformer Routing Transformer Routing Transformer Routing Transformer Routing Transformer Routing Transformer Routing Transformer Routing Transformer Routing Transformer Routing Transformer Routing Transformer Routing Transformer Routing Transformer Routing Transformer Routing Transformer Routing Transformer Routing Transformer Routing Transformer Routing Transformer Routing Transformer Routing Transformer Routing Transformer Routing Transformer Routing Transformer Routing Transformer 0 0 4 (random) 2 4 8 2 4 8 2 4 8 2 4 8 2 4 8 2 4 8 2 4 8 2 4 8 0 0 8 (random) 2 2 2 4 4 4 8 8 8 12 12 12 2 2 2 4 4 4 8 8 8 12 12 12 3072 512 512 512 512 512 512 512 512 512 512 512 512 512 512 1024 1024 1024 1024 1024 1024 1024 1024 1024 1024 1024 1024 2.983 3.009 3.076 3.005 2.986 2.992 2.995 2.975 2.991 2.995 2.971 3.190 2.978 2.994 3.400 2.975 2.950 2.982 2.990 2.958 3.003 2.991 2.983 3.131 2.973 3.005 3.291 5.608 9.023 5.448 7.968 7.409 6.682 7.379 6.492 5.385 6.442 5.140 3.897 5.685 4.349 3.062 7.344 6.440 5.192 6.389 5.112 3.674 5.057 3.597 2.329 4.151 2.788 1.711 Table 1: Ablation studies of the Routing Transformer model on the CIFAR-10 data-set. All the models have a total of 12 attention layers and 8 heads. Routing layers when present are always added at the top of the model. A Routing Transformer model with less than 12 routing attention layers and less than 8 routing heads, has the remaining layers and heads of type local attention. A Random Transformer model has a random attention head in place of the routing attention head. We report the performance in bits/dim on the test set and step times are reported on a TPUv3. 69, 000 words. This is text that is 10× longer in con- text than all prior data-sets such as Wikitext-103, with minimal pre-processing and an open vocabu- lary that makes it extremely challenging for long text modeling tasks. We use a subword vocabulary of size approximately 98,000 and report perplex- ities normalized by the token counts reported in Rae et al. (2020). On this data-set we train a 22 layer Routing Transformer model with 8 heads with a sequence length of 8192 and set a new state-of- the-art result on this data-set, improving on both Compressive Transformers (Rae et al., 2020), as well as Transformer-XL (Dai et al., 2019). For this data-set we change our training setup in three ways. Firstly, we use only 2 routing heads instead of sharing it equally with local heads. Secondly, we use routing heads only in the last two layers of the model instead of having them present in every layer. This is motivated by our empirical finding that long range attention is only needed in the last few layers - see also Rae and Razavi (2020). Finally, we use the Adafactor optimizer (Shazeer and Stern, 2018) which is more memory efficient than Adam in training larger models. We use a learning rate constant of 0.01 with a linear warmup over 10, 000 steps followed by a rsqrt_normalized_decay. We do not make use of any dropout, or weight decay. The hidden dimension of our model is 1032 and the batch size is 8192 tokens. 
From Table 5, we see that Local Transformer again sets a very strong baseline, with a 24-layer local attention model obtaining a test set perplexity of 39.3, while a 36-layer Transformer-XL gets 36.3. Moreover, a 22-layer Routing Transformer model improves on the 36-layer Compressive Transformer, obtaining a test set perplexity of 33.2 compared to 33.6, while being able to generate sequences of length 8192. Model Layers Heads Perplexity LSTMs (Grave et al., 2017) QRNNs (Merity et al., 2018) Adaptive Transformer (Sukhbaatar et al., 2019) Local Transformer Adaptive Input (Baevski and Auli, 2019) TransformerXL (Dai et al., 2019) - - 36 16 16 18 - - 8 16 16 16 40.8 33.0 20.6 19.8 18.7 18.3 Routing Transformer 10 16 15.8 Table 2: Results on language modeling on Wikitext-103 data-set. Local Transformer refers to Transformer (Vaswani et al., 2017) with relative position encoding (Shaw et al., 2018) together with local attention. Perplexity is reported on the test set. Model Layers Heads Bits per byte T64 (Al-Rfou et al., 2019) Local Transformer TransformerXL (Dai et al., 2019) Sparse Transformer (Child et al., 2019) Adaptive Transformer (Sukhbaatar et al., 2019) 64 24 24 30 24 2 8 8 8 8 1.13 1.10 0.99 0.99 0.98 Routing Transformer 12 8 0.99 Table 3: Results on language modeling on enwik-8 data-set. Local Transformer refers to Transformer (Vaswani et al., 2017) with relative position encoding (Shaw et al., 2018) together with local attention. Bits per byte (bpc) is reported on the test set. # 6 Analysis # 6.1 Local vs Global As reported in Section 5, a scaled up version of local attention is a strong baseline for efficient attention over long sequences. From Table 1 we see that local attention is slightly worse than full attention - 3.009 vs 2.983 bits per dim. Adding 2 routing layers with 4 heads almost closes the gap with the performance of full attention, achieving 2.986 bits per dim. Adding more routing layers and heads improves performance up to a point, with the best performing model with an attention window of 512 having 4 routing layers and 4 routing heads, and achieving 2.975 bits per dim. Increasing the atten- tion window from 512 to 1024 uniformly results in improvement in every setting. The best model on CIFAR-10 has an attention window of 1024 with 4 routing layers and 4 routing heads. Interestingly, the best Routing Transformer models perform bet- ter than full attention, but not by a large enough amount to rule out noise. More importantly, Ta- ble 1 shows the importance of local attention in building intermediate representations, with a model with only routing attention layers and heads with attention windows of 512 and 1024 achieving 3.400 and 3.291 bits per dim respectively. Thus Table 1 shows us the importance of local representations, as well as the benefit of adding a few routing layers and heads to enforce a more global representation. Since attention weights are a probability distribution on the entire set of tokens, we evaluate the difference in attention patterns be- tween local and routing attention by computing the Jensen-Shannon divergence between the two kinds of attention distributions for a random subset of heads in our network on the Wikitext-103 data-set. The divergence is computed over the entire sequence length of 4096. We average over 10 runs and report means and standard deviations of the JSD in Ta- ble 6. Note that the JSD is always non-negative and is upper-bounded by 0.6931 when computed using the natural logarithm. 
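For concreteness, the Jensen-Shannon divergence used in this analysis can be computed as in the sketch below. The two Dirichlet-sampled vectors are stand-ins for actual attention rows; with natural logarithms the divergence is bounded above by ln 2 ≈ 0.6931, matching the bound quoted above.

```python
# Jensen-Shannon divergence between two attention distributions over the same context.
import numpy as np

def kl(p: np.ndarray, q: np.ndarray, eps: float = 1e-12) -> float:
    p, q = p + eps, q + eps
    return float(np.sum(p * np.log(p / q)))

def jsd(p: np.ndarray, q: np.ndarray) -> float:
    m = 0.5 * (p + q)                  # mixture of the two distributions
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    local = rng.dirichlet(np.ones(4096))    # stand-in for a local attention row
    routed = rng.dirichlet(np.ones(4096))   # stand-in for a routing attention row
    print(jsd(local, routed))               # always in [0, ln 2]
```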
We observe that the diver- gence between the different local heads is always very low compared to the divergence between local and routing attention heads, which is almost always very close to the upper-bound of 0.6931. Divergence between different routing attention heads falls some- where in between, being closer to the upper-bound. This shows that the attention distribution inferred by the routing attention of Section 4.1 is highly non-local in nature and different heads specialize in attending to very different parts of the input. Qualitatively, from the ablations in Table 1, we hypothesize that the reason for the strong perfor- mance of the Routing Transformer is due to the fact that it combines building local representations over several layers, together with enforcing global consistency for every token. This is achieved via an approximate Maximum Inner Product Search Model Layers Heads Bits/dim Glow (Kingma and Dhariwal, 2018) PixelCNN (Van den Oord et al., 2016) PixelSNAIL (Chen et al., 2018) SPN (Menick and Kalchbrenner, 2018) ImageTransformer (Parmar et al., 2018) Sparse Transformer (Child et al., 2019) Reformer (Kitaev et al., 2020) - - - - 24 48 - - - - - 16 16 - 3.81 3.57 3.52 3.52 3.48 3.44 3.65 Routing Transformer 24 16 3.43 Table 4: Results on image generation on ImageNet- 64 in bits/dim. Model Local Transformer TransformerXL (Dai et al., 2019) Compressive Transformer (Rae et al., 2020) 24 36 36 8 - - 39.3 36.3 33.6 Routing Transformer 22 8 33.2 Table 5: Results on language modeling on PG-19 data-set. Local Transformer refers to Transformer (Vaswani et al., 2017) with relative position encoding (Shaw et al., 2018) together with local attention. Perplexity is normalized by the number of tokens reported in (Rae et al., 2020) and is reported on the test set. (MIPS) over the entire set of tokens (see Section 4.1), and selecting pairs that have a high dot product for attention. This allows various entities such as gender, nouns, dates and names of places to be consistent throughout the entire sequence, since on expectation the dot product similarity between sim- ilar entities are high, while for differing entities they are expected to be low. Essentially, we conjecture that for every time step, the prediction depends on a small support of high value tokens: local attention facilitates local consistency and fluency, while a full dot product attention would facilitate global con- sistency. However, for long sequences since full at- tention is infeasible, we believe that using spherical k-means to perform a MIPS search over the global set of tokens and performing attention between these high dot product items is a good approxima- tion to full dot product attention. The importance of the MIPS search to select high dot product items is highlighted from the ablation in Table 1, where we see that a Random Transformer performs worse compared to a Local Transformer and a Routing Transformer with the same configuration, (3.076 vs 3.009 vs 2.971) bits/dim. # 6.2 Recurrence vs Sparse Attention We also note that sparse attention is an orthogonal approach to that of Transformer-XL and Compres- sive Transformer, which train on small sequences and by performing careful cross attention over cached previous chunks hope to generalize to longer sequences. By contrast, we directly train on long sequences from the beginning - e.g., the Compres- sive Transformer trains on chunks of size 512 for PG-19, while we train on sequences of length 8192. 
The benefit of the Transformer-XL like approach is that it is less memory consuming and thus is able to scale to 36 layers. Sparse attention (including local attention) on the other hand is more memory expensive since it trains directly on long sequences and therefore can scale to fewer layers for the same problem. However, as we demonstrate, it is com- petitive with the Transformer-XL like approaches even when using fewer layers and is guaranteed to generalize to the long sequence length that it was trained on. # 6.3 Wall-clock time We compare the step times for training the various sparse attention models on the CIFAR-10 data-set in Table 1 as well as on the PG-19 data-set in Ta- ble 7. For PG-19 we report only a comparison between the Local Transformer and the Routing Transformer, since sequence lengths are 8192 and performing full attention is infeasible. All the step time comparisons are made on a TPUv3, with the same number of cores and batch sizes to facilitate a fair comparison. As we see from Table 1 local attention is much faster than full attention, train- ing at 9.023 steps per second compared to 5.608 steps per second. The Routing Transformer models on CIFAR-10 have step times that depend on the number of routing heads, with the best performing model with the same attention budget as local at- JSD(localklocal) JSD(localkrouting) JSD(routingkrouting) layer 0 layer 1 layer 2 layer 3 layer 4 layer 5 layer 6 layer 7 layer 8 layer 9 0.0038 ± 0.0018 0.3071 ± 0.1217 0.2164 ± 0.0803 0.1163 ± 0.0336 0.1840 ± 0.0562 0.2284 ± 0.0225 0.1901 ± 0.0525 0.1566 ± 0.0685 0.1638 ± 0.0739 0.2095 ± 0.0560 0.4706 ± 0.0319 0.6674 ± 0.0153 0.5896 ± 0.0249 0.6047 ± 0.0181 0.6266 ± 0.0062 0.6463 ± 0.0155 0.6471 ± 0.0040 0.5798 ± 0.0235 0.5993 ± 0.0148 0.6127 ± 0.0053 0.1579 ± 0.0576 0.5820 ± 0.0104 0.4015 ± 0.0121 0.4144 ± 0.0264 0.4191 ± 0.0879 0.4687 ± 0.0449 0.5175 ± 0.0469 0.4350 ± 0.0139 0.4268 ± 0.0291 0.3581 ± 0.0019 Table 6: Jensen-Shannon divergence between the attention distributions of a random local attention head and a random head that routes attention as in Section 4.1 per layer on the Wikitext-103 data-set. We report means and standard deviations computed over 10 runs and use the natural logarithm so that divergences are upper-bounded by 0.6931. Model Dataset Seq. length Layers Heads Attention window Steps/sec Local Transformer Routing Transformer PG-19 PG-19 8192 8192 24 22 8 8 512 512 1.231 0.7236 Table 7: Step time comparison between Local Transformer and Routing Transformer on a TPUv3 for the PG-19 data-set. tention (i.e. an attention window of 512), which has 8 routing layers and 4 routing heads, training at 5.140 steps per second. Other Routing Transformer models are faster while still matching full attention, e.g., 2 routing layers with 4 routing heads trains at 7.409 steps per second. Therefore, Local Trans- former is roughly between 1.22 − 1.76× faster than the best performing Routing Transformers. On the other hand Transformer is between 0.76 − 1.09× faster than the best Routing Transformers. On PG-19, we see from Table 7, that the Local Transformer is roughly 1.7× faster compared to the Routing Transformer, similar to the trend on CIFAR-10. This trade-off with respect to speed compared to the Local Transformer is due to the lack of support for sparse operations on the TPU; on the GPU various sparse kernels have been proposed which promise to significantly speed up training of these models (Gale et al., 2020). 
Note that our goal in this work is a memory efficient version of sparse attention that can well approximate full attention for long sequences - wall-clock time efficiency is only a secondary goal. # 7 Conclusion Transformer models constitutes the state-of-the-art in auto-regressive generative models for sequen- tial data. Their space-time complexity is however quadratic in sequence length, due to their atten- tion modules. Our work proposes a sparse atten- tion model, the Routing Transformer. It relies on content-based sparse attention motivated by non- negative matrix factorization. Compared with local attention models, it does not require fixed attention patterns but enjoys similar space-time complexity. In contrast with prior work on content-based sparse attention, it does not require computing a full at- tention matrix but still selects sparsity patterns based on content similarity. Our experiments over text and image generation draw two main conclusions. First, we show that a scaled up version of local attention establishes a strong baseline on modern benchmark, even com- pared to recent state-of-the-art models. Second, we show that the Routing Transformer redefines the state-of-the-art in large long sequence bench- marks of Wikitext-103, PG-19 and ImageNet-64, while being very close to do so on enwik-8 as well. Our analysis also shows that routing attention mod- ules offer complementary attention patterns when compared to local attention. Overall, our work contributes an efficient atten- tion mechanism that applies to the modeling of long sequences and redefines the state of the art for auto-regressive generative modeling. Our approach could prove useful in domains where the inputs are naturally sparse, such as 3D point clouds, social networks, or protein interactions. # 8 Acknowledgments The authors would like to thank Phillip Wang and Aran Komatsuzaki for a Pytorch implementation of Routing Transformer. The authors would also like to thank Yonghui Wu, Weikang Zhou and Dehao Chen for helpful feedback in improving the imple- mentation of this work. The authors would also like to thank anonymous reviewers and the Action Editor of TACL for their constructive comments which helped improve the exposition of this work. # References Rami Al-Rfou, Dokook Choe, Noah Constant, Mandy Guo, and Llion Jones. 2019. Character- level self- language modeling with deeper attention. In Proceedings of the AAAI Confer- ence on Artificial Intelligence, volume 33, pages 3159–3166. Alex Auvolat, Sarath Chandar, Pascal Vincent, Hugo Larochelle, and Yoshua Bengio. 2015. Clustering is efficient for approximate maxi- mum inner product search. arXiv preprint arXiv:1507.05910. Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E arXiv Hinton. 2016. preprint arXiv:1607.06450. Layer normalization. Alexei Baevski and Michael Auli. 2019. Adaptive input representations for neural language mod- eling. In International Conference on Learning Representations. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In 3rd International Conference on Learning Represen- tations, ICLR 2015. Arindam Banerjee and Joydeep Ghosh. 2004. Frequency-sensitive competitive learning for scal- able balanced clustering on high-dimensional hy- perspheres. IEEE Transactions on Neural Net- works, 15(3):702–719. Yoshua Bengio, Nicholas Léonard, and Aaron Courville. 2013. Estimating or propagating gra- dients through stochastic neurons for conditional computation. 
arXiv preprint arXiv:1308.3432. Mathieu Blondel, André F. T. Martins, and Vlad Niculae. 2019. Learning classifiers with fenchel- young losses: Generalized entropies, margins, and algorithms. In The 22nd International Confer- ence on Artificial Intelligence and Statistics, AIS- TATS 2019, 16-18 April 2019, Naha, Okinawa, Japan, pages 606–615. Leon Bottou and Yoshua Bengio. 1995. Conver- gence properties of the k-means algorithms. In Advances in neural information processing sys- tems, pages 585–592. Xi Chen, Nikhil Mishra, Mostafa Rohaninejad, and Pieter Abbeel. 2018. Pixelsnail: An improved autoregressive generative model. In International Conference on Machine Learning, pages 864–872. Rewon Child, Scott Gray, Alec Radford, and Ilya Sutskever. 2019. Generating long se- quences with sparse transformers. arXiv preprint arXiv:1904.10509. Chung-Cheng Chiu* and Colin Raffel*. 2018. Mono- tonic chunkwise attention. In International Con- ference on Learning Representations. Kyunghyun Cho and Yoshua Bengio. 2014. Expo- nentially increasing the capacity-to-computation ratio for conditional computation in deep learn- ing. arXiv preprint arXiv:1406.7362. Kyunghyun Cho, Bart van Merriënboer, Caglar Gul- cehre, Dzmitry Bahdanau, Fethi Bougares, Hol- ger Schwenk, and Yoshua Bengio. 2014. Learn- ing phrase representations using rnn encoder– decoder for statistical machine translation. In Proceedings of the 2014 Conference on Empir- ical Methods in Natural Language Processing (EMNLP), pages 1724–1734. Jan K Chorowski, Dzmitry Bahdanau, Dmitriy Serdyuk, Kyunghyun Cho, and Yoshua Bengio. 2015. Attention-based models for speech recogni- tion. In Advances in neural information process- ing systems, pages 577–585. Gonçalo M Correia, Vlad Niculae, and André FT Martins. 2019. Adaptively sparse transformers. In Proceedings of the 2019 Conference on Em- pirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2174–2184. Zihang Dai, Zhilin Yang, Yiming Yang, Jaime G Carbonell, Quoc Le, and Ruslan Salakhutdinov. 2019. Transformer-xl: Attentive language models beyond a fixed-length context. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2978–2988. Ludovic Denoyer and Patrick Gallinari. 2014. Deep sequential neural network. arXiv preprint arXiv:1410.0510. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understanding. In NAACL-HLT (1). David Eigen, Marc’Aurelio Ranzato, and Ilya Sutskever. 2013. Learning factored representa- tions in a deep mixture of experts. arXiv preprint arXiv:1312.4314. Trevor Gale, Matei Zaharia, Cliff Young, and Erich Elsen. 2020. Sparse gpu kernels for deep learning. arXiv preprint arXiv:2006.10901. Edouard Grave, Armand Joulin, and Nicolas Usunier. 2017. Improving neural language mod- els with a continuous cache. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Confer- ence Track Proceedings. OpenReview.net. Alex Graves, Greg Wayne, and Ivo Danihelka. 2014. Neural turing machines. arXiv preprint arXiv:1410.5401. Ivo Danihelka, Alex Graves, Danilo Jimenez Rezende, and Daan Wierstra. 2015. DRAW: A recurrent neural network for im- age generation. 
In Proceedings of the 32nd Inter- national Conference on Machine Learning, ICML 2015, Lille, France, 6-11 July 2015, volume 37 of JMLR Workshop and Conference Proceedings, pages 1462–1471. JMLR.org. Ari Holtzman, Jan Buys, Li Du, Maxwell Forbes, and Yejin Choi. 2020. The curious case of neural text degeneration. In International Conference on Learning Representations. Cheng-Zhi Anna Huang, Ashish Vaswani, Jakob Uszkoreit, Ian Simon, Curtis Hawthorne, Noam Shazeer, Andrew M Dai, Matthew D Hoffman, Monica Dinculescu, and Douglas Eck. 2018. Mu- sic transformer: Generating music with long-term structure. In International Conference on Learn- ing Representations. Sathish Reddy Indurthi, Insoo Chung, and Sangha Kim. 2019. Look harder: A neural machine trans- lation model with hard attention. In Proceedings of the 57th Conference of the Association for Computational Linguistics, pages 3037–3043. Navdeep Jaitly, Quoc V Le, Oriol Vinyals, Ilya Sutskever, David Sussillo, and Samy Bengio. 2016. An online sequence-to-sequence model using par- tial conditioning. In Advances in Neural Infor- mation Processing Systems, pages 5067–5075. Diederik P. Kingma and Jimmy Ba. 2015. Adam: In 3rd A method for stochastic optimization. International Conference on Learning Represen- tations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings. Durk P Kingma and Prafulla Dhariwal. 2018. Glow: Generative flow with invertible 1x1 convolutions. In Advances in Neural Information Processing Systems, pages 10215–10224. Nikita Kitaev, Lukasz Kaiser, and Anselm Lev- skaya. 2020. Reformer: The efficient transformer. In International Conference on Learning Repre- sentations. Sablayrolles, Marc’Aurelio Ranzato, Ludovic Denoyer, and Hervé Jégou. 2019. Large memory layers with product keys. In Advances in Neural Information Processing Systems, pages 8548–8559. Peter J Liu, Mohammad Saleh, Etienne Pot, Ben Goodrich, Ryan Sepassi, Lukasz Kaiser, and Noam Shazeer. 2018. Generating wikipedia by summarizing long sequences. In International Conference on Learning Representations. Xiaodong Liu, Pengcheng He, Weizhu Chen, and Jianfeng Gao. 2019. Multi-task deep neural net- works for natural language understanding. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4487–4496. Minh-Thang Luong, Hieu Pham, and Christo- pher D Manning. 2015. Effective approaches to attention-based neural machine translation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1412–1421. Matt Mahoney. 2011. Large text compression benchmark. URL: http://www. mattmahoney. net/text/text. html. Chaitanya Malaviya, Pedro Ferreira, and André F. T. Martins. 2018. Sparse and constrained attention for neural machine translation. In Pro- ceedings of the 56th Annual Meeting of the Asso- ciation for Computational Linguistics (Volume 2: Short Papers), pages 370–376, Melbourne, Aus- tralia. Association for Computational Linguistics. Mikko I Malinen and Pasi Fränti. 2014. Balanced k-means for clustering. In Joint IAPR Inter- national Workshops on Statistical Techniques in Pattern Recognition (SPR) and Structural and Syntactic Pattern Recognition (SSPR), pages 32– 41. Springer. André F. T. Martins and Julia Kreutzer. 2017. Learning what’s easy: Fully differentiable neu- ral easy-first taggers. In Proceedings of the 2017 Conference on Empirical Methods in Natural Lan- guage Processing, pages 349–362, Copenhagen, Denmark. 
Association for Computational Lin- guistics. Jacob Menick and Nal Kalchbrenner. 2018. Gen- erating high fidelity images with subscale pixel In networks and multidimensional upscaling. International Conference on Learning Represen- tations. Stephen Merity, Nitish Shirish Keskar, and Richard Socher. 2018. An analysis of neural language modeling at multiple scales. arXiv preprint arXiv:1803.08240. Stephen Merity, Caiming Xiong, James Bradbury, and Richard Socher. 2017. Pointer sentinel mix- ture models. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Pro- ceedings. OpenReview.net. Aaron Van den Oord, Nal Kalchbrenner, Lasse Es- peholt, Oriol Vinyals, Alex Graves, et al. 2016. Conditional image generation with pixelcnn de- coders. In Advances in neural information pro- cessing systems, pages 4790–4798. Niki Parmar, Ashish Vaswani, Jakob Uszkoreit, Lukasz Kaiser, Noam Shazeer, Alexander Ku, and Dustin Tran. 2018. Image transformer. In International Conference on Machine Learning, pages 4055–4064. Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improv- ing language understanding by generative URL https://s3-us-west-2. pre-training. com/openai-assets/research- amazonaws. covers/languageunsupervised/language under- standing paper. pdf. Jack Rae, Jonathan J Hunt, Ivo Danihelka, Timo- thy Harley, Andrew W Senior, Gregory Wayne, Alex Graves, and Timothy Lillicrap. 2016. Scal- ing memory-augmented neural networks with sparse reads and writes. In Advances in Neu- ral Information Processing Systems, pages 3621– 3629. Jack Rae and Ali Razavi. 2020. Do transformers need deep long-range memory? In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7524–7529, On- line. Association for Computational Linguistics. Jack W. Rae, Anna Potapenko, Siddhant M. Jayakumar, Chloe Hillier, and Timothy P. Lilli- crap. 2020. Compressive transformers for long- range sequence modelling. In International Con- ference on Learning Representations. Peter Shaw, Jakob Uszkoreit, and Ashish Vaswani. 2018. Self-attention with relative position repre- sentations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 464–468. Noam Shazeer, Azalia Mirhoseini, Krzysztof Maziarz, Andy Davis, Quoc V. Le, Geoffrey E. Hinton, and Jeff Dean. 2017. Outrageously large neural networks: The sparsely-gated mixture-of- experts layer. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Pro- ceedings. OpenReview.net. Noam Shazeer and Mitchell Stern. 2018. Adafactor: Adaptive learning rates with sublinear memory cost. In International Conference on Machine Learning, pages 4596–4604. Sainbayar Sukhbaatar, Édouard Grave, Piotr Bo- janowski, and Armand Joulin. 2019. Adaptive attention span in transformers. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 331–335. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. At- tention is all you need. In Advances in neural information processing systems, pages 5998–6008. Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron C. Courville, Ruslan Salakhutdi- nov, Richard S. Zemel, and Yoshua Bengio. 2015. 
Show, attend and tell: Neural image caption gen- eration with visual attention. In Proceedings of the 32nd International Conference on Machine Learning, ICML 2015, Lille, France, 6-11 July 2015, volume 37 of JMLR Workshop and Confer- ence Proceedings, pages 2048–2057. JMLR.org. Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Car- bonell, Russ R Salakhutdinov, and Quoc V Le. 2019. Xlnet: Generalized autoregressive pretrain- ing for language understanding. In Advances in neural information processing systems, pages 5753–5763. # A Samples from Routing Transformer In the following sections we present a few samples generated from the Routing Transformer trained on the PG-19 data-set with sequence length 8192. We use nucleus sampling (Holtzman et al., 2020) with p = 0.8 and temperature of 1.0 to generate these samples. # A.1 Sample - I During the early days of the Council of Nice and the subsequent existence of the Council of Basle, one section of the Council of Nice made a marked opposition to the pretensions of the Council of Basle. Some of them were men of high rank, others members of the lower classes. They had been formed into a union which was called the Papal Council, and which for the time being proved to be of the utmost importance to that Church in which it met. The first session of the Council of Nice took place on September 2, 1487. The two bodies met in solemn assembly and made arrangements with one another. It was decided that a considerable portion of the Council should proceed to Zurich and lay before the Council of Constance the proposals of the Church of Basle for settling their disputes. It was also resolved that a meeting of the representatives of the Christian Emperors of Germany, France, Portugal, Hungary, England and France should be held. Three bishops were commissioned to despatch ambassadors to each of the two Councils to urge their respective envoys to combine and come to some agreement regarding matters ecclesiastical. It was agreed that the Council of Basle should at once take steps for the reformation of the Church and the peace of Christendom; while the two meetings were to be united in one. Various questions of dispute were settled in a friendly way; but the whole subject of the relations of the Church to the Papacy was laid before the Council of Basel, and an agreement arrived at regarding the ecclesiastical and civil relations of the Church with the head of the Papacy.One important result of this Council was that it thus obtained two important concessions the first in making a papal from the Popes: establishment the natural basis of ecclesiastical authority on a great scale and yielding to the papal pretensions; and the second in providing for a Papal Council of Basle in which there should be ecclesiastical authority, and a bishop of the Roman Church, to meet the needs of the Churches of Europe. The Council of Basle likewise obtained the provision that the election of the Pope should be conducted by the same general council and by the head of the Church at Rome, and that no other form of appointment than that of a personal election to the Papacy should be in force. It was in effect a completion of the Council of Basle. It left without a head, indeed, but with an indication of its existence, the crowning work of the nineteenth century. 
The Council of Basle had not succeeded in bringing about the acceptance of the Papal headship; but there can be no question that the defeat of the Papal claim, at the Council of Lyons in the year following (December 17,1530), determined the attitude of the Papacy towards the Church, and prepared the way for the action of the Council of Trent. For at that time it seemed as though, after the Council of Lyons, the Council of Trent could no longer prevent the intrusion of the Papacy into the Church, and it was recognised that there was to be no more preaching in the Churches of Europe, for this once. Yet the fact remains that there was no Papal interference with Church government. From that time forward,however, the rule of the Church became more rigorous, and towards the end of the sixteenth century began the crisis in the Church which lasted until the general council of the Council of Trent.The organisation of the Swiss Church had been brought down to the time of Zwingli (1516-1531). It was in based upon an organisation strictly clerical character, as the Canons of the Roman Church insisted upon the clergy being for the most part clerics of the clerical order. In this respect this system was a reminiscence of that of the Roman Church, except that the mass of the people were clerics of the clerical order, who were liable to be deposed at any moment by the spiritual authorities. In the present instance we must recognise that Zwingli introduced a new conception of Church government; for although a great deal of the work of the Reformation was done under the direction of Zwingli, yet the organisation of the Swiss Church to some extent, and the connection of the civil with the ecclesiastical system, served as models for the organisation of the Church in all the Protestant lands.No doubt there was a great amount of copying of Rome, and some irregularities of arrangement were to be found. It is to be noted, however,that most of the reformation principles and practices of the Reformation were embodied in the Church organisation of the Swiss Protestants; the chief result being that, whereas the earlier system was still simple, the Church reformed more strongly and specifically, and was thereby destined to get more help in the direction of the Protestant reformation. So that even in the confusion arising from the change of Church organisation in the sixteenth century the Swiss Church was drawn much more closely to Rome than it would otherwise have been.The first work of the Reformation, however, to which the introduction of the Bible is to be attributed, was done in the early years of the sixteenth century. The era of the Reformation had begun; and this event was by no means likely to pass over without some indication of its influence in the world, for the Reformation had assumed the character of a great political event. The work of the reformation was in a large degree concerned with the national character of Protestantism. The reformation had been the work of religious philosophers, and it was a momentous and noteworthy step towards the winning of the political independence of the nations. But Luther had accomplished no permanent political revolution.Instead of that he had worked to establish that political absolutism of the kings which is the most distinctive characteristic of the Protestant polity. 
It was not in the modern European sense that he destroyed feudalism and for the other institutions based on tradition; victory was of the Gospel, and he hoped by its means to add another to the ten thousand proofs of the Divine origin of the kingdom of God. The power which he had created was in a large sense political power, and it was part of his function to secure such political power for the Church. He also worked, at an early stage, to further the establishment of the independence of the Church of the Brethren, but it was not until the Reformation became an aggressive factor in the life of the nation that the need for further political recognition of the Church was felt.The reformation movement was to have a most important effect on other aspects of the life of the people, and also upon the growth and extension of Protestantism. The great change which was thus produced, and which has been described as the direct and immediate outcome of the Reformation, was in effect essentially religious in its nature. The re-establishment of the Church of the Brethren has never been one of the least noteworthy phases in the history of the nation. During the next two centuries the popular Church of the Brethren increased in number, importance, and popularity. The king, the nobles, and the more educated portion of the people came more and more to regard it as the natural bulwark of Protestantism; and in a comparatively short time, and within comparatively short space of time, that work which Luther did for the establishment of the national life has been carried to a high degree of accomplishment by the English and other Protestant communities. The beginning of the Reformation, as already indicated, was a direct consequence of the effects which were brought about by the Reformation.It was not simply in the Church that the recognition of the Church of the Brethren made itself felt. The religious feelings which were aroused,and which were finally developed into a religious habit, have already been sufficiently dealt with in connection with the general history of the German nation; and the re-establishment of a purely spiritual faith and of a dominant religious life in the land, one which could not possibly have been attained save by the outpouring of the Holy Spirit and by the renewing and transforming influences of the Divine Spirit, was among the first results of the Reformation. To that work belongs the development of the German Reformation in its broadest and widest form; and the causes which determined the course of that development may be shortly stated as follows:In the first place, we have seen how the study of the Bible and of the Apocrypha, and of the Jewish conception of God and of the obligation to fast, created a desire for the study of the Bible in a larger and deeper manner than any before known; and, secondly, how the passion for writing profane history and for the writing of sacred history was fostered by the increase of the Roman Church; and,thirdly, how the study of the Scripture in a more liberal spirit–a great impulse to the study of the Old Testament in an earlier period in all its forms, and towards a development of the conception of God and of a more secular spirit in the life of the nation, helped to accelerate the spread of a new and healthier conception of the Christian life. 
This latter result, and this alone, tended to produce a new and productive condition of the nation in the matter of religion; but it also reacted on the missionary endeavours of the members of the Church of the Brethren to attain a deeper religious development. The desire to read the Bible, to adopt the principles of the Reformers, and to raise the standard of life and manners, not only stimulated the energy and assisted the zeal of the societies of the Brethren, but also stimulated their wider application to particular branches of the work which they had to do. In other words, the deeper study of the Bible as the study of the Old Testament became the religion of the people, and by the sheer force of the influence of these early studies the religious work of the German Reformation took shape, and became one of the most important political movements in Germany. The movement,thus inaugurated, was still later in reaching results in other countries.Before reaching Germany, however, the religious work of the Reformation had made a great impression upon one of the rulers of that country. Philip of Hesse, in 1495, was a child in years; but he was a man of religious instincts and aspirations, and his first utterances were destined to be the embodiment of that new religious idea which for so many years had been deeply implanted in the national mind. The importance of this movement will not be denied. It was an expression of the revival of the primitive and devout tendencies in the Lutheran Church; and in the Lutheran Reformation itself there was far less of scientific study than of poetic expression. It is plain, then, that the Reformation movement in Germany was in some respects influenced, as it was also in some respects modified, by the study of the Scriptures in their original languages, and with a more modern translation of the Bible into modern German.But the condition in which the Reformation found Germany had in a large measure changed. The enthusiasm which formerly animated men for the study of the Bible in all its original tongues was broken down. They did not recognize that the Bible for the understanding of God’s Word, and for its guidance through life, was not only the best language in which it was written, but, as already noticed, it was the chief interpreter of all other languages. 
When we remember that Luther was a professor of Divinity at Wittenberg; that Luther had expounded, in the German tongue, his new faith and new life; and that this same translation had found its way into the minds of thousands and tens of thousands of people in other countries; and that the old German Bibles did not by any means constitute the translation generally used, and that, except for the selection of modern translations, the standard text of the German Bibles for our service was of course by no means the best, we can hardly fail to see that it became clear that the Scriptures as the Bible for the understanding of God’s Word were inadequate for the elucidation of religious problems; and, further, that there was no substitute, no adequate translation of the Bible that was available.In order that the question of its complete translation might be understood, it was necessary to seek to adapt it to the spirit and needs of Germany, and this was the task which the Government of the German Empire set itself, and upon the result of which depended the situation under which the Reformation came about.The aim of the Reformation, in the words of Luther himself, was,primarily, the study of the Bible as a living interpreter of God’s words and revealing God’s will in them; and, secondarily, the acquisition of a living, active, self-interpreting, and God-glorifying Christian spirit. In order to study the Old Testament as a living revelation of God’s character and as an example of what God’s Spirit, as that Spirit of truth, is capable of doing, it was essential that they should have some historical contact with the Old Testament; and this contact was brought about by the introduction of commentaries on its text. It was in this way that the institution of the /Kleinpostille/ and the growth of a literature for it were due to the zealous and devoted efforts of German Christians at this period. It was because the /Kleinpostille/ and the/Kleinpostille-Lexicon/ were due to the vigorous, energetic,and helpful German literature which sprang up in Germany during this period, that the celebrated /Lutherana/ was put forth in the sixteenth century.Nor did it remain for the Reformation to avail itself of the facilities which this literary form gave it in Germany. It had not been intended to continue its work without the aid of a translation, and before it was generally accepted as such an adequate one, a work of translation had to be done, and this was accomplished in a most able and painstaking fashion by /The Commentary on the Galatians/ in 1531. In that work, also, the advantages of translation, as well as the emphasis which the services which it rendered were warranted to lay upon, were well recognised, and it has always been thought that Luther’s translation was the best rendering that was available for his readers.There is no need to dwell upon the fact that a work such as this,which for twenty years was in the hands of all the students of German theology, could not have found its way to a Christian home in a Protestant country like Germany without being a source of new and most valuable information. We find, indeed, in it the most valuable reflection on the extent of the religious life and the condition of culture in the countries which represented the belief and received the teachings of the Reforma- tion, as well as the most remarkable revelation of the kind which the Lutheran Reformation contain # A.2 Sample - II which the king and his council had agreed upon. 
On Sunday morning at eleven o’clock I arrived at the royal palace of Paris, where my uncle,the bishop of Chartres, received me in the grand antechamber with the customary grace of his manner. We went immediately into the room of the king, and the bishop of Chartres was so kind as to take me to him in the presence of his majesty. This morning Louis XVIII. held a review of the troops under the orders of the Duke of Orleans.[Illustration: THE OLD CHATEAU (ST. GERVAIS)]"I did not ob- serve," adds the Abbé de Pradt, "that he had a very fine set of teeth, although it is not the custom in the court of France. I was struck by the extreme whiteness of his countenance, and the whiteness of the beard, which he allowed me to see and feel. He was still very pale, and his clothes alone gave him the appearance of being in good health." He spoke to me in a low and gentle tone without any affecta- tion of severity.[Illustration: LOUIS-PHILIPPE DE FRANCE, SON OF LOUIS XVIII. AND CHAR- LOTTE CORDAY.]He was tall, but looked thin; his frame was very lean, and he did not possess sufficient dignity to conceal the feebleness arising from the length of his limbs and the length of his legs. He walked like a man who is too proud, and who does not wish people to see him. All those who had the honor of being admitted to the royal bedchamber immediately remarked his extreme ner- vousness. This state of the King’s character, which has been much remarked, arises from the long pe- riod of preparation for the functions which it oc- cupies, from the long life for which he has been obliged to prepare, and from the weakness of his health. It was natural that the king should not bear arms with all the agility which might be looked for from so young a man. As, however, there was no longer any necessity to employ his bodily strength, he resigned himself to taking a seat, and there he remained motionless for some moments after he had seated himself on a fauteuil. He seemed lost in thought, and his mind must have been deeply occupied. He spoke little. He frequently turned his head to look toward the door; but he did so so slowly that it was impossible to observe his features. At first he showed no interest in the proceedings of the day. At last, a cannon-shot being heard in the direction of St. Cloud,he raised his head, looked for a moment at his watch, and said, "Come now, here is the beginning of the play." I afterwards saw him every day in the same manner, and the habit of not looking for the end of the piece continued in his mind until his death.It was only in some moments of extreme agitation or deep reverie that an expression could be observed upon the King’s countenance. His features did not then wear that state of tension which they assumed on the first appearance of serious danger. He did not appear to feel the smallest uneasiness, but, on the contrary, a sort of inward joy.He was full of an instinctive respect for his son’s life, and of an anxiety for any danger threatening it. His great anxiety arose from his own extreme weakness as well as from his own inexperience in affairs of state. He was the dupe of his ministers; he regarded them as his real friends and as the most devoted subjects in the world; he would even not deny them the honors he paid to them. 
He was not disposed, even during his most active occupations, not to forget to send his minister on an important mission.If the King had been a man of energy he would have made active use of his power; but it was a peculiarity which might be said to belong to his whole history to allow himself to be led by others; never to have a will of his own; never to have the courage of his age.The King was very fond of his daughter-in-law, the Princess Louise,Madame Adelaide’s only daugh- ter. He was fond also of his daughters.Hortense especially, whom he loved sincerely, was extremely attached to him, and never quitted him without having her clothes pulled, and being told that her petticoats would fall off, in order, she said, that she might walk upon them, as she had never yet worn one. This affection of the poor King for his daughters was so great as to be almost an affection of paternity, and he appeared to be even more at- tached to them than they to him. The Princess Adelaide, who was also extremely gracious to him, often went with the King the same way; for her great tenderness for her father-in-law, and her own natural timidity, prevented her from ever daring to speak to him upon any political question. The Princesses, though very young, had some influence with their father, the King. Every one would have thought that the Princess Louise had been his wife, and that her father would have been entirely ruled by her wishes, and that this influence would have been an authority upon which he would not have ventured to act; and yet, since his daughter had taken the veil, and had abandoned the Regency, they had seen him frequently on these subjects, and the Princess Louise had been always his companion on the most interesting occasions.When our troops were about Versailles on the 16th of April, 1815, they were fired upon by the Prussian soldiers. The latter had been stationed some hundred paces in the rear of the King’s troops, with the object of watch- ing their movements. Suddenly all was changed, or, at least,a sudden silence ensued. At the turn of a road which runs from St. Joseph’s chapel to the King’s house there was a barricade.The insur- gents halted and took up their arms for an instant. The insurgents were very numerous, and had a small but regular force. One of the generals sent forward a soldier to beg permission to fire a few muskets for the purpose of driving back the en- emy. The officer advanced to the barricade alone, and returned in about five minutes accompanied by twenty-seven men, all in uniform. They were told to sit down in a circle, and not to stir. Then a man of the people spoke, asking permission to address a word to the general.The people were ev- idently frightened at this new sort of attack, and were evidently preparing to be frightened. The general, however,continued his calm and dignified demeanor, and began to speak a few words to the people.[Illustration: THE KING’S AMBASSADOR AT THE BARRICADE–Page 58.]"My friends," said he, "I am not surprised to find you ready to give us a demonstration of your love. We need it in our work of salvation, as you need liberty in your work of vengeance. I am about to begin."A man from a group of some thirty men placed himself on the barricade,from a desire to see what was going on. He then cried out loudly:"Forward, forward, my people! Forward!"The King advanced to this barrier. An officer of the national guard stepped forward, and, presenting his musket at their guns, said:"Down with the traitors!" 
The whole battal- ion instantly obeyed the order. They were taken, shot, and dispersed, while the royal troops marched along with their muskets at ease, and without firing a single musket.From that time forth the King was called upon to appear as an interested party in all the revolutionary scenes, and it was necessary to give him a part in every disturbance. Every hour had its dangers.It was necessary, too, that he should give some proofs of his firmness,even at the expense of his dignity. It was, therefore, necessary that he should not only give advice, but also that he should execute it. He could not do so, however, without being placed in some difficulty and embarrassment. If he were to send an officer to the Assembly with a written order, as he did, he could not avoid the risk of having him killed; and if in the Assembly itself he issued a proclamation, the magistrates could not fail to take notice of it, and would as- suredly refuse him the opportunity of showing his strength. He therefore thought it necessary to put forward a bold step to enable the King to save his kingdom. He gave orders to go and see General Bugeaud, who commanded the French troops at his command.Bugeaud was very popular. His name was known to the nation, but not much known to the King. The King, on his part, had been very well known, and had been very favorably noticed at a time when the people of France were filled with anxiety for the safety of the crown. He went to him and spoke to him of this event, of the conduct of his forces, of the danger which threatened France, and of the imminent danger of his Majesty. Bugeaud saw that he was right, and did not hesitate; for there was no longer any need of saying, or of look- ing about, or of any sort of hesitation."I was at your Majesty’s service," said he, "and I shall take care that you may not be obliged to regret it." He showed the King, by all the means in his power, that he considered the situation too dangerous to be abandoned, and that the only thing to be done was to carry the matter boldly through, without the slightest show of timidity. The King returned to Paris, and then Bugeaud marched for the scene of action.The town of the Faubourg St. Antoine still occupies a position surrounded by a double row of hedges, in which there are always sentinels placed to watch the approach of the inhabitants. It was through the gates of these hedges that the King and the deputies retired; but still it was neces- sary for them to pass through the streets to regain the town. They traversed these streets, the King being in advance of the others to take possession of the place. He was a magnificent specimen of a man,full of the vigor of youth and health, and with the strength of a Hercules. A great deal was said in the streets about his majesty,and they described a portrait of him in the character of Coriolanus.The King was accompanied by a numerous and splendid escort of the most distinguished persons–members of the Assembly and foreign ministers. The popu- lace, eager once more to see a king whom they had so long adored, came out, from all directions, in bands, to meet them.They formed in two long lines along the streets; they crowded so closely behind the King that it was with the greatest difficulty that he was enabled to reach his dwelling. 
They came thither tumultuously, and they presented to him, not flowers, or wreaths, or any other tokens of adulation, but those tricolored cockades which are the emblem of the revolutionary power, and which the King was well aware how fond they were of. He could not refuse them, and, after having taken leave of them cordially, he left them rejoicing and contented.In the meantime the King proceeded to hold a session at the Tuileries.The Assembly had reassembled, and had made him a new proposition, if such it might be called, and the King had to de- termine what he was to do with it. He had already given his consent to the removal to Vincennes of those deputies who were still in Paris # A.3 Sample - III the first time the subject was presented to me was at the house of a friend of mine named W. H. Green, whose father, at a dinner of his relations, the Bar- ings, asked him if he ever read anything. The book he chose was Bulwer’s romance, _Pelham_. The latter he read, and was highly gratified with its merits. Having become the possessor of this trea- sure, he determined to attempt a similar attempt on his own account. He therefore wrote out a dra- matic _scena_, and went to the theatre to ask me for an introduction to Messrs. Sheridan and the Hon. Mr.Norton, whose company he then repre- sented in the _Stranger_–a piece which came out at Drury Lane in the summer of 1822. The intro- duction,however, was not so readily obtained as he expected; the manager objected to the character of "Emilius," and the actor who supported him said that it would have been a great advantage to have given him his choice. On these representations Mr. Green made up his mind to write a play on the prin- ciples of Bulwer’s _Pelham_; and, after an interval of about three months, produced his play, _The Adventures of Major de la Motte_. The acting of these two dramas was about all he had to bestow; the public, however, was abundantly satisfied with one of them,for it brought into general notice a very clever young man, at the then head of our profession, Edmund Kean; and the public were by no means displeased with the style of the acting of the other in which his brother-in-law, Mr. Green, was conspicuous.These plays had been represented to Mr. Green, at whose suggestion the tragedy written for him had been rejected, when I met him unexpectedly at the house of a friend, a few days after the conclusion of these performances. I was surprised at the warmth he manifested when I told him whom I had seen, of my own failure in the _Stranger_ case, and in his subsequent successes. He was delighted with the latter, but told me he feared the former had not been altogether satisfac- tory from a literary point of view.I was delighted however, when I read the play with him, he said, and immediately became enthusiastic in praise of the performance. He urged me the more to under- take more of such parts as Mr. Kean had so well filled, and even offered to give me two or three hun- dred pounds for the parts, in addition to any little salary I might think I should derive from the perfor- mance. I did not wait for his proposals to go further, but at once commenced writing out, preparatory to acting, the parts he had himself assigned to me. 
This step was not one that at first met with any opposition on the part of the actors of the company, but afterwards, as they found reasons to dislike the idea of my acting in any but their favourite characters, the affair took so serious a turn that the manager felt called on to interfere to prevent its being carried into effect. After some altercation with him,the matter was brought to a compromise, by the agreement that I,instead of retaining the character, was to give up the play to the company, at their own option, and that Mr. Kean was to assume the part of Sir Giles Overreach.When this piece was finished, and given to be acted at Drury Lane Theatre, by the company then in London, I was very nearly leaving it without seeing it, but I felt the importance of a rehearsal, so that the actors might be more ready to read it afterwards. Mr. Kean,however, who for some time had taken the play by way of a pattern,determined to proceed with it to the other theatres, and with a view to making it perfectly familiar, made me sit down with him to receive and read over the parts, that he might put down in my notes what alterations he thought advisable. It was arranged that he and Mr. Green should make their first appearance, with Mr. Kean to second him in Sir Giles Overreach. During the progress of the rehearsal, Mr.Kean requested Mr. Green to sit down on a chair I had borrowed to write down the character with, and to read it over in a distinct voice. It was a trying moment for two men like them, to start so diametrically opposite to each other in their parts. In the part of Sir Giles, Mr.Green was very nearly equal to Mr. Kean, having a good deal of natural power. It was as a _listener_ that Mr. Green won Mr. Kean’s heart. When,therefore, Sir Giles made one of the speeches which had so excited my admiration at Drury Lane, Mr. Green listened with all the in- terest of a_listener_, but at the same time with a certain sarcastic curl of his lip. When he came to another, however, he was altogether the _lis- tener_of the play, and his part was the _listener_ in this instance with a spice of the _speaker_.It was a difficult task to Mr. Kean to play a part with so much character in it; and in his hands I have seen Mr. Green put on a _whole host_ of characters in a minute. It used to be said of Mr. Kean’s acting,that it was a _whole library_ of characters, and to hear him read a part over, was, for me, to begin with learning the scene to read it with him, and then the whole of it in its several parts. In the days of my youth, his reading was, at times, as interesting to me as any story-telling I ever listened to, and I never heard his readings through without feeling highly satisfied with myself for being an attentive listener to him. Mr. Kean never read a part over with me; indeed, as far as my memory serves me, he did not utter to me a single part of it aloud. After the first night it was not necessary that we should agree on the parts of Sir Giles. There the _listener_ (whose part, in this one instance, was not a difficult one to him) was more than a match for Mr. Kean; but from this time, and for several nights afterwards, the latter was in the habit of reading the part over in his usual manner, I being generally present. During this period, I was not so attentive as I otherwise should have been to Mr. Kean’s readings; but I was so fascinated with them, that I never for an instant doubted that they afforded me the most intense enjoyment. 
If I was particularly fond of any scene, I used on more than one occasion to read it half aloud to the play-acting manager; and, as I could never overcome what was then in my voice a defect of hearing, I was frequently rewarded by hearing the tones of Mr. Kean’s voice, with the accents I have just mentioned, coming from the other end of the theatre, when no person seemed to know any thing of its origin.Mr. Kean had a much longer and more difficult task than his brother in getting a play played, for Mr. Kean, after a certain stage success,was forced to give up everything as hopeless. In the autumn of 1847, he was engaged again to play for Messrs. Oxberry in the "Widow Married," which he did on the 16th of January, 1848; but that season, with the exception of one evening, was one of great fatigue to him. He gave up the stage for this engagement, as he said, to "have his hair cut," and this I believe he did, his grey locks being then closely clipped. In Mr. Kean’s account of the following circumstances, he speaks of"this hair cutting" being a scene to which he refers on one occasion,saying, "If it had been my hair I should have got more satisfaction from my barber’s art than from my razor;" and he mentions the following remark made in allusion to the incident:–"’How’s this?’ says Mr.Kean, as soon as the operation was over; ’this is a great loss.’ ’Oh!yes, sir,’ says the fellow; ’I know how little money I get for cutting a gentleman’s hair; but I can cut your wig with ease; but your hair’s a credit to the shop.’" Mr. Kean himself seems to have been aware that he was no longer so efficient in managing the part of a hero, as in his youth, and that there were times when he was really unable even to represent the characters suited to his talents. So it came to pass that the part of Sir Giles was handed over to Mr. Kean’s brother, who gave up the other four. It may be imagined that the task of acting Sir Giles had not in this case been very light.While acting the part of that character he had to play the part of_Edmund_ to Mr. Kean’s father, who had given me permis- sion to give his story as I find it in Mr. Kean’s manuscript:–"I had the honour of acting on one oc- casion at Drury-lane with Mr. Kean, who had the honour to be a pupil of Mr. Kean’s, at Colebrook Street, Covent Garden,and the theatre had been closed in consequence of the non-performance of my _debutante_. I had the honour of appearing in my professional character; my name was made known to the audience; the manager sent for me and told me to go to the box which had been reserved for me the night before. I saw the box door open, and I entered it in triumph, and I found the occupants of it the great Mr. Kean and Miss O’Neil. No words can convey to your readers any idea of the triumph that was given to me. They introduced me to Mr. Kean, and the manager sent me to the theatre in the evening, and the curtain was drawn up on the last act of ’The Hunchback,’ when Mr. Kean and Miss O’Neil made their _debut_on the stage. They were not long in creating a sensation. There were murmurs of applause that could be but one opinion as to their powers.The moment Mr. Kean had finished, there were cries of ’Mr. Smith! Mr.Smith!’ and it was quite evident that he had been acting in his own name,and not in that of Mr. Kean. The actor’s name was pronounced in a loud, decided tone, not the faint, piping cry of his brother-in-law.The effect was extraordinary; from this moment I was sure of Mr. 
Kean and his sister, and ever since has been my pride and my reward. Of course, if I had to be a manager myself, I should make it my business to look immediately into the merits of each one of these performers. I say that to my mind, the two were not equal for the purpose of the piece.Edmund Kean was the more powerful. There was a nervous motion, and a manner altogether superior to Mr. Kean, a great deal more majestic and impressive. He spoke more and better. Mr. Kean spoke in a louder, and, in my opinion at least, a better tone than the other; it was less that of an effeminate, than that of a manly actor."In the following letter to my father, I find Mr. Kean speaking of himself, in the _roles_ of Sir Giles Overreach in the _Courier_ and Sir Giles Overreach in the _Winter’s Tale_, as follows: "At the close of the first act of the _Winter’s Tale_, I entered into conversation with an acquaintance of mine. When I first saw him, in one of the boxes, it was evident that I was going to do him an in- justice. I asked him to come down with me to the stage-door. He was absent at the moment, being occupied with an elderly lady, who was on her way to her carriage. I was not, however, so much as- tonished at his non-attendance, as was his mother; and I had learned, in the course of my professional acquaintance, that this venerable lady did not often alight from her carriage to walk about behind the scenes with her son. With her, he had been in the habit of making short, hurried visits, and with her, I could easily discern, that the mother had been in the habit of making short visits, and with her, the daughter had been in the habit of making short visits, and that both equally were in the habit of having short visits made to them."Such was Mr. Kean’s manner, when he was at Exeter, in the year 1817; so changed by his residence in Paris, that the man who was the most accustomed actor of the two, now appeared the least so. Before I speak further of his first acting in London, I will give a sketch of his character on the stage, as it was at the opening of the theatre in 1809,at the Lyceum, in that city, on the 25th of January.A great actor, I have heard, in his more matured hours, can take pleasure in criticising the young efforts of his actors; and if any one doubts my statement, let him try the experiment. I myself do not think such an occu- pation necessary; but when it _is_ required, when no actor can perform his parts adequately, I should not be a little astonished if, in the character of Mr. Kean, he should not say with the poet:–"What, I think, I do,My actor can’t tell;Perhaps I shall be An able man after all."But Mr. Kean’s character on the stage at that time, consisted more in his acting than in anything else. He was the first manager who tried to put the best in the best place. He called his actors together,and said, "Now there must be no mistake about you, my hearties!" and then he would begin his remarks in this fashion: "This play is not for you, but for Mrs. Siddons; it is meant to show how the young men of this country must act. Do not let us, poor actors, be afraid of being laughed at and made to speak to a stupid, noisy town audience. They_are_ stupid, certainly; but they always laugh at you, and make a fool of you." It was this kind of thing that made Mr. Kean so admired,even in the midst of his success at Covent Garden; but the impression made upon us by his acting during Mr. Kemble’s performances,when compared with that of Mr. Kean, is very differ- ent. 
At first, I thought him more agreeable; then I thought him more impressive, as he became better acquainted with the ways of the stage.We have here, on his arrival from Paris, the following letter from Mr. Kean:–(Received from Mr. Kean, on the 11th of December, 1812.)"MY DEAR SIR,"The theatre does not open until to-morrow evening, as I am anxious that it should be ready for the public when I return. It is the last public play in # A.4 Sample - IV White-deer a pair of grey, northern Algonquin, also white-deer of a paler colour than common. Great babbler, the commonest of summer warblers,all these are found in a great number of localities in southern Ontario;but at Lake Erie and Lake Ontario, where they are few, they are quite com- mon.Then, again, during the migration season they will often be seen consorting with their relatives the Canada Jay. On this account, a very large number of hawks that, though they are not regular song- sters, are generally taken on the wing. But they are especially abundant in Newfoundland in the neighbourhood of the Little Fête and other great feasts, and are likewise met with in Newfoundland in winter, where they may be seen all the time, though they do not come in great numbers into the towns.Audubon tells us that although nearly all these birds spend the summer in Canada, yet they frequently winter in South America. Such have been frequently seen, but never described, by other observers. In studying any of these little northern warblers, we must go back to the winter quarters of these little birds, or at least see where they pass the summer.[Illustration: AMERICAN GOLDEN PLOVER, MALE AND FEMALE]How beautifully speckled are the breasts of these Golden Plovers! how beautifully spotted the upper parts of the head and breast, especially the under wing coverts. But on this account, their bright colours are particu- larly attractive, because the group is very abun- dant, and their close relative, the Golden Plover, is also frequently seen in the far north.This bird breeds sparingly in various parts of North Amer- ica, but almost exclusively in Labrador. There it nests in small colonies of a dozen or more, making choice, I have no doubt, of some open, dry piece of ground,building their nests of grass and scraps of grass, placing them in the midst of grass on which, in company with their kindred, they pass the winter. The nest is built, in all probability, on the ground, or on the top of a tussock of grass or a tuft of oats, which has been dried, or rolled into a conical shape by birds, but which they have neglected to do for themselves; and after laying their eggs, they scrape down the soil upon which the nest is built, and together, with a few young, feed them all the summer. They pair about the end of April, and begin to breed so soon as the breeding season has passed, at the same time that the male bird may be seen sitting upon the outside of the nest.The nest of this species is not built as closely as that of the English species, and not being peculiar to Amer- ica, a large number of its eggs has been obtained in Great Britain, and it is highly probable that it exists abundantly in the United States also.In breeding time, the Gulls and Terns, as well as the other birds, do not congregate in large flocks, but generally avoid flocks that are daily passing, and thereby contribute very much towards diminishing the number of their feathered associates, which, being fewer, would be more easily preserved. 
The same thing may be said of the very numerous young which come with the large migration northward, and, in a measure,counteract the tendency to over- crowding.But although the Gulls and Terns are thus apt to resort to the north in winter,how many of the same species are known to breed in the other parts of the world? The British Islands, indeed, are but thinly populated, and the season for breeding does not arrive so early as that for breeding in Europe. We find, therefore, in the British Islands only a few pairs or very few individuals. The Skuas and Petrels are probably more numerous,but such is the local distribution of this species, that it is difficult to find more than three or four of its breeding haunts. Our only figure of this species is in the "Manual" for the year 1858, in which it is figured under the name of _Crex pusilla_.[Illustration: BLACK GUILLE- MOT]BLACK GUILLEMOT.* * * * *SPECIFIC CHARACTER.BLACK GUILLEMOT.–Bill, the base of the upper mandible and the tip of the ear black; legs, legs, toes, and feet, black; wings, black- ish, the feathers margined with dull ash-grey; upper parts ash-grey; quills blackish, margined with grey- ish; tail blackish, the inner three feathers of the outer web tinged with brown, and the next tipped with white, except on the inner web; the two outer feathers of the outer web tipped with white.* * * * *The present species was discovered by Cap- tain King at Sitka, in Russian America, and may be distinguished from the preceding by its black rump,beneath which are eight blackish-brown lines, beginning at the base of the feathers. In its haunts, it is rather tame, but in autumn it seldom perches on trees. On the coast the breed begins to breed in December, and by the end of April it will have laid about six eggs. It is somewhat gregarious, some- times in large flocks. A female caught in Baffin’s Bay in 1825 was of a sooty black colour above and light ash-grey below, with three of the tail-feathers of a blackish tinge.* * * * *TEMMINCK’S GUILLE- MOT.TEMMINCK’S GUILLEMOT (_Haemato- pus bairdii_) is said to have been taken near the mouth of the Columbia, and by Captain Cook has been called the Common Guillemot.TEMMINCK’S HELMET.TEMMINCK’S HELMET. Plate XXI. fig. 3.* * * * *Adult Male. Plate XXII. fig. 1, 2.Bill, the base of the upper mandible and the tip of the ear black; legs,feet, toes, and feet black; upper part of the head and neck dark ash-grey;back, scapulars, wing-coverts, and quills black, the latter margined with pale greyish-white; tail of the same colour, the middle feathers of the outer web at the end tipped with white; three outer feathers of the same, and the next two very slightly tipped with the same; lower parts white.Total length 5 inches, extent of wings 5, depth of body 2 1/2 inches.This species is only two feet ten inches in length, and during the summer time, during which it can be seen floating on the ocean in autumn,resembles the preceding, but it is so extremely scarce, that it is rather a difficult matter to ascertain its haunts. I have no doubt that it migrates from Europe, across the Atlantic, to the north, even where it is now known to be extinct.* * * * *AMERICAN SEA-EAGLE.EIDER-BILLED BOOBY.* * * * *_HaliaA|etus leucogaster_, Wils.* * * * *AMERICAN WHITE-FRONTED BOOBY (_HaliaA|etus leucogaster_, TEMM.) is one of the smallest of the American species, measuring only five inches and three quarters in length. The bill is black, and the feet deep brown. It is a bird in the collection of the late Mr. 
John Cassin of New York, and was shot in the neighborhood of Lake Erie. Length 5 inches and 3/4, extent of wings 3 inches and 1/4, depth of body 1 1/2.* * * * *I have been indebted for the above description of the Blue-headed Buzzards to my friend, Mr. Wm. L. Beal.* * * * *PALL MALL BLUE-HEADED BOOBY (_HaliA|etus pallens_, TEMM.) may be distinguished by the reddish band over the eye, and the brown patch on the primaries,which are longer and more attenuated, than the black ones of the last species, the bill being a little broader and red, and the legs lighter than those of the last species. It has been called the Alpine Blue-headed Booby, by the late Dr. Edward Smith, in his description of this bird. I believe that there is but little difference in its appearance, except the colour of the bill, which in the male is of a dark brown, in the female yellow.* * * * *HORNED OWL._Strix flammea_, LINN.* * * * *_Strix argemone_, LINN.* * * * *The habits of the Horned Owl are, like those of the Snow Owl and the Long-eared Owl, imperfectly known. They have long been familiar objects to the inhabitants of the northern parts of our country, who are accustomed to their appearance and mode of travelling in com- panies. They are most frequently seen in the night. It is often heard to hoot, or squeal,and at times is very noisy.It is found during the whole of the northern summer, on the pine plains and barrens, on the <DW72>s of the higher elevations of our country, and in the northern parts of Maine, Nova Scotia, Newfoundland, and in several parts of New England. It is one of the most common inhabitants of our villages, and is so extremely restless and ac- tive, that it is almost impossible to catch it. They are very bold and noisy, rising from the tops of the low bushes and branches, and making a terrible hissing, as they do when alarmed, which will draw on them the attention of the person who perceives them. They are generally seen in flocks, and at all times wary, giving notice of the approach of danger, by their peculiar crowing, and various notes, which are peculiar to themselves, and often mistaken for a call. Their note resembles that of the Owl, and is much louder, resembling the cry of the Great Horned Owl.* * * * *I have been thus particular in giving you the above description, as I believe this species to be the one I have already figured. You will readily believe that it would be impossible for me to decide in which of the two localities which I have described the bird is to be looked for. I only mention the latter, as the description agrees better with that of the present bird than with that of any other in which I have seen it.* * * * *CHIMÆOLU- RUS VIRGINIANUS, _Lath._ Ind. Ornith. vol. ii. p. 301.—_Ch.Bonaparte_, Synops. of Birds of the United States, p. 54.CHIMÆOLURUS VIRGINI- ANUS, _Nuttall_, Manual, part i. p. 215.AMERI- CAN CHIMÆOLURUS, CHIMÆOLURUS AMER- ICANUS, _Ch. Bonaparte_, Amer.Ornith. vol. ii. fig. 2.—_Nuttall_, Manual, p. ii. p. 39. pl. 209.Adult Male. Plate XXIII. Fig. 1.Bill rather long, slender, strong, compressed toward the end; upper mandible with the dorsal outline a little con- vex, the ridge rather wide and flat,the sides convex from the base, the edges overlapping, the tip de- clinate;lower mandible with the angle narrow and very long, the dorsal line rather convex, the sides rounded, the tip acute. Nostrils basal, lateral,round, covered by the reversed filaments of the frontal si- nuses. Head rather large. Body moderate. 
Legs of ordinary length; tarsus very strong,scutellate an- teriorly, acute behind; toes free, scutellate above, the lateral ones nearly equal, the hind toe larger; claws of ordinary length,compressed.Plumage soft, blended, somewhat blended, not glossy. Wings rather long,third quill longest, second and fourth equal. Tail of ordinary length,slightly emarginate, the two lateral feathers longest, the two lateral in- ferior with some small tips.Bill deep brown, black at the end, paler at the sides. Iris brown. Feet flesh-colour. Head and neck pale ash-grey. Back, scapulars, and rump dark umber-brown, reflect- ing into deep brown, the tail, secondary quills,and coverts, as well as the ends of the secondary quills, and tips of the larger ones, white. Wings dusky, their coverts margined externally with reddish- brown. Fore part of the back, breast, and ab- domen deep brown, tinged with orange; the breast tinged with yellow, the abdomen with a tinge of dull red. On the breast a broad band of dusky red on each side.Length 7 inches, extent of wings 10; bill along the ridge 1-3/12, along the gap 1- 1/12; tarsus 2-1/12.Adult Female. Plate XXIII. Fig. 2.The Female resembles the male, but some- what resembles the white-headed Woodpecker, the head, neck, breast, and abdomen being pale ash- grey.The young resemble the female, and differ from the male, in having the chin and fore part of the breast light ash-grey, and the rest of the under parts ash-grey.THE COTTON PLANT.GOSSIUM GLYCYLLARUM, _Willd._ Sp. Pl. vol. ii. p. 779. _Pursh_,Flor. Amer. vol. ii. p. 422.—DE- CANDRIA MONOGYNIA, _Linn._DECANDRIA RHAMNACEAE, _Juss._This plant, from which the generic name of this genus is derived,is distin- guished by its pendulous cymes of large, silky, termi- nal panicles, and by the sinuosities of the branches, which are mostly smooth. The leaves are cordate, downy, and attenuated at the base. The flowers are pale orange-, and exhale a strong and very pleasant odour.THE HIGH BERRIES OF THE NORTH.(_MAGNOLIA CANADENSIS_, DESK.) NORTH OF KINGSBRIDGE.[Illustration: THE HIGH BERRIES OF THE NORTH.]The highest trees in the county of Brunswick are found near the town of Kingston; but the low and more sheltered parts of the country have abundance of the low- growing aromatic, which grows there from seed, and is,consequently, of a superior quality. Not more than fifty or sixty miles below the town of St. John’s, this shrub attains a height of upwards of fifty feet, with spreading branches of beautiful spread- ing foliage.THE CRANE CRANE._CATHARTE CANADENSIS_, TEMM.PLATE XXIII. MALE AND FEMALE.This species has never, or very rarely, been observed on our seaboard during the spring and summer, unless I mistake not, as is said by the natives, in many parts of Newfoundland. It frequently comes within a few miles of the sea-shore, and after passing over the downs or beach, settles upon the marshes or small islands, erecting its nest on the summit of a large tree, and generally resting on the trunk. There is, at all times, a sufficient number of young ones to fill its nest, and, conse- quently,it seldom requires to be robbed. It generally dwells upon high and exposed situations, yet never in an open forest. As many as four or five nests of this species may often be observed on a single tree, situated on a level with the ground, or where the lower branches have been broken off by storms. 
It sits upright, with its neck or tail drawn in, and so rarely, on opening its mouth, that you may often look down into it, and take your bird out by the neck or tail. It is only during the autumn, and towards the close of that season, that it deserts the salt marshes, retires to its aerial breeding-places, and generally makes its nest on a swamp or river island. The habits of this bird are so like those of the common stone crane, that it would have es- caped notice were it not for the variation in the colour of its bill. This is of a white colour, shading off towards the tips of the upper mandible, which are pale brown.So common is this species on the Atlantic seaboard, that few persons can fail to have seen it. While on board our ship at St. John’s, on the 30th of October 1828, I noticed many of these birds on a small pond that runs near our town. They were wading about and darting from one point of the shore to another, as if searching for a distant fish. They were rather shyer than the common white crane, but had the same abrupt note,so different from that of the red-necked species. They continued to hop about the pond, looking out for food, the whole time that the vessel remained there.
{ "id": "1507.05910" }
2003.04297
Improved Baselines with Momentum Contrastive Learning
Contrastive unsupervised learning has recently shown encouraging progress, e.g., in Momentum Contrast (MoCo) and SimCLR. In this note, we verify the effectiveness of two of SimCLR's design improvements by implementing them in the MoCo framework. With simple modifications to MoCo---namely, using an MLP projection head and more data augmentation---we establish stronger baselines that outperform SimCLR and do not require large training batches. We hope this will make state-of-the-art unsupervised learning research more accessible. Code will be made public.
http://arxiv.org/pdf/2003.04297
Xinlei Chen, Haoqi Fan, Ross Girshick, Kaiming He
cs.CV
Tech report, 2 pages + references
null
cs.CV
20200309
20200309
# Improved Baselines with Momentum Contrastive Learning

Xinlei Chen, Haoqi Fan, Ross Girshick, Kaiming He

Facebook AI Research (FAIR)

# Abstract

Contrastive unsupervised learning has recently shown encouraging progress, e.g., in Momentum Contrast (MoCo) and SimCLR. In this note, we verify the effectiveness of two of SimCLR's design improvements by implementing them in the MoCo framework. With simple modifications to MoCo—namely, using an MLP projection head and more data augmentation—we establish stronger baselines that outperform SimCLR and do not require large training batches. We hope this will make state-of-the-art unsupervised learning research more accessible. Code will be made public.

[Figure 1. A batching perspective of two optimization mechanisms for contrastive learning: (a) end-to-end and (b) Momentum Contrast. Images are encoded into a representation space, in which pairwise affinities are computed; the MoCo panel additionally shows a momentum encoder and a queue of keys.]

# 1. Introduction

Recent studies on unsupervised representation learning from images [16, 13, 8, 17, 1, 9, 15, 6, 12, 2] are converging on a central concept known as contrastive learning [5]. The results are promising: e.g., Momentum Contrast (MoCo) [6] shows that unsupervised pre-training can surpass its ImageNet-supervised counterpart in multiple detection and segmentation tasks, and SimCLR [2] further reduces the gap in linear classifier performance between unsupervised and supervised pre-training representations.

This note establishes stronger and more feasible baselines built in the MoCo framework. We report that two design improvements used in SimCLR, namely, an MLP projection head and stronger data augmentation, are orthogonal to the frameworks of MoCo and SimCLR, and when used with MoCo they lead to better image classification and object detection transfer learning results. Moreover, the MoCo framework can process a large set of negative samples without requiring large training batches (Fig. 1). In contrast to SimCLR's large 4k∼8k batches, which require TPU support, our "MoCo v2" baselines can run on a typical 8-GPU machine and achieve better results than SimCLR. We hope these improved baselines will provide a reference for future research in unsupervised learning.

# 2. Background

Contrastive learning. Contrastive learning [5] is a framework that learns similar/dissimilar representations from data that are organized into similar/dissimilar pairs. This can be formulated as a dictionary look-up problem. An effective contrastive loss function, called InfoNCE [13], is:

\mathcal{L}_{q,k^{+},\{k^{-}\}} = -\log \frac{\exp(q \cdot k^{+}/\tau)}{\exp(q \cdot k^{+}/\tau) + \sum_{k^{-}} \exp(q \cdot k^{-}/\tau)} \qquad (1)

Here q is a query representation, k+ is a representation of the positive (similar) key sample, and {k−} are representations of the negative (dissimilar) key samples. τ is a temperature hyper-parameter. In the instance discrimination pretext task [16] (used by MoCo and SimCLR), a query and a key form a positive pair if they are data-augmented versions of the same image, and otherwise form a negative pair.

The contrastive loss (1) can be minimized by various mechanisms that differ in how the keys are maintained [6]. In an end-to-end mechanism (Fig. 1a) [13, 8, 17, 1, 9, 2], the negative keys are from the same batch and updated end-to-end by back-propagation. SimCLR [2] is based on this mechanism and requires a large batch to provide a large set of negatives.
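To make Eq. (1) concrete, below is a minimal PyTorch sketch of the InfoNCE loss with a set of negative keys; the tensor shapes, the function name, and the use of cross-entropy over concatenated logits are illustrative assumptions, not the authors' released code.

```python
# Minimal InfoNCE sketch for Eq. (1): each query is scored against its
# positive key and a set of negative keys. Shapes and names are illustrative.
import torch
import torch.nn.functional as F

def info_nce_loss(q, k_pos, negatives, tau=0.2):
    """q:          (N, C) L2-normalized query representations
       k_pos:      (N, C) L2-normalized positive keys (other augmented views)
       negatives:  (C, K) L2-normalized negative keys (a queue in MoCo,
                   the rest of the batch in the end-to-end mechanism)
       tau:        temperature hyper-parameter"""
    l_pos = torch.einsum("nc,nc->n", q, k_pos).unsqueeze(-1)   # (N, 1)
    l_neg = torch.einsum("nc,ck->nk", q, negatives)            # (N, K)
    logits = torch.cat([l_pos, l_neg], dim=1) / tau            # (N, 1+K)
    # the positive key sits at index 0 for every query, so cross-entropy
    # with target 0 reproduces the -log softmax over {k+, {k-}} of Eq. (1)
    targets = torch.zeros(q.size(0), dtype=torch.long, device=q.device)
    return F.cross_entropy(logits, targets)
```

In the end-to-end mechanism the negatives come from the same batch and receive gradients, which is why SimCLR relies on 4k∼8k batches; in MoCo they are read from a queue filled by a momentum encoder, so the batch size is decoupled from the number of negatives K.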
In the MoCo mechanism (Fig. 1b) [6], the negative keys are maintained in a queue, and only the queries and positive keys are encoded in each training batch. A momentum encoder is adopted to improve the representation consistency between the current and earlier keys. MoCo decouples the batch size from the number of negatives.

Table 1. Ablation of MoCo baselines, evaluated by ResNet-50 for (i) ImageNet linear classification, and (ii) fine-tuning VOC object detection (mean of 5 trials). "MLP": with an MLP head; "aug+": with extra blur augmentation; "cos": cosine learning rate schedule.

| case | MLP | aug+ | cos | epochs | ImageNet acc. | VOC AP50 | VOC AP | VOC AP75 |
|---|---|---|---|---|---|---|---|---|
| supervised | | | | | 76.5 | 81.3 | 53.5 | 58.8 |
| MoCo v1 | | | | 200 | 60.6 | 81.5 | 55.9 | 62.6 |
| (a) | ✓ | | | 200 | 66.2 | 82.0 | 56.4 | 62.6 |
| (b) | | ✓ | | 200 | 63.4 | 82.2 | 56.8 | 63.2 |
| (c) | ✓ | ✓ | | 200 | 67.3 | 82.5 | 57.2 | 63.9 |
| (d) | ✓ | ✓ | ✓ | 200 | 67.5 | 82.4 | 57.0 | 63.6 |
| (e) | ✓ | ✓ | ✓ | 800 | 71.1 | 82.5 | 57.4 | 64.0 |

Improved designs. SimCLR [2] improves the end-to-end variant of instance discrimination in three aspects: (i) a substantially larger batch (4k or 8k) that can provide more negative samples; (ii) replacing the output fc projection head [16] with an MLP head; (iii) stronger data augmentation.

In the MoCo framework, a large number of negative samples are readily available; the MLP head and data augmentation are orthogonal to how contrastive learning is instantiated. Next we study these improvements in MoCo.

# 3. Experiments

Settings. Unsupervised learning is conducted on the 1.28M ImageNet [3] training set. We follow two common protocols for evaluation. (i) ImageNet linear classification: features are frozen and a supervised linear classifier is trained; we report 1-crop (224×224), top-1 validation accuracy. (ii) Transferring to VOC object detection [4]: a Faster R-CNN detector [14] (C4-backbone) is fine-tuned end-to-end on the VOC 07+12 trainval set¹ and evaluated on the VOC 07 test set using the COCO suite of metrics [10]. We use the same hyper-parameters (except when noted) and codebase as MoCo [6]. All results use a standard-size ResNet-50 [7].

¹ For all entries (including the supervised and MoCo v1 baselines), we fine-tune for 24k iterations on VOC, up from 18k in [6].

MLP head. Following [2], we replace the fc head in MoCo with a 2-layer MLP head (hidden layer 2048-d, with ReLU). Note this only influences the unsupervised training stage; the linear classification or transferring stage does not use this MLP head. Also, following [2], we search for an optimal τ w.r.t. ImageNet linear classification accuracy:

| τ | 0.07 | 0.1 | 0.2 | 0.3 | 0.4 | 0.5 |
|---|---|---|---|---|---|---|
| w/o MLP | 60.6 | 60.7 | 59.0 | 58.2 | 57.2 | 56.4 |
| w/ MLP | 62.9 | 64.9 | 66.2 | 65.7 | 65.0 | 64.3 |

Using the default τ = 0.07 [16, 6], pre-training with the MLP head improves from 60.6% to 62.9%; switching to the optimal value for MLP (0.2), the accuracy increases to 66.2%. Table 1(a) shows its detection results: in contrast to the big leap on ImageNet, the detection gains are smaller.

Augmentation. We extend the original augmentation in [6] by including the blur augmentation in [2] (we find the stronger color distortion in [2] has diminishing gains in our higher baselines). The extra augmentation alone (i.e., no MLP) improves the MoCo baseline on ImageNet by 2.8% to 63.4%, Table 1(b). Interestingly, its detection accuracy is higher than that of using the MLP alone, Table 1(b) vs. (a), despite much lower linear classification accuracy (63.4% vs. 66.2%). This indicates that linear classification accuracy is not monotonically related to transfer performance in detection. With the MLP, the extra augmentation boosts ImageNet accuracy to 67.3%, Table 1(c).
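As a concrete illustration of the two modifications studied above, here is a PyTorch/torchvision sketch of a 2-layer MLP projection head and an "aug+"-style augmentation pipeline. The 128-d output dimension, the exact jitter/grayscale/blur probabilities, and the GaussianBlur kernel size are assumptions drawn from common re-implementations, not values stated in this note.

```python
# Sketch of the two MoCo v2 ingredients: (i) a 2-layer MLP projection head
# (hidden 2048-d, ReLU) used only during unsupervised pre-training, and
# (ii) an augmentation recipe extended with Gaussian blur ("aug+").
# Several parameter values below are assumptions (see lead-in).
import torch.nn as nn
from torchvision import transforms  # GaussianBlur requires torchvision >= 0.8

def mlp_projection_head(dim_in=2048, dim_hidden=2048, dim_out=128):
    # replaces the single fc layer on top of ResNet-50's pooled features
    return nn.Sequential(
        nn.Linear(dim_in, dim_hidden),
        nn.ReLU(inplace=True),
        nn.Linear(dim_hidden, dim_out),
    )

aug_plus = transforms.Compose([
    transforms.RandomResizedCrop(224, scale=(0.2, 1.0)),
    transforms.RandomApply([transforms.ColorJitter(0.4, 0.4, 0.4, 0.1)], p=0.8),
    transforms.RandomGrayscale(p=0.2),
    transforms.RandomApply([transforms.GaussianBlur(23, sigma=(0.1, 2.0))], p=0.5),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])
```

Two augmented views of each image would be produced with `aug_plus` and fed to the query and key encoders; the MLP head is discarded for linear classification and detection transfer, as noted above.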
SimCLR ablations are from Fig. 9 in [2] (we thank the authors for providing the numerical results). mechanism batch memory / GPU MoCo end-to-end end-to-end 256 256 4096 5.0G 7.4G 93.0G† time / 200-ep. 53 hrs 65 hrs n/a Table 3. Memory and time cost in 8 V100 16G GPUs, imple- mented in PyTorch. †: based on our estimation. stronger color distortion in [2] has diminishing gains in our higher baselines). The extra augmentation alone (i.e., no MLP) improves the MoCo baseline on ImageNet by 2.8% to 63.4%, Table 1(b). Interestingly, its detection accuracy is higher than that of using the MLP alone, Table 1(b) vs. (a), despite much lower linear classification accuracy (63.4% vs. 66.2%). This indicates that linear classification accu- racy is not monotonically related to transfer performance in detection. With the MLP, the extra augmentation boosts ImageNet accuracy to 67.3%, Table 1(c). Comparison with SimCLR. Table 2 compares SimCLR [2] with our results, referred to as MoCo v2. For fair com- parisons, we also study a cosine (half-period) learning rate schedule [11] which SimCLR adopts. See Table 1(d, e). Us- ing pre-training with 200 epochs and a batch size of 256, MoCo v2 achieves 67.5% accuracy on ImageNet: this is 5.6% higher than SimCLR under the same epochs and batch size, and better than SimCLR’s large-batch result 66.6%. With 800-epoch pre-training, MoCo v2 achieves 71.1%, outperforming SimCLR’s 69.3% with 1000 epochs. Computational cost. In Table 3 we report the memory and time cost of our implementation. The end-to-end case re- flects the SimCLR cost in GPUs (instead of TPUs in [2]). The 4k batch size is intractable even in a high-end 8-GPU machine. Also, under the same batch size of 256, the end- to-end variant is still more costly in memory and time, be- cause it back-propagates to both q and k encoders, while MoCo back-propagates to the q encoder only. Table 2 and 3 suggest that large batches are not necessary for good accuracy, and state-of-the-art results can be made more accessible. The improvements we investigate require only a few lines of code changes to MoCo v1, and we will make the code public to facilitate future research. # References [1] Philip Bachman, R Devon Hjelm, and William Buchwalter. Learning representations by maximizing mutual information across views. arXiv:1906.00910, 2019. [2] Ting Chen, Simon Kornblith, Mohammad Norouzi, and Ge- offrey Hinton. A simple framework for contrastive learning of visual representations. arXiv:2002.05709, 2020. [3] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. ImageNet: A large-scale hierarchical image database. In CVPR, 2009. [4] Mark Everingham, Luc Van Gool, Christopher KI Williams, John Winn, and Andrew Zisserman. The PASCAL Visual Object Classes (VOC) Challenge. IJCV, 2010. [5] Raia Hadsell, Sumit Chopra, and Yann LeCun. Dimension- ality reduction by learning an invariant mapping. In CVPR, 2006. [6] Kaiming He, Haoqi Fan, Yuxin Wu, Saining Xie, and Ross Girshick. Momentum contrast for unsupervised visual rep- resentation learning. arXiv:1911.05722, 2019. [7] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. In CVPR, Deep residual learning for image recognition. 2016. [8] R Devon Hjelm, Alex Fedorov, Samuel Lavoie-Marchildon, Karan Grewal, Adam Trischler, and Yoshua Bengio. Learn- ing deep representations by mutual information estimation and maximization. In ICLR, 2019. [9] Olivier J. Hnaff, Aravind Srinivas, Jeffrey De Fauw, Ali Razavi, Carl Doersch, S. M. Ali Eslami, and Aaron van den Oord. 
Data-efficient image recognition with contrastive predictive coding. arXiv:1905.09272v2, 2019. [10] Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C Lawrence Zitnick. Microsoft COCO: Common objects in context. In ECCV, 2014. [11] Ilya Loshchilov and Frank Hutter. SGDR: Stochastic gradient descent with warm restarts. In ICLR, 2017. [12] Ishan Misra and Laurens van der Maaten. Self-supervised learning of pretext-invariant representations. arXiv:1912.01991, 2019. [13] Aaron van den Oord, Yazhe Li, and Oriol Vinyals. Representation learning with contrastive predictive coding. arXiv:1807.03748, 2018. [14] Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun. Faster R-CNN: Towards real-time object detection with region proposal networks. In NeurIPS, 2015. [15] Yonglong Tian, Dilip Krishnan, and Phillip Isola. Contrastive multiview coding. arXiv:1906.05849, 2019. [16] Zhirong Wu, Yuanjun Xiong, Stella Yu, and Dahua Lin. Unsupervised feature learning via non-parametric instance discrimination. In CVPR, 2018. [17] Mang Ye, Xu Zhang, Pong C Yuen, and Shih-Fu Chang. Unsupervised embedding learning via invariant and spreading instance feature. In CVPR, 2019.
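As a concrete illustration of the MLP-head change ablated in Table 1 above, the sketch below contrasts the original fc projection with the 2-layer MLP head (hidden layer 2048-d, with ReLU); the input and output dimensions are assumptions rather than values fixed by this note, and the head is attached only during unsupervised pre-training.

```python
import torch.nn as nn

def projection_head(dim_in=2048, dim_hidden=2048, dim_out=128, use_mlp=True):
    """Projection head appended to the encoder during pre-training only.

    use_mlp=False reproduces the original single fc head; use_mlp=True
    gives the 2-layer MLP head (hidden 2048-d, ReLU) studied in Table 1.
    """
    if not use_mlp:
        return nn.Linear(dim_in, dim_out)
    return nn.Sequential(
        nn.Linear(dim_in, dim_hidden),
        nn.ReLU(inplace=True),
        nn.Linear(dim_hidden, dim_out),
    )
```

For linear classification or detection transfer, this head is discarded and only the backbone features are used, as described in Section 3.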
{ "id": "1807.03748" }
2003.04390
Meta-Baseline: Exploring Simple Meta-Learning for Few-Shot Learning
Meta-learning has been the most common framework for few-shot learning in recent years. It learns the model from collections of few-shot classification tasks, which is believed to have a key advantage of making the training objective consistent with the testing objective. However, some recent works report that training for whole-classification, i.e. classification on the whole label-set, can yield embeddings that are comparable to or even better than those of many meta-learning algorithms. The boundary between these two lines of work remains underexplored, and the effectiveness of meta-learning in few-shot learning remains unclear. In this paper, we explore a simple process: meta-learning over a whole-classification pre-trained model on its evaluation metric. We observe that this simple method achieves performance competitive with state-of-the-art methods on standard benchmarks. Our further analysis sheds some light on the trade-offs between the meta-learning objective and the whole-classification objective in few-shot learning.
http://arxiv.org/pdf/2003.04390
Yinbo Chen, Zhuang Liu, Huijuan Xu, Trevor Darrell, Xiaolong Wang
cs.CV, cs.LG
ICCV 2021. Code: https://github.com/yinboc/few-shot-meta-baseline
null
cs.CV
20200309
20210819
1 2 0 2 g u A 9 1 ] V C . s c [ 4 v 0 9 3 4 0 . 3 0 0 2 : v i X r a # Meta-Baseline: Exploring Simple Meta-Learning for Few-Shot Learning # Yinbo Chen UC San Diego Zhuang Liu UC Berkeley Huijuan Xu Penn State University Trevor Darrell UC Berkeley # Xiaolong Wang UC San Diego # Abstract # Abstract Meta-learning has been the most common framework for few-shot learning in recent years. It learns the model from collections of few-shot classification tasks, which is believed to have a key advantage of making the training objective consistent with the testing objective. However, some re- cent works report that by training for whole-classification, i.e. classification on the whole label-set, it can get compa- rable or even better embedding than many meta-learning algorithms. The edge between these two lines of works has yet been underexplored, and the effectiveness of meta- learning in few-shot learning remains unclear. In this paper, we explore a simple process: meta-learning over a whole- classification pre-trained model on its evaluation metric. We observe this simple method achieves competitive per- formance to state-of-the-art methods on standard bench- marks. Our further analysis shed some light on understand- ing the trade-offs between the meta-learning objective and the whole-classification objective in few-shot learning. Our code is available at https://github.com/yinboc/ few-shot-meta-baseline. # 1. Introduction While humans have shown incredible ability to learn from very few examples and generalize to many different new examples, the current deep learning approaches still rely on a large scale of training data. To mimic this hu- man ability of generalization, few-shot learning [4, 29] is proposed for training networks to understand a new con- cept based on a few labeled examples. While directly learn- ing a large number of parameters with few samples is very challenging and most likely leads to overfitting, a practical setting is applying transfer learning: train the network on common classes (also called base classes) with sufficient samples, then transfer the model to learn novel classes with a few examples. The meta-learning framework for few-shot learning fol- lows the key idea of learning to learn. Specifically, it sam- ples few-shot classification tasks from training samples be- longing to the base classes and optimizes the model to per- form well on these tasks. A task typically takes the form of N -way and K-shot, which contains N classes with K support samples and Q query samples in each class. The goal is to classify these N × Q query samples into the N classes based on the N × K support samples. Under this framework, the model is directly optimized on few- shot classification tasks. The consistency between the ob- jectives of training and testing is considered as the key ad- vantage of meta-learning. Motivated by this idea, many re- cent works [26, 6, 25, 30, 5, 22, 11, 33] focus on improving the meta-learning structure, and few-shot learning itself has become a common testbed for evaluating meta-learning al- gorithms. However, some recent works find that training for whole- classification, i.e. classification on the whole training label- set (base classes), provides the embedding that is compa- rable or even better than many recent meta-learning algo- rithms. The effectiveness of whole-classification models has been reported in both prior works [6, 1] and some con- current works [31, 27]. 
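Before the two training objectives are contrasted further, the N-way K-shot task structure described above can be fixed with a short sketch; the data layout (a mapping from each base class to a list of its sample indices) and all names are assumptions introduced only for illustration.

```python
import random

def sample_episode(samples_by_class, n_way=5, k_shot=1, q_query=15):
    """Sample one N-way K-shot task: K support and Q query samples for
    each of N randomly chosen classes."""
    classes = random.sample(list(samples_by_class), n_way)
    support, query = [], []
    for label, cls in enumerate(classes):
        picked = random.sample(samples_by_class[cls], k_shot + q_query)
        support += [(idx, label) for idx in picked[:k_shot]]
        query += [(idx, label) for idx in picked[k_shot:]]
    return support, query  # |support| = N*K, |query| = N*Q
```

A meta-learning training batch is then a collection of such episodes, and the model is optimized to classify the N × Q query samples given the N × K support samples.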
Meta-learning makes the form of training objective consistent with testing, but why it turns out to learn even worse embedding than simple whole- classification? While there are several possible reasons, e.g. optimization difficulty or overfitting, the answer has not been clearly studied yet. It remains even unclear that whether meta-learning is still effective compared to whole- classification in few-shot learning. In this work, we aim at exploring the edge between whole-classification and meta-learning by decoupling the discrepancies. We start with Classifier-Baseline: a whole- classification method that is similarly proposed in concur- rent works [31, 27]. In Classifier-Baseline, we first train a classifier on base classes, then remove the last fully- connected (FC) layer which is class-dependent. During test time, it computes mean embedding of support samples for each novel class as their centroids, and classifies query sam- ples to the nearest centroid with cosine distance. We ob- serve this baseline method outperforms many recent meta- learning algorithms. In order to understand whether meta-learning is still ef- 1 fective compared to whole-classification, a natural experi- ment is to see what happens if we perform further meta- learning over a converged Classifier-Baseline on its evalu- ation metric (i.e. cosine nearest-centroid). As a resulting method, it is similar to MatchingNet [29] or ProtoNet [24] with an additional classification pre-training stage. We observe that meta-learning can still improve Classifier- Baseline, and it achieves competitive performance to state- of-the-art methods on standard benchmarks. We call this simple method Meta-Baseline. We highlight that as a method, all the individual components of Meta-Baseline have been proposed in prior works, but to the best of our knowledge, it has been overlooked that none of the prior works studies them as a whole. We further decouple the discrepancies by evaluating on two types of generaliza- tion: base class generalization denotes performance on few-shot classification tasks from unseen data in the base classes, which follows the common definition of general- ization (i.e. evaluated in the training distribution); and novel class generalization denotes performance on few-shot clas- sification tasks from data in novel classes, which is the goal of the few-shot learning problem. We observe that: (i) During meta-learning, improving base class generaliza- tion can lead to worse novel class generalization; (ii) When training Meta-Baseline from scratch (i.e. without whole- classification training), it achieves higher base-class gener- alization but much lower novel class generalization. Our observations suggest that there could be a trade- off between the objectives of meta-learning and whole- classification. It is likely that meta-learning learns the em- bedding that works better for N -way K-shot tasks, while whole-classification learns the embedding with stronger class transferability. We find that the main advantage of training for whole-classification before meta-learning is likely to be improving class transferability. Our further ex- periments provide a potential explanation of what makes Meta-Baseline a strong baseline: by inheriting one of the most effective evaluation metrics of the whole-classification model, it maximizes the reusing of the embedding with strong class transferability. 
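The cosine nearest-centroid rule just described is the test-time metric of Classifier-Baseline and, with a learnable scale τ, the training-time prediction of Meta-Baseline; the formal definitions are given in Section 3, so the tensor names and shapes below are illustrative assumptions, and the default τ = 10 is taken from the implementation details later in the paper.

```python
import torch
import torch.nn.functional as F

def cosine_nearest_centroid_logits(support, support_labels, query, n_way, tau=10.0):
    """support: (N*K, C) support embeddings; support_labels: (N*K,) labels
    in [0, n_way); query: (M, C) query embeddings.
    Returns (M, n_way) logits: scaled cosine similarity to each class centroid."""
    # class centroid = mean embedding of that class's support samples
    centroids = torch.stack(
        [support[support_labels == c].mean(dim=0) for c in range(n_way)]
    )
    q = F.normalize(query, dim=-1)       # unit-norm query embeddings
    w = F.normalize(centroids, dim=-1)   # unit-norm centroids
    return tau * (q @ w.t())             # cosine similarities, scaled by tau

# Meta-learning stage: cross-entropy between these logits and the query labels,
# e.g. loss = F.cross_entropy(logits, query_labels)
```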
From another perspective, our results also rethink the comparison between meta-learning and whole-classification from the perspective of datasets. When base classes are collected to cover the distribution of novel classes, novel-class generalization should converge to base-class generalization and the strength of meta-learning may overwhelm the strength of whole-classification. In summary, our contributions are as following: • We present a simple Meta-Baseline that has been over- looked in prior work. It achieves competitive perfor- mance to state-of-the-art methods on standard bench- marks and is easy to follow. • We observe a trade-off between the objectives of meta- 2 learning and whole-classification, which potentially explains the success of Meta-Baseline and rethinks the effectiveness of both objectives in few-shot learning. # 2. Related Work Most recent approaches for few-shot learning follow the meta-learning framework. The various meta-learning ar- chitectures for few-shot learning can be roughly catego- rized into three groups. Memory-based methods [19, 15, 23, 13, 14] are based on the idea to train a meta-learner with memory to learn novel concepts (e.g. an LSTM- based meta-learner). Optimization-based methods [7, 22] follows the idea of differentializing an optimization pro- cess over support-set within the meta-learning framework: MAML [5] finds an initialization of the neural network that can be adapted to any novel task using a few optimiza- tion steps. MetaOptNet [11] learns the feature represen- tation that can generalize well for a linear support vector machine (SVM) classifier. Besides explicitly considering the dynamic learning process, metric-based methods [29] meta-learn a deep representation with a metric in feature space. For example, Prototypical Networks [24] compute the average feature for each class in support-set and clas- sify query samples by the nearest-centroid method. They use Euclidean distance since it is a Bregman divergence. Relation Networks [26] further generalizes this framework by proposing a relation module as a learnable metric jointly trained with deep representations. TADAM [16] proposes to use a task conditioned metric resulting in a task-dependent metric space. While significant progress is made in the meta-learning framework, some recent works challenge the effectiveness of meta-learning with simple whole-classification, i.e. a classification model on the whole training label-set. Co- sine classifier [6] and Baseline++ [1] perform whole- classification training by replacing the top linear layer with a cosine classifier, and they adapt the classifier to a few- shot classification task of novel classes by performing near- est centroid or fine-tuning a new layer respectively. They show these whole-classification models can achieve com- petitive performance compared to several popular meta- learning models. Another recent work [2] studies on a trans- ductive setting. Along with these baseline methods, more advanced meta-learning methods [25, 11, 33] are proposed and they set up new state-of-the-art results. The effective- ness of whole-classification is then revisited in two of the concurrent works [31, 27] with improved design choices. By far, the effectiveness of meta-learning compared to whole-classification in few-shot learning is still unclear, since the edge between whole-classification models and meta-learning models remains underexplored. The goal of this work is to explore the insights behind the phenomenons. 
Our experiments show a potential trade-off between the Method Matching Networks [29] Prototypical Networks [24] Baseline++ [1] Meta-Baseline (ours) Whole-classification training no / yes (large models) no yes (cosine classifier) yes Meta-learning attention + cosine centroid + Euclidean - centroid + cosine (∗τ ) Table 1: Overview of method comparison. We summarize the differences between Meta-Baseline and prior methods. meta-learning and whole-classification objectives, which provides a more clear understanding of the comparison be- tween both objectives for few-shot learning. support-set S, let Sc denote the few-shot samples in class c, it computes the average embedding wc as the centroid of class c: As a method, similar ideas to Classifier-Baseline are con- currently reported in recent works [31, 27]. Unlike some prior works [6, 1], Classifier-Baseline does not replace the last layer with cosine classifier during training, it trains the whole-classification model with a linear layer on the top and applies cosine nearest-centroid metric during the test time for few-shot classification on novel classes. The Meta-Baseline is meta-learning over a converged Classifier- Baseline on its evaluation metric (cosine nearest-centroid). It is similar (with inconspicuous and important differences as shown in Table 1) to those simple and classical metric- based meta-learning methods [29, 24]. The main purpose of Meta-Baseline in this paper is to understand the comparison between whole-classification and meta-learning objectives, but we find it is also a simple meta-learning baseline that has been overlooked. While every individual component in Meta-Baseline is not novel, to the best of our knowledge, none of the prior works studies them as a whole. # 3. Method We = Ss > fo(x), qd) tESe then for a query sample x in a few-shot task, it predicts the probability that sample x belongs to class c according to the cosine similarity between the embedding of sample x and the centroid of class c: exp ((folx),2)) Ye exp ((fo(x), wer)’ (2) ply =¢| 2) where (-,-) denotes the cosine similarity of two vectors. Similar methods to Classifier-Baseline have also been proposed in concurrent works [31, 27]. Compared to Base- line++ [1], the Classifier-Baseline does not use the cosine classifier for training or perform fine-tuning during testing, while it performs better on standard benchmarks. In this work, we choose Classifier-Baseline as the representative of whole-classification models for few-shot learning. For sim- plicity and clarity, we do not introduce additional complex techniques for this whole-classification training. # 3.1. Problem definition # 3.3. Meta-Baseline In standard few-shot classification, given a labeled dataset of base classes Cbase with a large number of im- ages, the goal is to learn concepts in novel classes Cnovel with a few samples. In an N -way K-shot few-shot clas- sification task, the support-set contains N classes with K samples per class, the query-set contains samples from the same N classes with Q samples per class, and the goal is to classify the N × Q query images into N classes. # 3.2. Classifier-Baseline Classifier-Baseline is a whole-classification model, i.e. It a classification model trained for the whole label-set. refers to training a classifier with classification loss on all base classes and performing few-shot tasks with the cosine nearest-centroid method. 
Specifically, we train a classifier on all base classes with standard cross-entropy loss, then re- move its last FC layer and get the encoder fθ, which maps the input to embedding. Given a few-shot task with the Figure 1 visualizes the Meta-Baseline. The first stage is the classification training stage, it trains a Classifier- training a classifier on all bases classes and Baseline, i.e. remove its last FC layer to get fθ. The second stage is the meta-learning stage, which optimizes the model on the eval- uation metric of Classifier-Baseline. Specifically, given the classification-trained feature encoder fθ, it samples N -way K-shot tasks (with N × Q query samples) from training samples in base classes. To compute the loss for each task, in support-set it computes the centroids of N classes de- fined in Equation 1, which are then used to compute the predicted probability distribution for each sample in query- set defined in Equation 2. The loss is a cross-entropy loss computed from p and the labels of the samples in the query- set. During training, each training batch can contain several tasks and the average loss is computed. Since cosine similarity has the value range of [−1, 1], when it is used to compute the logits, it can be helpful to 3 Classification Training Stage lassifier-Baseline training Meta-Learning Stage F classification IC] on base classes Classifier-Baseline / Meta-Baseline support-set evaluation , a cos}—+[] Oo oO label score Meta-Baseline training loss Figure 1: Classifier-Baseline and Meta-Baseline. Classifier-Baseline is to train a classification model on all base classes and remove its last FC layer to get the encoder fθ. Given a few-shot task, it computes the average feature for samples of each class in support-set, then it classifies a sample in query-set by nearest-centroid with cosine similarity as distance. In Meta- Baseline, it further optimizes a converged Classifier-Baseline on its evaluation metric, and an additional learnable scalar τ is introduced to scale cosine similarity. scale the value before applying Softmax function during training (a common practice in recent work [6, 17, 16]). We multiply the cosine similarity by a learnable scalar 7, and the probability prediction in training becomes: exp (7 - (fo(x), We)) Le exp (7 - (fo(x), we)) . (3) p(y =c|2) In this work, the main purpose of Meta-Baseline is to investigate whether the meta-learning objective is still ef- fective over a whole-classification model. As a method, while every component in Meta-Baseline has been proposed in prior works, we find none of the prior works studies them as a whole. Therefore, Meta-Baseline should also be an im- portant baseline that has been overlooked. It is a subset of ILSVRC-2012, containing 608 classes from 34 super-categories, which are then split into 20, 6, 8 super- categories, resulting in 351, 97, 160 classes as training, val- idation, testing set respectively. The image size is 84 × 84. This setting is more challenging since base classes and novel classes come from different super-categories. In addition to the datasets above, we evaluate our model on ImageNet-800, which is derived from ILSVRC-2012 1K classes by randomly splitting 800 classes as base classes and 200 classes as novel classes. The base classes contain the images from the original training set, the novel classes contain the images from the original validation set. This larger dataset aims at making the training setting standard as the ImageNet 1K classification task [8]. # 4. 
Results on Standard Benchmarks # 4.2. Implementation details # 4.1. Datasets The miniImageNet dataset [29] is a common benchmark for few-shot learning. It contains 100 classes sampled from ILSVRC-2012 [21], which are then randomly split to 64, 16, 20 classes as training, validation, and testing set respec- tively. Each class contains 600 images of size 84 × 84. The tieredImageNet dataset [20] is another common benchmark proposed more recently with much larger scale. We use ResNet-12 that follows the most of recent works [16, 25, 11, 33] on miniImageNet and tieredIma- geNet, and we use ResNet-18, ResNet-50 [8] on ImageNet- 800. For the classification training stage, we use the SGD optimizer with momentum 0.9, the learning rate starts from 0.1 and the decay factor is 0.1. On miniImageNet, we train 100 epochs with batch size 128 on 4 GPUs, the learning rate decays at epoch 90. On tieredImageNet, we train 120 epochs with batch size 512 on 4 GPUs, the learning rate de- 4 Model Backbone 1-shot 5-shot ConvNet-4 Matching Networks [29] Prototypical Networks [24] ConvNet-4 Prototypical Networks (re-implement) ResNet-12 Activation to Parameter [18] LEO [22] Baseline++ [1] SNAIL [13] AdaResNet [15] TADAM [16] MTL [25] MetaOptNet [11] SLA-AG [10] ProtoNets + TRAML [12] ConstellationNet [33] WRN-28-10 WRN-28-10 ResNet-18 ResNet-12 ResNet-12 ResNet-12 ResNet-12 ResNet-12 ResNet-12 ResNet-12 ResNet-12 43.56 ± 0.84 48.70 ± 1.84 53.81 ± 0.23 59.60 ± 0.41 61.76 ± 0.08 51.87 ± 0.77 55.71 ± 0.99 56.88 ± 0.62 58.50 ± 0.30 61.20 ± 1.80 62.64 ± 0.61 62.93 ± 0.63 60.31 ± 0.48 64.89 ± 0.23 55.31 ± 0.73 63.11 ± 0.92 75.68 ± 0.17 73.74 ± 0.19 77.59 ± 0.12 75.68 ± 0.63 68.88 ± 0.92 71.94 ± 0.57 76.70 ± 0.30 75.50 ± 0.80 78.63 ± 0.46 79.63 ± 0.47 77.94 ± 0.57 79.95 ± 0.17 Classifier-Baseline (ours) Meta-Baseline (ours) ResNet-12 ResNet-12 58.91 ± 0.23 63.17 ± 0.23 77.76 ± 0.17 79.26 ± 0.17 Table 2: Comparison to prior works on miniImageNet. Average 5-way accuracy (%) with 95% confidence interval. Model Backbone 1-shot 5-shot MAML [5] ConvNet-4 Prototypical Networks* [24] ConvNet-4 ConvNet-4 Relation Networks* [26] WRN-28-10 LEO [22] ResNet-12 MetaOptNet [11] 51.67 ± 1.81 53.31 ± 0.89 54.48 ± 0.93 66.33 ± 0.05 65.99 ± 0.72 70.30 ± 1.75 72.69 ± 0.74 71.32 ± 0.78 81.44 ± 0.09 81.56 ± 0.53 Classifier-Baseline (ours) Meta-Baseline (ours) ResNet-12 ResNet-12 68.07 ± 0.26 68.62 ± 0.27 83.74 ± 0.18 83.74 ± 0.18 Table 3: Comparison to prior works on tieredImageNet. Average 5-way accuracy (%) with 95% confidence interval. cays at epoch 40 and 80. On ImageNet-800, we train 90 epochs with batch size 256 on 8 GPUs, the learning rate de- cays at epoch 30 and 60. The weight decay is 0.0005 for ResNet-12 and 0.0001 for ResNet-18 or ResNet-50. Stan- dard data augmentation is applied, including random re- sized crop and horizontal flip. For meta-learning stage, we use the SGD optimizer with momentum 0.9. The learning rate is fixed as 0.001. The batch size is 4, i.e. each training batch contains 4 few-shot tasks to compute the average loss. The cosine scaling parameter τ is initialized as 10. We also apply consistent sampling for evaluating the per- formance. For the novel class split in a dataset, the sampling of testing few-shot tasks follows a deterministic order. Con- sistent sampling allows us to get a better model comparison with the same number of sampled tasks. 
In the following sections, when the confidence interval is omitted in the ta- ble, it indicates that a fixed set of 800 testing tasks are sam- pled for estimating the performance. # 4.3. Results Following the standard-setting, we conduct experiments on miniImageNet and tieredImageNet, the results are shown in Table 2 and 3 respectively. To get a fair comparison to prior works, we perform model selection according to the validation set. On both datasets, we observe that the Meta- Baseline achieves competitive performance to state-of-the- art methods. We highlight that many methods for compar- ison introduce more parameters and architecture designs (e.g. self-attention in [33]), while Meta-Baseline has the minimum parameters and the simplest design. We also no- tice that the simple Classifier-Baseline can achieve compet- itive performance when compared to meta-learning meth- ods, especially in 5-shot tasks. We observe that the meta- learning stage consistently improves Classifier-Baseline on 5 Model Backbone 1-shot 5-shot Classifier-Baseline (ours) ResNet-18 ResNet-18 Meta-Baseline (ours) 83.51 ± 0.22 86.39 ± 0.22 94.82 ± 0.10 94.82 ± 0.10 Classifier-Baseline (ours) ResNet-50 ResNet-50 Meta-Baseline (ours) 86.07 ± 0.21 89.70 ± 0.19 96.14 ± 0.08 96.14 ± 0.08 Table 4: Results on ImageNet-800. Average 5-way accuracy (%) is reported with 95% confidence interval. Meta-Baseline, Meta-Learning Stage (ResNet-12) minilmageNet, 1-shot O- -63.5 = 88.0 -63.0 94.2- & ® 86.0 -62.5 93.6 - > 3 i 4 84.0 = 62.0 93.0- 82.0 “615 924- 1 15 30 15 30 epochs epochs minilmageNet, 5-shot + -80.1 -79.8 -79.5 -79.2 -78.9 — base class generalization 8 tieredimageNet, 1-shot tieredimageNet, 5-shot -68.6 94.8- 88.2 - -82.5 -67.9 94.4- 87.6 - -82.0 67.2 94.0- 87.0 - -81.5 86.4 - -66.5 93.6- -81.0 1 15 30 1 15 30 epochs epochs — novel class generalization Figure 2: Objective discrepancy of meta-learning on miniImageNet and tieredImageNet. Each epoch contains 200 training batches. Average 5-way accuracy (%) is reported. ResNet-50, 1-shot ResNet-50, 5-shot S.way acc (%) epochs epochs — base class generalization — novel class generalization Figure 3: Objective discrepancy of meta-learning on ImageNet-800. Each epoch contains 500 training batches. Average 5-way accuracy (%) is reported. miniImageNet. Compared to miniImageNet, we find that the gap between Meta-Baseline and Classifier-Baseline is smaller on tieredImageNet, and the meta-learning stage does not improve 5-shot in this case. We further evaluate our methods on the larger dataset ImageNet-800. In this larger-scale experiment, we find freezing the Batch Normalization layer [9] (set to eval mode) is beneficial. The results are shown in Table 4. From the results, we observe that in this large dataset Meta- Baseline improves Classifier-Baseline in 1-shot, while it is not improving the performance in 5-shot. # 5. Observations and Hypothesis # 5.1. Objective discrepancy in meta-learning Despite the improvements of meta-learning over Classifier-Baseline, we observe the test performance drops during the meta-learning stage. While a common assump- tion for this phenomenon is overfitting, we observe that this issue seems not to be mitigated on larger datasets. To further locate the issue, we propose to evaluate base class general- ization and novel class generalization. Base class general- ization is measured by sampling tasks from unseen images in base classes, while novel class generalization refers to the performance of few-shot tasks sampled from novel classes. 
The base class generalization is the generalization in the in- put distribution for which the model is trained, it decouples the commonly defined generalization and class-level trans- fer performance, which helps for locating the reason for the performance drop. Figure 2 and 3 demonstrate the meta-learning stage of Meta-Baseline on different datasets. We find that during the meta-learning stage, when the base class generalization is increasing, the novel class generalization can be decreasing instead. This fact indicates that over a converged whole- classification model, the meta-learning objective itself, i.e. making the embedding generalize better in few-shot tasks from base classes, can have a negative effect on the perfor- mance of few-shot tasks from novel classes. It also gives a 6 Task Model mini-tiered mini-shuffled full-tiered 1-shot Classifier-Baseline Meta-Baseline ∆ 56.91 58.44 +1.53 61.64 65.88 +4.24 68.76 69.52 +0.76 77.67 80.48 +2.81 5-shot Classifier-Baseline Meta-Baseline ∆ 74.30 74.63 +0.33 79.26 80.58 +1.32 84.07 84.07 +0.00 90.58 90.67 +0.09 Table 5: Effect of dataset properties. Average 5-way accuracy (%), with ResNet-12. Training Base gen. Novel gen. 1-shot w/ ClsTr w/o ClsTr 86.42 86.74 63.33 58.54 5-shot w/ ClsTr w/o ClsTr 93.54 94.47 80.02 74.95 Method 1-shot 5-shot Classifier-Baseline Classifier-Baseline (Euc.) 60.58 56.29 79.24 78.93 Meta-Baseline Meta-Baseline (Euc.) 63.33 60.19 80.02 79.50 Table 6: Comparison on Meta-Baseline training from scratch. Average 5-way accuracy (%), with ResNet-12 on miniImageNet. ClsTr: classification training stage. Table 7: Importance of inheriting a good metric. Aver- age 5-way accuracy (%), with ResNet-12 on miniImageNet. possible explanation for why such phenomenon is not mit- igated on larger datasets, as this is not sample-level over- fitting, but class-level overfitting, which is caused by the objective discrepancy that the underlying training class dis- tribution is different from testing class distribution. This observation suggests that we may reconsider the motivation of the meta-learning framework for few-shot In some settings, optimizing towards the train- learning. ing objective with a consistent form as the testing objective (except the inevitable class difference) may have an even negative effect. It is also likely that the whole-classification learns the embedding with stronger class transferability, and meta-learning makes the model perform better at N -way K-shot tasks but tends to lose the class transferability. co-training the meta-learning objective with a whole- classification task is beneficial, which may be potentially related to our hypothesis. While our results show it is likely that the key effect of the whole-classification objective is improving the class transferability, it also indicates a po- tential trade-off that the whole-classification objective can have a negative effect on base class generalization. # 5.3. What makes Meta-Baseline a strong baseline? As a method with a similar objective as ProtoNet [24], Meta-Baseline achieves nearly 10% higher accuracy on 1- shot in Table 2. The observations and hypothesis in previ- ous sections potentially explain its strength, as it starts with the embedding of a whole-classification model which has stronger class transferability. # 5.2. 
Effect of whole-classification training before meta-learning According to our hypothesis, the whole-classification pre-trained model has provided extra class transferabil- ity for the meta-learning model, therefore, it is natural to compare Meta-Baseline with and without the classification training stage. The results are shown in Table 6. We observe that Meta-Baseline trained without classification training stage can actually achieve higher base class generalization, but its novel class generalization is much lower when com- pared to Meta-Baseline with whole-classification training. These results support our hypothesis, that the whole- classification training provides the embedding with stronger class transferability, which significantly helps novel class Interestingly, TADAM [16] finds that generalization. We perform further experiments, that in Meta-Baseline (with classification training stage) we replace the cosine distance with the squared Euclidean distance proposed in ProtoNet [24]. To get a fair comparison, we also include the learnable scalar τ with a proper initialization value 0.1. The results are shown in Table 7. While ProtoNet [24] finds that squared Euclidean distance (as a Bregman di- vergence) works better than cosine distance when perform- ing meta-learning from scratch, here we start meta-learning from Classifier-Baseline and we observe that cosine sim- ilarity works much better. A potential reason is that, as shown in Table 7, cosine nearest-centroid works much bet- ter than nearest-centroid with squared Euclidean distance in Classifier-Baseline (note that this is just the evaluation met- ric and has no changes in training). Inheriting a good metric 7 for Classifier-Baseline might be the key that makes Meta- Baseline strong. According to our hypothesis, the embed- ding from the whole-classification model has strong class transferability, inheriting a good metric potentially mini- mizes the future modifications on the embedding from the whole-classification model, thus it can keep the class trans- ferability better and achieve higher performance. # 5.4. Effect of dataset properties We construct four variants from the tieredImageNet dataset. Specifically, full-tiered refers to the original tiered- ImageNet, full-shuffled is constructed by randomly shuf- fling the classes in tieredImageNet and re-splitting the classes into training, validation, and test set. The mini- tiered and mini-shuffled datasets are constructed from full- tiered and full-shuffled respectively, their training set is con- structed by randomly selecting 64 classes with 600 images from each class in the full training set, while the validation set and the test set remain unchanged. Since tieredImageNet separates training classes and testing classes into different super categories, shuffling these classes will mix the classes in different super categories together and make the distribu- tion of base classes and novel classes closer. Our previous experiments show that base class general- ization is always improving, if novel classes are covered by the distribution of base classes, the novel class gener- alization should also keep increasing. From Table 5, we can see that from mini-tiered to mini-shuffled, and from full-tiered to full-shuffled, the improvement achieved by the meta-learning stage gets significantly larger, which consis- tently supports our hypothesis. 
Therefore, our results in- dicate it is likely that meta-learning is mostly effective over whole-classification training when novel classes are similar to base classes. We also observe that other factors may affect the im- provement of meta-learning. From mini-tiered to full-tiered and from mini-shuffled to full-shuffled, when the dataset gets larger the improvements become less. A potential hy- pothesis could be that the class transferability advantage of whole-classification training becomes more obvious when trained on large datasets. From the results of our experi- ments in Table 2, 3, 4, 5, we observe that the improvement of the meta-learning stage in 5-shot is less than 1-shot. We hypothesize this is because when there are more shots, tak- ing average embedding becomes a more reasonable choice to estimate the class center in Classifier-Baseline, therefore the advantage of meta-learning becomes less. # 5.5. The trade-off between meta-learning and whole-classification All of the experiments in previous sections support a key hypothesis, that there exists a trade-off: the meta-learning objective learns better embedding for N -way K-shot tasks 8 (in the same distribution), while the whole-classification ob- jective learns embedding with stronger class transferabil- ity. Optimizing towards one objective may hurt the strength of another objective. With this hypothesis, Meta-Baseline balances this trade-off by choosing to calibrate the whole- classification embedding with meta-learning and inherit the metric with high initial performance. The discrepancy between base class generalization and novel class generalization also considers the effectiveness of meta-learning and whole-classification from the perspec- tive of datasets. Specifically, when novel classes are similar enough to base classes or the base classes are sufficient to cover the distribution of novel classes, novel class gener- alization should converge to base class generalization. In practice, this can be potentially achieved by collecting base classes that are similar to the target novel classes. In this case, it may be possible that the novel meta-learning algo- rithms outperform the whole-classification baselines again. # 6. Additional Results on Meta-Dataset Meta-Dataset [28] is a new benchmark proposed for few- shot learning, it consists of diverse datasets for training and evaluation. They also propose to generate few-shot tasks with a variable number of ways and shots, for having a setting closer to the real world. We follow the setting in Meta-Dataset [28] and use ResNet-18 as the backbone, with the original image size of 126×126, which is resized to be 128×128 before feeding into the network. For the classifi- cation training stage, we apply the training setting similar to our setting in ImageNet-800. For the meta-learning stage, the model is trained for 5000 iterations with one task in each iteration. The left side of Table 8 demonstrates the models trained with samples in ILSVRC-2012 only. We observe that the Meta-Baseline does not significantly improve Classifier- Baseline under this setting in our experiments, possibly due to the average number of shots are high. The right side of Table 8 shows the results when the models are trained on all datasets, except Traffic Signs and MSCOCO which have no training samples. The Classifier- Baseline is trained as a multi-dataset classifier, i.e. an en- coder together with multiple FC layers over the encoded feature to output the logits for different datasets. 
The clas- sification training stage has the same number of iterations as training on ILSVRC only, to mimic the ILSVRC train- ing, a batch has 0.5 probability to be from ILSVRC and 0.5 probability to be uniformly sampled from one of the other datasets. For Classifier-Baseline, comparing to the results on the left side of Table 8, we observe that while the per- formance on ILSVRC is worse, the performances on other datasets are mostly improved due to having their samples in training. It can be also noticed that the cases where Meta- Baseline improves Classifier-Baseline are mostly on the Dataset Trained on ILSVRC fo-Proto-MAML Classifier/Meta [28] (ours) Trained on all datasets fo-Proto-MAML Classifier Meta (ours) [28] (ours) ILSVRC Omniglot Aircraft Birds Textures Quick Draw Fungi VGG Flower Traffic Signs MSCOCO 49.5 63.4 56.0 68.7 66.5 51.5 40.0 87.2 48.8 43.7 59.2 69.1 54.1 77.3 76.0 57.3 45.4 89.6 66.2 55.7 46.5 82.7 75.2 69.9 68.3 66.8 42.0 88.7 52.4 41.7 55.0 76.9 69.8 78.3 71.4 62.7 55.4 90.6 69.3 53.1 48.0 89.4 81.7 77.3 64.5 74.5 60.2 83.8 59.5 43.6 Table 8: Additional results on Meta-Dataset. Average accuracy (%), with variable number of ways and shots. The fo-Proto- MAML method is from Meta-Dataset [28], Classifier and Meta refers to Classifier-Baseline and Meta-Baseline respectively, 1000 tasks are sampled for evaluating Classifier or Meta. Note that Traffic Signs and MSCOCO have no training set. datasets which are “less relevant” to ILSVRC (the dataset “relevance” could be shown in Dvornik et al. [3]). A po- tential reason is that the multi-dataset classification train- ing stage samples ILSVRC with 0.5 probability, similar to ILSVRC training, the meta-learning stage is hard to improve on ILSVRC, therefore those datasets relevant to ILSVRC will have similar properties so that it is hard to improve on them. # 7. Conclusion and Discussion In this work, we presented a simple Meta-Baseline that has been overlooked for few-shot learning. Without any ad- ditional parameters or complex design choices, it is compet- itive to state-of-the-art methods on standard benchmarks. classification. From the perspective of datasets, we demonstrate how the preference between meta-learning and whole-classification changes according to class similarity and other factors, indicating that these factors may need more attention for model comparisons in future work. Acknowledgements. This work was supported, in part, by grants from DARPA LwLL, NSF 1730158 CI-New: Cognitive Hardware and Software Ecosystem Community Infrastructure (CHASE-CI), NSF ACI-1541349 CC*DNI Pacific Research Platform, and gifts from Qualcomm, TuSim- ple and Picsart. Prof. Darrell was supported, in part, by DoD including DARPA’s XAI, LwLL, and/or SemaFor programs, as well as BAIR’s in- dustrial alliance programs. We thank Hang Gao for the helpful discussions. # References Our experiments indicate that there might be an objec- tive discrepancy in the meta-learning framework for few- shot learning, i.e. a meta-learning model generalizing better on unseen tasks from base classes might have worse perfor- mance on tasks from novel classes. This provides a possible explanation that why some complex meta-learning methods could not get significantly better performance than simple whole-classification. While most recent works focus on im- proving the meta-learning structures, many of them did not explicitly address the issue of class transferability. 
Our ob- servations suggest that the objective discrepancy might be a potential key challenge to tackle. While many novel meta-learning algorithms are pro- posed and some recent works report that simple whole- classification training is good enough for few-shot learn- ing, we show that meta-learning is still effective over whole-classification models. We observe a potential trade- off between the objectives of meta-learning and whole- [1] Wei-Yu Chen, Yen-Cheng Liu, Zsolt Kira, Yu-Chiang Frank Wang, and Jia-Bin Huang. A closer look at few-shot classi- fication. In International Conference on Learning Represen- tations, 2019. 1, 2, 3, 5, 11, 12 [2] Guneet S Dhillon, Pratik Chaudhari, Avinash Ravichandran, and Stefano Soatto. A baseline for few-shot image classifi- cation. arXiv preprint arXiv:1909.02729, 2019. 2 [3] Nikita Dvornik, Cordelia Schmid, and Julien Mairal. Select- ing relevant features from a universal representation for few- shot classification. arXiv preprint arXiv:2003.09338, 2020. 9 [4] Li Fei-Fei, Rob Fergus, and Pietro Perona. One-shot learning of object categories. IEEE transactions on pattern analysis and machine intelligence, 28(4):594–611, 2006. 1 [5] Chelsea Finn, Pieter Abbeel, and Sergey Levine. Model- agnostic meta-learning for fast adaptation of deep networks. In Proceedings of the 34th International Conference on Ma- chine Learning-Volume 70, pages 1126–1135. JMLR. org, 2017. 1, 2, 5 9 [6] Spyros Gidaris and Nikos Komodakis. Dynamic few-shot In Proceedings of the visual learning without forgetting. IEEE Conference on Computer Vision and Pattern Recog- nition, pages 4367–4375, 2018. 1, 2, 3, 4, 11 [7] Erin Grant, Chelsea Finn, Sergey Levine, Trevor Darrell, and Thomas Griffiths. Recasting gradient-based meta-learning as hierarchical bayes. arXiv preprint arXiv:1801.08930, 2018. 2 [8] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceed- ings of the IEEE conference on computer vision and pattern recognition, pages 770–778, 2016. 4 [9] Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal co- variate shift. arXiv preprint arXiv:1502.03167, 2015. 6, 11 [10] Hankook Lee, Sung Ju Hwang, and Jinwoo Shin. Self- supervised label augmentation via input transformations. In International Conference on Machine Learning, pages 5714–5724. PMLR, 2020. 5 [11] Kwonjoon Lee, Subhransu Maji, Avinash Ravichandran, and Stefano Soatto. Meta-learning with differentiable convex op- timization. In Proceedings of the IEEE Conference on Com- puter Vision and Pattern Recognition, pages 10657–10665, 2019. 1, 2, 4, 5, 11 [12] Aoxue Li, Weiran Huang, Xu Lan, Jiashi Feng, Zhenguo Li, and Liwei Wang. Boosting few-shot learning with adaptive In Proceedings of the IEEE/CVF Conference margin loss. on Computer Vision and Pattern Recognition, pages 12576– 12584, 2020. 5 [13] Nikhil Mishra, Mostafa Rohaninejad, Xi Chen, and Pieter In Inter- Abbeel. A simple neural attentive meta-learner. national Conference on Learning Representations, 2018. 2, 5 [14] Tsendsuren Munkhdalai and Hong Yu. Meta networks. In Proceedings of the 34th International Conference on Ma- chine Learning-Volume 70, pages 2554–2563. JMLR. org, 2017. 2 [15] Tsendsuren Munkhdalai, Xingdi Yuan, Soroush Mehri, and Adam Trischler. Rapid adaptation with conditionally shifted neurons. arXiv preprint arXiv:1712.09926, 2017. 2, 5 [16] Boris Oreshkin, Pau Rodr´ıguez L´opez, and Alexandre La- coste. 
Tadam: Task dependent adaptive metric for improved few-shot learning. In Advances in Neural Information Pro- cessing Systems, pages 721–731, 2018. 2, 4, 5, 7, 11 [17] Hang Qi, Matthew Brown, and David G Lowe. Low-shot learning with imprinted weights. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5822–5830, 2018. 4 [18] Siyuan Qiao, Chenxi Liu, Wei Shen, and Alan L Yuille. Few- shot image recognition by predicting parameters from activa- tions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 7229–7238, 2018. 5 [19] Sachin Ravi and Hugo Larochelle. Optimization as a model In In International Conference on for few-shot learning. Learning Representations (ICLR), 2017. 2 [20] Mengye Ren, Eleni Triantafillou, Sachin Ravi, Jake Snell, Kevin Swersky, Joshua B Tenenbaum, Hugo Larochelle, and 10 Richard S Zemel. Meta-learning for semi-supervised few- shot classification. arXiv preprint arXiv:1803.00676, 2018. 4 [21] Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, San- jeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Imagenet large Aditya Khosla, Michael Bernstein, et al. scale visual recognition challenge. International journal of computer vision, 115(3):211–252, 2015. 4 [22] Andrei A. Rusu, Dushyant Rao, Jakub Sygnowski, Oriol Vinyals, Razvan Pascanu, Simon Osindero, and Raia Had- sell. Meta-learning with latent embedding optimization. In International Conference on Learning Representations, 2019. 1, 2, 5 [23] Adam Santoro, Sergey Bartunov, Matthew Botvinick, Daan Wierstra, and Timothy Lillicrap. Meta-learning with memory-augmented neural networks. In International con- ference on machine learning, pages 1842–1850, 2016. 2 [24] Jake Snell, Kevin Swersky, and Richard Zemel. Prototypical networks for few-shot learning. In Advances in Neural Infor- mation Processing Systems, pages 4077–4087, 2017. 2, 3, 5, 7 [25] Qianru Sun, Yaoyao Liu, Tat-Seng Chua, and Bernt Schiele. In Proceed- Meta-transfer learning for few-shot learning. ings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 403–412, 2019. 1, 2, 4, 5 [26] Flood Sung, Yongxin Yang, Li Zhang, Tao Xiang, Philip HS Torr, and Timothy M Hospedales. Learning to compare: Re- lation network for few-shot learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recogni- tion, pages 1199–1208, 2018. 1, 2, 5 [27] Yonglong Tian, Yue Wang, Dilip Krishnan, Joshua B Tenen- baum, and Phillip Isola. Rethinking few-shot image classi- fication: a good embedding is all you need? arXiv preprint arXiv:2003.11539, 2020. 1, 2, 3, 11 [28] Eleni Triantafillou, Tyler Zhu, Vincent Dumoulin, Pascal Lamblin, Utku Evci, Kelvin Xu, Ross Goroshin, Carles Gelada, Kevin Swersky, Pierre-Antoine Manzagol, et al. Meta-dataset: A dataset of datasets for learning to learn from few examples. arXiv preprint arXiv:1903.03096, 2019. 8, 9 [29] Oriol Vinyals, Charles Blundell, Timothy Lillicrap, Daan Wierstra, et al. Matching networks for one shot learning. In Advances in neural information processing systems, pages 3630–3638, 2016. 1, 2, 3, 4, 5 [30] Xin Wang, Fisher Yu, Ruth Wang, Trevor Darrell, and Joseph E Gonzalez. Tafe-net: Task-aware feature embed- dings for low shot learning. In Proceedings of the IEEE Con- ference on Computer Vision and Pattern Recognition, pages 1831–1840, 2019. 1 [31] Yan Wang, Wei-Lun Chao, Kilian Q Weinberger, and Lau- rens van der Maaten. Simpleshot: Revisiting nearest- neighbor classification for few-shot learning. 
arXiv preprint arXiv:1911.04623, 2019. 1, 2, 3 [32] Bing Xu, Naiyan Wang, Tianqi Chen, and Mu Li. Empirical evaluation of rectified activations in convolutional network. arXiv preprint arXiv:1505.00853, 2015. 11 [33] Weijian Xu, yifan xu, Huaijin Wang, and Zhuowen Tu. Con- stellation nets for few-shot learning. In International Con- ference on Learning Representations, 2021. 1, 2, 4, 5, 11 Meta-Baseline, Training from Scratch minilmageNet, 1-shot minilmageNet, 5-shot ey 6 © 3 2 6 a 3 x ° S-way acc (%) S-way acc (%) » 6 ua 638 1 40 80 epochs 120. 160 1 40 80 epochs 120 160 — base class generalization — novel class generalization Figure 4: Training Meta-Baseline without classification- training stage on miniImageNet. Dataset Classifier 1-shot 5-shot miniImageNet Linear Cosine 60.58 61.93 79.24 78.73 tieredImageNet Linear Cosine 68.76 67.58 84.07 83.31 Table 9: Comparison to classifier trained with cosine met- ric, Average 5-way accuracy (%), with ResNet-12. # A. Details of ResNet-12 The ResNet-12 backbone consists of 4 residual blocks that each residual block has 3 convolutional layers. Each convolutional layer has a 3 × 3 kernel, followed by Batch Normalization [9] and Leaky ReLU [32] with 0.1 slope. The channels of convolutional layers in each residual block are 64, 128, 256, 512 respectively, a 2×2 max-pooling layer is applied after each residual block. Finally, a 5 × 5 global average pooling is applied to get a 512-dimensional feature vector. This architecture is consistent with recent works [16, 33]. Some other recent works also introduce additional parame- ters and design choices in the backbone (e.g. DropBlock and wider channels of 64, 160, 320, 640 in [11, 27]), while these modifications may make the performance higher, we do not include them here for simplicity. # B. Training plot of Meta-Baseline without clas- sification training stage We show the process of training Meta-Baseline from scratch (i.e. without the classification-training stage) on miniImageNet in Figure 4. We observe that when the learn- ing rate decays, the novel class generalization quickly starts to be decreasing. While it is able to achieve higher base class generalization than Meta-Baseline with classification training, its highest novel class generalization is still much 11 worse, suggesting whole-classification training may pro- vide representations with extra class transferability. # C. Comparison to cosine classification training We compare the effect of classification training with replacing the last linear-classifier with cosine nearest- neighbor metric which is proposed in prior work [6, 1], the results are shown in Table 9, where Cosine denotes clas- sification training with cosine metric and Linear denotes the standard classification training. On miniImageNet, we observe that Cosine outperforms Linear in 1-shot, but has worse performance in 5-shot. On tieredImageNet, we ob- serve Linear outperforms Cosine in both 1-shot and 5-shot. We choose to use the linear layer as it is more common and we find it works better in more cases. # D. Objective discrepancy on ImageNet-800 Besides miniImageNet and tieredImageNet, in our large- scale dataset ImageNet-800, we also observe the novel class generalization decreasing when base class generalization is increasing, the training process is demonstrated in Fig- ure 5. 
From the figure, we see that for both backbones of ResNet-18 and ResNet-50, the base class generalization performance is increasing during the training epochs, while the novel class generalization performance quickly starts to be decreasing. These observations are consistent with our observations on miniImageNet and tieredImageNet, which further support our hypothesis. # E. Comparison of the Classifier-Baseline and Baseline++ [1] We connect the Classifier-Baseline to Baseline++ [1] with a step-by-step ablation study on miniImageNet, the results are shown in Table 10. We see that fine-tuning is outperformed by the simple nearest-centroid method with cosine metric, and using a standard ImageNet-like opti- mizer significantly improves the performance of the whole- classification method for few-shot learning. Meta-Baseline, Meta-Learning Stage (ImageNet-800) ResNet-18, 1-shot ResNet-18, 5-shot ResNet-50, 1-shot ResNet-50, 5-shot 97.5- 98.5 - -96.0 94.2 - 89.6 27 -86.4 97.2- 96.0 - 98.4- 95.7 g -94.0 - 88.9 8918 -86.0 96.9- 95.6 - 98.3 - -95.4 2 4 90.9 - -85.6 96.6 - 7938 952. 788.2 98.2 - -95.1 90.0 - -85.2 96.3- -93.6 94.8- 87.5 98.1- -94.8 1 30 60 90 1 30 60 90 1 30 60 90 1 30 60 90 epochs epochs epochs epochs — base class generalization |©—— novel class generalization Figure 5: Objective discrepancy of meta-learning on ImageNet-800. Each epoch contains 500 training batches. Average 5-way accuracy (%) is reported. difference 51.75 ± 0.80 50.84 ± 0.80 52.15 ± 0.83 53.37 ± 0.71 56.06 ± 0.71 50.49 ± 0.71 53.59 ± 0.72 59.19 ± 0.71 # 5-way 1-shot accuracy (%) Table 10: Comparison of the Classifier-Baseline and Baseline++ [1]. 12
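Appendix A above specifies the ResNet-12 backbone only in words; the sketch below is one plausible PyTorch reading of that description. The 1×1 convolution on the residual shortcut is an assumption (the text does not say how the skip connection handles the change in channel count), so this should be read as an approximation rather than the exact architecture used in the experiments.

```python
import torch.nn as nn

def conv_bn_lrelu(c_in, c_out):
    """3x3 convolution followed by BatchNorm and LeakyReLU(0.1)."""
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, kernel_size=3, padding=1, bias=False),
        nn.BatchNorm2d(c_out),
        nn.LeakyReLU(0.1),
    )

class ResBlock(nn.Module):
    """Residual block with three 3x3 conv layers and a 2x2 max-pool."""
    def __init__(self, c_in, c_out):
        super().__init__()
        self.convs = nn.Sequential(
            conv_bn_lrelu(c_in, c_out),
            conv_bn_lrelu(c_out, c_out),
            conv_bn_lrelu(c_out, c_out),
        )
        # assumption: 1x1 conv + BN on the shortcut to match channel counts
        self.shortcut = nn.Sequential(
            nn.Conv2d(c_in, c_out, kernel_size=1, bias=False),
            nn.BatchNorm2d(c_out),
        )
        self.pool = nn.MaxPool2d(2)

    def forward(self, x):
        return self.pool(self.convs(x) + self.shortcut(x))

class ResNet12(nn.Module):
    """Four residual blocks with 64/128/256/512 channels, then global
    average pooling to a 512-dimensional feature vector."""
    def __init__(self, channels=(64, 128, 256, 512)):
        super().__init__()
        blocks, c_in = [], 3
        for c_out in channels:
            blocks.append(ResBlock(c_in, c_out))
            c_in = c_out
        self.blocks = nn.Sequential(*blocks)
        self.gap = nn.AdaptiveAvgPool2d(1)

    def forward(self, x):              # x: (B, 3, 84, 84)
        x = self.blocks(x)             # (B, 512, 5, 5) for 84x84 inputs
        return self.gap(x).flatten(1)  # (B, 512)
```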
{ "id": "1505.00853" }
2004.03497
ProGen: Language Modeling for Protein Generation
Generative modeling for protein engineering is key to solving fundamental problems in synthetic biology, medicine, and material science. We pose protein engineering as an unsupervised sequence generation problem in order to leverage the exponentially growing set of proteins that lack costly structural annotations. We train a 1.2B-parameter language model, ProGen, on ~280M protein sequences conditioned on taxonomic and keyword tags such as molecular function and cellular component. This provides ProGen with an unprecedented range of evolutionary sequence diversity and allows it to generate with fine-grained control, as demonstrated by metrics based on primary sequence similarity, secondary structure accuracy, and conformational energy.
http://arxiv.org/pdf/2004.03497
Ali Madani, Bryan McCann, Nikhil Naik, Nitish Shirish Keskar, Namrata Anand, Raphael R. Eguchi, Po-Ssu Huang, Richard Socher
q-bio.BM, cs.LG, stat.ML
null
null
q-bio.BM
20200308
20200308
0 2 0 2 r a M 8 ] M B . o i b - q [ 1 v 7 9 4 3 0 . 4 0 0 2 : v i X r a # ProGen: Language Modeling for Protein Generation # Ali Madani 1 Bryan McCann 1 Nikhil Naik 1 Nitish Shirish Keskar 1 Namrata Anand 2 Raphael R. Eguchi 2 Po-Ssu Huang 2 Richard Socher 1 Abstract Generative modeling for protein engineering is key to solving fundamental problems in synthetic biology, medicine, and material science. We pose protein engineering as an unsupervised sequence generation problem in order to leverage the expo- nentially growing set of proteins that lack costly, structural annotations. We train a 1.2B-parameter language model, ProGen, on ∼280M protein se- quences conditioned on taxonomic and keyword tags such as molecular function and cellular com- ponent. This provides ProGen with an unprece- dented range of evolutionary sequence diversity and allows it to generate with fine-grained control as demonstrated by metrics based on primary se- quence similarity, secondary structure accuracy, and conformational energy. sequence data is growing at a near exponential rate. Recent research (Alley et al., 2019; Rives et al., 2019; Rao et al., 2019) has begun to capitalize on the much larger set of raw protein sequences by adapting state-of-the-art rep- resentation learning techniques (Devlin et al., 2018) from natural language processing (NLP) to classification of pro- tein properties. However, there has been no attempt to adapt state-of-the-art methods for artificial text generation (Rad- ford et al., 2019), and in particular the kind of controllable generation (Keskar et al., 2019) that would be most useful for protein engineering. We introduce ProGen for controllable protein generation. ProGen is a 1.2 billion parameter conditional language model trained on a dataset of 280 million protein sequences together with conditioning tags that encode a variety of annotation such as taxonomic, functional, and locational in- formation. By conditioning on these tags, ProGen provides a new method for protein generation that can be tailored for desired properties (Figure 1). # 1. Introduction Generating proteins with desired properties is one of the most complex yet impactful problems in biology. Protein engineering research has grown over the past 50 years and yielded remarkable outcomes including the development of new enzymes, therapies, and sensors. However, leading experimental techniques for protein engineering such as directed evolution (Arnold, 1998) still rely on heuristics and random mutations to select initial sequences for rounds of evolution. The raw amino acid sequence encodes a protein, and during synthesis, this chain of amino acids folds in ways that exhibit local (secondary) and global (tertiary) structure. These struc- tural properties then directly determine a unique function, which is of ultimate interest to protein engineers. Unfortu- nately, obtaining three-dimensional structural information for proteins is expensive and time consuming. Consequently, there are three orders of magnitude more raw sequences than there are sequences with structural annotations, and protein 1Salesforce Research, Palo Alto, CA, USA 2Department of Bioengineering, Stanford University, Stanford, CA, USA. Correspondence to: Ali Madani <[email protected]>. According to NLP metrics, ProGen is a powerful language model, achieving comparable performance to similarly- sized models for English. 
This performance improves in settings with larger amino acid contexts and when ProGen is provided a larger number of conditioning tags, which highlights its potential for applications in providing viable, starting sequences for directed evolution or de novo protein design (Huang et al., 2016). ProGen also performs well when used to model unseen protein families, but it is even more effective when fine-tuned for those unseen families as an alternative to training from random initialization. These results inspire the use of ProGen to generate candidate sequences in challenging, low-homology applications.

Proteins generated by ProGen satisfy desired structural, and by extension functional, properties when evaluated with metrics for sequence similarity, secondary structure accuracy, and conformational energy, i.e. from lower-level structure to higher-level structure. Generation performance is judged higher quality by higher-level metrics, which suggests that ProGen has learned invariances to mutations at the sequence level that conserve structure and inferred function. At the highest level, conformational energy analysis reveals that generated proteins exhibit energy levels near that of native proteins, providing our strongest evidence that these proteins satisfy the desired structure and inferred function.

Figure 1. a) Protein sequence data is growing exponentially as compared to structural data. b) We utilize protein sequence data along with taxonomic and keyword tags to develop a conditional language model: ProGen. (Diagram: data availability of ~280M protein sequences vs. protein structures; desired arguments such as organism (e.g. Homo sapiens), function (e.g. actin-binding protein), location (e.g. cytoplasm), process (e.g. cardiac disease), and amino acid context are fed to ProGen for controlled sequence generation, with structure and function as the inferred result.)

In our first case study, we examine completion of a VEGFR2 protein, which is held out in training. ProGen generates candidate completions that conserve the structural elements most important for determining function and exhibit conformational energy near to native levels across a variety of generation lengths. In our second case study, we observe that ProGen can select high-fitness antibody-binding GB1 proteins without supervised training or unsupervised fine-tuning, indicating that ProGen has learned the underlying distribution of functional proteins.

# 2. Related Work

Protein representation learning. Recent methods for contextualized representations (McCann et al., 2017; Peters et al., 2018; Devlin et al., 2018) in natural language processing have been demonstrated to work well for contextual protein representation learning.
Structural information about a protein can be extracted from such representations using linear methods, and the representations themselves can be adapted to improve performance on other tasks (Rives et al., 2019). Similarly, UniRep (Alley et al., 2019) demonstrated that such representations could be used to predict stability of natural and de novo designed proteins as well as quantitative function of molecularly diverse mutants. TAPE (Rao et al., 2019) is a new benchmark consisting of five tasks for assessing such protein embeddings. While this body of prior work focuses on transferable representation learning using bidirectional models, our work demonstrates controllable protein engineering with generative, unidirectional models.

Language Models and Controllable Generation. Large Transformer architectures (Vaswani et al., 2017) like GPT-2 (Radford et al., 2019) represent the state-of-the-art in unconditional language modeling and demonstrate impressive text generation capabilities (Zellers et al., 2019) after training on vast amounts of unsupervised English text. CTRL (Keskar et al., 2019) trained a similarly large Transformer architecture for language generation by conditioning on properties of the text easily extracted at scale, e.g. domain, style, and even associated URL. We adapt this perspective to protein engineering by training a conditional transformer language model on amino acid sequences conditioned on a set of protein properties referred to as conditioning tags. Notably different from Keskar et al. (2019), protein engineering requires a finer-grained, much larger, and more complex set of conditioning tags. Additionally, a single protein can be paired with dozens of conditioning tags.

Generative models for protein engineering. Recent generative modeling work such as Ingraham et al. (2019) extends the transformer to condition it on a graph-structured specification of a desired target. Anand & Huang (2018) utilizes generative adversarial networks to produce 2D pairwise distance maps for given protein structural fragments, essentially in-painting missing residues. The aforementioned work, along with O'Connell et al. (2018), Boomsma & Frellsen (2017), and Greener et al. (2018), all utilize explicit structural information for generative modeling and are thereby unable to fully capture the number and diversity of sequence-only data available. Meanwhile, sequence-only generative modeling has been attempted recently through residual causal dilated convolutional neural networks (Riesselman et al., 2019) and variational autoencoders (Costello & Martin, 2019). Unlike these prior works, our work on generative modeling focuses on high-capacity language models that scale well with sequence data and can be used for controllable generation.

# 3. Methods

Let a = (a_1, ..., a_{n_a}) be a sequence of amino acids that constitutes a protein. In the context of protein engineering, there is typically also a set of desired protein properties such as function or affiliation with a particular organism. Following recent work on controllable, conditional language modeling (Keskar et al., 2019), we refer to these properties generally as 'conditioning tags' through which we would like to control generation of amino acid sequences. Let c = (c_1, ..., c_{n_c}) be a sequence of such conditioning tags, and let x = [c; a] be the sequence formed by prepending a conditioning tag sequence to an amino acid sequence. p(x) is then the probability over such combined sequences of length n = n_a + n_c. We can factorize this distribution using the chain rule of probability (Bengio et al., 2003):

p(x) = ∏_{i=1}^{n} p(x_i | x_{<i})

This decomposes the problem of conditional protein generation into next-token prediction, where a token x_i can either be an amino acid or a conditioning tag. A neural network with parameters θ can then be trained to minimize the negative log-likelihood over a dataset D = {x^1, ..., x^{|D|}}:

L(D) = − ∑_{k=1}^{|D|} ∑_i log p_θ(x_i^k | x_{<i}^k)

Note that p(a|c), the distribution over proteins conditioned on their corresponding conditioning tags, is just one of the many conditional distributions that can be recovered from a model that learns p(x). A new protein ã of length m_a with desired properties encoded by a conditioning tag sequence c̃ of length m_c can then be generated by sequentially sampling its constituent symbols: p_θ(a_0 | c̃), p_θ(a_1 | ã_0, c̃), ..., p_θ(a_p | ã_{<p}, c̃).

We train a variant of the Transformer (Vaswani et al., 2017) to learn these conditional distributions over amino acids and conditioning tags. A sequence containing n tokens is embedded as a sequence of n corresponding vectors in R^d. Each vector is the sum of a learned token embedding and a sinusoidal positional embedding as in the original Transformer architecture. This sequence of vectors is stacked into a matrix X_0 ∈ R^{n×d} so that it can be processed by l attention layers. The i-th layer consists of two blocks, each of which preserves the model dimension d.

The core of the first block is multi-head attention with k heads that uses a causal mask to preclude attending to future tokens:

Attention(X, Y, Z) = softmax(mask(X Y^T) / √d) Z
MultiHead(X, k) = [h_1; ...; h_k] W_o, where h_j = Attention(X W_j^1, X W_j^2, X W_j^3)

The core of the second block is a feedforward network with ReLU activation that projects inputs to an inner dimension f, with parameters U ∈ R^{d×f} and V ∈ R^{f×d}:

FF(X) = max(0, X U) V

Each block precedes core functionality with layer normalization (Ba et al., 2016; Child et al., 2019) and follows it with a residual connection (He et al., 2016). Together, they yield X_{i+1}:

Block 1: X̄_i = LayerNorm(X_i), H_i = MultiHead(X̄_i) + X̄_i
Block 2: H̄_i = LayerNorm(H_i), X_{i+1} = FF(H̄_i) + H̄_i

Scores are then computed from the output of the last layer:

Scores(X_0) = LayerNorm(X_l) W_vocab

During training, these scores are the inputs of a cross-entropy loss function. During generation, the scores corresponding to the final token are normalized with a softmax, yielding a distribution for sampling a new token.

# 3.1. Data

We utilize all protein sequences and associated tags available in Uniparc (Leinonen et al., 2004), UniprotKB (Bairoch et al., 2005), SWISS-PROT (Bairoch et al., 2004), TrEMBL (Boeckmann et al., 2003), Pfam (Bateman et al., 2004), and NCBI taxonomic information (Federhen, 2012). The aggregated dataset contains over 281M proteins, the most comprehensive, non-redundant, annotated database of proteins used to train a machine learning model. For the amino acid vocabulary, we use the standard 25 amino acid designations in IUPAC (Pettit & Powell, 2006). The conditioning tags are divided into 2 categories: (1) keyword tags and (2) taxonomic tags. Following the definitions laid out in the UniprotKB controlled, hierarchical vocabulary of keywords (many of which are derived from Gene Ontology (GO) terms) (Ashburner et al., 2000), the conditioning keyword tags included 1100 terms ranging from cellular component, biological process, and molecular function terms. The taxonomic tags include 100k terms from the NCBI taxonomy across the eight standard taxonomic ranks.

The aggregated dataset was split into a training set of size 280M, a held-out protein family¹ test set (OOD-test) of size 100k, and a randomly sampled test set (ID-test) of size 1M. OOD-test comprises 20 protein families, as defined in Pfam, that were excluded from the training data. Performance on OOD-test measures ability to model samples from unseen protein families, whereas performance on ID-test measures ability to model samples from a wider range of protein families that more closely match the distribution of the training set, as described in Section A.1.

¹Protein families are groups of evolutionarily-related proteins that have similar structure, function, and sequence similarity as defined by Pfam (Bateman et al., 2004).

# 3.2. Training Details

For training, we include each sequence and its reverse, as proteins are invariant to the temporal notion of sequence generation. We then prepend each sequence (and its reverse) with a corresponding subset of conditioning tags. For a given sequence, there can be multiple versions across databases, each with their own associated conditioning tags. In training, we randomly sample which set of conditioning tags to utilize but bias toward SWISS-PROT tags as they are manually verified. We apply dropout to the conditioning tags themselves at a rate of 0.4. We additionally always include a sample with the sequence alone without conditioning tags so that ProGen can be used to complete proteins using only sequence data even when no protein properties are known. We then truncate all sequences to a maximum length of 512. Sequences of length less than 512 were padded, but no loss was backpropagated through the network for padding tokens.

The model has dimension d = 1028, inner dimension f = 512, 36 layers, and 8 heads per layer. Dropout with probability 0.1 follows the residual connections in each layer. Token embeddings were tied with the embeddings of the final output layer (Inan et al., 2016; Press & Wolf, 2016).

Our model was implemented in TensorFlow (Abadi et al., 2016) and trained with a global batch size of 64 distributed across 256 cores of a Cloud TPU v3 Pod for 1M iterations. Training took approximately two weeks using Adagrad (Duchi et al., 2011) with linear warmup from 0 to 1e−2 over 40k steps. Gradient norms were clipped to 0.25.
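The data preparation described in this subsection can be sketched as follows. This is a simplified illustration in Python: the tag-set sampling bias toward SWISS-PROT, the tokenizer, and the padding symbol are placeholders rather than details taken from the released implementation.

```python
import random

MAX_LEN = 512
PAD = "<pad>"  # placeholder padding token; the real vocabulary is not specified here


def make_training_samples(sequence, tag_sets, tag_dropout=0.4):
    """Return (tokens, loss_mask) pairs: each protein and its reverse, prepended
    with one randomly chosen set of conditioning tags (with tag dropout), plus a
    tag-free copy so the model can also complete proteins from sequence alone."""
    samples = []
    for seq in (list(sequence), list(reversed(sequence))):
        tags = list(random.choice(tag_sets))                 # bias toward SWISS-PROT would go here
        tags = [t for t in tags if random.random() > tag_dropout]
        for prefix in (tags, []):                            # with tags, and sequence alone
            tokens = (prefix + seq)[:MAX_LEN]                # truncate to 512 tokens
            pad = MAX_LEN - len(tokens)
            loss_mask = [1] * len(tokens) + [0] * pad        # no loss on padding positions
            samples.append((tokens + [PAD] * pad, loss_mask))
    return samples


samples = make_training_samples("MTYKLILNGKTLKG", [["Flavoprotein", "FMN"]])
```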
Training in early stages was improved and stabilized by initializing with the pretrained weights of Keskar et al. (2019).

# 3.3. Generation Details

ProGen generates proteins one amino acid at a time. For one step of generation, ProGen takes a context sequence of amino acids as input and outputs a probability distribution over amino acids. We sample from that distribution and then update the context sequence with the sampled amino acid. This process repeats until a protein of desired length has been generated.

We compare different combinations of top-k sampling (Radford et al., 2019) with a repetition penalty designed for amino acid sequence generation. The repetition penalty reduces the probability of amino acids that have been generated within 4 tokens prior to the token to be predicted. Top-k sampling draws the next token from the k most probable tokens in the distribution output by ProGen. We report results for top-k values of k = 1 and k = 3 with repetition penalties of 0 and 1.2.

# 3.4. Evaluation Details

To assess how well ProGen models the training and test distributions, we rely on perplexity as the standard metric for language models, a mean hard accuracy over each token to strictly assess each amino acid error, and a mean soft accuracy defined by incorporating BLOSUM62 (Henikoff & Henikoff, 1992), a standard amino acid substitution matrix.

Perplexity is the exponentiated cross-entropy loss computed over each token in a dataset. Thus, high quality language models are expected to have low perplexities. Mean per-token hard accuracy over the tokens in a sequence judges a prediction incorrect for any amino acid that is not the ground truth. Mean per-token soft accuracy relies on BLOSUM62, a block substitution matrix that specifies which amino acid substitutions are more or less acceptable according to their frequency in known well-formed proteins. BLOSUM62 is widely used across adopted alignment software (e.g., BLAST²). Our mean per-token soft accuracy uses BLOSUM62 to penalize incorrect amino acid predictions according to the frequency of that substitution in the matrix. In this way, if the substitution is likely in nature, soft accuracy penalizes the model less.

To assess the quality of generation, we evaluate across three levels of structure: (1) primary sequence similarity, (2) secondary structure accuracy, and (3) conformational energy analysis.

Primary sequence similarity is defined by a global, pairwise sequence alignment score computed with the Biopython package³. This score is based on the Needleman-Wunsch algorithm (Needleman & Wunsch, 1970) informed by the BLOSUM62 substitution matrix. We use a gap open penalty of −0.5 and a gap continue penalty of −0.1. The resulting score is then normalized by the length of the protein. Experiments reporting sequence similarity are limited to test samples with a form of experimental evidence of X-ray/NMR crystallography, mass spectrometry, or existence in cDNA or RT-PCR to indicate transcript existence. We refer the reader to UniprotKB existence scores with experimental evidence⁴ for further details.

Secondary structure accuracy was computed per-residue for predicted secondary structures by PSIPRED⁵ with greater than 0.5 confidence. PSI-BLAST was performed on each generated sample to extract the Multiple Sequence Alignments (MSAs) with respect to the UniRef90 database (Suzek et al., 2015). These MSAs were provided to PSIPRED for higher quality secondary structure prediction. Experiments reporting secondary structure accuracy were limited to test samples with high UniprotKB existence scores as described in the previous paragraph.

² https://blast.ncbi.nlm.nih.gov/Blast.cgi
³ https://biopython.org/
⁴ https://www.uniprot.org/help/protein_existence
⁵ http://bioinf.cs.ucl.ac.uk/psipred/

Table 1. ProGen outperforms uniform random and empirical baselines on the full test set, which includes ID- and OOD-test. OOD-test results reveal that ProGen also performs well on protein families unseen during training. Fine-tuning ProGen dramatically improves performance over training from random initialization.

Model | PPL | Hard Acc.
Uniform Baseline | 25 | 4
Empirical Baseline | 18.14 | 6
ProGen | 8.56 | 45
ID-test | 8.17 | 45
OOD-test | 13.34 | 22
OOD-test-20 (rand. init.) | 17.78 | 9
OOD-test-20 (fine-tuned) | 7.45 | 50

(Figure 2: train and test perplexity, hard accuracy, and BLOSUM62 soft accuracy over 1M training iterations.)

Conformational energy uses the Rosetta-RelaxBB protocol⁶.
Rosetta-RelaxBB performs a Monte Carlo optimization of the Rosetta energy function over the space of amino acid types and rotamers. The Rosetta energy is based on biophysi- cal laws and constraints. Between each design round, amino acid side-chains are replaced, while the carbon backbone torsions are kept fixed. Energy minimization/relaxation is performed after threading the amino acid sequence through the known structure. This allows the backbone to move, possibly into a lower energy state. A lower resulting Rosetta energy correlates to a more relaxed-state and viable confor- mation for a given protein structure. Before applying the procedure above, we relax the native template first. Experi- ments that report conformational energy are limited to test samples from SWISSPROT with associated 3D structures in RCSB PDB 7. To assess generative quality, we provide baselines for dif- ferent levels of random mutation. For a given sequence, a proportion (25 − 100%) of amino acids in the sequence is randomly substituted within one of the 20 standard amino acids other than itself. For conformational energy, we also include an all-alanine baseline (i.e. a sequence with only the amino acid alanine), as it is a non-bulky, chemically inert amino acid that mimics the existing secondary structure well when substituted. These baselines provide a scale across each of the above metrics. A particular random mutation may or may not have constructive or destructive effects on protein structure or function. But viewed in aggregate, the performance of the 100% mutation baseline for any met- ric indicates failed generation. As performance approaches 0%, generation statistically indicates a closer reflection to desired structural and functional properties. Figure 2. Large model capacity is warranted as ProGen has yet to overfit. BLOSUM62-informed soft accuracy shows no gap between train and test performance, suggesting hard accuracy hides the possibility that ProGen errors often correspond to amino acid substitutions found in nature. For metrics details see Section 3.4. # 4. Results and Analysis # 4.1. Evaluating ProGen as a language model In this section, we demonstrate that ProGen is a high-quality language model according to per-token metrics on the train- ing and test sets. ProGen generalizes to the full test set and achieves perplexities representative of a high-quality language model. Perplexities reported in Table 1 demonstrate that ProGen dramatically improves over a Uniform Baseline, in which amino acids are sampled according to a uniform dis- tribution, and an Empirical Baseline, in which amino acids are sampled according to the empirical frequencies in the training set. As a point of reference, state-of-the-art unidi- rectional language models for English Wikipedia achieve perplexities that range from 10 to 17 depending on model size (between 257M and 8.3B parameters) and whether training data was constrained to English Wikipedia (Rae et al., 2019) or not (Shoeybi et al., 2019). ProGen generalizes to unseen protein families. The sec- ond section of Table 1 breaks this result into perplexities over the ID-test and OOD-test sets separately. Results on ID-test confirm that ProGen generalizes well to sequences that belonged to protein families randomly sampled. As expected, performance is worse on the sequences in the OOD-test set, but the model still outperforms the Empirical Baseline for those held out protein families. 
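The Uniform and Empirical Baselines referenced in Table 1 follow directly from the definitions above: a uniform distribution over the 25-symbol vocabulary has perplexity 25, and the empirical baseline is the exponentiated entropy of the training-set amino-acid frequencies. A small sketch, using toy sequences in place of the real corpus:

```python
import math
from collections import Counter


def empirical_perplexity(sequences):
    """Perplexity of a model that always predicts the corpus amino-acid
    frequencies, i.e. exp(entropy of the empirical unigram distribution)."""
    counts = Counter(aa for seq in sequences for aa in seq)
    total = sum(counts.values())
    entropy = -sum((c / total) * math.log(c / total) for c in counts.values())
    return math.exp(entropy)


VOCAB_SIZE = 25                    # IUPAC amino acid designations used by ProGen
uniform_ppl = float(VOCAB_SIZE)    # uniform predictions give perplexity = |V| = 25
toy_ppl = empirical_perplexity(["MTYKLILNGK", "GETTTEAVDA"])  # toy data, not the 280M-sequence corpus
```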
⁶ https://www.rosettacommons.org/
⁷ https://www.rcsb.org/

Fine-tuning ProGen on unseen protein families improves over training from random initialization. We further split OOD-test into OOD-test-80 and OOD-test-20, fine-tuned ProGen on OOD-test-80 until convergence (5 epochs; Adam; linear learning rate warmup to 1k iterations), and retested on OOD-test-20. The third section of Table 1 shows that fine-tuning from ProGen improves over training the same architecture with randomly initialized weights.

ProGen performance improves with increased amino acid and conditioning tag context. In Figure 3, we examine the mean perplexity and per-token hard accuracy over different portions of proteins. Perplexity decreases and hard accuracy increases for later portions of a protein, in keeping with the intuition that additional amino acid context narrows down the possibilities for future tokens. The same trends hold when increasing the number of conditioning tags and taking the mean over sequence lengths with the same number of tags (Figure 4). This indicates that conditioning tags also provide signal that improves model predictions.

Figure 3. Full test set performance is better for later segments of sequences, in keeping with the intuition that additional context supports better predictions. We examined intervals up to 500 tokens to ensure a minimum of 30k samples per interval. (Plot: perplexity and accuracy vs. sequence length interval.)

Figure 4. Full test set performance also improves as the number of conditioning tags associated with proteins increases. We examined proteins with up to 14 conditioning tags to ensure a minimum of 3k samples per category. (Plot: perplexity and accuracy vs. number of conditioning tags.)

Training curves suggest that protein generation would benefit from even larger models and longer training. With 1B parameters, ProGen is comparable in size to the largest language models that have been publicly released for any modality, and, to the best of our knowledge, it is the largest model trained on amino acid sequences. Figure 2 shows that despite its size and the amount of compute used to train, ProGen has yet to overfit the training data. This suggests that models for protein generation could still benefit from even larger models and additional compute.

BLOSUM62 soft accuracy reveals that ProGen prediction errors often follow natural amino acid substitutions that likely conserve higher level structure. Though ProGen models proteins as pure sequences, protein function is more directly determined by the secondary and tertiary structures that these sequences encode in three-dimensional space. Model performance based on BLOSUM62 soft accuracy (Section 3.4) is more than 20% higher than using hard accuracy, which indicates that ProGen errors may often be substitutions that are acceptable in nature because they still reflect the proper higher-level properties. This suggests that ProGen has learned how to work within function-preserving mutational invariances; we continue to validate this finding for primary, secondary, and conformational structure in Section 4.2.

# 4.2. Generating with ProGen

In this section, we focus on assessing ProGen as a generative model. Generation quality is directly correlated with evolutionary viability and functional qualities, which can be inferred through protein structure. For this reason, we assess generation quality by using metrics for primary sequence similarity, secondary structure accuracy, and conformational energy (Section 3.4). We also include several mutation baselines (Section 3.4) that allow us to compare the similarity of generated proteins to a target reference protein across all metrics. In reference to these mutation baselines, ProGen quality improves as we move from primary sequence to full conformational structure metrics, thereby suggesting the model has learned mutational invariances in structure which present as errors in lower-level metrics.

ProGen achieves higher sequence similarity scores with an amino acid repetition penalty. Figure 5 depicts the results of experimenting with various combinations of top-k sampling and repetition penalties (see Section 3.4 for details). Over all context lengths, ProGen performs best with k = 1 and the repetition penalty applied to recently generated amino acids. Consequently, we use these settings for all following generation experiments. With this nearly greedy sampling, ProGen manages to generate proteins with sequence similarity comparable to randomly mutating 50% of the amino acids that are not seen in the given context.

Figure 5. Across all context lengths, greedily sampling with a repetition penalty provides the best results according to sequence similarity. (Plot: sequence similarity vs. proportion of sequence given as context, for top-1 and top-k sampling with and without the penalty, against the 50% and 100% mutation baselines.)

Sequence similarity suggests that ProGen merely approaches the 25% mutation baseline, but secondary structure accuracy suggests that ProGen surpasses it. In Figure 6, we analyze this sequence similarity across differing numbers of conditioning tags. Sequences associated with at least 3 conditioning tags begin to exceed the 50% mutation baseline, and as amino acid context increases, sequences with at least 8 conditioning tags approach the 25% mutation baseline. Notably, even in the best case, according to sequence similarity, ProGen doesn't surpass the 25% mutation baseline. By contrast, according to secondary structure accuracy, sequences with at least 8 conditioning tags surpass the 25% mutation baseline (Figure 7). This discrepancy between sequence similarity and secondary structure accuracy further corroborates our claim from Section 4: errors registered by lower-level metrics often correspond to acceptable substitutions according to higher-level metrics that more directly correspond to functional viability.

Figure 6. A greater number of conditioning tags enables higher quality generation. With at least 8 conditioning tags, generation quality approaches the 25% mutation baseline. (Plot: sequence similarity vs. proportion of sequence as context for proteins with [0,2], [3,7], and [8,20] conditioning tags, against the 25% and 50% mutation baselines.)

Figure 7. ProGen generates sequences that conserve secondary structure of the protein. Increasing the number of conditioning tags yields better secondary structure accuracy than the 25% mutation baseline. (Plot: secondary structure accuracy for [0,2], [3,7], and [8,20] conditioning tags against the 25% mutation baseline.)
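One decoding step with the settings selected above (top-k with k = 1 and a repetition penalty over the previous 4 tokens) can be sketched as follows. The exact form of the penalty, here a CTRL-style down-weighting of recently generated tokens' logits, is an assumption; the paper only states that such tokens have their probability reduced.

```python
import torch


def sample_next(logits, recent_tokens, k=1, penalty=1.2, window=4):
    """Penalize amino acids generated within the last `window` tokens,
    then sample from the k most probable remaining tokens."""
    logits = logits.clone()
    for t in set(recent_tokens[-window:]):
        logits[t] = logits[t] / penalty if logits[t] > 0 else logits[t] * penalty
    top = torch.topk(logits, k)
    probs = torch.softmax(top.values, dim=-1)
    return top.indices[torch.multinomial(probs, 1)].item()


# toy step over a 25-symbol vocabulary; with k=1 this reduces to greedy decoding
next_token = sample_next(torch.randn(25), recent_tokens=[3, 7, 7, 12], k=3)
```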
After threading and relaxation, samples generated by ProGen are likely to exhibit desired structure and function. As a measure of generation quality, we thread ProGen sequences through known structures and examine if they exhibit favorable, low energy states. Figure 8 shows the differences between the energy levels of native proteins, ProGen samples, the native proteins with 50% and 100% of amino acids randomly mutated, as well as the all-alanine baseline. Proteins completed by ProGen are much closer to the energy levels of the native protein than all baselines. Generated samples exhibit energy levels near or even below their associated relaxed native templates.

Figure 8. Conformational energies for ProGen generated proteins surpass all baselines and adhere closely to the energy of the native template. (Plot: Rosetta energy difference from the relaxed native for ProGen, 50% mutation, 100% mutation, and all-alanine samples.)

# 4.3. Case Study: Completing VEGFR2 kinase domain

VEGFR2 is responsible for fundamental cell processes such as cell proliferation, survival, migration, and differentiation. VEGFR2 was excluded from training as a subsequence belongs to a held out protein family in OOD-test. We study how well ProGen generates in the context of a protein completion task. We consider the amino acid sequence beginning at residue 806 and ending at residue 1168 of VEGFR2 (PDB ID: 2XIR). For different generation lengths, we sample from ProGen to complete the sequence up to residue 1168 with the remainder of the sequence provided as context. Figure 9 shows that the conformational energy calculated after threading and relaxation of ProGen samples is lower compared to all baselines, indicating better structural conservation.

Figure 9. ProGen completion quality for VEGFR2 remains steadily near native conformational energy levels across generation lengths. (Plot: Rosetta energy difference from native vs. generation length for ProGen and the 25%, 50%, 75%, and 100% mutation baselines.)

Figure 10. ProGen makes fewer mistakes and prioritizes conservation of secondary structure as compared to baselines. Blue is low energy (stable) and red high (unstable). (Renderings: ProGen generated sample, energy −900.98; 25% mutation, energy −857.10; 75% mutation, energy −778.45.)
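The sequence identity and similarity comparisons in this case study (such as the mean sequence identity reported below) rely on global alignments of the kind defined in Section 3.4. A sketch using Biopython's PairwiseAligner with the stated BLOSUM62 matrix and gap penalties; the normalization by protein length and the use of this particular Biopython interface are assumptions about the implementation rather than the authors' exact code.

```python
from Bio.Align import PairwiseAligner, substitution_matrices


def similarity_score(seq_a, seq_b):
    """Global (Needleman-Wunsch) alignment score under BLOSUM62 with
    gap open -0.5 and gap extend -0.1, normalized by sequence length."""
    aligner = PairwiseAligner()
    aligner.mode = "global"
    aligner.substitution_matrix = substitution_matrices.load("BLOSUM62")
    aligner.open_gap_score = -0.5
    aligner.extend_gap_score = -0.1
    return aligner.score(seq_a, seq_b) / max(len(seq_a), len(seq_b))


print(similarity_score("MTYKLILNGKTLKGETTTEA", "MTYKLILNGKTLEGETTTEA"))
```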
Generation quality remains near the native relaxed protein independent of generation length. The generated samples across Figure 9 exhibit a mean sequence identity of 73.1% with the native sequence. This corresponds to a lower sequence identity than the 25% mutation baseline (74% identity) but with better Rosetta energies. This suggests meaningful deviation from the native protein while achieving the ultimate goal of preserving low energy.

Figure 10 shows one sample from ProGen as well as one from each of the 25% and 75% mutation baselines. The ProGen sample exhibits lower energy overall, and energy is highest for amino acids that do not have secondary structure. This suggests that ProGen learned to prioritize the most structurally important segments of the protein.

# 4.4. Case Study: Zero-shot fitness selection for protein GB1

The ultimate goal of protein engineering is to engineer functional proteins. One promising avenue is via directed evolution, which iterates through rounds of mutation and screening to converge on a high-fitness (i.e. functioning) protein. Machine learning has shown initial promise to aid in the subsequent rounds of directed evolution by in silico screening of proteins (Wu et al., 2019), but it still relies on random mutation in an exponentially large search space. Ideally, a generative model, such as ProGen, that has learned the distribution of evolutionarily-relevant proteins can directly generate high-fitness proteins.

We examine the empirical fitness landscape of protein G domain B1 (GB1) binding to an antibody (Wu et al., 2016). Protein G is important for the purification, immobilization, and detection of immunoglobulins (antibodies), proteins used by our immune system to neutralize pathogenic viruses and bacteria. Ideally, we would want the ability to generate GB1 proteins with high binding affinity and stability. The data includes 149,361 of a total 160,000 possible variants from NNK/NNS saturation mutagenesis at four positions known to interact epistatically. Reported fitness values correspond to a measure of both stability (i.e. the fraction of folded proteins) and function (i.e. binding affinity to IgG-Fc) by coupling mRNA display with next-generation sequencing. Protein sequences with high fitness values are desired.

Without supervised training of ProGen on the GB1 data or unsupervised fine-tuning of ProGen on a subset of similar immunoglobulin-binding proteins, we pass each variant through ProGen and select the top one hundred variants with the lowest perplexity values. In Figure 11, we demonstrate ProGen is effective in zero-shot selection of high-fitness protein sequences. In comparison, random mutation, which is the main technique used by directed evolution and ML-assisted directed evolution, statistically generates samples with low or zero fitness. With effective sampling techniques, ProGen can be utilized to generate a spread of samples that are statistically high fitness. These results imply that ProGen has not only learned the distribution of structurally-relevant proteins, but also functionally-relevant proteins.

Figure 11. Without training on the Wu et al. (2016) dataset, ProGen can identify which protein variants exhibit high fitness. The dataset reports fitness values for protein variants of GB1 binding to an antibody. Each sample corresponds to mutating one of four highlighted residues in the sequence MTYKLILNGKTLKGETTTEAVDAATAEKVFKQYANDNGVDGEWTYDDATKTFTVTE to a standard amino acid. At the left, the crystallized structure of GB1 is shown. At the right, the fitness values of samples selected through ProGen vs. random selection are shown.

# 5. Conclusion

We introduced ProGen, a controllable protein generation language model trained on the full evolutionary diversity of one of the largest sequence databases. The model generates proteins that exhibit near native structure energies, which likely implies functional viability. ProGen has the potential to play a new, complementary role alongside other state-of-the-art methods in protein engineering. For example, in directed evolution, initial sequences may be sampled from ProGen according to desired conditioning tags. In later rounds of evolution, protein completion with context for particular residue spans, or hotspots, may provide higher fitness samples. In de novo protein design, using ProGen with conditioning tags may allow for designing new proteins with existing folding motifs in new protein families or host organisms. This same strategy may be used in conjunction with threading and structure-based protein design. Because conditioning tags orient ProGen in sequence space, ProGen may even be used as a model to sample from the distribution of evolutionarily viable proteins near one particular protein. This may provide useful augmentations around data for non-homologous domains where existing techniques, such as MSAs, fall short.

# 6. Acknowledgements

We would like to thank Alex Chu for assistance in the threading and minimization experiments along with Jesse Vig for visualizing the attention heads of ProGen.

# References

Abadi, M., Barham, P., Chen, J., Chen, Z., Davis, A., Dean, J., Devin, M., Ghemawat, S., Irving, G., Isard, M., et al. Tensorflow: A system for large-scale machine learning. In 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI 16), pp. 265–283, 2016.

Alley, E. C., Khimulya, G., Biswas, S., AlQuraishi, M., and Church, G. M. Unified rational protein engineering with sequence-based deep representation learning. Nature methods, 16(12):1315–1322, 2019.

Anand, N. and Huang, P. Generative modeling for protein structures. In Advances in Neural Information Processing Systems, pp. 7494–7505, 2018.

Child, R., Gray, S., Radford, A., and Sutskever, I. Generating long sequences with sparse transformers. arXiv preprint arXiv:1904.10509, 2019.

Costello, Z. and Martin, H. G. How to hallucinate functional proteins. arXiv preprint arXiv:1903.00458, 2019.

Devlin, J., Chang, M.-W., Lee, K., and Toutanova, K. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805, 2018.

Duchi, J., Hazan, E., and Singer, Y. Adaptive subgradient methods for online learning and stochastic optimization. Journal of machine learning research, 12(Jul):2121–2159, 2011.

Arnold, F. H. Design by directed evolution. Accounts of chemical research, 31(3):125–131, 1998.

Federhen, S. The ncbi taxonomy database. Nucleic acids research, 40(D1):D136–D143, 2012.

Ashburner, M., Ball, C. A., Blake, J. A., Botstein, D., Butler, H., Cherry, J. M., Davis, A. P., Dolinski, K., Dwight, S. S., Eppig, J. T., et al. Gene ontology: tool for the unification of biology. Nature genetics, 25(1):25–29, 2000.

Ba, J., Kiros, R., and Hinton, G. E. Layer normalization. CoRR, abs/1607.06450, 2016.

Greener, J. G., Moffat, L., and Jones, D. T. Design of metalloproteins and novel protein folds using variational autoencoders. Scientific reports, 8(1):1–12, 2018.

He, K., Zhang, X., Ren, S., and Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770–778, 2016.

Bairoch, A., Boeckmann, B., Ferro, S., and Gasteiger, E. Swiss-prot: juggling between evolution and stability. Briefings in bioinformatics, 5(1):39–55, 2004.

Henikoff, S. and Henikoff, J. G. Amino acid substitution matrices from protein blocks. Proceedings of the National Academy of Sciences, 89(22):10915–10919, 1992.

Bairoch, A., Apweiler, R., Wu, C. H., Barker, W. C., Boeckmann, B., Ferro, S., Gasteiger, E., Huang, H., Lopez, R., Magrane, M., et al. The universal protein resource (uniprot). Nucleic acids research, 33(suppl 1):D154–D159, 2005.

Bateman, A., Coin, L., Durbin, R., Finn, R. D., Hollich, V., Griffiths-Jones, S., Khanna, A., Marshall, M., Moxon, S., Sonnhammer, E. L., et al. The pfam protein families database.
Nucleic acids research, 32(suppl 1):D138– D141, 2004. Huang, P.-S., Boyken, S. E., and Baker, D. The coming of age of de novo protein design. Nature, 537(7620): 320–327, 2016. Inan, H., Khosravi, K., and Socher, R. Tying word vectors and word classifiers: A loss framework for language modeling. arXiv preprint arXiv:1611.01462, 2016. Ingraham, J., Garg, V., Barzilay, R., and Jaakkola, T. Gener- ative models for graph-based protein design. In Advances in Neural Information Processing Systems, pp. 15794– 15805, 2019. Bengio, Y., Ducharme, R., Vincent, P., and Jauvin, C. A neural probabilistic language model. Journal of machine learning research, 3(Feb):1137–1155, 2003. Boeckmann, B., Bairoch, A., Apweiler, R., Blatter, M.-C., Estreicher, A., Gasteiger, E., Martin, M. J., Michoud, K., O’Donovan, C., Phan, I., et al. The swiss-prot pro- tein knowledgebase and its supplement trembl in 2003. Nucleic acids research, 31(1):365–370, 2003. Keskar, N. S., McCann, B., Varshney, L. R., Xiong, C., and Socher, R. Ctrl: A conditional transformer lan- guage model for controllable generation. arXiv preprint arXiv:1909.05858, 2019. Leinonen, R., Diez, F. G., Binns, D., Fleischmann, W., Lopez, R., and Apweiler, R. Uniprot archive. Bioinfor- matics, 20(17):3236–3237, 2004. Boomsma, W. and Frellsen, J. Spherical convolutions and their application in molecular modelling. In Advances in Neural Information Processing Systems, pp. 3433–3443, 2017. McCann, B., Bradbury, J., Xiong, C., and Socher, R. Learned in translation: Contextualized word vectors. In Advances in Neural Information Processing Systems, pp. 6294–6305, 2017. ProGen: Language Modeling for Protein Generation Needleman, S. B. and Wunsch, C. D. A general method applicable to the search for similarities in the amino acid sequence of two proteins. Journal of molecular biology, 48(3):443–453, 1970. S., Wallach, H., Fergus, R., Vishwanathan, S., and Gar- nett, R. (eds.), Advances in Neural Information Process- ing Systems 30, pp. 5998–6008. Curran Associates, Inc., 2017. URL http://papers.nips.cc/paper/ 7181-attention-is-all-you-need.pdf. O’Connell, J., Li, Z., Hanson, J., Heffernan, R., Lyons, J., Paliwal, K., Dehzangi, A., Yang, Y., and Zhou, Y. Spin2: Predicting sequence profiles from protein structures using deep neural networks. Proteins: Structure, Function, and Bioinformatics, 86(6):629–633, 2018. Peters, M. E., Neumann, M., Iyyer, M., Gardner, M., Clark, C., Lee, K., and Zettlemoyer, L. Deep contextualized word representations. arXiv preprint arXiv:1802.05365, 2018. Pettit, L. D. and Powell, K. The iupac stability constants database. Chemistry international, 2006. Vig, J. A multiscale visualization of attention in the trans- former model. arXiv preprint arXiv:1906.05714, 2019. Wu, N. C., Dai, L., Olson, C. A., Lloyd-Smith, J. O., and Sun, R. Adaptation in protein fitness landscapes is facili- tated by indirect paths. Elife, 5:e16965, 2016. Wu, Z., Kan, S. J., Lewis, R. D., Wittmann, B. J., and Arnold, F. H. Machine learning-assisted directed protein evolution with combinatorial libraries. Proceedings of the National Academy of Sciences, 116(18):8852–8858, 2019. Press, O. and Wolf, L. Using the output embedding to im- prove language models. arXiv preprint arXiv:1608.05859, 2016. Zellers, R., Holtzman, A., Rashkin, H., Bisk, Y., Farhadi, A., Roesner, F., and Choi, Y. Defending against neural fake news. arXiv preprint arXiv:1905.12616, 2019. Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., and Sutskever, I. 
Language models are unsupervised multitask learners. OpenAI Blog, 1(8):9, 2019. Rae, J. W., Potapenko, A., Jayakumar, S. M., and Lillicrap, T. P. Compressive transformers for long-range sequence modelling. arXiv preprint arXiv:1911.05507, 2019. Zimmermann, L., Stephens, A., Nam, S.-Z., Rau, D., K¨ubler, J., Lozajic, M., Gabler, F., S¨oding, J., Lupas, A. N., and Alva, V. A completely reimplemented mpi bioinformatics toolkit with a new hhpred server at its core. Journal of molecular biology, 430(15):2237–2243, 2018. Rao, R., Bhattacharya, N., Thomas, N., Duan, Y., Chen, P., Canny, J., Abbeel, P., and Song, Y. Evaluating pro- tein transfer learning with tape. In Advances in Neural Information Processing Systems, pp. 9686–9698, 2019. Riesselman, A. J., Shin, J.-E., Kollasch, A. W., McMahon, C., Simon, E., Sander, C., Manglik, A., Kruse, A. C., and Marks, D. S. Accelerating protein design using autore- gressive generative models. bioRxiv, pp. 757252, 2019. Rives, A., Goyal, S., Meier, J., Guo, D., Ott, M., Zitnick, C. L., Ma, J., and Fergus, R. Biological structure and func- tion emerge from scaling unsupervised learning to 250 million protein sequences. bioRxiv, pp. 622803, 2019. Shoeybi, M., Patwary, M., Puri, R., LeGresley, P., Casper, J., and Catanzaro, B. Megatron-lm: Training multi-billion parameter language models using gpu model parallelism. arXiv preprint arXiv:1909.08053, 2019. Suzek, B. E., Wang, Y., Huang, H., McGarvey, P. B., Wu, C. H., and Consortium, U. Uniref clusters: a comprehen- sive and scalable alternative for improving sequence sim- ilarity searches. Bioinformatics, 31(6):926–932, 2015. Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, L. u., and Polosukhin, I. Atten- tion is all you need. In Guyon, I., Luxburg, U. V., Bengio, ProGen: Language Modeling for Protein Generation # A. Appendix # A.1. Measuring out-of-distribution The objective of our work is to enable high-quality protein generation. To test the effectiveness of our trained model, we had two test subsets: ID-Test and OOD-Test. ID-Test is a random split of the non-redundant sample database and can be viewed as a typical in-distribution test set of held-out samples. The generated sequence of length 400 is then passed to the HHblits package by Zimmermann et al. (2018) to search for a multiple sequence alignment (MSA). As shown in Figure 13, there are multiple sequences that align well with the ProGen sequence. Figures 14-16 demonstrate the align- ments have high E-values and have related properties. The lower the E-value, the lower the probability of a random match and the higher the probability that the alignment match is related to the searched sequence. In contrast, OOD-Test represents an out-of-distribution set. OOD-Test consists samples that contained a matching sub- sequence residing in one of twenty Pfam protein families that were held out of Train and ID-Test. 3-GRAM SAE 5-GRAM SAE TRAIN AND ID-TEST TRAIN AND OOD-TEST ID-TEST AND OOD-TEST 0.027 0.399 0.387 0.095 1.112 1.104 # A.3. Model visualizations ProGen was trained from a randomly initialized embedding layer with no prior knowledge of residue biochemical prop- erties. Through per-token training on millions of protein sequences, ProGen seems to have inherently learned the natural clustering of amino acids that align with our under- standing of biophysicochemical properties. 
In Figure 12, the trained embedding weights for the standard amino acids tokens are reduced to three dimensions with principle com- ponent analysis (PCA). Table 2. The training data and ID-Test data seem to be drawn from a similar distribution, but OOD-Test is markedly different from the others. SAE refers to the sum of absolute errors for normalized 3-gram and 5-gram histograms. If two histograms were entirely divergent, the SAE would yield a value of 2. To quantify the out-of-distribution nature of OOD-Test, we computed a normalized histogram of 3-grams and 5-grams across samples in the Train, ID-Test, and OOD-Test datasets. The sum of absolute errors (SAE) was computed for a pair of histograms as shown in Table 2. Two normalized histograms that align perfectly would have an SAE of 0 and two normal- ized histograms that are completely divergent would have an SAE of 2. The results imply that the OOD-Test is drawn from a significantly different distribution. aromatic aliphatic small polar 08 The held-out protein families PF04680, PF17988, PF12325, PF03272, PF03938, PF17724, PF10696, PF11968, PF04153, PF06173, PF12378, PF04420, PF10841, PF06917, PF03492, PF06905, PF15340, PF17055, PF05318. # A.2. Generation with only conditioning tags We observe that ProGen can be used to generate proteins with only conditioning tags and no initial amino acid context. For the following example, we prompt ProGen to greedily generate a protein sequence with the tags Flavoprotein and FMN. As defined by the UniprotKB keyword, the FMN tag refers to “a protein involved in flavin adenine mononu- cleotide (FMN) synthesis or protein which contains at least one FMN as prosthetic group/cofactor (flavoproteins) or cosubstrate, such as many oxidation-reduction enzymes”. Figure 12. Principle component analysis (PCA) of the ProGen’s amino acid embeddings aligns with our intuition of amino acid properties. Using Vig (2019), we visualize the attention head patterns of ProGen. For both Figure 17 and Figure 18, we are visu- alizing the attention weight patterns in each head of ProGen for α-actinin protein (PDB: 4D1E) residues 510 to 528, which exhibits an alpha helical structure. In Figure 17, we visualize layers 1 to 3 and attention heads 1 to 12 of ProGen. The attention mechanism exhibits well-differentiated local and global patterns which may indicate specialization of each head on different tasks. ProGen: Language Modeling for Protein Generation aN ro} 6 UniRef 166_AGAG7BHHDS UniRef 166_ABAG64E339 UniRef 166_ABABE3KNGS UniRef 166_ABABH3GPP2 UniRef 188_ABABBSFR4AI UniRef 166_ABABIBQIX2 Unikef 108_AGA2X3F SH2 UniRet 186_ABAINSF EK? Suni Ret 1068_ABAOBSPEUB r UniRet 166_ABA2K2VANS UniRet 166_ABA4USD3F5 UniRet 166_ABAS7 7DLR7 UniRet 166_AGAGF3IEB3 UniRet 166_AGA377BHE8 UniRet 106_AGABISSPx2 [UniRef 186_AGA2X3 19 Figure 13. There are multiple sequences that align well with the ProGen generated FMN sequence from only conditioning tags. Many of the matching alignments have properties reflective of FMN proteins (e.g. oxidoreductases). A red color corresponds to a significantly low E-value, implying a matching homolog. The MSA was directly taken using HHblits. 
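The Figure 12 analysis of the learned token embeddings can be reproduced with a straightforward PCA. The sketch below uses scikit-learn and a random stand-in matrix, since the trained embedding weights themselves are not reproduced here.

```python
import numpy as np
from sklearn.decomposition import PCA

AMINO_ACIDS = list("ACDEFGHIKLMNPQRSTVWY")              # the 20 standard residues
embeddings = np.random.randn(len(AMINO_ACIDS), 1028)    # stand-in for ProGen's learned token embeddings

coords = PCA(n_components=3).fit_transform(embeddings)  # reduce to 3-D as in Figure 12
for aa, (x, y, z) in zip(AMINO_ACIDS, coords):
    print(f"{aa}: ({x:+.2f}, {y:+.2f}, {z:+.2f})")
```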
Figure 14. First alignment (ranked by E-value) of a ProGen generated FMN protein: UniRef100_A0A078MHD9, nitrite reductase subunit B, n=1, Tax=Pseudomonas saudimassiliensis, TaxID=1461581, RepID=A0A078MHD9_9PSED. Probability: 100%, E-value: 1.5e-152, Score: 1145.78, Aligned Cols: 399, Identities: 74%, Similarity: 1.199. An E-value less than 1e−4 and identity greater than 40% is desired to consider the match as potentially homologous. The sequence labeled as Q is the ProGen protein and the sequence labeled as T is the matched sequence. (The figure shows the residue-by-residue alignment between Q and T.)
Figure 15. Second alignment (ranked by E-value) of a ProGen generated FMN protein: UniRef100_A0A064E339, nitrite reductase [NAD(P)H] large subunit, n=1, Tax=Citrobacter freundii MGH 56, TaxID=1439318, RepID=A0A064E339_CITFR. Probability: 100%, E-value: 1.5e-130, Score: 1017.97, Aligned Cols: 398, Identities: 60%, Similarity: 1.014. An E-value less than 1e−4 and identity greater than 40% is desired to consider the match as potentially homologous. The sequence labeled as Q is the ProGen protein and the sequence labeled as T is the matched sequence. (The figure shows the residue-by-residue alignment between Q and T.)
Figure 16. Third alignment (ranked by E-value) of a ProGen generated FMN protein. An E-value less than 1e-4 and identity greater than 40% is desired to consider the match as potentially homologous. The sequence labeled as Q is the ProGen protein and the sequence labeled as T is the matched sequence.

Figure 17. Attention patterns of ProGen for a given sequence. Layers 1-3 (rows) and attention heads 1-12 (columns) are displayed. The attention mechanism exhibits well-differentiated local and global patterns which may indicate specialization of each head on different tasks. Two corresponding attention heads from this visualization are shown in Figure 18.

Figure 18. Local attention pattern for two example attention heads (Layer 1, Head 5 and Layer 2, Head 1). Lines indicate attention to previous tokens for a given predicted token.
{ "id": "1611.01462" }
2003.03033
What is the State of Neural Network Pruning?
Neural network pruning---the task of reducing the size of a network by removing parameters---has been the subject of a great deal of work in recent years. We provide a meta-analysis of the literature, including an overview of approaches to pruning and consistent findings in the literature. After aggregating results across 81 papers and pruning hundreds of models in controlled conditions, our clearest finding is that the community suffers from a lack of standardized benchmarks and metrics. This deficiency is substantial enough that it is hard to compare pruning techniques to one another or determine how much progress the field has made over the past three decades. To address this situation, we identify issues with current practices, suggest concrete remedies, and introduce ShrinkBench, an open-source framework to facilitate standardized evaluations of pruning methods. We use ShrinkBench to compare various pruning techniques and show that its comprehensive evaluation can prevent common pitfalls when comparing pruning methods.
http://arxiv.org/pdf/2003.03033
Davis Blalock, Jose Javier Gonzalez Ortiz, Jonathan Frankle, John Guttag
cs.LG, stat.ML
Published in Proceedings of Machine Learning and Systems 2020 (MLSys 2020)
null
cs.LG
20200306
20200306
# WHAT IS THE STATE OF NEURAL NETWORK PRUNING?

# Davis Blalock * 1 Jose Javier Gonzalez Ortiz * 1 Jonathan Frankle 1 John Guttag 1

ABSTRACT

Neural network pruning—the task of reducing the size of a network by removing parameters—has been the subject of a great deal of work in recent years. We provide a meta-analysis of the literature, including an overview of approaches to pruning and consistent findings in the literature. After aggregating results across 81 papers and pruning hundreds of models in controlled conditions, our clearest finding is that the community suffers from a lack of standardized benchmarks and metrics. This deficiency is substantial enough that it is hard to compare pruning techniques to one another or determine how much progress the field has made over the past three decades. To address this situation, we identify issues with current practices, suggest concrete remedies, and introduce ShrinkBench, an open-source framework to facilitate standardized evaluations of pruning methods. We use ShrinkBench to compare various pruning techniques and show that its comprehensive evaluation can prevent common pitfalls when comparing pruning methods.

# 1 INTRODUCTION

Much of the progress in machine learning in the past decade has been a result of deep neural networks. Many of these networks, particularly those that perform the best (Huang et al., 2018), require enormous amounts of computation and memory. These requirements not only increase infrastructure costs, but also make deployment of networks to resource-constrained environments such as mobile phones or smart devices challenging (Han et al., 2015; Sze et al., 2017; Yang et al., 2017).

One popular approach for reducing these resource requirements at test time is neural network pruning, which entails systematically removing parameters from an existing network. Typically, the initial network is large and accurate, and the goal is to produce a smaller network with similar accuracy. Pruning has been used since the late 1980s (Janowsky, 1989; Mozer & Smolensky, 1989a;b; Karnin, 1990), but has seen an explosion of interest in the past decade thanks to the rise of deep neural networks.

For this study, we surveyed 81 recent papers on pruning in the hopes of extracting practical lessons for the broader community. For example: which technique achieves the best accuracy/efficiency tradeoff? Are there strategies that work best on specific architectures or datasets? Which high-level design choices are most effective?

There are indeed several consistent results: pruning parameters based on their magnitudes substantially compresses networks without reducing accuracy, and many pruning methods outperform random pruning. However, our central finding is that the state of the literature is such that our motivating questions are impossible to answer. Few papers compare to one another, and methodologies are so inconsistent between papers that we could not make these comparisons ourselves. For example, a quarter of papers compare to no other pruning method, half of papers compare to at most one other method, and dozens of methods have never been compared to by any subsequent work. In addition, no dataset/network pair appears in even a third of papers, evaluation metrics differ widely, and hyperparameters and other confounders vary or are left unspecified. Most of these issues stem from the absence of standard datasets, networks, metrics, and experimental practices.

To help enable more comparable pruning research, we identify specific impediments and pitfalls, recommend best practices, and introduce ShrinkBench, a library for standardized evaluation of pruning. ShrinkBench makes it easy to adhere to the best practices we identify, largely by providing a standardized collection of pruning primitives, models, datasets, and training routines.

Our contributions are as follows:

1. A meta-analysis of the neural network pruning literature based on comprehensively aggregating reported results from 81 papers.

2. A catalog of problems in the literature and best practices for avoiding them. These insights derive from analyzing existing work and pruning hundreds of models.

3. ShrinkBench, an open-source library for evaluating neural network pruning methods available at https://github.com/jjgo/shrinkbench.

*Equal contribution. 1MIT CSAIL, Cambridge, MA, USA. Correspondence to: Davis Blalock <[email protected]>. Proceedings of the 3rd MLSys Conference, Austin, TX, USA, 2020. Copyright 2020 by the author(s).

# 2 OVERVIEW OF PRUNING

Before proceeding, we first offer some background on neural network pruning and a high-level overview of how existing pruning methods typically work.

# 2.1 Definitions

We define a neural network architecture as a function family $f(x; \cdot)$. The architecture consists of the configuration of the network's parameters and the sets of operations it uses to produce outputs from inputs, including the arrangement of parameters into convolutions, activation functions, pooling, batch normalization, etc. Example architectures include AlexNet and ResNet-56. We define a neural network model as a particular parameterization of an architecture, i.e., $f(x; W)$ for specific parameters $W$. Neural network pruning entails taking as input a model $f(x; W)$ and producing a new model $f(x; M \odot W')$. Here $W'$ is a set of parameters that may be different from $W$, $M \in \{0, 1\}^{|W'|}$ is a binary mask that fixes certain parameters to 0, and $\odot$ is the elementwise product operator. In practice, rather than using an explicit mask, pruned parameters of $W$ are fixed to zero or removed entirely.

# 2.2 High-Level Algorithm

There are many methods of producing a pruned model $f(x; M \odot W')$ from an initially untrained model $f(x; W_0)$, where $W_0$ is sampled from an initialization distribution $\mathcal{D}$. Nearly all neural network pruning strategies in our survey derive from Algorithm 1 (Han et al., 2015). In this algorithm, the network is first trained to convergence. Afterwards, each parameter or structural element in the network is issued a score, and the network is pruned based on these scores. Pruning reduces the accuracy of the network, so it is trained further (known as fine-tuning) to recover. The process of pruning and fine-tuning is often iterated several times, gradually reducing the network's size.

Many papers propose slight variations of this algorithm. For example, some papers prune periodically during training (Gale et al., 2019) or even at initialization (Lee et al., 2019b). Others modify the network to explicitly include additional parameters that encourage sparsity and serve as a basis for scoring the network after training (Molchanov et al., 2017).
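To make this notation concrete, the following is a minimal sketch in PyTorch (the tensor, threshold, and variable names are illustrative assumptions for this example, not taken from any particular method) of building a binary mask and applying it elementwise to one parameter tensor:

```python
import torch

torch.manual_seed(0)
W = torch.randn(4, 6)            # one parameter tensor of a model f(x; W)
tau = W.abs().median()           # keep roughly the half with largest magnitude
M = (W.abs() > tau).to(W.dtype)  # binary mask M in {0, 1}^{|W|}
W_pruned = M * W                 # elementwise product M ⊙ W; pruned entries become 0

print(f"kept {int(M.sum())} of {W.numel()} parameters")
```

In practice, a framework either keeps such a mask around so that pruned entries stay zero during subsequent training, or removes the corresponding structures from the network entirely.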
Algorithm 1 Pruning and Fine-Tuning

Input: N, the number of iterations of pruning, and X, the dataset on which to train and fine-tune
1: W ← initialize()
2: W ← trainToConvergence(f(X; W))
3: M ← 1^{|W|}
4: for i in 1 to N do
5:     M ← prune(M, score(W))
6:     W ← fineTune(f(X; M ⊙ W))
7: end for
8: return M, W

# 2.3 Differences Between Pruning Methods

Within the framework of Algorithm 1, pruning methods vary primarily in their choices regarding sparsity structure, scoring, scheduling, and fine-tuning.

Structure. Some methods prune individual parameters (unstructured pruning). Doing so produces a sparse neural network, which—although smaller in terms of parameter-count—may not be arranged in a fashion conducive to speedups using modern libraries and hardware. Other methods consider parameters in groups (structured pruning), removing entire neurons, filters, or channels to exploit hardware and software optimized for dense computation (Li et al., 2016; He et al., 2017).

Scoring. It is common to score parameters based on their absolute values, trained importance coefficients, or contributions to network activations or gradients. Some pruning methods compare scores locally, pruning a fraction of the parameters with the lowest scores within each structural subcomponent of the network (e.g., layers) (Han et al., 2015). Others consider scores globally, comparing scores to one another irrespective of the part of the network in which the parameter resides (Lee et al., 2019b; Frankle & Carbin, 2019).

Scheduling. Pruning methods differ in the amount of the network to prune at each step. Some methods prune all desired weights at once in a single step (Liu et al., 2019). Others prune a fixed fraction of the network iteratively over several steps (Han et al., 2015) or vary the rate of pruning according to a more complex function (Gale et al., 2019).

Fine-tuning. For methods that involve fine-tuning, it is most common to continue to train the network using the trained weights from before pruning. Alternative proposals include rewinding the network to an earlier state (Frankle et al., 2019) and reinitializing the network entirely (Liu et al., 2019).
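As a rough, runnable illustration of Algorithm 1, the sketch below (assuming PyTorch; the helpers global_magnitude_masks, apply_masks, and fine_tune are invented for this example and are not ShrinkBench's API) iterates pruning and fine-tuning with global magnitude scoring. The initial train-to-convergence phase and real data loading are omitted:

```python
import torch
import torch.nn as nn

def global_magnitude_masks(model: nn.Module, fraction: float) -> dict:
    """Score each weight by |w| and mask out the lowest-scoring `fraction` across the whole net."""
    scores = torch.cat([p.detach().abs().flatten()
                        for n, p in model.named_parameters() if "weight" in n])
    threshold = scores.kthvalue(max(int(fraction * scores.numel()), 1)).values
    return {n: (p.detach().abs() > threshold).float()
            for n, p in model.named_parameters() if "weight" in n}

def apply_masks(model: nn.Module, masks: dict) -> None:
    """Fix pruned parameters to zero, i.e. replace W with M ⊙ W."""
    with torch.no_grad():
        for n, p in model.named_parameters():
            if n in masks:
                p.mul_(masks[n])

def fine_tune(model, masks, data, target, steps=100, lr=1e-2):
    """Continue training from the pruned weights; re-apply masks so pruned weights stay zero."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        nn.functional.cross_entropy(model(data), target).backward()
        opt.step()
        apply_masks(model, masks)

# N rounds of prune + fine-tune on toy data (training to convergence beforehand is skipped here).
model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
data, target = torch.randn(256, 20), torch.randint(0, 2, (256,))
for i in range(1, 4):                                     # N = 3
    masks = global_magnitude_masks(model, 1 - 0.5 ** i)   # cumulative sparsity 50%, 75%, 87.5%
    apply_masks(model, masks)
    fine_tune(model, masks, data, target)
```

Swapping the score (e.g., to |weight × gradient|) or computing the threshold per layer instead of over the concatenated scores yields the other scoring variants discussed above.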
# 2.4 Evaluating Pruning

Pruning can accomplish many different goals, including reducing the storage footprint of the neural network, the computational cost of inference, the energy requirements of inference, etc. Each of these goals favors different design choices and requires different evaluation metrics. For example, when reducing the storage footprint of the network, all parameters can be treated equally, meaning one should evaluate the overall compression ratio achieved by pruning. However, when reducing the computational cost of inference, different parameters may have different impacts. For instance, in convolutional layers, filters applied to spatially larger inputs are associated with more computation than those applied to smaller inputs.

Regardless of the goal, pruning imposes a tradeoff between model efficiency and quality, with pruning increasing the former while (typically) decreasing the latter. This means that a pruning method is best characterized not by a single model it has pruned, but by a family of models corresponding to different points on the efficiency-quality curve.

To quantify efficiency, most papers report at least one of two metrics. The first is the number of multiply-adds (often referred to as FLOPs) required to perform inference with the pruned network. The second is the fraction of parameters pruned. To measure quality, nearly all papers report changes in Top-1 or Top-5 image classification accuracy.

As others have noted (Lebedev et al., 2014; Figurnov et al., 2016; Louizos et al., 2017; Yang et al., 2017; Han et al., 2015; Kim et al., 2015; Wen et al., 2016; Luo et al., 2017; He et al., 2018b), these metrics are far from perfect. Parameter and FLOP counts are a loose proxy for real-world latency, throughput, memory usage, and power consumption. Similarly, image classification is only one of the countless tasks to which neural networks have been applied. However, because the overwhelming majority of papers in our corpus focus on these metrics, our meta-analysis necessarily does as well.

# 3 LESSONS FROM THE LITERATURE

After aggregating results from a corpus of 81 papers, we identified a number of consistent findings. In this section, we provide an overview of our corpus and then discuss these findings.

# 3.1 Papers Used in Our Analysis

Our corpus consists of 79 pruning papers published since 2010 and two classic papers (LeCun et al., 1990; Hassibi et al., 1993) that have been compared to by a number of recent methods. We selected these papers by identifying popular papers in the literature and what cites them, systematically searching through conference proceedings, and tracing the directed graph of comparisons between pruning papers. This last procedure results in the property that, barring oversights on our part, there is no pruning paper in our corpus that compares to any pruning paper outside of our corpus. Additional details about our corpus and its construction can be found in Appendix A.

# 3.2 How Effective is Pruning?

One of the clearest findings about pruning is that it works. More precisely, there are various methods that can significantly compress models with little or no loss of accuracy. In fact, for small amounts of compression, pruning can sometimes increase accuracy (Han et al., 2015; Suzuki et al., 2018). This basic finding has been replicated in a large fraction of the papers in our corpus.

Along the same lines, it has been repeatedly shown that, at least for large amounts of pruning, many pruning methods outperform random pruning (Yu et al., 2018; Gale et al., 2019; Frankle et al., 2019; Mariet & Sra, 2015; Suau et al., 2018; He et al., 2017). Interestingly, this does not always hold for small amounts of pruning (Morcos et al., 2019). Similarly, pruning all layers uniformly tends to perform worse than intelligently allocating parameters to different layers (Gale et al., 2019; Han et al., 2015; Li et al., 2016; Molchanov et al., 2016; Luo et al., 2017) or pruning globally (Lee et al., 2019b; Frankle & Carbin, 2019). Lastly, when holding the number of fine-tuning iterations constant, many methods produce pruned models that outperform retraining from scratch with the same sparsity pattern (Zhang et al., 2015; Yu et al., 2018; Louizos et al., 2017; He et al., 2017; Luo et al., 2017; Frankle & Carbin, 2019) (at least with a large enough amount of pruning (Suau et al., 2018)). Retraining from scratch in this context means training a fresh, randomly-initialized model with all weights clamped to zero throughout training, except those that are nonzero in the pruned model.

Another consistent finding is that sparse models tend to outperform dense ones for a fixed number of parameters. Lee et al.
(2019a) show that increasing the nominal size of ResNet-20 on CIFAR-10 while sparsifying to hold the number of parameters constant decreases the error rate. Kalchbrenner et al. (2018) obtain a similar result for audio synthesis, as do Gray et al. (2017) for a variety of additional tasks across various domains. Perhaps most compelling of all are the many results, including in Figure 1, showing that pruned models can obtain higher accuracies than the origi- nal models from which they are derived. This demonstrates that sparse models can not only outperform dense counter- parts with the same number of parameters, but sometimes dense models with even more parameters. # 3.3 Pruning vs Architecture Changes One current unknown about pruning is how effective it tends to be relative to simply using a more efficient archi- tecture. These options are not mutually exclusive, but it may be useful in guiding one’s research or development efforts to know which choice is likely to have the larger impact. Along similar lines, it is unclear how pruned mod- els from different architectures compare to one another— i.e., to what extent does pruning offer similar benefits across architectures? To address these questions, we plot- ted the reported accuracies and compression/speedup levels of pruned models on ImageNet alongside the same metrics What is the State of Neural Network Pruning? for different architectures with no pruning (Figure 1).1 We plot results within a family of models as a single curve.2 Figure 1 suggests several conclusions. First, it reinforces the conclusion that pruning can improve the time or space vs accuracy tradeoff of a given architecture, sometimes even increasing the accuracy. Second, it suggests that prun- ing generally does not help as much as switching to a better architecture. Finally, it suggests that pruning is more effec- tive for architectures that are less efficient to begin with. # 4 MISSING CONTROLLED COMPARISONS While there do appear to be a few general and consistent findings in the pruning literature (see the previous section), by far the clearest takeaway is that pruning papers rarely make direct and controlled comparisons to existing meth- ods. This lack of comparisons stems largely from a lack of experimental standardization and the resulting fragmen- tation in reported results. This fragmentation makes it dif- ficult for even the most committed authors to compare to more than a few existing methods. # 4.1 Omission of Comparison Speed and Size Tradeoffs for Original and Pruned Models 85 Top 1 Accuracy (%) * ** £8 8 Top 5 Accuracy (%) g 10° 10” 10° 10° 10° Number of Parameters Number of FLOPs —®- MobileNet-v2 (2018) —@- ResNet (2016) —® VGG (2014) MobileNetv2Prned » ResNetPruned + VGG Pruned 2 EfficientNet (2019) Figure 1: Size and speed vs accuracy tradeoffs for dif- ferent pruning methods and families of architectures. Pruned models sometimes outperform the original ar- chitecture, but rarely outperform a better architecture. Many papers claim to advance the state of the art, but don’t compare to other methods—including many pub- lished ones—that make the same claim. Ignoring Pre-2010s Methods There was already a rich body of work on neural network pruning by the mid 1990s (see, e.g., Reed’s survey (Reed, 1993)), which has been al- most completely ignored except for Lecun’s Optimal Brain Damage (LeCun et al., 1990) and Hassibi’s Optimal Brain Surgeon (Hassibi et al., 1993). 
Indeed, multiple authors have rediscovered existing methods or aspects thereof, with Han et al. (2015) reintroducing the magnitude-based prun- ing of Janowsky (1989), Lee et al. (2019b) reintroducing the saliency heuristic of Mozer & Smolensky (1989a), and He et al. (2018a) reintroducing the practice of “reviving” previously pruned weights described in Tresp et al. (1997). Ignoring Recent Methods Even when considering only post-2010 approaches, there are still virtually no methods that have been shown to outperform all existing “state-of- the-art” methods. This follows from the fact, depicted in the top plot of Figure 2, that there are dozens of modern papers—including many affirmed through peer review— that have never been compared to by any later study. A related problem is that papers tend to compare to few existing methods. In the lower plot of Figure 2, we see that more than a fourth of our corpus does not compare to any previously proposed pruning method, and another fourth compares to only one. Nearly all papers compare to three or fewer. This might be adequate if there were a clear progression of methods with one or two “best” methods at any given time, but this is not the case. 1Since many pruning papers report only change in accuracy or amount of pruning, without giving baseline numbers, we normal- ize all pruning results to have accuracies and model sizes/FLOPs as if they had begun with the same model. Concretely, this means multiplying the reported fraction of pruned size/FLOPs by a stan- dardized initial value. This value is set to the median initial size or number of FLOPs reported for that architecture across all papers. This normalization scheme is not perfect, but does help control for different methods beginning with different baseline accuracies. 2The EfficientNet family is given explicitly in the original pa- per (Tan & Le, 2019), the ResNet family consists of ResNet- 18, ResNet-34, ResNet-50, etc., and the VGG family consists of VGG-{11, 13, 16, 19}. There are no pruned EfficientNets since EfficientNet was published too recently. Results for non-pruned models are taken from (Tan & Le, 2019) and (Bianco et al., 2018). # 4.2 Dataset and Architecture Fragmentation Among 81 papers, we found results using 49 datasets, 132 architectures, and 195 (dataset, architecture) combinations. As shown in Table 1, even the most common combination of dataset and architecture—VGG-16 on ImageNet3 (Deng et al., 2009)—is used in only 22 out of 81 papers. More- over, three of the top six most common combinations in- volve MNIST (LeCun et al., 1998a). As Gale et al. (2019) and others have argued, using larger datasets and models is essential when assessing how well a method works for real- 3We adopt the common practice of referring to the ILSVRC2012 training and validation sets as “ImageNet.” What is the State of Neural Network Pruning? 
Number of Papers Comparing to a Given Paper 6 9 12 15 18 Compared to by this many other papers Number of Papers a Given Paper Compares To Number of papers compared to this many times Number of papers that compare to this many others Compares to this many “other papers mmm Peer-Reviewed @mm Other # Number of Papers using Pair (Dataset, Architecture) Pair ImageNet ImageNet MNIST CIFAR-10 ResNet-56 LeNet-300-100 MNIST LeNet-5 MNIST ImageNet CaffeNet CIFAR-10 CIFAR-VGG (Torch) AlexNet ImageNet ResNet-18 ImageNet ImageNet ResNet-34 CIFAR-10 ResNet-110 CIFAR-10 CIFAR-10 ResNet-32 VGG-16 ResNet-50 LeNet-5-Caffe PreResNet-164 22 15 14 14 12 11 10 8 8 6 6 5 4 4 # Table 1: All combinations of dataset and architecture used in at least 4 out of 81 papers. Figure 2: Reported comparisons between papers. world networks. MNIST results may be particularly un- likely to generalize, since this dataset differs significantly from other popular datasets for image classification. In par- ticular, its images are grayscale, composed mostly of zeros, and possible to classify with over 99% accuracy using sim- ple models (LeCun et al., 1998b). # 4.3 Metrics Fragmentation As depicted in Figure 3, papers report a wide variety of metrics and operating points, making it difficult to com- pare results. Each column in this figure is one (dataset, ar- chitecture) combination taken from the four most common combinations4, excluding results on MNIST. Each row is one pair of metrics. Each curve is the efficiency vs accu- racy tradeoff obtained by one method.5 Methods are color- coded by year. ods are nearby on the x-axis, it is not clear whether one meaningfully outperforms another since neither reports a standard deviation or other measure of central tendency. Fi- nally, most papers in our corpus do not report any results with any of these common configurations. # Incomplete Characterization of Results If all papers reported a wide range of points in their trade- off curves across a large set of models and datasets, there might be some number of direct comparisons possible be- tween any given pair of methods. As we see in the upper half of Figure 4, however, most papers use at most three (dataset, architecture) pairs; and as we see in the lower half, they use at most three—and often just one—point to char- acterize each curve. Combined with the fragmentation in experimental choices, this means that different methods’ results are rarely directly comparable. Note that the lower half restricts results to the four most common (dataset, ar- chitecture) pairs. It is hard to identify any consistent trends in these plots, aside from the existence of a tradeoff between efficiency and accuracy. A given method is only present in a small subset of plots. Methods from later years do not consis- tently outperform methods from earlier years. Methods within a plot are often incomparable because they report results at different points on the x-axis. Even when meth- # 4.5 Confounding Variables Even when comparisons include the same datasets, models, metrics, and operating points, other confounding variables still make meaningful comparisons difficult. Some vari- ables of particular interest include: 4We combined the results for AlexNet and CaffeNet, which is a slightly modified version of AlexNet (caf, 2016), since many authors refer to the latter as “AlexNet,” and it is often unclear which model was used. 
5Since what counts as one method can be unclear, we consider all results from one paper to be one method except when two or more named methods within the paper report using at least one identical x-coordinate (i.e., when the paper’s results can’t be plot- ted as one curve). • Accuracy and efficiency of the initial model • Data augmentation and preprocessing • Random variations in initialization, training, and fine- tuning. This includes choice of optimizer, hyperparam- eters, and learning rate schedule. • Pruning and fine-tuning schedule • Deep learning library. Different libraries are known to What is the State of Neural Network Pruning? VGG-16 on ImageNet Alex/CaffeNet on ImageNet ResNet-50 on ImageNet ResNet-56 on CIFAR-10 4 > 5 05 * = 0 y * & ow 0 Ps = 2 ® £3 wy 7 ~ 00 Wal s ~ of 1 Py a y 2 §8 a -0.5 at a * Or, Y -2 -3 ies -1.0 F i) 4 4A 3 1 2 4 8 16 2 4 8 16 1 2 4 8 16 2 8 32 Compression Ratio Compression Ratio Compression Ratio Compression Ratio 0 e gS 2 Sa = 0 * a fi) 31 af 4 £9 0 Y 2 ~ 8 Qo -3 FS -2 -2 F . 4 -3 1 2 4 8 1 2 4 8 16 1 2 4 Compression Ratio Compression Ratio Compression Ratio 4 0.0 v 0 ® * c 2 05 a ~ ° 23 ' —— qj a of “1.0 a S35 o -2 ee) 1.5 £< -2 co “3 % 2 - -2.0 ° F -2.5 4 -3 4 A 2 4 6 1 2 3 2 3 1 2 3 Theoretical Speedup Theoretical Speedup Theoretical Speedup Theoretical Speedup = 2 ° Mog,” A ° = G > se? ee 4 g - 5-2 “ 23 b es) 4 6 Sw 4 -2 a 8 S _ 2-6 -10 2 4 6 8 10 2 4 6 2 3 Theoretical Speedup Theoretical Speedup Theoretical Speedup —© Collins2014 -® Kim 2016 —@ Lin 2017 —¥ Dubey 2018, AP+Coreset-K Peng 2018 — Choi 2019 —F Han 2015 —H Srinivas 2016 = —t— Luo 2017 —t- Dubey 2018, AP+Coreset-S —@- Suau 2018, PFAEn —W Gale 2019, Magnitude-v2 te Zhang 2015 —® Wen 2016 + Srinivas 2017 —+ He, Yang 2018 —*— Suau 2018, PFAKL = —A~ Kim 2019 —{ Figumov 2016 —*— Alvarez2017 © —A~ Yang 2017 tHe, Yang 2018, Fine-Tune Suzuki 2018 A Liu 2019, Scratch-B > Guo 2016 3 He 2017 —*— Carreira-Perpinan 2018 —— He, Yihui 2018 —#— Yamamoto 2018 — Luo 2019 —— Han 2016 —#- He 2017,3C = + Ding 2018 —— Huang 2018 + Yu2018 —® Peng 2019, CCP —— Hu 2016 + 112017 —@- Dubey 2018, AP+CoresetA —@ Lin 2018 —® Zhuang 2018 —% Peng 2019, CCP-AC Figure 3: Fragmentation of results. Shown are all self-reported results on the most common (dataset, architecture) combinations. Each column is one combination, each row shares an accuracy metric (y-axis), and pairs of rows share a compression metric (x-axis). Up and to the right is always better. Standard deviations are shown for He 2018 on CIFAR-10, which is the only result that provides any measure of central tendency. As suggested by the legend, only 37 out of the 81 papers in our corpus report any results using any of these configurations. yield different accuracies for the same architecture and dataset (Northcutt, 2019; Nola, 2016) and may have sub- tly different behaviors (Vryniotis, 2018). • Subtle differences in code and environment that may not be easily attributable to any of the above variations (Crall, 2018; Jogeshwar, 2017; unr, 2017). both used the same code as the methods to which it com- pares and reports enough measurements to average out ran- dom variations. This is exceptionally rare, with Gale et al. (2019) and Liu et al. (2019) being arguably the only ex- amples. Moreover, neither of these papers introduce novel pruning methods per se but are instead inquiries into the efficacy of existing methods. 
In general, it is not clear that any paper can succeed in ac- counting for all of these confounders unless that paper has Many papers attempt to account for subsets of these con- founding variables. A near universal practice in this re- What is the State of Neural Network Pruning? Number of (Dataset, Architecture) Pairs Used Pruning ResNet-50 with Unstructured Magnitude-Based Pruning Number of papers using this many pairs | =n - of 8 10 12 14 16 18 20 Number of pairs Number of Points used to Characterize Tradeoff Curve 27 24 21 18 15, 12 2 || 6 — aa ° — 1 2 3 4 5 6 Number of points Number of curves using this many points 7 8 9 mmm Peer-Reviewed mmm Other Figure 4: Number of results reported by each paper, excluding MNIST. Top) Most papers report on three or fewer (dataset, architecture) pairs. Bottom) For each pair used, most papers characterize their tradeoff be- tween amount of pruning and accuracy using a single point in the efficiency vs accuracy curve. In both plots, the pattern holds even for peer-reviewed papers. gard is reporting change in accuracy relative to the original model, in addition to or instead of raw accuracy. This helps to control for the accuracy of the initial model. However, as we demonstrate in Section 7, this is not sufficient to remove initial model as a confounder. Certain initial models can be pruned more or less efficiently, in terms of the accuracy vs compression tradeoff. This holds true even with identical pruning methods and all other variables held constant. ~ 3 & > 74 ) g B72 8 i ~ 70 Qa fe} F 68 Pruning ResNet-50 with All Other Methods 76 x = 74 ) g 5 72 § i = 70 Qa fe} F 68 Number of Parameters —- Frankle 2019, PruneAtEpoch=15 — -: Dubey 2018, AP+Coreset-K —®- Frankle 2019, PruneAtEpoch=90 «>. Dubey 2018, AP+Coreset-S A= Frankle 2019, ResetToEpoch=10 +.» Gale 2019, SparseVD —¥— Frankle 2019, ResetToEpoch=R = --- Huang 2018 —®- Gale 2019, Magnitude “@- Lin 2018 —¥- Gale 2019, Magnitude-v2 “E> Liu 2019, Scratch-B —t— Liu 2019, Magnitude -@- Luo 2017 “Alvarez 2017 ++ Yamamoto 2018 ~hk» Dubey 2018, AP+Coreset-A “9. Zhuang 2018 Figure 5: Pruning ResNet-50 on ImageNet. Methods in the upper plot all prune weights with the smallest mag- nitudes, but differ in implementation, pruning sched- ule, and fine-tuning. The variation caused by these vari- ables is similar to the variation across different pruning methods, whose results are shown in the lower plot. All results are taken from the original papers. # 5 FURTHER BARRIERS TO COMPARISON There are at least two more empirical reasons to believe that confounding variables can have a significant impact. First, as one can observe in Figure 3, methods often introduce changes in accuracy of much less than 1% at reported op- erating points. This means that, even if confounders have only a tiny impact on accuracy, they can still have a large impact on which method appears better. In the previous section, we discussed the fragmentation of datasets, models, metrics, operating points, and experimen- tal details, and how this fragmentation makes evaluating the efficacy of individual pruning methods difficult. In this section, we argue that there are additional barriers to com- paring methods that stem from common practices in how methods and results are presented. Second, as shown in Figure 5, existing results demonstrate that different training and fine-tuning settings can yield nearly as much variability as different methods. 
Specif- ically, consider 1) the variability introduced by differ- ent fine-tuning methods for unstructured magnitude-based pruning (Figure 6 top) and 2) the variability introduced by entirely different pruning methods (Figure 6 bottom). The variability between fine-tuning methods is nearly as large as the variability between pruning methods. # 5.1 Architecture Ambiguity It is often difficult, or even impossible, to identify the exact architecture that authors used. Perhaps the most prevalent example of this is when authors report using some sort of ResNet (He et al., 2016a;b). Because there are two different variations of ResNets, introduced in these two papers, say- ing that one used a “ResNet-50” is insufficient to identify a particular architecture. Some authors do appear to deliber- ately point out the type of ResNet they use (e.g., (Liu et al., 2017; Dong et al., 2017)). However, given that few papers What is the State of Neural Network Pruning? even hint at the possibility of confusion, it seems unlikely that all authors are even aware of the ambiguity, let alone that they have cited the corresponding paper in all cases. Perhaps the greatest confusion is over VGG networks (Si- monyan & Zisserman, 2014). Many papers describe exper- imenting on “VGG-16,” “VGG,” or “VGGNet,” suggesting a standard and well-known architecture. In many cases, what is actually used is a custom variation of some VGG model, with removed fully-connected layers (Changpinyo et al., 2017; Luo et al., 2017), smaller fully-connected lay- ers (Lee et al., 2019b), or added dropout or batchnorm (Liu et al., 2017; Lee et al., 2019b; Peng et al., 2018; Molchanov et al., 2017; Ding et al., 2018; Suau et al., 2018). times never made clear. Even when reporting FLOPs, which is nominally a consistent metric, different authors measure it differently (e.g., (Molchanov et al., 2016) vs (Wang & Cheng, 2016)), though most often papers entirely omit their formula for computing FLOPs. We found up to a factor of four variation in the reported FLOPs of dif- ferent papers for the same architecture and dataset, with (Yang et al., 2017) reporting 371 MFLOPs for AlexNet on ImageNet, (Choi et al., 2019) reporting 724 MFLOPs, and (Han et al., 2015) reporting 1500 MFLOPs. # 6 SUMMARY AND RECOMMENDATIONS In some cases, papers simply fail to make clear what model they used (even for non-VGG architectures). For exam- ple, one paper just states that their segmentation model “is composed from an inception-like network branch and a DenseNet network branch.” Another paper attributes their VGGNet to (Parkhi et al., 2015), which mentions three VGG networks. Liu et al. (2019) and Frankle & Carbin (2019) have circular references to one another that can no longer be resolved because of simultaneous revisions. One paper mentions using a “VGG-S” from the Caffe Model Zoo, but as of this writing, no model with this name ex- ists there. Perhaps the most confusing case is the Lenet- 5-Caffe reported in one 2017 paper. The authors are to be commended for explicitly stating not only that they use Lenet-5-Caffe, but their exact architecture. However, they describe an architecture with an 800-unit fully-connected layer, while examination of both the Caffe .prototxt files (Jia et al., 2015a;b) and associated blog post (Jia et al., 2016) indicates that no such layer exists in Lenet-5-Caffe. 
In the previous sections, we have argued that existing work tends to • make it difficult to identify the exact experimental setup and metrics, • use too few (dataset, architecture) combinations, • report too few points in the tradeoff curve for any given combination, and no measures of central tendency, • omit comparison to many methods that might be state- of-the-art, and fail to control for confounding variables. These problems often make it difficult or impossible to as- sess the relative efficacy of different pruning methods. To enable direct comparison between methods in the future, we suggest the following practices: • Identify the exact sets of architectures, datasets, and metrics used, ideally in a structured way that is not scat- tered throughout the results section. # 5.2 Metrics Ambiguity It can also be difficult to know what the reported metrics mean. For example, many papers include a metric along the lines of “Pruned%”. In some cases, this means frac- tion of the parameters or FLOPs remaining (Suau et al., 2018). In other cases, it means the fraction of parameters or FLOPs removed (Han et al., 2015; Lebedev & Lempitsky, 2016; Yao et al., 2018). There is also widespread misuse of the term “compression ratio,” which the compression liter- ature has long used to mean original size compressed size (Siedelmann et al., 2015; Zukowski et al., 2006; Zhao et al., 2015; Lindstrom, 2014; Ratanaworabhan et al., 2006; Blalock et al., 2018), but many pruning authors define (usually without making the formula explicit) as 1 − compressed size original size • Use at least three (dataset, architecture) pairs, including modern, large-scale ones. MNIST and toy models do not count. AlexNet, CaffeNet, and Lenet-5 are no longer modern architectures. For any given pruned model, report both compression ratio and theoretical speedup. Compression ratio is de- fined as the original size divided by the new size. The- oretical speedup is defined as the original number of multiply-adds divided by the new number. Note that there is no reason to report only one of these metrics. • For ImageNet and other many-class datasets, report both Top-1 and Top-5 accuracy. There is again no reason to report only one of these. • Whatever metrics one reports for a given pruned model, also report these metrics for an appropriate control (usu- ally the original model before pruning). Reported “speedup” values present similar challenges. These values are sometimes wall time, sometimes original number of FLOPs divided by pruned number of FLOPs, sometimes a more complex formula relating these two quantities (Dong et al., 2017; He et al., 2018a), and some- • Plot the tradeoff curve for a given dataset and architec- ture, alongside the curves for competing methods. • When plotting tradeoff curves, use at least 5 operating points spanning a range of compression ratios. The set of ratios {2, 4, 8, 16, 32} is a good choice. What is the State of Neural Network Pruning? • Report and plot means and sample standard deviations, instead of one-off measurements, whenever feasible. • Ensure that all methods being compared use identical libraries, data loading, and other code to the greatest ex- tent possible. We also recommend that reviewers demand a much greater level of rigor when evaluating papers that claim to offer a better method of pruning neural networks. complex methods (Han et al., 2015; 2016; Gale et al., 2019; Frankle et al., 2019). 
Gradient-based methods are less com- mon, but are simple to implement and have recently gained popularity (Lee et al., 2019b;a; Yu et al., 2018). Random pruning is a common straw man that can serve as a useful debugging tool. Note that these baselines are not reproduc- tions of any of these methods, but merely inspired by their pruning heuristics. # 7.3 Avoiding Pruning Pitfalls with Shrinkbench # 7 SHRINKBENCH # 7.1 Overview of ShrinkBench To make it as easy as possible for researchers to put our suggestions into practice, we have created an open-source library for pruning called ShrinkBench. ShrinkBench pro- vides standardized and extensible functionality for training, pruning, fine-tuning, computing metrics, and plotting, all using a standardized set of pretrained models and datasets. ShrinkBench is based on PyTorch (Paszke et al., 2017) and is designed to allow easy evaluation of methods with ar- bitrary scoring functions, allocation of pruning across lay- ers, and sparsity structures. In particular, given a callback defining how to compute masks for a model’s parameter tensors at a given iteration, ShrinkBench will automati- cally apply the pruning, update the network according to a standard training or fine-tuning setup, and compute metrics across many models, datasets, random seeds, and levels of pruning. We defer discussion of ShrinkBench’s implemen- tation and API to the project’s documentation. Using the described baselines, we pruned over 800 net- works with varying datasets, networks, compression ratios, initial weights and random seeds. In doing so, we identi- fied various pitfalls associated with experimental practices that are currently common in the literature but are avoided by using ShrinkBench. We highlight several noteworthy results below. For addi- tional experimental results and details, see Appendix D. One standard deviation bars across three runs are shown for all CIFAR-10 results. Metrics are not Interchangeable. As discussed previ- ously, it is common practice to report either reduction in the number of parameters or in the number of FLOPs. If these metrics are extremely correlated, reporting only one is suf- ficient to characterize the efficacy of a pruning method. We found after computing these metrics for the same model un- der many different settings that reporting one metric is not sufficient. While these metrics are correlated, the correla- tion is different for each pruning method. Thus, the relative performance of different methods can vary significantly un- der different metrics (Figure 6). # 7.2 Baselines ResNet-18 on ImageNet We used ShrinkBench to implement several existing prun- ing heuristics, both as examples of how to use our library and as baselines that new methods can compare to: • Global Magnitude Pruning - prunes the weights with the lowest absolute value anywhere in the network. • Layerwise Magnitude Pruning - for each layer, prunes the weights with the lowest absolute value. • Global Gradient Magnitude Pruning - prunes the weights with the lowest absolute value of (weight × gra- dient), evaluated on a batch of inputs. Layerwise Gradient Magnitude Pruning - for each layer, prunes the weights the lowest absolute value of (weight × gradient), evaluated on a batch of inputs. • Random Pruning - prunes each weight independently with probability equal to the fraction of the network to be pruned. 
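For concreteness, here is one way such scoring heuristics can be sketched in PyTorch. These functions are simplified illustrations inspired by the baseline descriptions above, not reproductions of ShrinkBench's implementations, and every name in them is invented for this example:

```python
import torch
import torch.nn as nn

def prunable(model):
    """Weight tensors considered for pruning (biases are left alone in this sketch)."""
    return {n: p for n, p in model.named_parameters() if p.dim() > 1}

def magnitude_scores(model):
    return {n: p.detach().abs() for n, p in prunable(model).items()}

def gradient_scores(model, data, target):
    """|weight * gradient|, with gradients taken on one batch of inputs."""
    model.zero_grad()
    nn.functional.cross_entropy(model(data), target).backward()
    return {n: (p.detach() * p.grad).abs() for n, p in prunable(model).items()}

def masks_from_scores(scores, fraction, globally=True):
    """Prune the lowest-scoring `fraction`, comparing scores network-wide or per layer."""
    if globally:
        flat = torch.cat([s.flatten() for s in scores.values()])
        t = flat.kthvalue(max(int(fraction * flat.numel()), 1)).values
        return {n: (s > t).float() for n, s in scores.items()}
    return {n: (s > s.flatten().kthvalue(max(int(fraction * s.numel()), 1)).values).float()
            for n, s in scores.items()}

def random_masks(model, fraction):
    """Prune each weight independently with probability `fraction`."""
    return {n: (torch.rand_like(p) > fraction).float() for n, p in prunable(model).items()}

# Example: masks at 80% sparsity for each of the non-random baselines.
model = nn.Sequential(nn.Linear(32, 128), nn.ReLU(), nn.Linear(128, 10))
data, target = torch.randn(64, 32), torch.randint(0, 10, (64,))
global_mag  = masks_from_scores(magnitude_scores(model), 0.8, globally=True)
layer_mag   = masks_from_scores(magnitude_scores(model), 0.8, globally=False)
global_grad = masks_from_scores(gradient_scores(model, data, target), 0.8, globally=True)
layer_grad  = masks_from_scores(gradient_scores(model, data, target), 0.8, globally=False)
```

The only thing that changes between the four non-random baselines is the score and whether the threshold is computed over the whole network or within each layer, which is what makes them useful reference points for new methods.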
Magnitude-based approaches are common baselines in the literature and have been shown to be competitive with more —e Global Weight —t— Layer Weight = Global Gradient — Layer Gradient 1 2 4 8 16 1 2 4 8 16 32 Compression Ratio Theoretical Speedup Figure 6: Top 1 Accuracy for ResNet-18 on ImageNet for several compression ratios and their corresponding theoretical speedups. Global methods give higher accu- racy than Layerwise ones for a fixed model size, but the reverse is true for a fixed theoretical speedup. Results Vary Across Models, Datasets, and Pruning Amounts Many methods report results on only a small number of datasets, models, amounts of pruning, and ran- dom seeds. If the relative performance of different methods tends to be constant across all of these variables, this may What is the State of Neural Network Pruning? not be problematic. However, our results suggest that this performance is not constant. Figure 7 shows the accuracy for various compression ra- tios for CIFAR-VGG (Zagoruyko, 2015) and ResNet-56 on CIFAR-10. In general, Global methods are more accu- rate than Layerwise methods and Magnitude-based meth- ods are more accurate than Gradient-based methods, with random performing worst of all. However, if one were to look only at CIFAR-VGG for compression ratios smaller than 10, one could conclude that Global Gradient outper- forms all other methods. Similarly, while Global Gradient consistently outperforms Layerwise Magnitude on CIFAR- VGG, the opposite holds on ResNet-56 (i.e., the orange and green lines switch places). Moreover, we found that for some settings close to the drop-off point (such as Global Gradient, compression 16), different random seeds yielded significantly different re- sults (0.88 vs 0.61 accuracy) due to the randomness in minibatch selection. This is illustrated by the large verti- cal error bar in the left subplot. CIFAR-VGG CIFAR-VGG ResNet-56 0.9 0.8 fs) g 307 —e— Global Weight Bg —t- Layer Weight —® Global Gradient 0.6 — Layer Gradient —#-— Random 0.5 1 2 4 8 16 32 1 2 4 8 16 32 Compression Ratio Compression Ratio Figure 7: Top 1 Accuracy on CIFAR-10 for several com- pression ratios. Global Gradient performs better than Global Magnitude for CIFAR-VGG on low compression ratios, but worse otherwise. Global Gradient is con- sistently better than Layerwise Magnitude on CIFAR- VGG, but consistently worse on ResNet-56. Using the Same Initial Model is Essential. As men- tioned in Section 4.5, many methods are evaluated using different initial models with the same architecture. To as- sess whether beginning with a different model can skew the results, we created two different models and evaluated Global vs Layerwise Magnitude pruning on each with all other variables held constant. Absolute Relative 0.9 0.8 8 5 ‘e 907 \ Z —® Global A ‘ -e- Global B 0.6 4. LayerA * -a- Layer B \ 0.5 ‘ 1 2 4 8 16 32 64 1 2 4 8 16 32 64 Compression Ratio Compression Ratio Figure 8: Global and Layerwise Magnitude Pruning on two different ResNet-56 models. Even with all other variables held constant, different initial models yield different tradeoff curves. This may cause one method to erroneously appear better than another. Controlling for initial accuracy does not fix this. We also found that the common practice of examining changes in accuracy is insufficient to correct for initial model as a confounder. Even when reporting changes, one pruning method can artificially appear better than another by virtue of beginning with a different model. 
We see this on the right side of Figure 8, where Layerwise Magnitude with Weights B appears to outperform Global Magnitude with Weights A, even though the former never outperforms the latter when initial model is held constant. # 8 CONCLUSION Considering the enormous interest in neural network prun- ing over the past decade, it seems natural to ask simple questions about the relative efficacy of different pruning techniques. Although a few basic findings are shared across the literature, missing baselines and inconsistent experi- mental settings make it impossible to assess the state of the art or confidently compare the dozens of techniques proposed in recent years. After carefully studying the literature and enumerating numerous areas of incompa- rability and confusion, we suggest concrete remedies in the form of a list of best practices and an open-source library—ShrinkBench—to help future research endeavors to produce the kinds of results that will harmonize the lit- erature and make our motivating questions easier to an- swer. Furthermore, ShrinkBench results on various pruning techniques evidence the need for standardized experiments when evaluating neural network pruning methods. To obtain the models, we trained two ResNet-56 networks using Adam until convergence with η = 10−3 and η = 10−4. We’ll refer to these pretrained weights as Weights A and Weights B, respectively. As shown on the left side of Figure 8, the different methods appear better on differ- ent models. With Weights A, the methods yield similar absolute accuracies. With Weights B, however, the Global method is more accurate at higher compression ratios. # ACKNOWLEDGEMENTS We thank Luigi Celona for providing the data used in (Bianco et al., 2018) and Vivienne Sze for helpful discus- sion. This research was supported by the Qualcomm Inno- vation Fellowship, the “la Caixa” Foundation Fellowship, Quanta Computer, and Wistron Corporation. What is the State of Neural Network Pruning? # REFERENCES What’s the advantage of the reference caffenet in com- parison with the alexnet? https://github.com/ BVLC/caffe/issues/4202, 5 2016. Accessed: 2019-07-22. Frankle, J. and Carbin, M. The lottery ticket hypothesis: Finding sparse, trainable neural networks. In 7th Inter- national Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019. OpenRe- view.net, 2019. URL https://openreview.net/ forum?id=rJl-b3RcF7. Keras exported model shows very low accuracy in https://github.com/ 2017. tensorflow serving. keras-team/keras/issues/7848, Accessed: 2019-07-22. 9 Frankle, J., Dziugaite, G. K., Roy, D. M., and Carbin, M. The lottery ticket hypothesis at scale. arXiv preprint arXiv:1903.01611, 2019. Bianco, S., Cadene, R., Celona, L., and Napoletano, P. Benchmark analysis of representative deep neural net- work architectures. IEEE Access, 6:64270–64277, 2018. Blalock, D., Madden, S., and Guttag, J. Sprintz: Time se- ries compression for the internet of things. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiq- uitous Technologies, 2(3):93, 2018. Changpinyo, S., Sandler, M., and Zhmoginov, A. The power of sparsity in convolutional neural networks. arXiv preprint arXiv:1702.06257, 2017. Choi, Y., El-Khamy, M., and Lee, J. Jointly sparse convolu- tional neural networks in dual spatial-winograd domains. arXiv preprint arXiv:1902.08192, 2019. Crall, J. Accuracy of resnet50 is much higher than https://github.com/kuangliu/ reported! pytorch-cifar/issues/45, 2018. Accessed: 2019-07-22. 
Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., and Fei-Fei, L. Imagenet: A large-scale hierarchical image database. In 2009 IEEE conference on computer vision and pattern recognition, pp. 248–255. Ieee, 2009. Ding, X., Ding, G., Han, J., and Tang, S. Auto-balanced fil- ter pruning for efficient convolutional neural networks. In Thirty-Second AAAI Conference on Artificial Intelli- gence, 2018. Dong, X., Huang, J., Yang, Y., and Yan, S. More is less: A more complicated network with less inference complex- ity. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 5840–5848, 2017. Gale, T., Elsen, E., and Hooker, S. The state of sparsity in deep neural networks, 2019. Gray, S., Radford, A., and Kingma, D. P. Gpu kernels for block-sparse weights. arXiv preprint arXiv:1711.09224, 2017. Han, S., Pool, J., Tran, J., and Dally, W. Learning both weights and connections for efficient neural network. In Advances in neural information processing systems, pp. 1135–1143, 2015. Han, S., Mao, H., and Dally, W. J. Deep compression: Compressing deep neural network with pruning, trained quantization and huffman coding. In Bengio, Y. and Le- Cun, Y. (eds.), 4th International Conference on Learn- ing Representations, ICLR 2016, San Juan, Puerto Rico, May 2-4, 2016, Conference Track Proceedings, 2016. URL http://arxiv.org/abs/1510.00149. Hassibi, B., Stork, D. G., and Wolff, G. J. Optimal brain In IEEE inter- surgeon and general network pruning. national conference on neural networks, pp. 293–299. IEEE, 1993. He, K., Zhang, X., Ren, S., and Sun, J. Deep residual learn- ing for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770–778, 2016a. He, K., Zhang, X., Ren, S., and Sun, J. Identity mappings in deep residual networks. In European conference on computer vision, pp. 630–645. Springer, 2016b. He, Y., Zhang, X., and Sun, J. Channel pruning for accel- erating very deep neural networks. In Proceedings of the IEEE International Conference on Computer Vision, pp. 1389–1397, 2017. Dubey, A., Chatterjee, M., and Ahuja, N. Coreset-based neural network compression. In Proceedings of the Eu- ropean Conference on Computer Vision (ECCV), pp. 454–470, 2018. He, Y., Kang, G., Dong, X., Fu, Y., and Yang, Y. Soft filter pruning for accelerating deep convolutional neural In IJCAI International Joint Conference on networks. Artificial Intelligence, 2018a. Figurnov, M., Ibraimova, A., Vetrov, D. P., and Kohli, P. Perforatedcnns: Acceleration through elimination of re- dundant convolutions. In Advances in Neural Informa- tion Processing Systems, pp. 947–955, 2016. He, Y., Lin, J., Liu, Z., Wang, H., Li, L.-J., and Han, S. Amc: Automl for model compression and accelera- tion on mobile devices. In Proceedings of the European What is the State of Neural Network Pruning? Conference on Computer Vision (ECCV), pp. 784–800, 2018b. Huang, Y., Cheng, Y., Chen, D., Lee, H., Ngiam, J., Le, Q. V., and Chen, Z. Gpipe: Efficient training of gi- ant neural networks using pipeline parallelism. arXiv preprint arXiv:1811.06965, 2018. Huang, Z. and Wang, N. Data-driven sparse structure se- lection for deep neural networks. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 304–320, 2018. Lebedev, V. and Lempitsky, V. Fast convnets using group- wise brain damage. In Proceedings of the IEEE Confer- ence on Computer Vision and Pattern Recognition, pp. 2554–2564, 2016. I., and Lempitsky, V. 
Speeding-up convolutional neu- ral networks using fine-tuned cp-decomposition. arXiv preprint arXiv:1412.6553, 2014. LeCun, Y., Denker, J. S., and Solla, S. A. Optimal brain damage. In Advances in neural information processing systems, pp. 598–605, 1990. Janowsky, S. A. Pruning versus clipping in neural net- works. Physical Review A, 39(12):6600–6603, June 1989. ISSN 0556-2791. doi: 10.1103/PhysRevA.39. 6600. URL https://link.aps.org/doi/10. 1103/PhysRevA.39.6600. LeCun, Y., Bottou, L., Bengio, Y., Haffner, P., et al. Gradient-based learning applied to document recogni- the IEEE, 86(11):2278–2324, tion. 1998a. Jia, Y., Shelhamer, E., Donahue, J., Karayev, S., Long, lenet. J., Girshick, R., Guadarrama, S., and Darrell, T. https://github.com/BVLC/caffe/blob/ master/examples/mnist/lenet.prototxt, 2 2015a. Jia, Y., Shelhamer, E., Donahue, J., Karayev, S., Long, lenet- J., Girshick, R., Guadarrama, S., and Darrell, T. train-test. https://github.com/BVLC/caffe/ blob/master/examples/mnist/lenet_ train_test.prototxt, 2 2015b. Jia, Y., Shelhamer, E., Donahue, J., Karayev, S., and Training lenet on mnist with caffe. Long, Darrell, T. https://caffe.berkeleyvision.org/ gathered/examples/mnist.html, Accessed: 2019-07-22. J., Girshick, R., Guadarrama, S., 5 2016. LeCun, Y., Cortes, C., and Burges, C. The mnist database of handwritten digits, 1998b. Accessed: 2019-09-6. Lee, N., Ajanthan, T., Gould, S., and Torr, P. H. S. A Signal Propagation Perspective for Pruning Neu- ral Networks at Initialization. arXiv:1906.06307 [cs, stat], June 2019a. URL http://arxiv.org/abs/ 1906.06307. arXiv: 1906.06307. Lee, N., Ajanthan, T., and Torr, P. H. S. Snip: single- shot network pruning based on connection sensitivity. In 7th International Conference on Learning Represen- tations, ICLR 2019, New Orleans, LA, USA, May 6- 9, 2019. OpenReview.net, 2019b. URL https:// openreview.net/forum?id=B1VZqjAcYX. Li, H., Kadav, A., Durdanovic, I., Samet, H., and Graf, H. P. Pruning filters for efficient convnets. arXiv preprint arXiv:1608.08710, 2016. Jogeshwar, A. Validating resnet50. https://github. 12 com/keras-team/keras/issues/8672, 2017. Accessed: 2019-07-22. Kalchbrenner, N., Elsen, E., Simonyan, K., Noury, S., Casagrande, N., Lockhart, E., Stimberg, F., Oord, A. v. d., Dieleman, S., and Kavukcuoglu, K. Efficient neu- ral audio synthesis. arXiv preprint arXiv:1802.08435, 2018. Karnin, E. D. A simple procedure for pruning back- propagation trained neural networks. IEEE transactions on neural networks, 1(2):239–242, 1990. Lindstrom, P. Fixed-rate compressed floating-point arrays. IEEE transactions on visualization and computer graph- ics, 20(12):2674–2683, 2014. Liu, Z., Li, J., Shen, Z., Huang, G., Yan, S., and Zhang, C. Learning efficient convolutional networks through net- In Proceedings of the IEEE Interna- work slimming. tional Conference on Computer Vision, pp. 2736–2744, 2017. Liu, Z., Sun, M., Zhou, T., Huang, G., and Darrell, T. Re- thinking the value of network pruning. In 7th Interna- tional Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019. OpenRe- view.net, 2019. URL https://openreview.net/ forum?id=rJlnB3C5Ym. Kim, Y.-D., Park, E., Yoo, S., Choi, T., Yang, L., and Shin, D. Compression of deep convolutional neural net- works for fast and low power mobile applications. arXiv preprint arXiv:1511.06530, 2015. Louizos, C., Ullrich, K., and Welling, M. Bayesian com- pression for deep learning. In Advances in Neural Infor- mation Processing Systems, pp. 3288–3298, 2017. 
Luo, J.-H., Wu, J., and Lin, W. ThiNet: A filter level pruning method for deep neural network compression. In Proceedings of the IEEE International Conference on Computer Vision, pp. 5058–5066, 2017.
Mariet, Z. and Sra, S. Diversity networks: Neural network compression using determinantal point processes. arXiv preprint arXiv:1511.05077, 2015.
Molchanov, D., Ashukha, A., and Vetrov, D. Variational dropout sparsifies deep neural networks. In Proceedings of the 34th International Conference on Machine Learning, pp. 2498–2507. JMLR.org, 2017.
Molchanov, P., Tyree, S., Karras, T., Aila, T., and Kautz, J. Pruning convolutional neural networks for resource efficient inference. arXiv preprint arXiv:1611.06440, 2016.
Morcos, A. S., Yu, H., Paganini, M., and Tian, Y. One ticket to win them all: generalizing lottery ticket initializations across datasets and optimizers. arXiv:1906.02773 [cs, stat], June 2019.
Mozer, M. C. and Smolensky, P. Skeletonization: A technique for trimming the fat from a network via relevance assessment. In Advances in Neural Information Processing Systems, pp. 107–115, 1989a.
Mozer, M. C. and Smolensky, P. Using relevance to reduce network size automatically. Connection Science, 1(1):3–16, January 1989b. doi: 10.1080/09540098908915626.
Nola, D. Keras doesn't reproduce caffe example code accuracy. https://github.com/keras-team/keras/issues/4444, 11 2016. Accessed: 2019-07-22.
Northcutt, C. Towards reproducibility: Benchmarking Keras and PyTorch. https://l7.curtisnorthcutt.com/towards-reproducibility-benchmarking-keras-pytorch, 2 2019. Accessed: 2019-07-22.
Parkhi, O. M., Vedaldi, A., Zisserman, A., et al. Deep face recognition. In BMVC, volume 1, pp. 6, 2015.
Ratanaworabhan, P., Ke, J., and Burtscher, M. Fast lossless compression of scientific floating-point data. In Data Compression Conference (DCC'06), pp. 133–142. IEEE, 2006.
Reed, R. Pruning algorithms—a survey. IEEE Transactions on Neural Networks, 4(5):740–747, September 1993. doi: 10.1109/72.248452. URL http://ieeexplore.ieee.org/document/248452/.
Siedelmann, H., Wender, A., and Fuchs, M. High speed lossless image compression. In German Conference on Pattern Recognition, pp. 343–355. Springer, 2015.
Simonyan, K. and Zisserman, A. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.
Suau, X., Zappella, L., and Apostoloff, N. Network compression using correlation analysis of layer responses. 2018.
Suzuki, T., Abe, H., Murata, T., Horiuchi, S., Ito, K., Wachi, T., Hirai, S., Yukishima, M., and Nishimura, T. Spectral-pruning: Compressing deep neural network via spectral analysis. arXiv preprint arXiv:1808.08558, 2018.
Sze, V., Chen, Y.-H., Yang, T.-J., and Emer, J. Efficient processing of deep neural networks: A tutorial and survey. arXiv preprint arXiv:1703.09039, 2017.
Tan, M. and Le, Q. V. EfficientNet: Rethinking model scaling for convolutional neural networks. arXiv preprint arXiv:1905.11946, 2019.
Tresp, V., Neuneier, R., and Zimmermann, H.-G. Early brain damage. In Advances in Neural Information Processing Systems, pp. 669–675, 1997.
Vryniotis, V. Change BN layer to use moving mean/var if frozen. https://github.com/keras-team/keras/pull/9965, 4 2018. Accessed: 2019-07-22.
Wang, P. and Cheng, J. Accelerating convolutional neural networks for mobile applications. In Proceedings of the 24th ACM International Conference on Multimedia, pp.
541–545. ACM, 2016.
Paszke, A., Gross, S., Chintala, S., Chanan, G., Yang, E., DeVito, Z., Lin, Z., Desmaison, A., Antiga, L., and Lerer, A. Automatic differentiation in PyTorch. 2017.
Peng, B., Tan, W., Li, Z., Zhang, S., Xie, D., and Pu, S. Extreme network compression via filter group approximation. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 300–316, 2018.
Wen, W., Wu, C., Wang, Y., Chen, Y., and Li, H. Learning structured sparsity in deep neural networks. In Advances in Neural Information Processing Systems, pp. 2074–2082, 2016.
Yamamoto, K. and Maeno, K. PCAS: Pruning channels with attention statistics. arXiv preprint arXiv:1806.05382, 2018.
Yang, T.-J., Chen, Y.-H., and Sze, V. Designing energy-efficient convolutional neural networks using energy-aware pruning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 5687–5695, 2017.
Yao, Z., Cao, S., and Xiao, W. Balanced sparsity for efficient DNN inference on GPU. arXiv preprint arXiv:1811.00206, 2018.
Yu, R., Li, A., Chen, C.-F., Lai, J.-H., Morariu, V. I., Han, X., Gao, M., Lin, C.-Y., and Davis, L. S. NISP: Pruning networks using neuron importance score propagation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 9194–9203, 2018.
Zagoruyko, S. 92.45% on cifar-10 in torch. https://torch.ch/blog/2015/07/30/cifar.html, 7 2015. Accessed: 2019-07-22.
Zhang, X., Zou, J., He, K., and Sun, J. Accelerating very deep convolutional networks for classification and detection. IEEE Transactions on Pattern Analysis and Machine Intelligence, 38(10):1943–1955, 2015.
Zhao, W. X., Zhang, X., Lemire, D., Shan, D., Nie, J.-Y., Yan, H., and Wen, J.-R. A general SIMD-based approach to accelerating compression algorithms. ACM Transactions on Information Systems (TOIS), 33(3):15, 2015.
Zukowski, M., Heman, S., Nes, N., and Boncz, P. Super-scalar RAM-CPU cache compression. In Data Engineering, 2006. ICDE'06. Proceedings of the 22nd International Conference on, pp. 59–59. IEEE, 2006.

# A CORPUS AND DATA CLEANING

We selected the 81 papers used in our analysis in the following way. First, we conducted an ad hoc literature search, finding widely cited papers introducing pruning methods and identifying other pruning papers that cited them using Google Scholar. We then went through the conference proceedings from the past year's NeurIPS, ICML, CVPR, ECCV, and ICLR and added all relevant papers (though it is possible we had false dismissals if the title and abstract did not seem relevant to pruning). Finally, during the course of cataloging which papers compared to which others, we added to our corpus any pruning paper that at least one existing paper in our corpus purported to compare to. We included both published papers and unpublished ones of reasonable quality (typically on arXiv). Since we make strong claims about the lack of comparisons, we included in our corpus five papers whose methods technically do not meet our definition of pruning but are similar in spirit and compared to by various pruning papers. In short, we included essentially every paper introducing a method of pruning neural networks that we could find, taking care to capture the full directed graph of papers and comparisons between them.
Because different papers report slightly different metrics, particularly with respect to model size, we converted reported results to a standard set of metrics whenever possible. For example, we converted reported Top-1 error rates to Top-1 accuracies, and fractions of parameters pruned to compression ratios. Note that it is not possible to convert between size metrics and speedup metrics, since the amount of computation associated with a given parameter can depend on the layer in which it resides (since convolutional filters are reused at many spatial positions). For simplicity and uniformity, we only consider self-reported results except where stated otherwise.

We also did not attempt to capture all reported metrics, but instead focused only on model size reduction and theoretical speedup, since 1) these are by far the most commonly reported and, 2) there is already a dearth of directly comparable numbers even for these common metrics. This is not entirely fair to methods designed to optimize other metrics, such as power consumption (Louizos et al., 2017; Yang et al., 2017; Han et al., 2015; Kim et al., 2015), memory bandwidth usage (Peng et al., 2018; Kim et al., 2015), or fine-tuning time (Dubey et al., 2018; Yamamoto & Maeno, 2018; Huang & Wang, 2018; He et al., 2018a), and we consider this a limitation of our analysis.

Lastly, as a result of relying on reading of hundreds of pages of dense technical content, we are confident that we have made some number of isolated errors. We therefore welcome correction by email and refer the reader to the arXiv version of this paper for the most up-to-date revision.

# B CHECKLIST FOR EVALUATING A PRUNING METHOD

For any pruning technique proposed, check if:

• It is contextualized with respect to magnitude pruning, recently-published pruning techniques, and pruning techniques proposed prior to the 2010s.
• The pruning algorithm, constituent subroutines (e.g., score, pruning, and fine-tuning functions), and hyperparameters are presented in enough detail for a reader to reimplement and match the results in the paper.
• Claims about the technique are appropriately restricted to only the experiments presented (e.g., CIFAR-10, ResNets, image classification tasks, etc.).
• There is a link to downloadable source code.

For all experiments, check if you include:

• A detailed description of the architecture with hyperparameters in enough detail for a reader to reimplement it and train it to the same performance reported in the paper.
• If the architecture is not novel: a citation for the architecture/hyperparameters and a description of any differences in architecture, hyperparameters, or performance in this paper.
• A detailed description of the dataset hyperparameters (e.g., batch size and augmentation regime) in enough detail for a reader to reimplement it.
• A description of the library and hardware used.

For all results, check if:

• Data is presented across a range of compression ratios, including extreme compression ratios at which the accuracy of the pruned network declines substantially.
• Data specifies the raw accuracy of the network at each point.
• Data includes multiple runs with separate initializations and random seeds.
• Data includes clearly defined error bars and a measure of central tendency (e.g., mean) and variation (e.g., standard deviation).
• Data includes FLOP-counts if the paper makes arguments about efficiency and performance due to pruning.

For all pruning results presented, check if there is a comparison to:

• A random pruning baseline.
  – A global random pruning baseline.
  – A random pruning baseline with the same layerwise pruning proportions as the proposed technique.
• A magnitude pruning baseline.
  – A global or uniform layerwise proportion magnitude pruning baseline.
  – A magnitude pruning baseline with the same layerwise pruning proportions as the proposed technique.
• Other relevant state-of-the-art techniques, including:
  – A description of how the comparisons were produced (data taken from paper, reimplementation, or reuse of code from the paper) and any differences or uncertainties between this setting and the setting used in the main experiments.

# C EXPERIMENTAL SETUP

For reproducibility purposes, ShrinkBench fixes random seeds for all the dependencies (PyTorch, NumPy, Python).

# C.1 Pruning Methods

For the reported experiments, we did not prune the classifier layer preceding the softmax. ShrinkBench supports pruning said layer as an option to all proposed pruning strategies. For both Global and Layerwise Gradient Magnitude Pruning, a single minibatch is used to compute the gradients for the pruning. Three independent runs using different random seeds were performed for every CIFAR10 experiment. We found some variance across methods that relied on randomness, such as random pruning or gradient-based methods that use a sampled minibatch to compute the gradients with respect to the weights.

# C.2 Finetuning Setup

Pruning was performed from the pretrained weights and fixed from there forwards. Early stopping is implemented during finetuning: if the validation accuracy repeatedly decreases after some point, we stop the finetuning process to prevent overfitting. (A short code sketch of this setup is included after Appendix D below.)

All reported CIFAR10 experiments used the following finetuning setup:

• Batch size: 64
• Epochs: 30
• Optimizer: Adam
• Initial Learning Rate: 3 × 10−4
• Learning rate schedule: Fixed

All reported ImageNet experiments used the following finetuning setup:

• Batch size: 256
• Epochs: 20
• Optimizer: SGD with Nesterov Momentum (0.9)
• Initial Learning Rate: 1 × 10−3
• Learning rate schedule: Fixed

# D ADDITIONAL RESULTS

Here we include the entire set of results obtained with ShrinkBench. For CIFAR10, results are included for CIFAR-VGG, ResNet-20, ResNet-56 and ResNet-110. Standard deviations across three different random runs are plotted as error bars. For ImageNet, results are reported for ResNet-18.
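For concreteness, the following is a minimal sketch of the finetuning configurations described in Appendix C.2, written with standard PyTorch APIs. The model, data loaders, and early-stopping patience value are placeholders for this sketch rather than details taken from ShrinkBench.

```python
import torch
from torch import nn, optim

# Hyperparameters from Appendix C.2.
CIFAR10_CFG = dict(batch_size=64, epochs=30, opt="adam", lr=3e-4)
IMAGENET_CFG = dict(batch_size=256, epochs=20, opt="sgd", lr=1e-3)

def make_optimizer(model, cfg):
    # Fixed learning-rate schedule: no scheduler is attached.
    if cfg["opt"] == "adam":
        return optim.Adam(model.parameters(), lr=cfg["lr"])
    return optim.SGD(model.parameters(), lr=cfg["lr"], momentum=0.9, nesterov=True)

def evaluate(model, loader, device="cpu"):
    model.eval()
    correct = total = 0
    with torch.no_grad():
        for x, y in loader:
            pred = model(x.to(device)).argmax(dim=1).cpu()
            correct += (pred == y).sum().item()
            total += y.numel()
    return correct / total

def finetune(model, train_loader, val_loader, cfg, patience=3, device="cpu"):
    """Finetune a pruned model, stopping early if validation accuracy keeps dropping."""
    criterion = nn.CrossEntropyLoss()
    optimizer = make_optimizer(model, cfg)
    best_acc, bad_epochs = 0.0, 0
    for _ in range(cfg["epochs"]):
        model.train()
        for x, y in train_loader:
            x, y = x.to(device), y.to(device)
            optimizer.zero_grad()
            loss = criterion(model(x), y)
            loss.backward()
            optimizer.step()
        acc = evaluate(model, val_loader, device)
        if acc > best_acc:
            best_acc, bad_epochs = acc, 0
        else:
            bad_epochs += 1
            if bad_epochs >= patience:  # early stopping to prevent overfitting
                break
    return best_acc
```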
Each plot below compares the Global Weight, Layer Weight, Global Gradient, Layer Gradient, and Random pruning strategies (accuracy vs. compression ratio or theoretical speedup).

Figure 9: Accuracy for several levels of compression for CIFAR-VGG on CIFAR-10.
Figure 10: Accuracy vs theoretical speedup for CIFAR-VGG on CIFAR-10.
Figure 11: Accuracy for several levels of compression for ResNet-20 on CIFAR-10.
Figure 12: Accuracy vs theoretical speedup for ResNet-20 on CIFAR-10.
Figure 13: Accuracy for several levels of compression for ResNet-56 on CIFAR-10.
Figure 14: Accuracy vs theoretical speedup for ResNet-56 on CIFAR-10.
Figure 15: Accuracy for several levels of compression for ResNet-110 on CIFAR-10.
Figure 16: Accuracy vs theoretical speedup for ResNet-110 on CIFAR-10.
Figure 17: Accuracy for several levels of compression for ResNet-18 on ImageNet.
Figure 18: Accuracy vs theoretical speedup for ResNet-18 on ImageNet.
{ "id": "1702.06257" }
2003.02232
Interactive Robot Training for Non-Markov Tasks
Defining sound and complete specifications for robots using formal languages is challenging, while learning formal specifications directly from demonstrations can lead to over-constrained task policies. In this paper, we propose a Bayesian interactive robot training framework that allows the robot to learn from both demonstrations provided by a teacher, and that teacher's assessments of the robot's task executions. We also present an active learning approach -- inspired by uncertainty sampling -- to identify the task execution with the most uncertain degree of acceptability. Through a simulated experiment, we demonstrate that our active learning approach identifies a teacher's intended task specification with an equivalent or greater similarity when compared to an approach that learns purely from demonstrations. Finally, we demonstrate the efficacy of our approach in a real-world setting through a user-study based on teaching a robot to set a dinner table.
http://arxiv.org/pdf/2003.02232
Ankit Shah, Samir Wadhwania, Julie Shah
cs.RO, cs.AI
null
null
cs.RO
20200304
20201128
# Interactive Robot Training for Non-Markov Tasks

# Ankit Shah1, Samir Wadhwania2, Julie Shah3

Abstract— Defining sound and complete specifications for robots using formal languages is challenging, while learning formal specifications directly from demonstrations can lead to over-constrained task policies. In this paper, we propose a Bayesian interactive robot training framework that allows the robot to learn from both demonstrations provided by a teacher, and that teacher's assessments of the robot's task executions. We also present an active learning approach – inspired by uncertainty sampling – to identify the task execution with the most uncertain degree of acceptability. Through a simulated experiment, we demonstrate that our active learning approach identifies a teacher's intended task specification with an equivalent or greater similarity when compared to an approach that learns purely from demonstrations. Finally, we demonstrate the efficacy of our approach in a real-world setting through a user study based on teaching a robot to set a dinner table.

# I. INTRODUCTION

Humans are adept at quickly learning to perform multi-step tasks like setting a dinner table, clearing a desk, or assembling furniture. Tasks such as these typically involve temporal elements like adherence to constraints or decomposition into and prioritization of sub-tasks. Linear temporal logic (LTL) [1] provides an expressive grammar for modeling a range of such non-Markov temporal properties; however, formal languages like LTL are often unwieldy for the average user. In order to facilitate rapid deployment of capable robots to novel scenarios and tasks, it is desirable to allow users with task-specific expertise to directly program robots.

There has been a considerable amount of research related to inferring formal specifications through intuitive interfaces such as demonstrations [2], [3] and preferences expressed as natural language instructions [4], [5]. To resolve the ambiguity associated with these teaching modalities, we proposed planning with uncertain specifications (PUnS) [6], a framework for generating task plans wherein specifications are expressed as a belief (P(ϕ)) over LTL formulas. However, policies computed to optimize the PUnS criteria generate task executions that attempt to satisfy a large number of candidate formulas, potentially over-constraining task execution. In this paper, we demonstrate that the belief over LTL formulas can also serve to identify task executions with an uncertain degree of acceptability. These executions can then be demonstrated back to the user to elicit an assessment of their acceptability, which in turn can reduce the uncertainty of the distribution.

We also propose an active querying strategy for identifying and performing such ambiguous task executions, and evaluate the performance of this active learning approach compared with learning purely from demonstrations (termed Batch) and another interactive approach wherein task executions are generated by performance of random actions (termed Random).

1Ankit Shah is a PhD candidate at the Computer Science and Artificial Intelligence Laboratory at the Massachusetts Institute of Technology. [email protected]
Through results obtained from a simulation experiment, we demonstrate that our proposed method yields posterior belief distributions with higher similarity to the ground truth specification as compared to the Batch and Random approaches for a wide range of ground truth task specifications. Finally, we conducted a user study involving training a robot to set a dinner table using our active learning approach, with demonstrations provided either in-person or by remotely operating the robot. Our findings indicate the efficacy of our active learning approach for learning task specifications that are well aligned with the ground truth specifications (average similarity: 0.86, 95% CI [0.82, 0.92]).

# II. RELATED WORK

The objective of allowing domain experts to directly program robots has driven research into methods for programming through intuitive modalities. Prior research has yielded models for learning a teacher's intended task by processing input provided by the teacher through demonstrations [7], [8], natural language instructions [9], [4], [5], corrections [10], [11], or preferences [12], [13], [14].

One key feature of our approach is the ability to model temporal tasks by using LTL as the specification language. Vazquez-Chanlatte et al. [15] proposed observing the demonstrated task execution given the true specification as a maximum entropy estimator. Kasenberg and Scheutz [16] proposed an optimization-based framework for modeling a decision maker's behavior as an LTL formula. Camacho et al. [17] developed an exact method for mining the shortest LTL-f (finite) formula based on sets of satisfying and non-satisfying traces. Shah et al. [2] proposed a Bayesian approach to specification inference to model the uncertainty associated with inferring task specifications from a small number of demonstrations. While most of the previous work on learning non-Markov task specifications has focused on learning solely from a teacher's demonstrations, in this paper, we adopt an iterative Bayesian approach that unifies the teacher's input provided via demonstrations or as assessments of the robot's task executions.

2Samir Wadhwania is a PhD student at the Computer Science and Artificial Intelligence Laboratory at the Massachusetts Institute of Technology. [email protected]
3Julie Shah is an associate professor at the Massachusetts Institute of Technology.

There has also been considerable interest in developing algorithms that allow the learner to elicit the teacher's feedback (an active learning paradigm). One expected benefit of an active approach is that the learner can guide the teacher's feedback such that it is optimally impactful to the learner's own behavior. Cakmak et al. [18], [19] developed a taxonomy of queries that allow a learner to refine its understanding of the task specifications. Sadigh et al. [12] proposed an active learning framework for sequential decision-making problems that relies upon pairwise preferences between candidate trajectories selected according to a maximum volume removal heuristic; Biyik et al. [14] then extended this to generate queries using a maximum information gain criterion. Biyik and Sadigh [13] proposed a batch active framework for preference-based learning wherein multiple queries are generated simultaneously instead of one at a time.
Cui and Niekum [20] proposed an active learning model based on information gain that operates on individual state-action pairs, allowing segments of the trajectory to be labeled "desirable" and "undesirable." However, present research into active learning for robotics has largely focused on formulations that represent the underlying task as a Markov decision process (MDP), with the state space known a priori. Admitting non-Markov task specifications would increase the robot's ability to handle complex tasks. Therefore, prior research has led to the development of planning algorithms for hybrid controller synthesis [21], symbolic planning [22], [23], and reinforcement learning [24], [25], [26], [27]. In this paper, we build upon planning with uncertain specifications (PUnS) [6], a problem formulation that allows task specifications to be expressed as a belief over multiple LTL formulas. Policies computed to optimize the PUnS evaluation criteria satisfy the entire belief distribution rather than a single LTL formula, allowing the learner to reconcile the ambiguity inherent in the teacher's demonstrations. Our proposed extension leverages the reward machine [27] representing the learner's belief over LTL formulas in order to identify a task execution suitable for active learning.

Our contribution in this paper is twofold. First, we propose a novel interactive learning framework (Figure 1) for non-Markov tasks that unifies teacher inputs through demonstrations and assessments of the learner's task execution. Second, we develop an active learning approach that leverages the reward machine representation of an instance of a PUnS problem to identify task executions with the most uncertain degree of acceptability.

# III. PRELIMINARIES

A. Linear Temporal Logic

Linear temporal logic (LTL), first proposed by Pnueli [1], provides a flexible grammar for defining temporal properties over Boolean propositions. A valid LTL formula is constructed using atomic propositions (discrete time sequences of Boolean values) and logical and temporal operators. The truth value of an LTL formula is evaluated for traces [α] of a set of atomic propositions, α. The notation [α], t |= ϕ indicates that formula ϕ holds at time t. Trace [α] satisfies ϕ (denoted as [α] |= ϕ) iff [α], 0 |= ϕ. The minimal syntax of LTL is as follows:

ϕ ::= p | ¬ϕ1 | ϕ1 ∨ ϕ2 | Xϕ1 | ϕ1 U ϕ2 (1)

Here, p is an atomic proposition, and ϕ1 and ϕ2 represent valid LTL formulas. The operator X is read as "next" and Xϕ1 evaluates as true at t if ϕ1 holds at t + 1. The operator U is read as "until" and ϕ1 U ϕ2 evaluates as true at t1 if ϕ2 holds at some time t2 > t1 and ϕ1 holds for all t, where t1 ≤ t ≤ t2. In addition to the minimal syntax, we also incorporate the conjunction operator ∧, along with two other temporal operators: F (eventually) and G (globally). Fϕ1 holds at t1 if ϕ1 holds for some time t ≥ t1; similarly, Gϕ1 holds at t1 if ϕ1 holds for all t ≥ t1.

Finally, a progression Prog(ϕ, αt) of an LTL formula with respect to a truth assignment, αt, is defined such that ∀[α] : [αt, [α]], t |= ϕ iff [α], t + 1 |= Prog(ϕ, αt). In other words, the progression of an LTL formula with respect to a particular truth assignment must hold at the next time step in order for the original formula to hold at the current time step. We use the syntactic progression rules defined by Bacchus and Kabanza [28] to compute formula progressions.
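For illustration, the following is a minimal sketch of syntactic progression for the operators used above (next, until, eventually, globally, and the Boolean connectives). The nested-tuple formula representation and the helper names are assumptions made for this sketch, not notation taken from the paper.

```python
# Formulas are nested tuples, e.g. G(not Fork) and F(Bowl) is
# ('and', ('G', ('not', 'Fork')), ('F', 'Bowl')); TRUE/FALSE are terminal markers.
TRUE, FALSE = ('true',), ('false',)

def prog(phi, alpha):
    """One-step progression Prog(phi, alpha) for a truth assignment alpha: dict prop -> bool."""
    if phi in (TRUE, FALSE):
        return phi
    if isinstance(phi, str):                      # atomic proposition
        return TRUE if alpha[phi] else FALSE
    op = phi[0]
    if op == 'not':
        return _not(prog(phi[1], alpha))
    if op == 'and':
        return _and(prog(phi[1], alpha), prog(phi[2], alpha))
    if op == 'or':
        return _or(prog(phi[1], alpha), prog(phi[2], alpha))
    if op == 'X':                                 # next: drop one step
        return phi[1]
    if op == 'F':                                 # eventually: satisfied now or later
        return _or(prog(phi[1], alpha), phi)
    if op == 'G':                                 # globally: satisfied now and later
        return _and(prog(phi[1], alpha), phi)
    if op == 'U':                                 # until: phi2 now, or phi1 now and U later
        return _or(prog(phi[2], alpha), _and(prog(phi[1], alpha), phi))
    raise ValueError(f"unknown operator {op}")

def _not(p):
    return FALSE if p == TRUE else TRUE if p == FALSE else ('not', p)

def _and(p, q):
    if FALSE in (p, q): return FALSE
    if p == TRUE: return q
    if q == TRUE: return p
    return ('and', p, q)

def _or(p, q):
    if TRUE in (p, q): return TRUE
    if p == FALSE: return q
    if q == FALSE: return p
    return ('or', p, q)
```

Progressing a formula through an entire finite trace and inspecting the residual formula (⊤, ⊥, or a safety residual such as G¬Fork) is how satisfaction of a trace is checked in the sketches that follow.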
B. Markov Decision Process

A Markov decision process (MDP) is a planning problem defined as a tuple M = (S, A, T, R), where S represents the set of all possible states; A is the set of actions available to the learner; T := P(s′ | s, a) is a probability distribution over the next state s′ ∈ S given the current state s ∈ S and the action a ∈ A executed at the current time step; and R : S → ℝ is the reward function that returns a scalar value given the current state.

IV. INTERACTIVE TRAINING FOR NON-MARKOV TASKS

A. Problem Formulation

In this setting, the teacher intends to teach a task represented by an LTL formula, ϕ*, unknown to the learner. The learner maintains a belief over candidate LTL formulas, P(ϕ); this distribution is defined as a mass function, P : {ϕ} → [0, 1]. The support of P(ϕ) is restricted to a discrete set of LTL formulas, {ϕ}, where each formula represents a property belonging to the "Obligations" class as defined by Manna and Pnueli [29]. The learner's degree of success is determined by comparing the similarity between the belief distribution and the ground truth LTL formula (Equation 15).

The learner represents the task environment as a state, x ∈ X, and also has access to a set of actions, A. The state of the system, x, maps to a set of finite known Boolean propositions, α ∈ {0, 1}^nprop, through a labeling function, f : X → {0, 1}^nprop. We assume that a trace of propositions, depicted by [α], is sufficient to determine the truth value of any formula within the support {ϕ} of the learner's belief; thus, any task execution, whether generated by the robot or demonstrated by the teacher, is represented as a trace, [α]. We also define a Boolean label, L([α]) ∈ {0, 1}, that indicates whether the given trace is acceptable. For the purposes of this paper, we assume that all task executions demonstrated by the teacher are labeled as acceptable, and that the teacher's assessment of the executions demonstrated by the learner is perfect.

Fig. 1: Our proposed Bayesian interactive learning framework that unifies learning from demonstrations provided by the teacher and using informative queries generated by the learner to refine the learner's belief. The green path depicts the teacher initiating training using task demonstrations. The orange path indicates the learner initiating training by demonstrating a task execution as a query requesting an assessment from the teacher.

B. Overview of the Interactive Learning Framework

Figure 1 depicts our proposed interactive framework for training a learner using a combination of demonstrations provided by a teacher and that teacher's assessments of task executions generated as queries by the learner. The learner must carry out two processes: learning, wherein the robot updates its belief conditioned upon labeled task executions; and planning, where it must use that belief to generate task executions. We adopt an iterative version of our prior work on Bayesian specification inference [2], and extend it to allow both positive and negative examples (as elaborated upon in Section IV-C). Formally, if the learner's initial belief over formulas is P_i(ϕ) and the learner receives a dataset of task executions and their labels, D = {([α], L([α]))}, then the learner computes an estimate of the posterior distribution, P(ϕ | D). The learner updates its belief to be the computed posterior as follows:

P_{i+1}(ϕ) ← P(ϕ | D) (2)

The learner has the ability to compute two types of policies depending upon the availability of a teacher to assess its task executions. If an assessment is unavailable, the learner computes a policy to satisfy its current belief, P_i(ϕ). (This is an instance of planning with uncertain specifications (PUnS) [6], as briefly described in Section IV-D.) The original non-Markov planning problem is compiled into an equivalent MDP representation, with a reward function representing the minimum regret criterion [6]. If a teacher's assessment is available, the learner computes a policy to generate a task execution with the most uncertain degree of acceptability as per the learner's current belief, P_i(ϕ). The teacher's assessment of this task execution would be most beneficial for reducing the learner's uncertainty about the true specification; we describe our approach to generating such an informative query in Section IV-E.

C. Bayesian Specification Inference

Bayesian specification inference [2] is a probabilistic model for using demonstrations provided by a teacher to infer LTL formulas corresponding to the task specifications. According to this model, the hypothesis space of candidate LTL formulas comprises the set of formulas corresponding to the following template, which includes conjunctions of temporal behaviors identified by Dwyer et al. [30]:

ϕ = ϕ_global ∧ ϕ_eventual ∧ ϕ_order (3)

In our previous work [2], we also proposed a domain-independent approximation of the likelihood function P([α] | ϕ) — depending upon the number of conjunctive clauses — that satisfies the size principle [31]: a restrictive hypothesis has greater likelihood than a less-restrictive hypothesis in the presence of data conforming to both. Our approach is founded upon the classical interpretation of probability championed by Laplace [32], which involves computing the probabilities in terms of equally likely outcomes. If N_conj conjunctive clauses exist within a formula, ϕ, there are 2^{N_conj} possible outcomes in terms of the truth values of the conjunctive clauses. In the absence of additional information, we assign equal probabilities to each of the potential outcomes.

Consider two candidate formulas, ϕ1 and ϕ2, with N_conj1 and N_conj2 conjunctive clauses, and [α] |= ϕ1. If this trace is considered acceptable (L([α]) = 1), the approximate likelihood odds ratio is computed as follows:

$$\frac{P(([\alpha], \mathcal{L}([\alpha]) = 1) \mid \varphi_1)}{P(([\alpha], \mathcal{L}([\alpha]) = 1) \mid \varphi_2)} = \begin{cases} 2^{N_{conj_1}} / 2^{N_{conj_2}}, & \text{if } [\alpha] \models \varphi_2 \\ 1/\varepsilon, & \text{if } [\alpha] \not\models \varphi_2 \end{cases} \quad (4)$$

If trace [α] is labeled as unacceptable (L([α]) = 0) and [α] ⊭ ϕ1, the likelihood odds ratio is computed following the classical probability interpretation as before. With N_conj1 conjunctive clauses, there are 2^{N_conj1} − 1 possible evaluations of the individual clauses that would result in the given trace not satisfying the candidate formula; thus, the likelihood odds ratio is computed as follows:

$$\frac{P(([\alpha], \mathcal{L}([\alpha]) = 0) \mid \varphi_1)}{P(([\alpha], \mathcal{L}([\alpha]) = 0) \mid \varphi_2)} = \begin{cases} (2^{N_{conj_2}} - 1) / (2^{N_{conj_1}} - 1), & \text{if } [\alpha] \not\models \varphi_2 \\ 1/\varepsilon, & \text{if } [\alpha] \models \varphi_2 \end{cases} \quad (5)$$

We assume that each data point in a given dataset D = {([α], L([α]))} is independent of the others; thus, the likelihood of the entire dataset is the product of the individual likelihoods, as follows:

$$P(D \mid \varphi) = \prod_{([\alpha]_i, \mathcal{L}([\alpha])_i) \in D} P(([\alpha]_i, \mathcal{L}([\alpha])_i) \mid \varphi) \quad (6)$$

The probabilistic model is implemented in webppl [33], a universal probabilistic programming language. The posterior is approximated using webppl's Markov chain Monte Carlo algorithm with the Metropolis-Hastings acceptance criterion.
D. Planning with Uncertain Specifications

Planning with uncertain specifications (PUnS) [6] is a formulation for planning problems wherein task specifications are known as beliefs over LTL formulas, P(ϕ). An instance of a PUnS problem is defined by the planning environment, which is encoded as an MDP sans a reward function, M_X = (X, A, T_X); a task specification represented as a belief over LTL formulas, P(ϕ), with support over a finite set of formulas, {ϕ}; and one of the four evaluation criteria proposed by us (Shah et al. [6]) for satisfying a belief over LTL formulas.

Consider a planning domain representing the task of setting a dinner table, with three objects accessible to the robot: a fork, a bowl, and a plate. The environment MDP, M_X, consists of a discrete state space, X, that encodes whether each object is correctly placed; a discrete action space, A, that encodes the selection of the object to be placed next; and the transition function, T_X, which encodes how the action selection effects a change in the state. The acceptability of the task execution is evaluated using the vector of Boolean propositions, α = [Fork, Bowl, Plate], where each proposition represents whether that object was correctly placed on the table. An example of a belief over the task specifications is represented by the distribution P(ϕ) with support {ϕ} = {ϕ1 = G ¬Fork ∧ F Bowl ∧ ¬Bowl U Plate, ϕ2 = G ¬Fork ∧ F Bowl}. The probabilities are as follows: P(ϕ1) = 0.3, and P(ϕ2) = 0.7. ϕ1 encodes the specification that the fork must never be placed, the bowl must be placed eventually, and that the bowl must not be placed until the plate has been placed; ϕ2 encodes that the fork must never be placed, and that the bowl must be placed eventually. Thus, any task execution that satisfies ϕ1 also satisfies ϕ2; however, the converse is not true. In order to perform the task to best align with this belief over specifications, one must place the plate, then the bowl, and must not place the fork. The PUnS formulation [6] formalizes this intuition.

In order to compute the policy to satisfy an instance of PUnS, we first compile the non-Markov belief P(ϕ) into an equivalent deterministic MDP, M_{ϕ}. The graphical representation of the compilation process for the dinner table example is depicted in Figure 2. Formally, M_{ϕ} = ({⟨ϕ′⟩}, {0, 1}^nprop, T_{ϕ}, R_{ϕ}), where {⟨ϕ′⟩} is the set of ordered tuples ⟨ϕ′⟩ that represent all progressions of the formulas contained in {ϕ}, and the actions represent the truth values of the propositions, α. Let ϕ′^i denote the i-th formula in the tuple ⟨ϕ′⟩; the transition function T_{ϕ} is then defined as follows:

$$T_{\{\varphi\}}(\langle \varphi'_1 \rangle, \langle \varphi'_2 \rangle, \alpha) = \begin{cases} 1, & \text{if } \varphi'^{i}_2 = Prog(\varphi'^{i}_1, \alpha)\ \forall i \\ 0, & \text{otherwise} \end{cases} \quad (7)$$

Let ⟨ϕ′⟩_term be the set of terminal states, where each of the component formulas has either been satisfied (⊤), dissatisfied (⊥), or has progressed to a safe-LTL formula. The reward function depends upon the choice of the PUnS evaluation criterion; the minimum regret criterion is linearly dependent on the probability of the task execution being acceptable as determined by the belief P(ϕ). For the minimum regret criterion, the reward function is defined as follows:

$$R_{\{\varphi\}}(\langle \varphi' \rangle) = \begin{cases} \sum_i P(\varphi^i)\, r(\varphi'^{i}), & \text{if } \langle \varphi' \rangle \in \langle \varphi' \rangle_{term} \\ 0, & \text{otherwise} \end{cases} \quad (8)$$

where r(ϕ′^i) is defined as follows:

$$r(\varphi'^{i}) = \begin{cases} 1, & \text{if } \varphi'^{i} = \top \text{ or } \varphi'^{i} \in \text{safe-LTL} \\ -1, & \text{otherwise} \end{cases} \quad (9)$$

This compiled deterministic MDP, M_{ϕ}, is then composed with M_X to obtain an MDP equivalent of the original PUnS problem, defined as follows:

$$M_{Spec} = (X \times \{\langle \varphi' \rangle\}, A, T_{Spec}, R_{\{\varphi\}}) \quad (10)$$

Here,

$$T_{Spec}((\langle \varphi'_1 \rangle, x_1), (\langle \varphi'_2 \rangle, x_2), a) = T_{\{\varphi\}}(\langle \varphi'_1 \rangle, \langle \varphi'_2 \rangle, f(x_2)) \times T_X(x_1, x_2, a) \quad (11)$$

The state space of M_Spec is generated by the outer product of the state spaces of the environment MDP M_X and the reward machine M_{ϕ}. Thus, the MDP equivalent of a PUnS problem, M_Spec, generates a problem definition compatible with reinforcement learning algorithms. (In this paper, we utilize discrete representations for state and action spaces; therefore, we use tabular Q-learning [34] to compute the policy.)

E. Determining the Query Execution

In an active learning framework, the learner generates a query that the teacher must answer by providing a label. Many strategies for generating an informative query have been proposed in prior research [35]. Our strategy is based on the uncertainty sampling approach [36], wherein a learner queries about the instance it is least certain how to label.

Fig. 2: Example compilation process with {ϕ} = {ϕ1, ϕ2} and the minimum regret criterion. The deterministic MDPs Mϕ1 and Mϕ2 are composed through a cross product to yield the deterministic MDP M{ϕ} corresponding to the set {ϕ}. The reward based on the minimum regret criterion (R{ϕ}) is indicated in black, while the value of the shaped reward function (Rshaped) that enables the most uncertain task execution is indicated in blue.

The following illustrative example describes the nature of an informative query selected on the basis of uncertainty sampling. Consider the table-setting example depicted in Figure 2. Uncertainty over whether ϕ1 or ϕ2 is the true formula results in a policy that favours ϕ1, as it is the more restrictive of the two: if the plate and bowl were placed in that order, it would also satisfy the specification of just placing the bowl. However, if ϕ2 were the ground truth formula (only the bowl must be placed), the learned policy would be detrimental to the flexibility of the system during task execution; therefore, it is desirable to refine the belief according to the teacher's feedback. A task execution where either the fork is placed or both the plate and bowl are placed in that order is not informative; both formulas would label the execution unacceptable or acceptable, respectively. An informative query would attempt to reach the state ⟨⊥, G¬Fork⟩ by placing only the bowl and not the fork. If this task execution were labeled acceptable, then ϕ2 would be more likely to be the ground truth specification; conversely, if this task execution were judged unacceptable, then ϕ1 would be more likely to be the ground truth specification.

The principle of uncertainty sampling [36] for active learning states that the most informative query is the one where the current model is most ambivalent about the teacher's expected label. For binary labels, the probability of the query task execution being acceptable should be closest to 0.5.
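The compilation of Equations 7–9 can be sketched as an exhaustive progression of the candidate formulas under all truth assignments. The code below reuses the `prog` sketch given earlier; treating any residual formula whose outermost operator is G as a safe-LTL residual is a simplifying assumption made only for this sketch.

```python
from itertools import product

def is_terminal(state):
    # Terminal if every component has resolved to TRUE/FALSE or to a safety residual,
    # approximated here as "outermost operator is G".
    return all(f in (TRUE, FALSE) or (isinstance(f, tuple) and f[0] == 'G') for f in state)

def min_regret_reward(state, probs):
    """Minimum regret terminal reward of Eq. 8-9."""
    if not is_terminal(state):
        return 0.0
    def r(f):
        return 1.0 if f == TRUE or (isinstance(f, tuple) and f[0] == 'G') else -1.0
    return sum(p * r(f) for p, f in zip(probs, state))

def compile_reward_machine(formulas, probs, props):
    """Enumerate joint progressions of a set of formulas (a sketch of Eq. 7).

    Returns (states, transitions, reward): states are tuples of progressed formulas,
    transitions maps (state, frozen truth assignment) -> next state, and reward
    implements the minimum-regret terminal reward for the belief `probs`.
    """
    assignments = [dict(zip(props, bits))
                   for bits in product([False, True], repeat=len(props))]
    start = tuple(formulas)
    states, frontier, transitions = {start}, [start], {}
    while frontier:
        state = frontier.pop()
        for alpha in assignments:
            nxt = tuple(prog(f, alpha) for f in state)
            transitions[(state, tuple(sorted(alpha.items())))] = nxt
            if nxt not in states:
                states.add(nxt)
                frontier.append(nxt)
    return states, transitions, lambda s: min_regret_reward(s, probs)
```

For the dinner-table example, calling this with the two formulas above, probabilities [0.3, 0.7], and props ['Fork', 'Bowl', 'Plate'] enumerates the composed reward machine M{ϕ} whose terminal rewards match the values shown in Figure 2.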
Given a current belief distribution, P_i(ϕ), the learner's estimate of the probability of a trace [α] being acceptable is computed as follows:

$$P(\mathcal{L}([\alpha]) = 1) = \mathbb{E}_{P(\varphi)}\!\left[\mathbb{1}([\alpha] \models \varphi)\right] = 0.5 \times (1 + R_{\{\varphi\}}(\langle \varphi' \rangle)) \quad (12)$$

Here, ⟨ϕ′⟩ represents the final state of the reward machine, M_{ϕ}, after the sequence of transitions described by [α]. P(L([α]) = 1) = 0.5 corresponds to a reward value of 0. Therefore, given a reward machine M_{ϕ}, the most informative query as per the uncertainty sampling approach should end in a state defined as follows, with ⟨ϕ′⟩_term representing the set of terminal states of M_{ϕ}:

$$\langle \varphi' \rangle_{selected} = \operatorname*{arg\,min}_{\langle \varphi' \rangle \in \langle \varphi' \rangle_{term}} \left| R_{\{\varphi\}}(\langle \varphi' \rangle) \right| \quad (13)$$

Finally, in order to compute a policy for performing a task execution that terminates in ⟨ϕ′⟩_selected, we reshape the reward values of M_{ϕ}. Let ⟨ϕ′⟩_path be the set of states that lie along any path joining the initial state, ⟨ϕ⟩, and ⟨ϕ′⟩_selected; the reshaped reward function is then defined as follows:

$$R_{shaped}(\langle \varphi' \rangle) = \begin{cases} 1, & \text{if } \langle \varphi' \rangle = \langle \varphi' \rangle_{selected} \\ 0, & \text{if } \langle \varphi' \rangle \in \langle \varphi' \rangle_{path} \\ -1, & \text{otherwise} \end{cases} \quad (14)$$

The reshaped reward, R_shaped(⟨ϕ′⟩), is indicated in blue for the dinner table example described in Figure 2. Note that this reward is only maximized when an execution terminates in ⟨ϕ′⟩_selected. The policy to generate an informative query execution can be computed by solving the MDP M_Spec = (X × {⟨ϕ′⟩}, A, T_Spec, R_shaped). (Note that this is identical to M_Spec apart from the reward function.)

# V. EVALUATIONS

We evaluated our proposed framework using both a simulated experiment and a user study. The experiment incorporated the synthetic environment proposed in our previous work [2] to rapidly generate scenarios with varying temporal specifications. We assessed the ability of our proposed framework to infer the correct LTL specifications compared with baselines as described in Section V-A, and found that an active learning protocol within our framework generated posterior beliefs that were better aligned with the ground truth specification compared with learning purely from demonstrations or an interactive framework with randomly sampled queries.

# A. Baselines

To our knowledge, our proposed framework is the first to model robot learning for non-Markov tasks that unifies demonstrations and a teacher's acceptability assessments. A natural baseline for our framework is the classical learning-from-demonstrations (LfD) formulation, where the learner learns solely from demonstrations provided by the teacher. We also wanted to evaluate the effect of query selection on learning performance; therefore, as a second baseline, we generated the query executions by selecting actions at each time step from a uniform random distribution. Based on these three paradigms, we used the following three protocols:

1) Active: The teacher initially provides two demonstrations, then the learner generates queries. The learner's belief over LTL formulas is updated after an assessment is provided by the teacher for each of the queries. Each query is generated to reach an informative terminal state, as defined by Equation 13.
2) Random: This protocol is identical to the Active protocol, except that queries are generated by uniformly sampling available actions at each time step.
3) Batch: The teacher only provides demonstrations, and the learner cannot elicit any assessment of its task performance. The final belief is the posterior distribution computed using Bayesian specification inference [6].
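Under the same assumed representation as the earlier sketches, query selection (Equation 13) and reward shaping (Equation 14) reduce to a search over the reward machine's terminal states. The predecessor-based path computation below assumes the machine was built by the `compile_reward_machine` sketch above, so every enumerated state is already reachable from the initial state.

```python
def select_query_state(states, reward):
    """Pick the terminal state whose acceptability is most uncertain (Eq. 13)."""
    terminals = [s for s in states if is_terminal(s)]
    return min(terminals, key=lambda s: abs(reward(s)))

def reshaped_reward(states, transitions, selected):
    """Reward shaping of Eq. 14: +1 at the selected state, 0 along paths to it, -1 elsewhere."""
    # Predecessor map over the reward-machine graph.
    preds = {s: set() for s in states}
    for (s, _alpha), nxt in transitions.items():
        if nxt != s:
            preds[nxt].add(s)
    # States that can reach `selected` lie on some path to it (all states here are
    # reachable from the initial state by construction).
    on_path, frontier = {selected}, [selected]
    while frontier:
        s = frontier.pop()
        for p in preds[s]:
            if p not in on_path:
                on_path.add(p)
                frontier.append(p)
    def R(state):
        if state == selected:
            return 1.0
        return 0.0 if state in on_path else -1.0
    return R
```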
In each training protocol, the task policy was computed using the final belief compiled with the minimum regret criterion. The number of task executions provided to the learner (as either demonstrations or queries) was equal in all cases.

B. Simulation Experiments

The task environment for all simulations was based on the synthetic domain [2]. This domain allows a variable number of threats and waypoints, where the admissible orders for visiting waypoints are encoded within the ground truth LTL formula. We allowed a maximum of five waypoints and five threats for any simulation run; the available action space enabled the learner to select any of these 10 targets to visit. For all runs of the simulation, the procedure was as follows:

1) Select the number of queries n_query.
2) A ground truth LTL formula ϕ* was sampled from the priors developed in our previous work [2].
3) Two (2) demonstrations that satisfied the ground truth formula were generated and added to the dataset D = {([α]_1, 1), ([α]_2, 1)}.
4) D was used with the Active protocol with n_query queries generated by the learner. The final belief, P_active(ϕ), was recorded.
5) D was used with the Random protocol with n_query queries generated by the learner. The final belief, P_random(ϕ), was recorded.
6) An augmented dataset, D_batch = D ∪ {([α]_{2+i}, 1) : i ∈ {1, . . . , n_query}}, was created by generating additional demonstrations that satisfied the ground truth formula. This dataset was then used with the Batch protocol, and the final belief, P_batch(ϕ), was recorded. (This ensured the total number of task executions provided to all baselines was equal.)

The experiment was conducted for values of n_query = {1, . . . , 6}, with 200 runs for each value and a different ground truth formula sampled for each run. For every individual run, the entropy of the final belief and its similarity to the ground truth formula were recorded for each of the training protocols.

Fig. 3: The average entropy (left; lower is better) of the final posterior and the similarity of the posterior to the ground truth formula (right; higher is better) for the four training conditions. All error bars indicate 95% confidence interval.

Given two formulas, ϕ1 and ϕ2, that are conjunctive compositions of the clauses in sets C1 and C2, respectively, the similarity of the two formulas is defined using the intersection-over-union, as follows:

$$L(\varphi_1, \varphi_2) = \frac{|C_1 \cap C_2|}{|C_1 \cup C_2|} \quad (15)$$

The similarity of a belief distribution P(ϕ) with the ground truth formula ϕ* is computed as follows:

$$L(P(\varphi)) = \mathbb{E}_{P(\varphi)}\left[L(\varphi, \varphi^*)\right] \quad (16)$$

1) Results: Figure 3 depicts the results from our simulation experiment; Figure 3a depicts the mean entropy value of the final belief for all baselines across all runs. Our results indicate that the belief distribution's entropy decreased as the training protocols processed more labeled task executions; however, this decrease was slower for the Random protocol than for the Active and Batch protocols. This is to be expected, as demonstrations generated through random actions are less informative than either correct demonstrations or the most uncertain task execution (as per the learner's initial belief).

Our findings also indicate that both the Batch and Active protocols yielded similar entropy values, suggesting a similar degree of confidence over the final belief distribution. Figure 3b depicts the median value of the similarity between the final belief and the ground truth formula. The maximum value for the similarity metric is 1, while the minimum is 0. The Active protocol outperformed the Batch and Random protocols with regard to inferring a belief aligned with the ground truth specification. Also, the difference between the median similarity metrics increased with the total number of task executions processed by the training protocols. Finally, the low similarity score and entropy observed for the Batch protocol indicate that it is susceptible to inferring, with a high degree of confidence, a belief distribution that is not aligned with the ground truth formula. One potential explanation for this finding is confirmation bias, as multiple identical demonstrations would cause the inference model to assign high probability to an over-constrained formula satisfied by the demonstration.
Participants were also instructed to provide an assessment after observing the robot while it executed the task; a participant’s label was only recorded once the entire task had been completed. For both the in-person and remote study protocols, the participant initiated the robot’s belief with two demonstrations; beliefs were then refined using three queries generated via our active learning models. Finally, the robot demonstrated the results of its learning by performing three task executions observed by the participant. The state space of the robot, X, was identical to the set of propositions required for evaluating the task, ααα, and contained five Boolean propositions, each of which encoded whether a particular object was successfully placed on the table. The robot’s action space, A, comprised five actions (one for each object). Initiating an action triggered a sequence of parameterized primitives programmed into the robot to locate, pick up, and place the object on Table B. Based on the constraints provided to the participants and the robot’s action space, the only way to successfully complete Task 1 was to ensure that the dinner plate, small plate, and bowl were placed in that specific order (the fork and the knife could be placed at any instant). There were multiple final acceptable configurations for Task 2; in each, the dinner plate and bowl were placed in that partial order , while the fork and the knife may or may not have been placed. 1) Results and Discussions: : We recruited 18 participants for the in-person phase of the study, but had to terminate the protocol with three partici- pants due to robot hardware failure. The results include data collected from 15 participants (10 male, 5 female, median age: 26 years), seven of whom reported prior experience with robots or automated systems. All participants were instructed to teach the robot to perform Task 1. For the remote phase, we recruited 12 participants (8 male, 4 female, median age: 28 years); four participants reported prior experience with robotics. We assigned six participants each to Task 1 and Task 2. All participants were successfully able to teach the as- signed task to the robot — i.e., the policies learned by the robot did not result in an incorrect table setting during any of the test executions. The learning curves for the robot are depicted in Figure 4d. the final belief formula was with 0.83 95% CI : the median similarity was 0.87 95% CI : [0.73, 0.94]; for Task 2, it was 0.78 95% CI : [0.59, 0.99]. For Task 1, the posterior belief distribution recovered the ground truth formula as the most likely LTL specification for 14 out of 21 participants, while the most likely specification differed from the ground truth by a single conjunctive clause in four cases. Similarly, for Task 2, the posterior belief distribution for two out of six participants recovered the ground truth formula as the most likely LTL specification, while the most likely specification for two of them differed from the ground truth formula by a single conjunctive clause. Our demonstration of the entire learning pipeline on an embodied robot indicates the viability of deploying our active learning framework for real-world applications. # VI. CONCLUSION Our proposed interactive training framework provides a unified formulation capable of learning non-Markov tasks from both demonstrations provided by a teacher and that teacher’s assessment of the robot’s task executions. 
We further proposed an active querying algorithm that allows the learner to identify and perform a task execution with the most uncertain degree of acceptability based on the principle of uncertainty sampling. Finally, we demonstrated the efficacy of our active learning framework for learning non-Markov tasks with a wide range of ground truth specifications through both a simulation experiment and a user study. Notably, the robot performed its task without errors, and the final belief of the robot was closely aligned with the true task specifications across all participants. # REFERENCES logic of programs,” in Foundations of Computer Science, 1977., 18th Annual Symposium on, pp. 46–57, IEEE, 1977. [2] A. Shah, P. Kamath, J. A. Shah, and S. Li, “Bayesian Inference of Temporal Task Specifications from Demonstrations,” in Advances in Neural Information Processing Systems 31, pp. 3804–3813, 2018. [3] J. Kim, C. Muise, A. Shah, S. Agarwal, and J. Shah, “Bayesian inference of linear temporal logic specifications for contrastive ex- planations,” in IJCAI, 2019. [4] Y. Oh, R. Patel, T. Nguyen, B. Huang, E. Pavlick, and S. Tellex, “Plan- ning with state abstractions for non-Markovian task specifications,” in RSS, June 2019. [5] N. Gopalan, D. Arumugam, L. L. Wong, and S. Tellex, “Sequence-to- sequence language grounding of non-markovian task specifications.,” in Robotics: Science and Systems, 2018. [6] A. Shah, S. Li, and J. Shah, “Planning with uncertain specifications (PUnS),” IEEE Robotics and Automation Letters, 2020. [7] B. D. Argall, S. Chernova, M. Veloso, and B. Browning, “A survey learning from demonstration,” Robotics and autonomous of robot systems, vol. 57, no. 5, pp. 469–483, 2009. [8] S. Chernova and A. L. Thomaz, “Robot learning from human teach- ers,” Synthesis Lectures on Artificial Intelligence and Machine Learn- ing, vol. 8, no. 3, pp. 1–121, 2014. [9] J. Luketina, N. Nardelli, G. Farquhar, J. Foerster, J. Andreas, E. Grefenstette, S. Whiteson, and T. Rocktäschel, “A survey of reinforcement learning informed by natural language,” in Proceedings the Twenty-Eighth International Joint Conference on Artificial of Intelligence, IJCAI-19, pp. 6309–6317, International Joint Conferences on Artificial Intelligence Organization, 7 2019. [10] A. Bajcsy, D. P. Losey, M. K. O’Malley, and A. D. Dragan, “Learning robot objectives from physical human interaction,” Proceedings of Machine Learning Research, vol. 78, pp. 217–226, 2017. [11] A. Bajcsy, D. P. Losey, M. K. O’Malley, and A. D. Dragan, “Learning from physical human corrections, one feature at a time,” in Proceed- ings of the 2018 ACM/IEEE International Conference on Human- Robot Interaction, HRI ’18, (New York, NY, USA), p. 141–149, Association for Computing Machinery, 2018. [12] D. Sadigh, A. D. Dragan, S. Sastry, and S. A. Seshia, “Active preference-based learning of reward functions,” in Robotics: Science and Systems (RSS), 2017. [13] E. Biyik and D. Sadigh, “Batch active preference-based learning of reward functions,” in Conference on Robot Learning, pp. 519–528, 2018. [14] E. Biyik, M. Palan, N. C. Landolfi, D. P. Losey, and D. Sadigh, “Asking easy questions: A user-friendly approach to active reward learning,” in 3rd Conference on Robot Learning (CoRL), October 2019. [15] M. Vazquez-Chanlatte, S. Jha, A. Tiwari, M. K. Ho, and S. Seshia, “Learning task specifications from demonstrations,” in Advances in Neural Information Processing Systems 31, pp. 5368–5378, 2018. [16] D. Kasenberg and M. 
{ "id": "1704.04341" }
2003.02234
Contrastive estimation reveals topic posterior information to linear models
Contrastive learning is an approach to representation learning that utilizes naturally occurring similar and dissimilar pairs of data points to find useful embeddings of data. In the context of document classification under topic modeling assumptions, we prove that contrastive learning is capable of recovering a representation of documents that reveals their underlying topic posterior information to linear models. We apply this procedure in a semi-supervised setup and demonstrate empirically that linear classifiers with these representations perform well in document classification tasks with very few training examples.
http://arxiv.org/pdf/2003.02234
Christopher Tosh, Akshay Krishnamurthy, Daniel Hsu
cs.LG, stat.ML
null
null
cs.LG
20200304
20200304
0 2 0 2 r a M 4 ] G L . s c [ 1 v 4 3 2 2 0 . 3 0 0 2 : v i X r a # Contrastive estimation reveals topic posterior information to linear models Christopher Tosh∗1, Akshay Krishnamurthy†2, and Daniel Hsu‡1 1Columbia University, New York, NY 2Microsoft Research, New York, NY # December 7, 2021 # Abstract Contrastive learning is an approach to representation learning that utilizes naturally occurring similar and dissimilar pairs of data points to find useful embeddings of data. In the context of document classification under topic modeling assumptions, we prove that contrastive learning is capable of recovering a representation of documents that reveals their underlying topic posterior information to linear models. We apply this procedure in a semi-supervised setup and demonstrate empirically that linear classifiers with these representations perform well in document classification tasks with very few training examples. # 1 Introduction Using unlabeled data to find useful embeddings is a central challenge in the field of representation learning. Classical approaches to this task often start by fitting some type of structure to the unlabeled data, such as a generative model or a dictionary, and then embed future data by performing inference using the fitted structure (Blei et al., 2003; Raina et al., 2007). While this approach has sometimes enjoyed good empirical performance, it is not without its drawbacks. One issue is that learning structures and performing inference is often hard in general (Sontag and Roy, 2011; Arora et al., 2012). Another issue is that we must a priori choose a structure and method for fitting the unlabeled data, and unsupervised methods for learning these structures can be sensitive to model misspecification (Kulesza et al., 2014). Contrastive learning (also called noise contrastive estimation, or NCE) is an alternative approach to representation learning that tries to capture the latent structure in unlabeled data implicitly. Informally, contrastive learning methods formulate a classification problem in which the goal is to distinguish examples that naturally occur in pairs, called positive samples, from randomly paired examples, called negative samples. The particular choice of positive samples depends on the setting. In image representation problems, for example, neighboring frames from videos may serve as positive examples (Wang and Gupta, 2015). In text modeling, the positive samples may be neighboring sentences (Logeswaran and Lee, 2018; Devlin et al., 2018). The idea is that in the course of learning to distinguish between semantically similar positive examples and randomly chosen negative examples, the representations constructed along the way will capture some of that latent semantic information. In this work, we consider contrastive learning for document modeling where we have a corpus of text documents and our goal is to construct a useful vector representation for these documents. In this setting, # ∗[email protected] †[email protected] ‡[email protected] 1 there is a natural source of positive and negative examples: a positive example is simply a document from the corpus, and a negative example is one formed by pasting together the first half of one document and the second half of another document. We prove that when the corpus is generated by a topic model, learning to distinguish between these two types of documents yields representations that are closely related to their underlying latent variables. 
In fact, we show that linear functions of these representations can approximate the posterior mean of any continuous function of the latent variables. One potential application of contrastive learning is in a semi-supervised setting, where there is a small amount of labeled data as well as a much larger collection of unlabeled data. In these situations, purely supervised methods that fit complicated models may have poor performance due to the limited amount of labeled data. On the other hand, when the labels are well-approximated by some function of the latent structure, our results show that an effective strategy is to fit linear functions, which may be learned with relatively little labeled data, on top of contrastive representations. In our experiments, we verify empirically that this approach produces reasonable results. # 1.1 Related work There has been much work on reducing unsupervised problems to synthetically-generated supervised prob- lems. In dynamical systems modeling, Langford et al. (2009) showed that if one can solve a few forward prediction problems, then it is possible to track the underlying state of a nonlinear dynamical system. In anomaly/outlier detection, a useful technique is to learn a classifier that distinguishes between true samples from a distribution and fake samples from some synthetic distribution (Steinwart et al., 2005; Abe et al., 2006). Similarly, estimating the parameters of a probabilistic model can be reduced to learning to classify between true data points and randomly generated points (Gutmann and Hyvärinen, 2010). In the context of natural language processing, methods such as skip-gram and continuous bag-of-words turn the problem of finding word embeddings into a prediction problem (Mikolov et al., 2013a,b). Modern language representation training algorithms such as BERT and QT also use naturally occurring classification tasks such as predicting randomly masked elements of a sentence or discriminating whether or not two sentences are adjacent (Devlin et al., 2018; Logeswaran and Lee, 2018). Training these models often employs a technique called negative sampling, in which softmax prediction probabilities are estimated by randomly sampling examples; this bears close resemblance to the way that negative examples are produced in contrastive learning. Most relevant to the current paper, Arora et al. (2019) gave a theoretical analysis of contrastive learning. They considered the specific setting of trying to minimize the contrastive loss L(f) = Exo [l (fF (@)(F (e+) — f(e)))] where (x, x+) is a positive pair and (x, x−) is a negative pair. They showed that if there is an underlying collection of latent classes and positive examples are generated by draws from the same class, then minimizing the contrastive loss over embedding functions f yields good representations for the classification task of distinguishing latent classes. The main difference between our work and that of Arora et al. (2019) is that we adopt a generative modeling perspective and induce the contrastive distribution naturally, while they do not make generative assumptions but assume the contrastive distribution is directly induced by the downstream classification task. In particular, our contrastive distribution and supervised learning problem are only indirectly related through the latent variables in the generative model, while Arora et al. assume an explicit connection. 
The focus of our work is therefore complementary to theirs: we study the types of functions that can be succinctly expressed with the contrastive representation in our generative modeling setup. In addition, our results apply to semi-supervised regression, but it is unclear how to define their contrastive distribution in this setting; this makes it difficult to apply their results here. 2 # 1.2 Overview of results In Section 3, we present a simple contrastive learning procedure that is based on learning a function to determine if two bag-of-words vectors were generated by randomly partitioning a document or if they came from two different documents. We also present a way to turn the outputs of such a function into an embedding of future documents. In Section 4, we show that under certain topic modeling assumptions, the document embeddings we construct from contrastive learning capture underlying topic structure. In particular, we demonstrate that linear functions of these embeddings are capable of representing any polynomial of the topic posterior vector. In Section 5, we analyze the errors that arise in the finite sample setting. We show that whenever we can achieve low prediction error on the contrastive learning task, linear functions learned on the resulting representations must also be high quality. In Section 6, we apply our contrastive learning procedure to a semi-supervised document classification task. We show that these embeddings outperform several natural baselines, particularly in the low labeled data regime. We also investigate the effect of contrastive model capacity and model performance on the contrastive task on embedding quality. In Section 7, we investigate the effects of model capacity and corpus size on a simulated topic recovery task. We demonstrate that increasing either of these quantities leads to an improvement in topic recovery accuracy. # 2 Setup Let V denote a finite vocabulary, and take K to be a finite set of K topics. We consider a very general topic modeling setup, which generates documents according to the following process. First, a topic distribution w ∈ ∆(K) is drawn, and then each of m words x1, . . . , xm are drawn by sampling zi ∼ w and then xi ∼ O(·|zi). The parameters of this model that are of primary interest are the topic distributions O(·|k) ∈ ∆(Rd). Note that documents need not have the same number of words. This model is quite general and captures topic models such as Latent Dirichlet Allocation (LDA) as well as topic models with word embeddings. In LDA, the topic distributions O(· | k) are unconstrained. When word embeddings are introduced, we set O(· | k) = softmax(Aβk) where A ∈ R|V|×L is a latent embeddings matrix and β1, . . . , βk ∈ RL are latent “context vectors.” We assume that there is a joint distribution D supported on triples (x, w, 2) where x is a document, w is the topic distribution and @ is a label. Triples are generated by first sampling (w, 2) from the above topic model, and then sampling @ from some conditional distribution that depends on the topics w, denoted D(- | w). Our goal is to characterize the functional forms of conditional distribution that are most suited to contrastive learning. In the semi-supervised setting, we are given a collection := {£1,...,@ny} of unlabeled documents sampled from the marginal distribution D,, where topics and labels are suppressed. We also have access to ny < ny labeled samples £L := {(«1, 1), ---,(@n,,€n,} sampled from the distribution D;,¢, where only the topics are suppressed. 
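To make this generative setup concrete, the following is a minimal sampling sketch in Python with NumPy. The function and parameter names are ours, and the Dirichlet prior on the per-document topic proportions is just one LDA-style instantiation of the general setup above, not the only one the analysis allows.

```python
import numpy as np

def sample_corpus(num_docs, topic_word, doc_topic_prior, mean_length, rng=None):
    """Sample documents from the generative process described above.

    topic_word:      (K, V) array; row k is the word distribution O(.|k).
    doc_topic_prior: length-K Dirichlet parameter for the per-document
                     topic proportions w (an LDA-style choice of prior).
    mean_length:     expected document length (Poisson).
    Returns a list of documents, each a 1-D array of word indices.
    """
    rng = rng or np.random.default_rng()
    K, V = topic_word.shape
    docs = []
    for _ in range(num_docs):
        w = rng.dirichlet(doc_topic_prior)      # topic proportions for this document
        m = max(2, rng.poisson(mean_length))    # at least 2 words so the document can be split
        z = rng.choice(K, size=m, p=w)          # per-word topics z_i ~ w
        x = np.array([rng.choice(V, p=topic_word[k]) for k in z])  # words x_i ~ O(.|z_i)
        docs.append(x)
    return docs
```

Splitting each sampled document in half then yields exactly the conditionally independent views (x(1), x(2)) used by the contrastive procedure in the next section.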
In both datasets, we never observe any topic distributions w. From this data, we would like to learn a predictor f : «> é that predicts the label given the document. # 3 Contrastive learning algorithm In contrastive learning, examples come in the form of similar and dissimilar pairs of points, where the exact definition of similar/dissimilar depends on the task at hand. Our construction of similar pairs will take the form of randomly splitting a document into two documents, and our dissimilar pairs will consist of 3 Algorithm 1 Contrastive Estimation with Documents Input: Corpus U = {xi} of documents. S = ∅ for i = 1, . . . , n do Sample x1, x2 ∼ unif(U). Split xi = (x(1) , x(2) # i S ← S ∪ 1 , x(2) 1 , x(2) 1 , 1)} w.p 1/2 2 , 0)} w.p 1/2 end for Learn ˆf ← argminf ∈F Select landmarks documents l1, . . . , lM and embed +n) = f(x, li) ie lM (x) Goan cm). subsampled documents from two randomly chosen documents. In the generative modeling setup, since the words are i.i.d. conditional on the topic distribution, a natural way to split a document x into two is to simply call the first half of the words x(1) and the second half x(2). In our experiments, we split the documents randomly. The contrastive representation learning procedure is displayed in Algorithm 1. It utilizes a finite-sample approximation to the following contrastive distribution. • Sample a document x and partition it into (x(1), x(2)). Alternatively, we may think of our documents as coming ‘pre-partitioned,’ and denote the marginal distributions of x(1) and x(2) as µ1 and µ2, respectively. • With probability 1/2, output (x(1), x(2), 1). # • With probability 1/2, sample a second document (˜x(1), ˜x(2)) and output (x(1), ˜x(2), 0). We denote the above distribution over (x, x’, y) as D., and we frame the contrastive learning objective as a least squares problem between positive and negative examples. minimize (a,2! y)~De [Fe a’) —y) "| qd) In our algorithm, we approximate this expectation via sampling and optimize the empirical objective, which yields an approximate minimizer f (chosen from some function class F). We use f to form document representations by concatenating predictions on a set of landmark documents. Formally, we select documents 1,,...,l,¢ and represent document « via the mapping: fle.t) :@ ——.— :ie[M 5 > (Ga E| i). This yields the final document-level representation, which we use for downstream tasks. 4 For our analysis, let f* denote the Bayes optimal predictor, or the global minimizer, for Eq. (1). By Bayes’ theorem we have that g* := f*/(1 — f*) satisfies the following Letting l1, . . . , lM denote M fixed documents, the oracle representation of a document x is o (#, lim) = (g*(#,h1),---,9*(@ lar). (2) This representation takes the same form as éb except that the we have replaced the learned predictor f with the Bayes optimal one f*.! # 4 Recovering topic structure In this section, we focus on expressivity of the contrastive representation, showing that polynomial functions of the topic posterior can be represented as linear functions of the representation. To do so, we ignore statistical issues and assume that we have access to the oracle representations g*(, -). In the next section we address statistical issues. Recall the generative topic model process for a document x. • Draw a topic vector w ∈ ∆(K). • For i = 1, . . . , length(x): – Draw zi ∼ Categorical(w). – Draw xi ∼ O(·|zi). We will show that when documents are generated according to the above model, the embedding of a document x in Eq. 
(2) is closely related its underlying topic vector w. # 4.1 The single topic case To build intuition for the embedding in Eq. (2), we first consider the case where each document’s probability vector w is supported on a single topic, i.e., w ∈ {e1, . . . , eK} where ei is the ith standard basis element. Then we have the following lemma. Lemma 1. For any documents x, x’, where η(x)k := P(w = ek|x(1) = x) is the topic posterior distribution and ψ(x)k := P(x(2) = x|w = ek) is the likelihood. ‘Strictly speaking, we should first partition 2 = (2) a?) ), only use landmarks that occur as second-halves of documents, and embed x + g(a , lis). For the sake of clarity, we will ignore this technical issue here and in the remainder of the paper. 5 Proof. Conditioned on the topic vector w, x(1) and x(2) are independent. Thus, P (eax, 2?)=2') (a@Y=a) P (a) =2') P(w=e,)P(x =2|w=e,) P(x?) =2"|w=ex) P we P ow g(x, 2") las) Mea > ll ay P(w =e x)P(c©) = 2'|w = ex) Fares | TM) > > ll ay nla)" (2") w= al)’ 2s where the third equality follows from Bayes’ rule. The above characterization shows that g* contains information about the posterior topic distribution 7(-). To recover it, we must make sure that the «(-) vectors for our landmark documents span R. Formally, if 1,,..., ly are the landmarks, and we define the matrix L ¢ R**™” by L: [ wh) Lee wim) (3) ~~ | P@@=h) P@@=In1) |? then our representation satisfies g*(a, 1:47) = L'n(a). If our landmarks are chosen so that L has rank K, then there is a linear transformation of g* (x, l;.,7) that recovers the posterior distribution of w given 2, i.e., n(x). Formally, Lh g* (a, lim) = n(e) where † denotes the matrix pseudo-inverse. There are two interesting observations here. The first is that this argument naturally generalizes beyond the single topic setting to any setting where w can take values in a finite set S, which may include some mixtures of multiple topics, though of course the number of landmarks needed would grow at least linearly with |S|. The second is that we have made no use of the structure of x(1) and x(2), except for that they are independent conditioned on w. Thus, this argument applies to more exotic ways of partitioning a document beyond the bag-of-words approach. # 4.2 The general setting In the general setting, document vectors can be any probability vector in ∆(K), and we do not hope to recover the full posterior distribution over ∆(K). However, the intuition from the single topic case largely carries over, and we are able to recover the posterior moments. Let max be the length of the longest landmark document. Let SK := {a € ZS : 7), a, = m} denote the set of non-negative integer vectors that sum to m and let Mmax re _ i SEramae = UY Sin- m=0 Let π(w) denote the degree-mmax monomial vector in w as m(w) = (wft---wek rae SE rman) . 6 # For a positive integer m and a vector α ∈ SK m , define the set ("") = c € [K]": So ale =k) =a, VkeE xi} i=l . Then for a document x with length m, the degree-m polynomial vector ψm is defined by Wm (x) = S- [] Ctilz) :ae sk 2¢(iml) #1 and let u(x) = 0 for all d ¢ m. The cumulative polynomial vector w is given by ψ(x) := (ψ0(x), ψ1(x), · · · , ψmmax(x)). (4) Given these definitions, we have the following general case analogue of Lemma 1. Lemma 2. For any documents x, x’, where η(x) := E[π(w)|x(1) = x]. Proof sketch. The proof is similar to that of Lemma 1, albeit with more complicated definitions. 
The key insight is that the probabability of a document given topic factorizes as P(a|w) = S- (ie) (Tfotis0) i=1 2e[K]m \i=1 = 7(w)'Y(z). From here, a similar derivation to Lemma 1 applies. A full proof is deferred to the appendix. Therefore, we again have g* (x, ly...) = L™7(x), but now the columns of L correspond to vectors 4)(1;) from Eq. (4). When can we say something about the power of this representation? Our analysis so far shows that if we choose the landmarks such that LL" is invertible, then our representation captures all of the low-degree moments of the topic posterior. But how do we ensure that LL" is invertible? In the next theorem, we show that this is possible whenever each topic has an associated anchor word, i.e., a word that occurs with positive probability only within that topic. In this case, there is a set of landmark documents [.)7 such that any polynomial of 7(a) can be expressed by a linear function of g* (a, l.,7). Theorem 3. Suppose that (i) each topic has an associated anchor word, and (ii) the marginal distribution of w has positive probability on some subset of the interior of A(K). For any dg > 1, there is a collection of M= AT“) landmark documents l,,...,l\¢ such that if U(w) is a degree-do polynomial in w, then there is a vector 8 € R™ such that Va: (0,9%(a,hi:a)) = EfM(w)|2x = 2]. Combining Theorem 3 with the Stone-Weierstrass theorem (Stone, 1948) shows that, in principle, we can approximate the posterior mean of any continuous function of the topic vector using our representation. 7 Proof of Theorem 3. By assumption (i), there exists an anchor word a, for each topic k = 1,..., kK. By definition this means that O(a;|j) > 0 if and only if j = k. For each vector a € Zf such that 7 ap < do, create a landmark document consisting of a; copies of a; fork = 1,...,. This will result in (“i”) landmark documents. Moreover, from assumption (ii), we can see that each of these landmark documents has positive probability of occurring under the marginal distribution j12, which implies g*(z, 1) is well-defined for all our landmark documents J. Let l denote one of our landmark documents and let α ∈ ZK + be its associated vector. Since l only contains anchor words, ψ(l)β > 0 if and only if α = β. To see this, note that m K vYa= SY) [[ Ole) = [] Olaalk)%* > 0. 1 k=1 ze(Iml) i= ze(Iml) i= 8, = >>), On the other hand, if 8 a but >>; 8, = >>), ax, then there exists an index k such that 6, > ax + 1. Thus, for any z € (ml), there will be more than a, words in / assigned to topic k. Since every word in / is an anchor word and at most a, of them correspond to topic k, we will have m [] CG) = 0. i=1 Rebinding ψ(l) = (ψ0(l), . . . , ψd0(l)) and forming the matrix L using this definition, we see that LT can be diagonalized and inverted. For any target degree-d, polynomial II(w), there exists a vector v such that II(w) = (v, ma, (w)), where TMdo(w) denotes the degree-dy monomial vector. Thus, we may take 9 = L~‘v and get that for any document x: (L7tv)? L(a) = Bllv,ray(w)) |e = a] = E(M(w)|x™ = a. (9, 9° (a, li:m)) # 5 Error analysis Given a finite amount of data, we cannot hope to solve Eq. (1) exactly. Thus, our solution f will only be an approximation to f*. Since f is the basis of our representation, the fear is that the errors incurred in this approximation will cascade and cause our approximate representation (2) to differ so wildly from the ideal representation g*(2, l1.,7) that the results of Section 4 do not even approximately hold. 
In this section, we will show that, under certain conditions, such fears are unfounded. Specifically, we will show that there is an error transformation from the approximation error of f to the approximation error of linear functions in ¢. That is, if the target function is 7(x)'@*, then we will show that the risk of our approximate solution db, given by RO) = minEp~p,, (n(x) — d(2)"v), is bounded in terms of the approximation quality of ˆf as well as some other terms. Thus, for the specific setting of semi-supervised learning, an approximate solution to Eq. (1) is good enough. It is worth pointing out that Arora et al. (2019) also gave an error transformation from approximately solving a contrastive learning objective to downstream linear prediction. Also related, Langford et al. (2009) 8 showed that when the approximation errors for their tasks are driven to zero, their representations will be perfect. However, they did not analyze what happens when their solutions have non-zero errors. In this sense, the results in this section are closer in spirit to those of Arora et al. (2019). In order to establish our error transformation, we first need to make some assumptions. Our first assumption is a consistency guarantee on our contrastive learning algorithm. Assumption 1. For any 6 € (0,1), there is a decreasing sequence En = On (1), such that given n unlabeled documents the learning algorithm outputs a function f satisfying (enya, (fe) - fea’) | ce, (enya, (fe) - fea’) | ce, with probability 1 − δ. If f is chosen from a bounded capacity function class F by empirical risk minimization (ERM), Assump- tion | holds whenever f* € F. Although this assumption is not essential to our analysis, it is needed to establish consistency in a semi-supervised learning setting. There are a number of degrees of freedom for how to choose landmark documents. We consider a simple method: randomly sample them from the marginal distribution of x(2). Our next assumption is that this distribution satisfies certain regularity assumptions. Assumption 2. There is a constant σmin > 0 such that for any δ ∈ (0, 1), there is a number M0 such that for an iid sample l1, . . . , lM with M ≥ M0, with probability 1 − δ, the matrix L defined in Eq. (3) (with ψ as defined in Eq. (4)) has minimum singular value at least σmin Note that the smallest non-zero singular value of 1√ M L is the square-root of the smallest eigenvalue of an empirical second-moment matrix, i 1 iM S- PG@=LpP peloGy 4 Pe® = fj Hence, Assumption 2 holds under appropriate conditions on distribution over landmarks, for instance via tail bounds for sums of random matrices (Tropp, 2012) combined with matrix perturbation analysis (e.g., Weyl’s inequality). In the single topic setting with anchor words, it can be shown that for long enough documents, σmin is lower-bounded by a constant for M0 growing polynomially with K. We defer a detailed proof of this to the appendix. Our last assumption is that the predictions of fand f* are non-negative and bounded below 1. Assumption 3. There exists a value fmax ∈ (0, 1) such that for all documents x and landmarks li 0< f(x, li), f* (2, li) < fmax- Note that if Assumption 3 holds for f*, then it can be made to hold for f by thresholding. Moreover, it holds for f* whenever the vocabulary and document sizes are constants, since we have for A = 1— f*(zx, 2’), P(a = x)P(x® = a’) P(2@) = 2,22) = 2!) + P(e = 2)P(2@) = 2’) P(r?) = 2’) ~14P(22) = 2!) 
A= Since the landmarks are sampled, and there are a finite number of possible documents, there exists a constant pmin > 0 such that P(x(2) = l) ≥ pmin. Thus, Assumption 3 holds for fmax = 1/(1 + pmin). Given these assumptions, we have the following error transformation guarantee. The proof is deferred to the appendix. 9 Theorem 4. Fix any δ ∈ (0, 1), and suppose Assumptions 1-3 hold (with M0, σmin, and fmax). If M ≥ M0, there is a decreasing sequence εn = on(1) such that with probability at least 1 − δ over the random sample of l1, . . . lM and the procedure for fitting ˆf , |) 2 . n6) <= il (2604 Peas) min . We make a few observations here. The first is that \|O*||3 is a measure of the complexity of the target function. Thus, if the target function is some reasonable function, say a low-degree polynomial, of the posterior document vector, then we would expect \|0*||3 to be small. The second is that the dependence on fmax 1s probably not very tight. Third, note that n and M are both allowed to grow with the amount of unlabeled documents we have; indeed, none of the terms in Theorem 4 deal with labeled data. Finally, if we have nL i.i.d. labeled examples, and we learn a linear predictor ˆv with the representation ˆφ using ERM (say), then the bias-variance decomposition grants mse(ˆv) = R( ˆφ) + E x∼µ1 ( ˆφ(x)T(v∗−ˆv))2 = R( ˆφ) + OP ( 1 nL ) where mse(v) = Ezy, (n(a)"O* — d(x)"v)? and v* is the minimizer of mse(-). The second equality comes from known properties of the ERM (see, e.g., Hsu et al., 2014). # 6 Semi-supervised experiments We conducted experiments with our document level contrastive representations in a semi-supervised setting. In this section, we discuss the experimental details and findings. # 6.1 A closely related representation One unfortunate consequence of the results in Section 4 is that the number of landmarks required to obtain a useful representation can be quite large. To this end, we consider training models of the form f1, f2 : X → Rd via minimize Ep, [log (1 + exp (—yfi(x)" fo(2’)))] - (5) J15J2 We will consider the alternate embedding scheme of simply taking f(x) as our representation for document x. To justify this, first note that the Bayes optimal predictor (ff, f3) is given by the log-odds ratio # (ff, f3) P(y=1 axa! (SLEYES}) * ; P(y=1 axa! filo)" f5(a") := log (SLEYES}) . This predictor is related to our original g* function via the exponential: g(a, 2") = exp (f8(0)" Ble’) © 1+ (2) Be’), where the approximation comes from a Taylor expansion. Therefore, if 1, ...,/,¢ are landmark documents, then f}(2) is approximately affinely related to g* (a, ly.a2): g (elim) © T+ (fF) + fad)" f(a). When the Taylor expansion is accurate, we can expect that the approximate minimizer ˆf1(x) of Eq. (5) is as good of a representation as the version that uses landmarks. 10 # 6.2 Methodology We conducted semi-supervised experiments on the AG news topic classification dataset as compiled by Zhang et al. (2015). This dataset contains news articles that belong to one of four categories: world, sports, business, and sci/tech. There are 30,000 examples from each class in the training set, and 1,900 examples from each class in the testing set. We minimally preprocessed the dataset by removing punctuation and words that occurred in fewer than 10 documents, resulting in a vocabulary of approximately 16,700 words. 
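As a rough illustration of the two-tower objective in Eq. (5), here is a PyTorch-style sketch. The class name, layer widths, and output dimension are illustrative assumptions rather than the authors' exact architecture, which is described below.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Tower(nn.Module):
    """Fully-connected encoder mapping a bag-of-words vector to R^out_dim."""
    def __init__(self, vocab_size, hidden=300, out_dim=256, depth=3):
        super().__init__()
        layers, d = [], vocab_size
        for _ in range(depth):
            layers += [nn.Linear(d, hidden), nn.BatchNorm1d(hidden),
                       nn.ReLU(), nn.Dropout(0.5)]
            d = hidden
        layers.append(nn.Linear(d, out_dim))
        self.net = nn.Sequential(*layers)

    def forward(self, bow):          # bow: (batch, vocab_size) word counts
        return self.net(bow)

def contrastive_loss(f1, f2, x, x_prime, y):
    """Logistic loss of Eq. (5); y is a float tensor of +1/-1 for positive/negative pairs."""
    scores = (f1(x) * f2(x_prime)).sum(dim=1)   # f1(x)^T f2(x')
    return F.softplus(-y * scores).mean()       # log(1 + exp(-y * score))
```

Training minimizes `contrastive_loss` over sampled positive and negative pairs; the learned f1(x) then serves directly as the document representation.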
We randomly selected 1,000 examples from each class to remain as our labeled training dataset, and we used the remaining 116,000 examples as our unlabeled dataset for learning representations. After computing representations on the unlabeled dataset, we fit a linear classifier on the labeled training set using logistic regression with cross validation to choose the ℓ2 regularization parameter (Nfolds = 3). We compared our representation, NCE, against several representation baselines.

• BOW – The standard bag-of-words representation.

• BOW+SVD – A bag-of-words representation with dimensionality reduction. We first perform SVD on the bag-of-words representation using the unsupervised dataset to compute a low-dimensional subspace, and train a linear classifier on the projected bag-of-words representations with the labeled dataset.

• LDA – A representation derived from LDA. We fit LDA on the unsupervised dataset using online variational Bayes (Hoffman et al., 2010), and our representation is the inferred posterior distribution over topics given a training document.

• word2vec – Skip-gram word embeddings (Mikolov et al., 2013b). We fit the skip-gram word embeddings model on the unsupervised dataset and then averaged the word embeddings in each of the training documents to get their representation.

For our representation, to solve Eq. (5), we considered neural network architectures of various depths. We used fully-connected layers with between 250 and 300 nodes per hidden layer. We used ReLU nonlinearities, dropout probability 1/2, batch normalization, and the default PyTorch initialization (Paszke et al., 2019). We optimized using RMSProp with momentum value 0.009 and weight decay 0.0001 as in Radhakrishnan et al. (2019). We started with learning rate 10−4, which we halved after 250 epochs, and we trained for 600 epochs.

To sample a contrastive dataset, we first randomly partitioned each unlabeled document in half to create the positive pairs. To create the negative pairs, we again randomly partitioned each unlabeled document in half, randomly permuted one set of half documents, and discarded collisions. This results in a contrastive dataset whose size is roughly twice the number of unlabeled documents. In the course of training our models for the contrastive task, we resampled a contrastive dataset every 3 epochs to prevent overfitting on any one particular dataset.

# 6.3 Results

Below we illustrate and discuss the results of our experiments. In all line plots, the training examples axis refers to the number of randomly selected labeled examples used to train the linear classifier. The shaded regions denote 95% confidence intervals computed over 10 replicates of this random selection procedure.

Baseline comparison. We compared the semi-supervised performance of NCE against all of the baselines. The left panel of Figure 1 displays the results of these experiments. Among the methods tested, NCE appears to outperform all the other methods, with dramatic improvements over all methods except word2vec in the low labeled data regime.
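For concreteness, here is one way the contrastive dataset construction described above (random halving, permuting one set of halves, and discarding collisions) could be implemented. The function name and data layout are assumptions for illustration, not the authors' code.

```python
import numpy as np

def make_contrastive_dataset(docs, rng=None):
    """Build (first_half, second_half, label) triples from a list of documents.

    docs: list of 1-D arrays of word indices, each with at least two words.
    Returns positive pairs (label 1) and permuted negative pairs (label 0).
    """
    rng = rng or np.random.default_rng()

    def split(doc):
        idx = rng.permutation(len(doc))
        half = len(doc) // 2
        return doc[idx[:half]], doc[idx[half:]]

    # Positive pairs: the two halves of the same document.
    positives = [(a, b, 1) for a, b in (split(d) for d in docs)]

    # Negative pairs: re-split, permute the second halves, drop collisions.
    firsts, seconds = zip(*(split(d) for d in docs))
    perm = rng.permutation(len(docs))
    negatives = [(firsts[i], seconds[perm[i]], 0)
                 for i in range(len(docs)) if perm[i] != i]

    return positives + negatives   # shuffling is left to the training loop
```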
Bag-of-words representations are quite competitive when there is an abundance of labeled data, but as the dimensionality of this representation is quite large, it has poor performance with limited samples. However, unsupervised dimensionality reduction on this representation appears to be unhelpful and actually degrades performance uniformly. It is also worth noting that LDA performs quite poorly. This could be for several reasons, including that fitting a topic model directly could be challenging on the relatively short documents in the corpus or that the document category is not well-expressed by a linear function of the topic proportions. Finally, we point out that word embedding representations (word2vec) perform quite well, but our document-level NCE procedure is slightly better, particularly when there are few labeled examples. This may reflect some advantage in learning document-level non-linear representations, as opposed to averaging word-level ones.

Figure 1: Experiments with AG news dataset. Left panel: test accuracy of methods as we increase the number of supervised training examples. Bottom left focuses in on NCE versus word2vec. Top middle: NCE performance as we vary network depth. Bottom middle: Relationship between contrastive error and test accuracy for NCE. Right: t-SNE visualizations of NCE and word2vec embeddings.

Model capacity. We investigated the effect of depth on the performance of NCE by training networks with one, two, and three hidden layers. In each case, the first hidden layer has 300 nodes and the additional hidden layers have 256 nodes. The top center panel of Figure 1 displays the results. It appears that using deeper models in the unsupervised phase leads to better performance when training a linear classifier on the learned representations. We did not experiment exhaustively with neural network architectures.

Contrastive loss. We also tracked the contrastive loss of the model on a holdout validation contrastive dataset. The bottom center panel of Figure 1 plots how this loss evolves over training epochs. Along with this contrastive loss, we checkpoint the model, train a linear classifier on 1400 training examples, and evaluate the supervised test accuracy as the representation improves. We see that test accuracy steadily improves as contrastive loss decreases. This suggests that in these settings, contrastive loss (which we can measure using an unlabeled validation set) is a good surrogate for downstream performance (which may not be measurable until we have a task at hand).

Visualizing embeddings. For a qualitative perspective, we visualize the embeddings from NCE using t-SNE with the default scikit-learn parameters (van der Maaten and Hinton, 2008; Pedregosa et al., 2011). To compare, we also used t-SNE to visualize the document-averaged word2vec embeddings.
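The visualization step amounts to a few lines; the sketch below assumes `embeddings` (an N x d array of document representations) and `labels` (class ids) are already computed, and is not the exact plotting code used for Figure 1.

```python
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt

# Project the learned document embeddings to 2D with default t-SNE settings.
coords = TSNE().fit_transform(embeddings)
plt.scatter(coords[:, 0], coords[:, 1], c=labels, s=3, cmap="tab10")
plt.title("t-SNE of NCE document embeddings")
plt.show()
```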
The right panels of Figure 1 show these visualizations on the 7,600 test documents colored according to their true label. While qualitative, the visualization of the NCE embeddings appears to be more clearly separated into label-homogeneous regions than that of word2vec.

# 7 Topic modeling simulations

The results of Section 4 show that if a model is trained to minimize the contrastive learning objective, then that model must also recover certain topic posterior information in the corpus. However, there are a few practical questions that remain: can we train such a model, how much capacity should it have, and how much data is needed in order to train it? In this section, we present simulations designed to study these questions.

# 7.1 Simulation setup

We considered the following single-topic generative model.

• Draw topics θ1, . . . , θK i.i.d. from a symmetric Dirichlet(α/K) distribution over ∆|V|.

• For each document:
– Draw a length n ∼ Poisson(λ).
– Draw a topic k ∼ Uniform([K]).
– Draw n words i.i.d. from θk.

This model can be thought of as a limiting case of the LDA model (Blei et al., 2003; Griffiths and Steyvers, 2004) when the document-level topic distribution is symmetric Dirichlet(β) with β < 1. In our experiments, we set K = 20, |V| = 5000, and λ = 30, and we varied α from 1 to 10. Notice that as α increases, the Dirichlet prior becomes more concentrated around the uniform distribution, so the topic distributions are more likely to be similar. Thus, we expect the contrastive learning problem to be more difficult with larger values of α.

We used contrastive models of the same form as in Section 6, namely models of the form f1, f2 where the final prediction is f1(x)^T f2(x') and f1 and f2 are fully-connected neural networks with three hidden layers. To measure the effect of model capacity, we trained two models: a smaller model with 256 nodes per hidden layer and a larger model with 512 nodes per hidden layer. Both models were trained for 100 epochs. We used all of the same optimization parameters as in Section 6, with the exception of dropout, which we did not use.

To study the effect of training data, we varied the rate r at which we resampled our entire contrastive training set from the ground truth topic model. Specifically, after every 1/r-th training epoch, we resampled 60,000 new documents and constructed a contrastive dataset from these documents. We varied the resampling rate r from 0.1 to 1.0, where larger values of r imply more training data. The total amount of training data varies from 600K documents to 6M documents.

Using the results from Section 4, we constructed the embedding φ(x) of a new document x using 1000 landmark documents, each sampled from the same generative model. We constructed the true likelihood matrix L of the landmark documents using the underlying topic model and recovered the model-based posterior L†φ(x). We measured accuracy as the fraction of testing documents for which the MAP topic under the model-based posterior matched the generating topic. We used 5000 testing documents and performed 5 replicates for each setting of parameters.

Figure 2: Topic modeling simulations. Left: Average total variation distance between topics.
Right: Topic recovery accuracy for contrastive models. Total number of documents sampled = 6M × rate. # 7.2 Results Figure 2 shows the results of our simulation study. In the left panel, we plot the average pairwise topic separation, measured in total variance distance, as a function of the Dirichlet hyperparameter α. We see that, indeed as we increase α the topics become more similar, which suggests that the contrastive learning problem will become more difficult. Then, in the center and right panels we visualize the accuracy of the MAP estimates on the test documents as a function of both the Dirichlet hyperparameter α and the resampling rate r. The center panel uses the small neural network with 256 nodes per hidden layer, while the right panel uses the larger network. The experiment identifies several interesting properties of the contrastive learning approach. First, as a sanity check, the algorithm does accurately predict the latent topics of the test documents in most experimental conditions and the accuracy is quite high when the problem is relatively easy (e.g., α is small). Second, the performance degrades as α increases, but this can be mitigated by increasing either the model capacity or the resampling rate. Specifically, we consistently see that for a fixed model and α, increasing the resampling rate improves the accuracy. A similar trend emerges when we fix α and rate and increase the model capacity. These empirical findings suggests that latent topics can be recovered by the contrastive learning approach, provided we have an expressive enough model and enough data. # 8 Discussion Our analysis shows that document-level contrastive learning under topic modeling assumptions yields a representation that exposes posterior topic information to linear predictors, and hence is suitable for downstream supervised learning. In semi-supervised learning experiments, we show that our contrastive learning procedure yields representations that improve classification accuracy, and the improvement is most striking when we have few labeled examples. We also explored the effects of model capacity and corpus size in a simulated topic modeling study, and we showed that increasing either of these factors leads to higher quality topic recovery. While we have focused on document representations and topic modeling assumptions in this work, our analysis more generally sheds light on the power of contrastive learning, which is empirically known to be useful in many settings. Aspects of our analysis may help characterize the expressiveness of contrastive learning representations under other modeling assumptions, for example in time-series modeling, and we hope to pursue these directions in future work. # Acknowledgements We thank Miro Dudík for initial discussions and suggesting the landmark embedding technique. This work was partially completed while CT and DH were visiting Microsoft Research NYC, and was supported in part 14 by NSF grant CCF-1740833. # References Naoki Abe, Bianca Zadrozny, and John Langford. Outlier detection by active learning. In International Conference on Knowledge Discovery and Data Mining, 2006. Sanjeev Arora, Rong Ge, and Ankur Moitra. Learning topic models–going beyond SVD. In Symposium on Foundations of Computer Science, 2012. Sanjeev Arora, Hrishikesh Khandeparkar, Mikhail Khodak, Orestis Plevrakis, and Nikunj Saunshi. A theoretical analysis of contrastive unsupervised representation learning. In International Conference on Machine Learning, 2019. 
David M Blei, Andrew Y Ng, and Michael I Jordan. Latent dirichlet allocation. Journal of Machine Learning Research, 2003. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv:1810.04805, 2018. Thomas L Griffiths and Mark Steyvers. Finding scientific topics. Proceedings of the National academy of Sciences, 101:5228–5235, 2004. Michael Gutmann and Aapo Hyvärinen. Noise-contrastive estimation: A new estimation principle for unnormalized statistical models. In International Conference on Artificial Intelligence and Statistics, 2010. Matthew Hoffman, Francis R Bach, and David M Blei. Online learning for latent dirichlet allocation. In Advances in Neural Information Processing Systems, 2010. Daniel Hsu, Sham M. Kakade, and Tong Zhang. Random design analysis of ridge regression. Foundations of Computational Mathematics, 2014. Alex Kulesza, N Raj Rao, and Satinder Singh. Low-rank spectral learning. In International Conference on Artificial Intelligence and Statistics, 2014. John Langford, Ruslan Salakhutdinov, and Tong Zhang. Learning nonlinear dynamic models. In International Conference on Machine Learning, 2009. Lajanugen Logeswaran and Honglak Lee. An efficient framework for learning sentence representations. In International Conference on Learning Representations, 2018. Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. Efficient estimation of word representations in vector space. arXiv:1301.3781, 2013a. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg Corrado, and Jeffrey Dean. Distributed representations of words and phrases and their compositionality. In Advances in Neural Information Processing Systems, 2013b. Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. Pytorch: An imperative style, high-performance deep learning library. In Advances in Neural Information Processing Systems, 2019. 15 F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 2011. Adityanarayanan Radhakrishnan, Mikhail Belkin, and Caroline Uhler. Overparameterized neural networks can implement associative memory. arXiv:1909.12362, 2019. Rajat Raina, Alexis Battle, Honglak Lee, Benjamin Packer, and Andrew Y Ng. Self-taught learning: transfer learning from unlabeled data. In International Conference on Machine Learning, 2007. David Sontag and Dan Roy. Complexity of inference in latent Dirichlet allocation. In Advances in Neural Information Processing Systems, 2011. Ingo Steinwart, Don Hush, and Clint Scovel. A classification framework for anomaly detection. Journal of Machine Learning Research, 2005. Marshall H Stone. The generalized Weierstrass approximation theorem. Mathematics Magazine, 1948. Joel A. Tropp. User-friendly tail bounds for sums of random matrices. Foundations of Computational Mathematics, 2012. Laurens van der Maaten and Geoffrey Hinton. Visualizing data using t-sne. Journal of Machine Learning Research, 2008. Xiaolong Wang and Abhinav Gupta. Unsupervised learning of visual representations using videos. 
In International Conference on Computer Vision, 2015. Xiang Zhang, Junbo Zhao, and Yann LeCun. Character-level convolutional networks for text classification. In Advances in Neural Information Processing Systems, 2015. 16 # A Proofs # A.1 Proof of general representation lemma Proof of Lemma 2. Fix a document x of length m and a document probability vector w. Conditioned on the assignment of each word in the document to a topic, probability of a document factorizes as (a2|w) =» Il-.0 (a;|zi) = S- (ite) (Iho (xz) )- m(w)'p(x), z€[K]™ i=1 ze€(K]™ \i=1 where the last line follows from collecting like terms. Using the form of g* from above, we have g(0,2") Pia =a, “ =) 2) to Pia) = = lw) =a! whe) dP(w) ? P(2) = x)P(al x!) P(al x)P(axl ” to P(e?) = dP P(w)2 =a) f, on b(a ae (wx = 2) n(x)" d(2") P(a@) = 2’) P(a@) = 2’) P(2®) = a) g(0,2") ? # A.2 Error analysis For the error analysis, recall that Dc is our contrastive distribution and f(a, 0’) = P(y=1] 2,2’), k *(a, a! Pia =a, oe Peal) = OR 8 1—ft(z,2’) P(a) = 2)P(x@ = 2)’ G(x) = GF (2b) = G*(@ 1), 9 ys ta where 1), ...,J)¢ are landmark documents. Also recall our approximation f to f*, and the resulting approxi- mations é or a! G(a, 2") = flea) 1— f(z, 2’) G(x) = (G(@,h),.-.,9(x, lar) Let η(x), ψ(x) denote the posterior/likelihood vectors from Lemma 1 or the posterior/likelihood polyno- mial vectors from Lemma 2. Say the length of this vector is N ≥ 1. Our goal is to show that linear functions in the representation ˆφ(x) can provide a good approximation to the target function xr n(x)'* where 6* € IRN is some fixed vector. To this end, define the risk of b as R(@) := min avy (1 (2) 'O* — B(a)"v)?. By Lemma | or Lemma 2, we know that for any «7, x’ we have 17 Recall the matrix L := ψ(l1) P(x(2) = l1) , . . . , ψ(lM ) P(x(2) = lM ) . This matrix is in RN ×M . If L has full row rank, then n(a)'O* = n(x)" LL*6* = 6*(x)"v* where ob (x) = (g*(a,h),.--,9*(@, lar) and v* = Lt@*. Thus, R(¢*) = 0. We will show that R(¢) can be bounded as well. Theorem 5. Suppose the following holds. (1) There is a constant σmin > 0 such that for any δ ∈ (0, 1), there is a number M0(δ) such that for an iid sample l1, . . . , lM with M ≥ M0(δ), with probability 1 − δ, the matrix L = · · · ψ(lM ) P(x(2)=lM ) ψ(l1) P(x(2)=l1) √ has minimum singular value at least σmin VM. (2) There exists a value fmax ∈ (0, 1) such that for all documents x and landmarks li 0< f(a, li), f* (a, li) < fmax- Let ˆf be the function returned by the contrastive learning algorithm, and let En 2= E(a,2!)~D, (ee ~ Fe, v)) | denote its mean squared error. For any δ ∈ (0, 1), if M ≥ M0(δ/2), then with probability at least 1 − δ over the random draw of l1, . . . , lM , we have |) 2 . n6) <= il (2604 eat) min . Remark 6. Theorem 4 follows from this theorem by additionally conditioning on the event that ˆf has the error bound in Assumption 1, and appropriately setting the failure probabilities δ. Proof of Theorem 5. We first condition on two events based on the sample l1, . . . , lM . The first is the event that L has full row rank and smallest non-zero singular value at least M σmin > 0; this event has probability at least 1 − δ/2. The second is the event that 5 2 * 2 2log(2/6 EY Bem (f(b) Fle)” < Blew ymmom (fet) — Flet)) +782 @ j=1 By Hoeffding’s inequality and the assumption that f and f* have range [0, fmax] © [0, 1], this event also has probability at least 1 — 6/2. 
By the union bound, both events hold simultaneously with probability at least 1 — 6.We henceforth condition on these two events for the remainder of the proof. 18 Since L has full row rank, via Cauchy-Schwarz, we have Rb) = min Epp, (9(2)" ~ O(2)"v)? < Exwp, (m(2)"6* — o(2)"0*)? = Banyy((6*(2)" ~ J@) Tv")? $ Eomys lle“ |] 6*(@)" - 5, = v"|[5- ILL We analyze the two factors on the right-hand side separately. Analysis of v*. For v*, we have Lyd 1. : «)2 2 2 2 le“ < Lt en < Spot OI, where we have used the fact that L has smallest non-zero singular value at least √ # V Moyin- Analysis of ¢* — b. For the other term, we have # Ex∼µ1 2 M =D Belts) ~ alah) =1 M . 2 7 Emm (P(e) - fle.) ante j=l a ( (2,2")~p1 @p2 (f(e,2") _ flee") + “a? > . o(2) ~ (a) where the final inequality follows from (6). Wrapping up. Putting everything together, we have . gr ||? . nig) < he ( ceatymmoya (F*(0.2") — Fla.a’)) "+ @) oF (1 _ fmax)* To conclude, we observe that half of the probability mass in Dc is µ1 ⊗ µ2, so n= Eeawyep. (F*(22!) ~ flea") > SBewy mene (F(22!) ~ flea") - & Rearranging and combining with (7) proves the claim. Calculations about the minimum singular value. Suppose we are in the single topic case where w ∈ {e1, . . . , eK}. Assume that mink Pr(w = ek) ≥ wmin. Further assumes that each topic k has an anchor word ak, satisfying O(ak|z = ek) ≥ amin. Then we will show that when M and m are large enough, the matrix L whose columns are ψ(x)/P(x) will have large singular values. 19 First note that if document x contains ak then ψ(x) is one sparse, and satisfies wW(a) e,P(x|w = ex) P(t) Sy P(w = k')P(2|w = k’) if a, € x: ex /P(w = k’) Therefore, the second moment matrix satisfies 2)\T K > SOP w = ex)P(az € x | ex) k=1 P(ay € x | ex) Yay = _ KS "| ek T | a, €x,w a Pw =e) Che k=l k Now, if the number of words per document is m ≥ 1/amin then P(ak ∈ x | ek) = 1 − (1 − O(ak | ek))m ≥ 1 − exp(−mO(ak|ek)) ≥ 1 − exp(−mamin) ≥ 1 − 1/e. Finally, using the fact that P(w = ek) ≤ 1, we see that the second moment matrix satisfies nee 1-1elkxK For the empirical ve we Perform a crude analysis and apply the Matrix-Hoeffding inequality. We have ||b(w)eb(«)"/P(w)? ||, < Kw; and so with probability at least 1 — 5, we have min 1 Avioli) vlad)" > P(i;) P(a)? 8k log(K/5) M W? in 2 . If we take M ≥ Ω(K log(K/δ)/w2 second moment matrix will be at least 1/2. min) then we will have that the minimum eigenvalue of the empirical 20
{ "id": "1909.12362" }
2003.01668
Model Assertions for Monitoring and Improving ML Models
ML models are increasingly deployed in settings with real world interactions such as vehicles, but unfortunately, these models can fail in systematic ways. To prevent errors, ML engineering teams monitor and continuously improve these models. We propose a new abstraction, model assertions, that adapts the classical use of program assertions as a way to monitor and improve ML models. Model assertions are arbitrary functions over a model's input and output that indicate when errors may be occurring, e.g., a function that triggers if an object rapidly changes its class in a video. We propose methods of using model assertions at all stages of ML system deployment, including runtime monitoring, validating labels, and continuously improving ML models. For runtime monitoring, we show that model assertions can find high confidence errors, where a model returns the wrong output with high confidence, which uncertainty-based monitoring techniques would not detect. For training, we propose two methods of using model assertions. First, we propose a bandit-based active learning algorithm that can sample from data flagged by assertions and show that it can reduce labeling costs by up to 40% over traditional uncertainty-based methods. Second, we propose an API for generating "consistency assertions" (e.g., the class change example) and weak labels for inputs where the consistency assertions fail, and show that these weak labels can improve relative model quality by up to 46%. We evaluate model assertions on four real-world tasks with video, LIDAR, and ECG data.
http://arxiv.org/pdf/2003.01668
Daniel Kang, Deepti Raghavan, Peter Bailis, Matei Zaharia
cs.AI, cs.LG
null
MLSys 2020
cs.AI
20200303
20200311
0 2 0 2 r a M 1 1 ] I A . s c [ 3 v 8 6 6 1 0 . 3 0 0 2 : v i X r a # MODEL ASSERTIONS FOR MONITORING AND IMPROVING ML MODELS # Daniel Kang * 1 Deepti Raghavan * 1 Peter Bailis 1 Matei Zaharia 1 ABSTRACT ML models are increasingly deployed in settings with real world interactions such as vehicles, but unfortunately, these models can fail in systematic ways. To prevent errors, ML engineering teams monitor and continuously improve these models. We propose a new abstraction, model assertions, that adapts the classical use of program assertions as a way to monitor and improve ML models. Model assertions are arbitrary functions over a model’s input and output that indicate when errors may be occurring, e.g., a function that triggers if an object rapidly changes its class in a video. We propose methods of using model assertions at all stages of ML system deployment, including runtime monitoring, validating labels, and continuously improving ML models. For runtime monitoring, we show that model assertions can find high confidence errors, where a model returns the wrong output with high confidence, which uncertainty-based monitoring techniques would not detect. For training, we propose two methods of using model assertions. First, we propose a bandit-based active learning algorithm that can sample from data flagged by assertions and show that it can reduce labeling costs by up to 40% over traditional uncertainty-based methods. Second, we propose an API for generating “consistency assertions” (e.g., the class change example) and weak labels for inputs where the consistency assertions fail, and show that these weak labels can improve relative model quality by up to 46%. We evaluate model assertions on four real-world tasks with video, LIDAR, and ECG data. # INTRODUCTION ML is increasingly deployed in complex contexts that re- quire inference about the physical world, from autonomous vehicles (AVs) to precision medicine. However, ML models can misbehave in unexpected ways. For example, AVs have accelerated toward highway lane dividers (Lee, 2018) and can rapidly change their classification of objects over time, causing erratic behavior (Coldewey, 2018; NTSB, 2019). As a result, quality assurance (QA) of models, including contin- uous monitoring and improvement, is of paramount concern. Unfortunately, performing QA for complex, real-world ML applications is challenging: ML models fail for diverse and reasons unknown before deployment. Thus, existing solutions that focus on verifying training, including formal verification (Katz et al., 2017), whitebox testing (Pei et al., 2017), monitoring training metrics (Renggli et al., 2019), and validating training code (Odena & Goodfellow, 2018), only give guarantees on a test set and perturbations thereof, so models can still fail on the huge volumes of deployment data that are not part of the test set (e.g., billions of images per day in an AV fleet). Validating input schemas (Polyzotis et al., 2019; Baylor et al., 2017) does not work for applications *Equal contribution 1Stanford University. Correspondence to: Daniel Kang <[email protected]>. Proceedings of the 3 rd MLSys Conference, Austin, TX, USA, 2020. Copyright 2020 by the author(s). with unstructured inputs that lack meaningful schemas, e.g., images. Solutions that check whether model performance re- mains consistent over time (Baylor et al., 2017) only apply to deployments that have ground truth labels, e.g., click-through rate prediction, but not to deployments that lack labels. 
As a step towards more robust QA for complex ML appli- cations, we have found that ML developers can often specify systematic errors made by ML models: certain classes of errors are repetitive and can be checked automatically, via code. For example, in developing a video analytics engine, we noticed that object detection models can identify boxes of cars that flicker rapidly in and out of the video (Figure 1), indicating some of the detections are likely wrong. Likewise, our contacts at an AV company reported that LIDAR and cam- era models sometimes disagree. While seemingly simple, similar errors were involved with a fatal AV crash (NTSB, 2019). These systematic errors can arise for diverse reasons, including domain shift between training and deployment data (e.g., still images vs. video), incomplete training data (e.g., no instances of snow-covered cars), and noisy inputs. To leverage the systematic nature of these errors, we propose model assertions, an abstraction to monitor and improve ML model quality. Model assertions are inspired by program assertions (Goldstine et al., 1947; Turing, 1949), one of the most common ways to monitor software. A model assertion is an arbitrary function over a model’s input and output that re- turns a Boolean (0 or 1) or continuous (floating point) severity Model Assertions for Monitoring and Improving ML Models (a) Frame 1, SSD (b) Frame 2, SSD (c) Frame 3, SSD (d) Frame 1, SSD (e) Frame 2, assertion (f) Frame 3, SSD corrected Figure 1. Top row: example of flickering in three consecutive frames of a video. The object detection method, SSD (Liu et al., 2016), failed to identify the car in the second frame. Bottom row: example of correcting the output of a model. The car bounding box in the second frame can be inferred using nearby frames based on a consistency assertion. marginal reduction in the number of assertions fired (§3). We show that our bandit algorithm can reduce labeling costs by up to 40% over traditional uncertainty-based methods. Third, we show that assertions can be used for weak supervision (Mintz et al., 2009; Ratner et al., 2017). We propose an API for writing consistency assertions about how attributes of a model’s output should relate that can also provide weak labels for training. Consistency assertions specify that data should be consistent between attributes and identifiers, e.g., a TV news host (identifier) should have consistent gender (attribute), or that certain predictions should (or should not) exist in temporally related outputs, e.g., cars in adjacent video frames (Figure 1). We demonstrate that this API can apply to a range of domains, including medical classification and TV news analytics. These weak labels can be used to improve relative model quality by up to 46% with no additional human labeling. score to indicate when faults may be occurring. For example, a model assertion that checks whether an object flickers in and out of video could return a Boolean value over each frame or the number of objects that flicker. While assertions may not offer a complete specification of correctness, we have found that assertions are easy to specify in many domains (§2). We explore several ways to use model assertions, both at runtime and training time. First, we show that model assertions can be used for runtime monitoring: they can be used to log unexpected behavior or automatically trigger corrective actions, e.g., shutting down an autopilot. 
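To make the runtime-monitoring use concrete, the sketch below shows one way a deployment loop could evaluate registered assertions on every prediction and log or act on those that fire. It is a minimal illustration rather than the OMG API: the `Assertion` wrapper, the per-assertion threshold, and the `on_fire` callback are all assumptions made for the example.

```python
import logging
from dataclasses import dataclass
from typing import Any, Callable, Iterable, List, Optional

logging.basicConfig(level=logging.INFO)


@dataclass
class Assertion:
    name: str
    fn: Callable[[Any, Any], float]   # (model input, model output) -> severity score
    threshold: float = 0.0            # "fires" when the severity exceeds this value


def monitor(model: Callable[[Any], Any],
            assertions: List[Assertion],
            stream: Iterable[Any],
            on_fire: Optional[Callable[[str, float], None]] = None):
    """Run the model over a stream of inputs, checking every registered assertion."""
    for x in stream:
        y = model(x)
        for a in assertions:
            severity = a.fn(x, y)
            if severity > a.threshold:
                logging.info("assertion %s fired with severity %.3f", a.name, severity)
                if on_fire is not None:
                    on_fire(a.name, severity)  # e.g., disengage an autopilot, page an engineer
        yield y
```

In an offline setting, the same loop can be pointed at historical data to populate a dashboard of assertion firings.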
Furthermore, model assertions can often find high confidence errors, where the model has high certainty in an erroneous output; these errors are problematic because prior uncertainty-based monitoring would not flag these errors. Additionally, and perhaps surprisingly, we have found that many groups are also interested in validating human- generated labels, which can be done using model assertions. We implement model assertions in a Python library, OMG1, that can be used with existing ML frameworks. We evaluate assertions on four ML applications: understanding TV news, AVs, video analytics, and classifying medical readings. We implement assertions for systematic errors reported by ML users in these domains, including checking for consistency between sensors, domain knowledge about object locations in videos, and medical knowledge about heart patterns. Across these domains, we find that model assertions we consider can be written with at most 60 lines of code and with 88-100% precision, that these assertions often find high-confidence errors (e.g., top 90th percentile by confidence), and that our new algorithms for active learning and weak supervision via assertions improve model quality over existing methods. In summary, we make the following contributions: 1. We introduce the abstraction of model assertions for monitoring and continuously improving ML models. Second, we show that assertions can be used for active learning, in which data is continuously collected to improve ML models. Traditional active learning algorithms select data to label based on uncertainty, with the intuition that “harder” data where the model is uncertain will be more informative (Settles, 2009; Coleman et al., 2020). Model assertions provide another natural way to find “hard” exam- ples. However, using assertions in active learning presents a challenge: how should the active learning algorithm select between data when several assertions are used? A data point can be flagged by multiple assertions or a single assertion can flag multiple data points, in contrast to a single uncertainty metric. To address this challenge, we present a novel bandit-based active learning algorithm (BAL). Given a set of data that have been flagged by potentially multiple model assertions, our bandit algorithm uses the assertions’ severity scores as context (i.e., features) and maximizes the 2. We show that model assertions can find high confidence errors, which would not be flagged by uncertainty metrics. 3. We propose a bandit algorithm to select data points for active learning via model assertions and show that it can reduce labeling costs by up to 40%. 4. We propose an API for consistency assertions that can automatically generate weak labels for data where the assertion fails, and show that weak supervision via these labels can improve relative model quality by up to 46%. # 2 MODEL ASSERTIONS We describe the model assertion interface, examples of model assertions, how model assertions can integrate into the ML development/deployment cycle, and its implementation in OMG. 1OMG is a recursive acronym for OMG Model Guardian. Model Assertions for Monitoring and Improving ML Models # 2.1 Model Assertions Interface We formalize the model assertions interface. Model assertions are arbitrary functions that can indicate when an error is likely to have occurred. They take as input a list of inputs and outputs from one or more ML models. They return a severity score, a continuous value that indicates the severity of an error of a specific type. 
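For concreteness, the multibox check evaluated later (three detections that overlap unrealistically, §5) can be phrased directly in this interface as a function from a detector's output boxes to a severity score. The sketch below is illustrative, and the 0.8 overlap threshold is an assumed value rather than the one used in the experiments.

```python
from itertools import combinations


def iou(b1, b2):
    """Intersection-over-union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(b1[0], b2[0]), max(b1[1], b2[1])
    ix2, iy2 = min(b1[2], b2[2]), min(b1[3], b2[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = ((b1[2] - b1[0]) * (b1[3] - b1[1])
             + (b2[2] - b2[0]) * (b2[3] - b2[1]) - inter)
    return inter / union if union > 0 else 0.0


def multibox(boxes, overlap_thresh=0.8):
    """Severity score: the number of box triples that all highly overlap."""
    return float(sum(
        1 for b1, b2, b3 in combinations(boxes, 3)
        if iou(b1, b2) > overlap_thresh
        and iou(b1, b3) > overlap_thresh
        and iou(b2, b3) > overlap_thresh))
```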
By convention, the 0 value represents an abstention. Boolean values can be implemented in model assertions by only returning 0 and 1. The severity score does not need to be calibrated, as our algorithms only use the relative ordering of scores. As a concrete example, consider an AV with a LIDAR sensor and camera and object detection models for each sensor. To check that these models agree, a developer may write: Appendix). We further describe how model assertions can be implemented via our consistency API for TV news in §4. Autonomous vehicles (AVs). AVs are required to execute a variety of tasks, including detecting objects and tracking lane markings. These tasks are accomplished with ML models from different sensors, such as visual, LIDAR, or ultrasound sensors (Davies, 2018). For example, a vision model might be used to detect objects in video and a point cloud model might be used to do 3D object detection. Our contacts at an AV company noticed that models from video and point clouds can disagree. We implemented a model assertion that projects the 3D boxes onto the 2D cam- era plane to check for consistency. If the assertion triggers, then at least one of the sensors returned an incorrect answer. def sensor_agreement(lidar_boxes, camera_boxes): failures = 0 for lidar_box in lidar_boxes: if no_overlap(lidar_box, camera_boxes): failures += 1 return failures Notably, our library OMG can register arbitrary Python functions as model assertions. Video analytics. Many modern, academic video analytics systems use an object detection method (Kang et al., 2017; 2019; Hsieh et al., 2018; Jiang et al., 2018; Xu et al., 2019; Canel et al., 2019) trained on MS-COCO (Lin et al., 2014), a corpus of still images. These still image object detection methods are deployed on video for detecting objects. None of these systems aim to detect errors, even though errors can affect analytics results. # 2.2 Example Use Cases and Assertions In this section, we provide use cases for model assertions that arose in discussions with industry and academic contacts, including AV companies and academic labs. We show example of errors caught by the model assertions described in this section in Appendix A and describe how one might look for assertions in other domains in Appendix B. Our discussions revealed two key properties in real-world ML systems. First, ML models are deployed on orders of magnitude more data than can reasonably be labeled, so a labeled sample cannot capture all deployment conditions. For example, the fleet of Tesla vehicles will see over 100× more images in a day than in the largest existing image dataset (Sun et al., 2017). Second, complex ML deployments are developed by large teams, of which some developers may not have the ability to manage all parts of the application. As a result, it is critical to be able to do QA collaboratively to cover the application end-to-end. In developing such systems, we noticed that objects flicker in and out of the video (Figure 1) and that vehicles overlap in unrealistic ways (Figure 7, Appendix). We implemented assertions to detect these. Medical classification. Deep learning researchers have created deep networks that can outperform cardiologists for classifying atrial fibrillation (AF, a form of heart condition) from single-lead ECG data (Rajpurkar et al., 2019). Our re- searcher contacts mentioned that AF predictions from DNNs can rapidly oscillate. 
The European Society of Cardiology guidelines for detecting AF require at least 30 seconds of signal before calling a detection (EHRA, 2010). Thus, pre- dictions should not rapidly switch between two states. A developer could specify this model assertion, which could be implemented to monitor ECG classification deployments. # 2.3 Using Model Assertions for QA Analyzing TV news. We spoke to a research lab studying bias in media via automatic analysis. This lab collected over 10 years of TV news (billions of frames) and executed face detection every three seconds. These detections are subse- quently used to identify the faces, detect gender, and classify hair color using ML models. Currently, the researchers have no method of identifying errors and manually inspect data. However, they additionally compute scene cuts. Given that most TV new hosts do not move much between scenes, we can assert that the identity, gender, and hair color of faces that highly overlap within the same scene are consistent (Figure 6, We describe how model assertions can be integrated with ML development and deployment pipelines. Importantly, model assertions are complementary to a range of other ML QA techniques, including verification, fuzzing, and statistical techniques, as shown in Figure 2. First, model assertions can be used for monitoring and validating all parts of the ML development/deployment pipeline. Namely, model assertions are agnostic to the source of the output, whether they be ML models or human labelers. Perhaps surprisingly, we have found several groups Model Assertions for Monitoring and Improving ML Models ML developers ‘S & a Data collection |_,Model development, |_,! Statistical Deployment and and labeling training validation monitoring Fuzzing | (Verification, robust ML (DeepXplore) | [Held-out set Figure 2. A system diagram of how model assertions can integrate into the ML development/deployment pipeline. Users can collaboratively add to an assertion database. We also show how related work can be integrated into the pipeline. Notably, verification only gives guarantees on a test set and perturbations thereof, but not on arbitrary runtime data. to also be interested in monitoring human label quality. Thus, concretely, model assertions can be used to validate human labels (data collection) or historical data (validation), and to monitor deployments (e.g., to populate dashboards). that OMG uses to improve model quality: BAL for active learning and consistency assertions for weak supervision. # 3 USING MODEL ASSERTIONS FOR ACTIVE LEARNING WITH BAL We introduce an algorithm called BAL to select data for active learning via model assertions. BAL assumes that a set of data points has been collected and a subset will be labeled in bulk. We found that labeling services (sca, 2019) and our industrial contacts usually label data in bulk. Given a set of data points that triggered model assertions, OMG must select which points to label. There are two key challenges which make data selection intractable in its full generality. First, we do not know the marginal utility of selecting a data point to label without labeling the data point. Second, even with labels, estimating the marginal gain of data points is expensive to compute as training modern ML models is expensive. Second, model assertions can be used at training time to select which data points to label in active learning. We describe BAL, our algorithm for data selection, in §3. 
Third, model assertions can be used to generate weak labels to further train ML models without additional human labels. We describe how OMG accomplishes this via consistency assertions in §4. Users can also register their own weak supervision rules. To address these issues, we make simplifying assumptions. We describe the statistical model we assume, the resource- unconstrained algorithm, our simplifying assumptions, and BAL. We note that, while the resource-unconstrained algorithm can produce statistical guarantees, BAL does not. We instead empirically verify its performance in Section 5. # Implementing Model Assertions in OMG We implement a prototype library for model assertions, OMG, that works with existing Python ML training and deployment frameworks. We briefly describe OMG’s implementation. OMG logs user-defined assertions as callbacks. The simplest way to add an assertion is through AddAssertion(func), where func is a function of the inputs and outputs (see below). OMG also provides an API to add consistency asser- tions as described in §4. Given this database, OMG requires a callback after model execution that takes the model’s input and output as input. Given the model’s input and output, OMG will execute the assertions and record any errors. We assume the assertion signature is similar to the following; this assertion signature is for the example in Figure 1: Data selection as multi-armed bandits. We cast the data selection problem as a multi-armed bandit (MAB) problem (Auer et al., 2002; Berry & Fristedt, 1985). In MABs, a set of “arms” (i.e., individual data points) is provided and the user must select a set of arms (i.e., points to label) to achieve the maximal expected utility (e.g., maximize validation accuracy, minimize number of assertions that fire). MABs have been studied in a wide variety of settings (Radlinski et al., 2008; Lu et al., 2010; Bubeck et al., 2009), but we assume that the arms have context associated with them (i.e., severity scores from model assertions) and give submodular rewards (defined below). The rewards are possibly time-varying. We further assume there is an (unknown) smoothness parameter that determines the similarity between arms of similar contexts (formally, the α in the H¨older condition (Evans, 1998)). The following presentation is inspired by Chen et al. (2018). def flickering(recent_frames: List[PixelBuf], recent_outputs: List[BoundingBox]) -> Float For active learning, OMG will take a batch of data and return indices for which data points to label. For weak supervision, OMG will take data and return weak labels where valid. Users can specify weak labeling functions associated with assertions to help with this. Concretely, we assume the data will be labeled in T rounds and denote the rounds t = 1,...,T . We refer to the set of n data points as N = {1,...,n}. Each data point has a d dimensional feature vector associated with it, where d is the number of model assertions. We refer to the feature vector as xt i, where i is the data point index and t is the round index; from here, we will refer to the data points as xt i. Each entry in a feature vector is the severity score from a model assertion. The feature vectors can change over time as the model predictions, and therefore assertions, change over the course of training. 
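Concretely, the context for each candidate data point is just the vector of severity scores produced by the d assertions. A minimal sketch of assembling these contexts is below; the `model` and `assertions` callables are placeholders for the deployed model and the registered assertion functions.

```python
import numpy as np


def build_contexts(data_points, assertions, model):
    """Return an (n, d) array whose i-th row is the severity-score context for point i."""
    contexts = np.zeros((len(data_points), len(assertions)))
    for i, x in enumerate(data_points):
        y = model(x)
        for j, assertion in enumerate(assertions):
            contexts[i, j] = assertion(x, y)
    return contexts
```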
In the following two sections, we describe two key methods Model Assertions for Monitoring and Improving ML Models Input: T , Bt, N , R Output: choice of arms St at rounds 1,...,T for t = 1,...,T do if Underexplored arms then Select arms St from under-explored contexts at random else Select arms St by highest marginal gain (Eq. 1): for i = 1,...,Bt do St i = argmaxj∈N \St i−1 ∆R({j},St i−1) end end # end Algorithm 1: A summary of the CC-MAB algorithm. CC-MAB first explores under-explored arms, then greedily selects arms with highest marginal gain. Full details are given in (Chen et al., 2018). Input: T , Bt, N , R Output: choice of arms St at rounds 1,...,T for t = 1,...,T do if t = 0 then Select data points uniformly at random from the d model assertions else Compute the marginal reduction rm of the number of times model assertion m = 1,...,d triggered from the previous round; if all rm < 1% then Fall back to baseline method; continue; end for i = 1,...,Bt do Select model assertion m proportional to rm; Select xi that triggers m, sample proportional to severity score rank; Add xi to St; end We assume there is a budget on the number of arms (i.e., data points to label), Bt, at every round. The user must select a set of arms St = {xs1 ,...,xsBt } such that |St| ≤ Bt. We assume that the reward from the arms, R(St), is submodular in St. Intuitively, submodularity implies diminishing marginal returns: adding the 100th data point will not improve the reward as much as adding the 10th data point. Formally, we first define the marginal gain of adding an extra arm: # end # end Algorithm 2: BAL algorithm for data selection for continuous training. BAL samples from the assertions at random in the first round, then selects the assertions that result in highest marginal reduction in the number of assertions that fire in subsequent rounds. BAL will default to random sampling or uncertainty sampling if none of the assertions reduce. ∆R({m},A) = R(A∪{m})−R(A). (1) where AC N isa subset of arms and m€ N is an additional arm such that m ¢ A. The submodularity condition states that, for any ACC CN andm¢C ∆R({m},A) ≥ ∆R({m},C). (2) Resource-unconstrained algorithm. Assuming an infinite labeling and computational budget, we describe an algorithm that selects data points to train on. Unfortunately, this algo- rithm is not feasible as it requires labels for every point and training the ML model many times. If we assume that rewards for individual arms can be queried, then a recent bandit algorithm, CC-MAB (Chen et al., 2018) can achieve a regret of O(cT 2αd 3αd log(T )) for α to be the smoothness parameter. A regret bound is the (asymptotic) difference with respect to an oracle algorithm. Briefly, CC-MAB explores under-explored arms until it is confident that certain arms have highest reward. Then, it greedily takes the highest reward arms. Full details are given in (Chen et al., 2018) and summarized in Algorithm 1. Unfortunately, CC-MAB requires access to an estimate of selecting a single arm. Estimating the gain of a single arm requires a label and requires retraining and reevaluating the model, which is computationally infeasible for expensive- to-train ML models, especially modern deep networks. Resource-constrained algorithm. We make simplify- ing assumptions and use these to modify CC-MAB for the resource-constrained setting. 
Our simplifying assumptions are that 1) data points with similar contexts (i.e., xt i) are inter- changeable, 2) data points with higher severity scores have higher expected marginal gain, and 3) reducing the number of triggered assertions will increase accuracy. Under these assumptions, we do not require an estimate of the marginal reward for each arm. Instead, we can approximate the marginal gain from selecting arms with similar contexts by the total number of these arms that were selected. This has two benefits. First, we can train a model on a set of arms (i.e., data points) in batches instead of adding single arms at a time. Second, we can select data points of similar contexts at random, without having to compute its marginal gain. Leveraging these assumptions, we can simplify Algorithm 1 to require less computation for training models and to not require labels for all data points. Our algorithm is described in Algorithm 2. Briefly, we approximate the marginal gain of selecting batches of arms and select arms proportional to the marginal gain. We additionally allocate 25% of the budget in each round to randomly sample arms that triggered different model assertions, uniformly; this is inspired by e-greedy algorithms (Tokic & Palm, 2011). This ensures that no contexts (i.e., model assertions) are underexplored as Model Assertions for Monitoring and Improving ML Models training progresses. Finally, in some cases (e.g., with noisy assertions), it may not be possible to reduce the number of assertions that fire. In this case, BAL will default to random sampling or uncertainty sampling, as specified by the user. outputs yi,j for each input. For example, each output could be an object detected in a video frame. The user provides two functions over outputs yi,j: # 4 CONSISTENCY ASSERTIONS AND WEAK SUPERVISION • Id(yi,j) returns an identifier for the output yi,j, which is simply an opaque value. • Attrs(yi,j) returns zero or more attributes for the output yi,j, which are key-value pairs. Although developers can write arbitrary Python functions as model assertions in OMG, we found that many assertions can be specified using an even simpler, high-level abstraction that we called consistency assertions. This interface allows OMG to generate multiple Boolean model assertions from a high-level description of the model’s output, as well as automatic correction rules that propose new labels for data that fail the assertion to enable weak supervision. The key idea of consistency assertions is to specify which attributes of a model’s output are expected to match across many invocations to the model. For example, consider a TV news application that tries to locate faces in TV footage and then identify their name and gender (one of the real-world applications we discussed in §2.2). The ML developer may wish to assert that, within each video, each person should consistently be assigned the same gender, and should appear on the screen at similar positions on most nearby frames. Consistency assertions let developers specify such requirements by providing two functions: In addition to checking attributes, we found that many appli- cations also expect their identifiers to appear in a “temporally consistent” fashion, where objects do not disappear and reappear too quickly. For example, one would expect cars identified in the video to stay on the screen for multiple frames instead of “flickering” in and out in most cases. 
To express this expectation, developers can provide a temporal consis- tency threshold, T , which specifies that each identifier should not appear or disappear for intervals less than T seconds. For example, we might set T to one second for TV footage that frequently cuts across frames, or 30 seconds for an activity classification algorithm that distinguishes between walking and biking. The full API for adding a consistency assertion is therefore AddConsistencyAssertion(Id, Attrs, T ). Examples. We briefly describe how one can use consistency assertions in several ML tasks motivated in §2.2: • An identification function that returns an identifier for each model output. For example, in our TV application, this could be the person’s name as identified by the model. Face identification in TV footage: This application uses multiple ML models to detect faces in images, match them to identities, classify their gender, and classifier their hair color. We can use the detected identity as our Id function and gender/hair color as attributes. • An attributes function that returns a list of named attributes expected to be consistent for each identifier. In our example, this could return the gender attribute. Given these two functions, OMG generates multiple Boolean assertions that check whether the various attributes of outputs with a common identifier match. In addition, it generates correction rules that can replace an inconsistent attribute with a guess at that attribute’s value based on other instances of the identifier (we simply use the most common value). By run- ning the model and these generated assertions over unlabeled data, OMG can thus automatically generate weak labels for data points that do not satisfy the consistency assertions. Notably, OMG provides another way of producing labels for training that is complementary to human-generated labels and other sources of weak labels. OMG is especially suited for unstructured sources, e.g., video. We show in §5 that these weak labels can automatically increase model quality. Video analytics for traffic cameras: This application aims to detect vehicles in video street traffic, and suffers from problems such as flickering or changing classifications for an object. The model’s output is bounding boxes with classes on each frame. Because we lack a globally unique identifier (e.g., license plate number) for each object, we can assign a new identifier for each box that appears and assign the same identifier as it persists through the video. We can treat the class as an attribute and set T as well to detect flickering. Heart rhythm classification from ECGs: In this application, domain experts informed us that atrial fibrillation heart rhythms need to persist for at least 30 seconds to be considered a problem. We used the detected class as our identifier and set T to 30 seconds. # 4.2 Generating Assertions and Labels from the API # 4.1 API Details The consistency assertions API supports ML applications that run over multiple inputs xi and produce zero or more Given the Id, Attrs, and T values, OMG automatically generates Boolean assertions to check for matching attributes and to check that when an identifier appears in the data, it persists for at least T seconds. These assertions are treated the same as user-provided ones in the rest of the system. 
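As a rough illustration of how these pieces fit together, the traffic-camera example above could be wired up as follows. The detection dictionary layout and the upstream tracker that assigns `track_id` are assumptions made for the sketch; only the `AddConsistencyAssertion(Id, Attrs, T)` signature comes from the text.

```python
# Sketch of wiring the consistency API for the traffic-camera example.
# Each detection is assumed to be a dict carrying a `track_id` assigned by an
# upstream tracker (identifiers are assigned as boxes persist through the
# video) and a predicted class `cls`.

def box_id(detection):
    """Id function: an opaque per-object identifier."""
    return detection["track_id"]


def box_attrs(detection):
    """Attrs function: attributes expected to stay consistent for an identifier."""
    return {"class": detection["cls"]}


# Registration (the T value here is only an example):
# omg.AddConsistencyAssertion(box_id, box_attrs, T=1.0)  # T in seconds
```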
Model Assertions for Monitoring and Improving ML Models OMG also automatically generates corrective rules that propose a new label for outputs that do not match their identifier’s other outputs on an attribute. The default behavior is to propose the most common value of that attribute (e.g., the class detected for an object on most frames), but users can also provide a WeakLabel function to suggest an alternative based on all of that object’s outputs. For temporal consistency constraints via T , OMG will as- sert by default that at most one transition can occur within a T -second window; this can be overridden. For example, an identifier appearing is valid, but an identifier appearing, disap- pearing, then appearing is invalid. If a violation occurs, OMG will propose to remove, modify, or add predictions. In the latter case, OMG needs to know how to generate an expected output on an input where the object was not identified (e.g., frames where the object flickered out in Figure 1). OMG requires the user to provide a WeakLabel function to cover this case, since it may require domain specific logic, e.g., averaging the locations of the object on nearby video frames. and appear. The multibox assertion fires when three boxes highly overlap (Figure 7, Appendix). The flicker and appear assertions are implemented with our consistency API as described in §4. Autonomous vehicles. We studied the problem of ob- ject detection for autonomous vehicles using the NuScenes dataset (Caesar et al., 2019), which contains labeled LIDAR point clouds and associated visual images. We split the data into separate train, unlabeled, and test splits. We detected vehicles only. We use the open-source Second model with PointPillars (Yan et al., 2018; Lang et al., 2019) for LIDAR detections and SSD for visual detections. We improve SSD via active learning and weak supervision in our experiments. As NuScenes contains time-aligned point clouds and images, we deployed a custom assertion for 2D and 3D boxes agreeing, and the multibox assertion. We deployed a custom weak supervision rule that imputed boxes from the 3D predictions. While other assertions could have been deployed (e.g., flicker), we found that the dataset was not sampled frequently enough (at 2 Hz) for these assertions. # 5 EVALUATION # 5.1 Experimental Setup We evaluated OMG and model assertions on four diverse ML workloads based on real industrial and academic use-cases: analyzing TV news, video analytics, autonomous vehicles, and medical classification. For each domain, we describe the task, dataset, model, training procedure, and assertions. A summary is given in Table 1. Medical classification. We studied the problem of clas- sifying atrial fibrillation (AF) via ECG signals. We used a convolutional network that was shown to outperform car- diologists (Rajpurkar et al., 2019). Unfortunately, the full dataset used in (Rajpurkar et al., 2019) is not publicly avail- able, so we used the CINC17 dataset (cin, 2017). CINC17 contains 8,528 data points that we split into train, validation, unlabeled, and test splits. TV news. Our contacts analyzing TV news provided us 50 hour-long segments that were known to be problematic. They further provided pre-computed boxes of faces, identities, and hair colors; this data was computed from a range of models and sources, including hand-labeling, weak labels, and custom classifiers. We implemented the consistency assertions described in §4. 
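A rough outline of the 2D/3D agreement check deployed here is sketched below. The projection and overlap computations are delegated to caller-supplied helpers (NuScenes provides the necessary calibration, but the exact projection call is omitted), and the 0.3 IoU threshold is an assumed value for illustration.

```python
def agree(lidar_boxes_3d, camera_boxes_2d, project_to_image, iou, iou_thresh=0.3):
    """Severity score: projected LIDAR boxes with no sufficiently overlapping camera box.

    `project_to_image` maps a 3D box to a 2D box in the camera plane using the
    sensor calibration, and `iou` computes 2D box overlap (e.g., the helper
    sketched in Section 2.1); both are assumed to be supplied by the caller.
    """
    failures = 0
    for box3d in lidar_boxes_3d:
        projected = project_to_image(box3d)
        if not any(iou(projected, b) > iou_thresh for b in camera_boxes_2d):
            failures += 1
    return float(failures)
```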
We were unable to access the training code for this domain so were unable to perform retraining experiments for this domain. We consulted with medical researchers and deployed an assertion that asserts that the classification should not change between two classes in under a 30 second time period (i.e., the assertion fires when the classification changes from A → B → A within 30 seconds), as described in §4. # 5.2 Model Assertions can be Written with High Precision and Few LOC Video analytics. Many modern video analytics systems use object detection as a core primitive (Kang et al., 2017; 2019; Hsieh et al., 2018; Jiang et al., 2018; Xu et al., 2019; Canel et al., 2019), in which the task is to localize and classify the objects in a frame of video. We focus on the object detection portion of these systems. We used a ResNet-34 SSD (Liu et al., 2016) (henceforth SSD) model pretrained on MS-COCO (Lin et al., 2014). We deployed SSD for detecting vehicles in the night-street (i.e., jackson) video that is commonly used (Kang et al., 2017; Xu et al., 2019; Canel et al., 2019; Hsieh et al., 2018). We used a separate day of video for training and testing. We first asked whether model assertions could be written succinctly. To test this, we implemented the model assertions described above and counted the lines of code (LOC) necessary for each assertion. We count the LOC for the identity and attribute functions for the consistency assertions (see Table 1 for a summary of assertions). We counted the LOC with and without the shared helper functions (e.g., computing box overlap); we double counted the helper functions when used between assertions. As we show in Table 2, both consistency and domain-specific assertions can be written in under 25 LOC excluding shared helper functions and under 60 LOC when including helper functions. Thus, model assertions can be written with few LOC. We deployed three model assertions: multibox, flicker, We then asked whether model assertions could be written Model Assertions for Monitoring and Improving ML Models Model Custom SSD (Liu et al., 2016) Assertions Consistency (§4, news) Three vehicles should not highly overlap (multibox), consistency assertions (flicker and appear) Agreement of Point cloud and image detections (agree), multibox Second (Yan et al., 2018), SSD ResNet (Rajpurkar et al., 2019) Consistency assertion within a 30s time window (ECG) # Task TV news Object detection (video) # Vehicle detection (AVs) AF classification Table 1. A summary of tasks, models, and assertions used in our evaluation. Assertion news ECG flicker appear multibox agree LOC (no helpers) 7 23 18 18 14 11 LOC (inc. helpers) 39 50 60 35 28 28 Table 2. Number of lines of code (LOC) for each assertion. Consistency assertions are on the top and custom assertions are on the bottom. All assertions could be written in under 60 LOC including helper functions, when double counting between assertions. The assertion main body could be written in under 25 LOC in all cases. The helper functions included utilities such as computing the overlap between boxes. Assertion news ECG flicker appear multibox agree Precision (identifier and output) 100% 100% 100% 100% N/A N/A Precision (model output only) 100% 100% 96% 88% 100% 98% 90 i ee 80 w Appear 9 te Multibox 5 70 oi Flicker a 60 50 + T T T Rank # Fs Figure 3. Percentile of confidence of the top-10 ranked errors by confidence found by OMG for video analytics. The x-axis is the rank of the errors caught by model assertions, ordered by rank. 
The y-axis is the percentile of confidence among all the boxes. As shown, model assertions can find errors where the original model has high confidence (94th percentile), allowing them to complement existing confidence-based methods for data selection. chali et al., 2019). Furthermore, sampling solutions that are based on confidence would be unable to identify these errors. Table 3. Precision of our model assertions we deployed on 50 randomly selected examples. The top are consistency assertions and the bottom are custom assertions. We report both precision in the ML model outputs only and when counting errors in the identi- fication function and ML model outputs for consistency assertions. As shown, model assertions can be written with 88-100% precision across all domains when only counting errors in the model outputs. To determine whether model assertions could identify high confidence errors, we collected the 10 data points with highest confidence error for each of the model assertions deployed for video analytics. We then plotted the percentile of the confidence among all the boxes for each error. with high precision. To test this, we randomly sampled 50 data points that triggered each assertion and manually checked whether that data point had an incorrect output from the ML model. The consistency assertions return clusters of data points (e.g., appear) and we report the precision for errors in both the identifier and ML model outputs and only the ML model outputs. As we show in Table 3, model assertions achieve at least 88% precision in all cases. As shown in Figure 3, model assertions can identify errors within the top 94th percentile of boxes by confidence (the flicker confidences were from the average of the surrounding boxes). Importantly, uncertainty-based methods of monitoring would not catch these errors. We further show that model assertions can identify errors in human labels, which effectively have a confidence of 1. These results are shown in Appendix E. # 5.4 Model Assertions can Improve Model Quality via Active Learning # 5.3 Model Assertions can Identify High-Confidence Errors We asked whether model assertions can identify high- confidence errors, or errors where the model returns the wrong output with high confidence. High-confidence errors are important to identify as confidence is used in downstream tasks, such as analytics queries and actuation decisions (Kang et al., 2017; 2019; Hsieh et al., 2018; Chin- We evaluated OMG’s active learning capabilities and BAL using the three domains for which we had access to the training code (visual analytics, ECG, AVs). Multiple model assertions. We asked whether multiple model assertions could be used to improve model quality via continuous data collection. We deployed three asser- tions over night-street and two assertions for NuScenes. Model Assertions for Monitoring and Improving ML Models (a) Active learning for night-street. (b) Active learning for NuScenes. 70.0 Random > —— Uncertainty © 67.5 7 —— BaL 5 3 gz 65.0 62.5 “+ T T T T T 0 1 2 3 4 5 Round Figure 5. Active learning results with a single assertion for the ECG dataset. As shown, with just a single assertion, model-assertion based active learning can match uncertainty sampling and outperform random sampling. Domain Video analytics (mAP) AVs (mAP) ECG (% accuracy) Pretrained Weakly supervised 49.9 34.4 14.1 10.6 72.1 70.7 Table 4. Accuracy of the pretrained and weakly supervised models for video analytics, AV and ECG domains. 
Weak supervision can improve accuracy with no human-generated labels. Figure 4. Performance of random sampling, uncertainty sampling, uniform sampling from model assertions, and BAL for active learn- ing. The round is the round of data collection (see §3). As shown in (a), BAL improves accuracy on unseen data and can achieve an accuracy target (62% mAP) with 40% fewer labels compared to random and uncertainty sampling for night-street. BAL also outperforms both baselines for the NuScenes dataset as shown in (b). We show figures with all rounds of active learning in Appendix D. model assertion could be used to improve model quality. We ran five rounds of data labeling with 100 examples each round for ECG datasets. We ran the experiment 8 times and report averages. We show results in Figure 5. As shown, data collection with a single model assertion generally matches or outperforms both uncertainty and random sampling. We used random sampling, uncertainty sampling with “least confident” (Settles, 2009), uniform sampling from data that triggered assertions, and BAL for the active learning strate- gies. We used the mAP metric for both datasets, which is widely used for object detection (Lin et al., 2014; He et al., 2017). We defer hyperparmeters to Appendix C. As we show in Figure 4, BAL outperforms both random sampling and uncertainty sampling on both datasets after the first round, which is required for calibration. BAL also out- performs uniform sampling from model assertions by the last round. For night-street, at a fixed accuracy threshold of 62%, BAL uses 40% fewer labels than random and uncer- tainty sampling. By the fifth round, BAL outperforms both random sampling and uncertainty sampling by 1.5% mAP. While the absolute change in mAP may seem small, doubling the model depth, which doubles the computational budget, on MS-COCO achieves a 1.7% improvement in mAP (ResNet- 50 FPN vs. ResNet-101 FPN) (Girshick et al., 2018). These results are expected, as prior work has shown that un- certainty sampling can be unsuited for deep networks (Sener & Savarese, 2017). # 5.5 Model Assertions can Improve Model Quality via Weak Supervision We used our consistency assertions to evaluate the impact of weak supervision using assertions for the domains we had weak labels for (video analytics, AVs, and ECG). For night-street, we used 1,000 additional frames with 750 frames that triggered flicker and 250 random frames with a learning rate of 5×10−6 for a total of 6 epochs. For the NuScenes dataset, we used the same 350 scenes to bootstrap the LIDAR model as in the active learning experiments. We trained with 175 scenes of weakly supervised data for one epoch with a learning rate of 5×10−5. For the ECG dataset, we used 1,000 weak labels and the same training procedure as in active learning. Table 4 shows that model assertion-based weak supervision can improve relative performance by 46.4% for video analyt- ics and 33% for AVs. Similarly, the ECG classification can also improve with no human-generated labels. These results show that model assertions can be useful as a primitive for improving model quality with no additional data labeling. # 6 RELATED WORK Single model assertion. Due to the limited data quantities for the ECG dataset, we were unable to deploy more than one assertion. Nonetheless, we further asked whether a single ML QA. 
A range of existing ML QA tools focus on validat- ing inputs via schemas or tracking performance over time Model Assertions for Monitoring and Improving ML Models (Polyzotis et al., 2019; Baylor et al., 2017). However, these systems apply to situations with meaningful schemas (e.g., tabular data) and ground-truth labels at test time (e.g., pre- dicting click-through rate). While model assertions could also apply to these cases, they also cover situations that do not contain meaningful schemas or labels at test time. ods encode structure/inductive biases into training proce- dures or models (BakIr et al., 2007; Haussler, 1988; BakIr et al., 2007). While promising, designing algorithms and models with specific inductive biases can be challenging for non-experts. Additionally, these methods generally do not contain runtime checks for aberrant behavior. Other ML QA systems focus on training pipelines (Renggli et al., 2019) or validating numerical errors (Odena & Goodfellow, 2018). These approaches are important at finding pre-deployment bugs, but do not apply to test-time scenarios; they are complementary to model assertions. White-box testing systems, e.g., DeepXplore (Pei et al., 2017), test ML models by taking inputs and perturbing them. However, as discussed, a validation set cannot cover all possibilities in the deployment set. Furthermore, these systems do not give guarantees under model drift. Weak Supervision, Semi-supervised Learning. Weak su- pervision leverages higher-level and/or noisier input from human experts to improve model quality (Mintz et al., 2009; Ratner et al., 2017; Jin et al., 2018). In semi-supervised learn- ing, structural assumptions over the data are used to leverage unlabeled data (Zhu, 2011). However, to our knowledge, both of these methods do not contain runtime checks and are not used in model-agnostic active learning methods. # 7 DISCUSSION Since our initial workshop paper (Kang et al., 2018), several works have extended model assertions (Arechiga et al., 2019; Henzinger et al., 2019). Verified ML. Verification has been applied to ML models in simple cases. For example, Reluplex (Katz et al., 2017) can verify that extremely small networks will make correct con- trol decisions given a fixed set of inputs and other work has shown that similarly small networks can be verified against minimal perturbations of a fixed set of input images (Raghu- nathan et al., 2018). However, verification requires a specifi- cation, which may not be feasible to implement, e.g., even humans may disagree on certain predictions (Kirillov et al., 2018). Furthermore, the largest verified networks we are aware of (Katz et al., 2017; Raghunathan et al., 2018; Wang et al., 2018; Sun et al., 2019) are orders of magnitude smaller than the networks we consider. While we believe model assertions are an important step towards a practical solution for monitoring and continuously improving ML models, we highlight three important limitations of model assertions, which may be fruitful directions for future work. First, certain model assertions may be difficult to express in our current API. While arbitrary code can be expressed in OMG’s API, certain temporal assertions may be better expressed in a complex event processing language (Wu et al., 2006). We believe that domain-specific languages for model assertions will be a fruitful area of future research. Second, we have not thoroughly evaluated model assertions’ performance in real-time systems. 
Model assertions may add overhead to systems where actuation has tight latency constraints, e.g., AVs. Nonetheless, model assertions can be used over historical data for these systems. We are actively collaborating with an AV company to explore these issues. Software Debugging. Writing correct software and verify- ing software has a long history, with many proposals from the research community. We hope that many such practices are adopted in deploying machine learning models; we focus on assertions in this work (Goldstine et al., 1947; Turing, 1949). Assertions have been shown to reduce the prevalence of bugs, when deployed correctly (Kudrjavets et al., 2006; Mahmood et al., 1984). There are many other such methods, such as formal verification (Klein et al., 2009; Leroy, 2009; Keller, 1976), conducting large-scale testing (e.g., fuzzing) (Takanen et al., 2008; Godefroid et al., 2012), and symbolic execution to trigger assertions (King, 1976; Cadar et al., 2008). Proba- bilistic assertions have been used to verify simple distribu- tional properties of programs, such as differentially private programs should return an expected mean (Sampson et al., 2014). However, ML developers may not be able to specify distributions and data may shift in deployment. Third, certain issues in ML systems, such as bias in training sets, are out of scope for model assertions. We hope that complementary systems, such as TFX (Baylor et al., 2017), can help improve quality in these cases. # 8 CONCLUSION In this work, we introduced model assertions, a model- agnostic technique that allows domain experts to indicate errors in ML models. We showed that model assertions can be used at runtime to detect high-confidence errors, which prior methods would not detect. We proposed methods to use model assertions for active learning and weak supervision to improve model quality. We implemented model assertions in a novel library, OMG, and demonstrated that they can apply to a wide range of real-world ML tasks, improving monitor- ing, active learning, and weak supervision for ML models. # Structured Prediction, Inductive Bias. Several ML meth- Model Assertions for Monitoring and Improving ML Models # ACKNOWLEDGEMENTS This research was supported in part by affiliate members and other supporters of the Stanford DAWN project—Ant Financial, Facebook, Google, Infosys, NEC, and VMware—as well as Toyota Research Institute, Northrop Grumman, Cisco, SAP, and the NSF under CAREER grant CNS-1651570 and Graduate Research Fellowship grant DGE-1656518. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation. Toyota Research Institute (“TRI”) provided funds to assist the authors with their research but this article solely reflects the opinions and conclusions of its authors and not TRI or any other Toyota entity. We further acknowledge Kayvon Fatahalian, James Hong, Dan Fu, Will Crichton, Nikos Arechiga, and Sudeep Pillai for their productive discussions on ML applications. # REFERENCES Cadar, C., Dunbar, D., Engler, D. R., et al. Klee: Unassisted and automatic generation of high-coverage tests for In OSDI, volume 8, pp. complex systems programs. 209–224, 2008. Caesar, H., Bankiti, V., Lang, A. H., Vora, S., Liong, V. E., Xu, Q., Krishnan, A., Pan, Y., Baldan, G., and Beijbom, O. nuscenes: A multimodal dataset for autonomous driving. arXiv preprint arXiv:1903.11027, 2019. 
Canel, C., Kim, T., Zhou, G., Li, C., Lim, H., Andersen, D., Kaminsky, M., and Dulloor, S. Scaling video analytics on constrained edge nodes. SysML, 2019.
Chen, L., Xu, J., and Lu, Z. Contextual combinatorial multi-armed bandits with volatile arms and submodular reward. In Advances in Neural Information Processing Systems, pp. 3247–3256, 2018.
AF classification from a short single lead ECG recording: the PhysioNet/Computing in Cardiology Challenge 2017, 2017. URL https://physionet.org/challenge/2017/.
Chinchali, S., Sharma, A., Harrison, J., Elhafsi, A., Kang, D., Pergament, E., Cidon, E., Katti, S., and Pavone, M. Network offloading policies for cloud robotics: a learning-based approach. arXiv preprint arXiv:1902.05703, 2019.
Scale API: The API for training data, 2019. URL https://scale.ai/.
Arechiga, N., DeCastro, J., Kong, S., and Leung, K. Better AI through logical scaffolding. arXiv preprint arXiv:1909.06965, 2019.
Coldewey, D. Uber in fatal crash detected pedestrian but had emergency braking disabled, 2018. URL https://techcrunch.com/2018/05/24/uber-in-fatal-crash-detected-pedestrian-but-had-emergency-braking-disabled/.
Athalye, A., Engstrom, L., Ilyas, A., and Kwok, K. Synthesizing robust adversarial examples. ICML, 2018.
Auer, P., Cesa-Bianchi, N., and Fischer, P. Finite-time analysis of the multiarmed bandit problem. Machine Learning, 47(2-3):235–256, 2002.
Coleman, C., Yeh, C., Mussmann, S., Mirzasoleiman, B., Bailis, P., Liang, P., Leskovec, J., and Zaharia, M. Selection via proxy: Efficient data selection for deep learning. In International Conference on Learning Representations, 2020. URL https://openreview.net/forum?id=HJg2b0VYDr.
BakIr, G., Hofmann, T., Schölkopf, B., Smola, A. J., Taskar, B., and Vishwanathan, S. Predicting structured data. MIT Press, 2007.
Davies, A. How do self-driving cars see? (and how do they see me?), 2018. URL https://www.wired.com/story/the-know-it-alls-how-do-self-driving-cars-see/.
Baylor, D., Breck, E., Cheng, H.-T., Fiedel, N., Foo, C. Y., Haque, Z., Haykal, S., Ispir, M., Jain, V., Koc, L., et al. TFX: A TensorFlow-based production-scale machine learning platform. In SIGKDD. ACM, 2017.
EHRA. Guidelines for the management of atrial fibrillation: the task force for the management of atrial fibrillation of the European Society of Cardiology (ESC). European Heart Journal, 31(19):2369–2429, 2010.
Berry, D. A. and Fristedt, B. Bandit problems: sequential allocation of experiments (monographs on statistics and applied probability). London: Chapman and Hall, 5:71–87, 1985.
Bubeck, S., Munos, R., and Stoltz, G. Pure exploration in multi-armed bandits problems. In International Conference on Algorithmic Learning Theory, pp. 23–37. Springer, 2009.
Evans, L. C. Graduate studies in mathematics. In Partial Differential Equations. Am. Math. Soc., 1998.
Girshick, R., Radosavovic, I., Gkioxari, G., Dollár, P., and He, K. Detectron. https://github.com/facebookresearch/detectron, 2018.
Godefroid, P., Levin, M. Y., and Molnar, D. SAGE: whitebox fuzzing for security testing. Queue, 10(1):20, 2012.
Goldstine, H. H., Von Neumann, J., and Von Neumann, J. Planning and coding of problems for an electronic computing instrument. 1947.
Keller, R. M. Formal verification of parallel programs. Communications of the ACM, 19(7):371–384, 1976.
Goodfellow, I. J., Shlens, J., and Szegedy, C. Explaining and harnessing adversarial examples. ICLR, 2015.
Haussler, D. Quantifying inductive bias: AI learning algorithms and Valiant's learning framework. Artificial Intelligence, 36(2):177–221, 1988.
He, K., Gkioxari, G., Dollár, P., and Girshick, R. Mask R-CNN. In Computer Vision (ICCV), 2017 IEEE International Conference on, pp. 2980–2988. IEEE, 2017.
King, J. C. Symbolic execution and program testing. Communications of the ACM, 19(7):385–394, 1976.
Kirillov, A., He, K., Girshick, R., Rother, C., and Dollár, P. Panoptic segmentation. arXiv preprint arXiv:1801.00868, 2018.
Klein, G., Elphinstone, K., Heiser, G., Andronick, J., Cock, D., Derrin, P., Elkaduwe, D., Engelhardt, K., Kolanski, R., Norrish, M., et al. seL4: Formal verification of an OS kernel. In Proceedings of the ACM SIGOPS 22nd Symposium on Operating Systems Principles, pp. 207–220. ACM, 2009.
Henzinger, T. A., Lukina, A., and Schilling, C. Outside the box: Abstraction-based monitoring of neural networks. arXiv preprint arXiv:1911.09032, 2019.
Hirth, M., Hoßfeld, T., and Tran-Gia, P. Analyzing costs and accuracy of validation mechanisms for crowdsourcing platforms. Mathematical and Computer Modelling, 57(11-12):2918–2932, 2013.
Hsieh, K., Ananthanarayanan, G., Bodik, P., Venkataraman, S., Bahl, P., Philipose, M., Gibbons, P. B., and Mutlu, O. Focus: Querying large video datasets with low latency and low cost. In OSDI, pp. 269–286, 2018.
Kudrjavets, G., Nagappan, N., and Ball, T. Assessing the relationship between software assertions and faults: An empirical investigation. In Software Reliability Engineering, 2006. ISSRE'06. 17th International Symposium on, pp. 204–212. IEEE, 2006.
Lang, A. H., Vora, S., Caesar, H., Zhou, L., Yang, J., and Beijbom, O. PointPillars: Fast encoders for object detection from point clouds. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 12697–12705, 2019.
Jiang, J., Ananthanarayanan, G., Bodik, P., Sen, S., and Stoica, I. Chameleon: scalable adaptation of video analytics. In Proceedings of the 2018 Conference of the ACM Special Interest Group on Data Communication, pp. 253–266. ACM, 2018.
Jin, S., RoyChowdhury, A., Jiang, H., Singh, A., Prasad, A., Chakraborty, D., and Learned-Miller, E. Unsupervised hard example mining from videos for improved object detection. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 307–324, 2018.
Tesla says autopilot was active during fatal crash in Mountain View. https://arstechnica.com/cars/2018/03/tesla-says-autopilot-was-active-during-fatal-crash-in-mountain-view/, 2018.
Leroy, X. Formal verification of a realistic compiler. Communications of the ACM, 52(7):107–115, 2009.
Lin, T.-Y., Maire, M., Belongie, S., Hays, J., Perona, P., Ramanan, D., Dollár, P., and Zitnick, C. L. Microsoft COCO: Common objects in context. In European Conference on Computer Vision, pp. 740–755. Springer, 2014.
Kang, D., Emmons, J., Abuzaid, F., Bailis, P., and Zaharia, M. NoScope: optimizing neural network queries over video at scale. Proceedings of the VLDB Endowment, 10(11):1586–1597, 2017.
Liu, W., Anguelov, D., Erhan, D., Szegedy, C., Reed, S., Fu, C.-Y., and Berg, A. C. SSD: Single shot multibox detector. In European Conference on Computer Vision, pp. 21–37. Springer, 2016.
Kang, D., Raghavan, D., Bailis, P., and Zaharia, M. Model assertions for debugging machine learning. In NeurIPS MLSys Workshop, 2018.
Lu, T., Pál, D., and Pál, M. Contextual multi-armed bandits. In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, pp. 485–492, 2010.
Kang, D., Bailis, P., and Zaharia, M. BlazeIt: Fast exploratory video queries using neural networks. PVLDB, 2019.
Mahmood, A., Andrews, D. M., and McCluskey, E. J. Executable assertions and flight software. 1984.
Katz, G., Barrett, C., Dill, D. L., Julian, K., and Kochenderfer, M. J. Reluplex: An efficient SMT solver for verifying deep neural networks. In International Conference on Computer Aided Verification, pp. 97–117. Springer, 2017.
Mintz, M., Bills, S., Snow, R., and Jurafsky, D. Distant supervision for relation extraction without labeled data. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP: Volume 2-Volume 2, pp. 1003–1011. Association for Computational Linguistics, 2009.
NTSB. Vehicle automation report, HWY18MH010, 2019. URL https://dms.ntsb.gov/public/62500-62999/62978/629713.pdf.
Odena, A. and Goodfellow, I. TensorFuzz: Debugging neural networks with coverage-guided fuzzing. arXiv preprint arXiv:1807.10875, 2018.
Pei, K., Cao, Y., Yang, J., and Jana, S. DeepXplore: Automated whitebox testing of deep learning systems. In Proceedings of the 26th Symposium on Operating Systems Principles, pp. 1–18. ACM, 2017.
Polyzotis, N., Zinkevich, M., Roy, S., Breck, E., and Whang, S. Data validation for machine learning. SysML, 2019.
Sun, X., Khedr, H., and Shoukry, Y. Formal verification of neural network controlled autonomous systems. In Proceedings of the 22nd ACM International Conference on Hybrid Systems: Computation and Control, pp. 147–156. ACM, 2019.
Takanen, A., Demott, J. D., and Miller, C. Fuzzing for software security testing and quality assurance. Artech House, 2008.
Taylor, L. and Nitschke, G. Improving deep learning using generic data augmentation. arXiv preprint arXiv:1708.06020, 2017.
Tokic, M. and Palm, G. Value-difference based exploration: adaptive control between epsilon-greedy and softmax. In Annual Conference on Artificial Intelligence, pp. 335–346. Springer, 2011.
Radlinski, F., Kleinberg, R., and Joachims, T. Learning diverse rankings with multi-armed bandits. In Proceedings of the 25th International Conference on Machine Learning, pp. 784–791. ACM, 2008.
Raghunathan, A., Steinhardt, J., and Liang, P. Certified defenses against adversarial examples. arXiv preprint arXiv:1801.09344, 2018.
Tran-Thanh, L., Venanzi, M., Rogers, A., and Jennings, N. R. Efficient budget allocation with accuracy guarantees for crowdsourcing classification tasks. In Proceedings of the 2013 International Conference on Autonomous Agents and Multi-agent Systems, pp. 901–908. International Foundation for Autonomous Agents and Multiagent Systems, 2013.
Rajpurkar, P., Hannun, A. Y., Haghpanahi, M., Bourn, C., and Ng, A. Y. Cardiologist-level arrhythmia detection with convolutional neural networks. Nature Medicine, 2019.
Turing, A. Checking a large routine. In Report on a Conference on High Speed Automatic Calculating Machines, pp. 67–69. Cambridge University Mathematics Lab, 1949.
Ratner, A., Bach, S., Varma, P., and Ré, C. Weak supervision: The new programming paradigm for machine learning, 2017. URL https://dawn.cs.stanford.edu/2017/07/16/weak-supervision/.
Renggli, C., Karla, B., Ding, B., Liu, F., Schawinski, K., Wu, W., and Zhang, C. Continuous integration of machine learning models with ease.ml/ci: Towards a rigorous yet practical treatment. SysML, 2019.
Sampson, A., Panchekha, P., Mytkowicz, T., McKinley, K. S., Grossman, D., and Ceze, L. Expressing and verifying probabilistic assertions. ACM SIGPLAN Notices, 49(6):112–122, 2014.
Sener, O. and Savarese, S. Active learning for convolutional neural networks: A core-set approach. arXiv preprint arXiv:1708.00489, 2017.
Settles, B. Active learning literature survey. Technical report, University of Wisconsin-Madison Department of Computer Sciences, 2009.
Wang, J. and Perez, L. The effectiveness of data augmentation in image classification using deep learning. Convolutional Neural Networks Vis. Recognit, 2017.
Wang, S., Pei, K., Whitehouse, J., Yang, J., and Jana, S. Formal security analysis of neural networks using symbolic intervals. In USENIX Security Symposium, pp. 1599–1614, 2018.
Wu, E., Diao, Y., and Rizvi, S. High-performance complex event processing over streams. In Proceedings of the 2006 ACM SIGMOD International Conference on Management of Data, pp. 407–418. ACM, 2006.
Xu, T., Botelho, L. M., and Lin, F. X. VStore: A data store for analytics on large videos. In Proceedings of the Fourteenth EuroSys Conference 2019, pp. 16. ACM, 2019.
Yan, Y., Mao, Y., and Li, B. SECOND: Sparsely embedded convolutional detection. Sensors, 18(10):3337, 2018.
Zhu, X. Semi-supervised learning. In Encyclopedia of Machine Learning, pp. 892–897. Springer, 2011.
Sun, C., Shrivastava, A., Singh, S., and Gupta, A. Revisiting unreasonable effectiveness of data in deep learning era. In Proceedings of the IEEE International Conference on Computer Vision, pp. 843–852, 2017.

# A EXAMPLES OF ERRORS CAUGHT BY MODEL ASSERTIONS

In this section, we illustrate several errors caught by the model assertions used in our evaluation.

First, we show an example error in the TV news use case in Figure 6. Recall that these assertions were generated with our consistency API (§4). In this example, the identifier is the box's sceneid and the attribute is the identity.

Figure 6. Two example frames from the same scene with an inconsistent attribute (the identity) from the TV news use case. (a) Frame 1. (b) Frame 2.

Second, we show an example error for the visual analytics use case in Figure 7 for the multibox assertion. Here, SSD erroneously detects multiple cars when there should be one.

Figure 7. Example errors when three boxes highly overlap (see multibox in Section 5). (a) Example error 1. (b) Example error 2. Best viewed in color.

Third, we show two example errors for the AV use case in Figure 8 from the multibox and agree assertions.

Figure 8. Examples of errors that the multibox and agree assertions catch for the NuScenes dataset. (a) Example error flagged by multibox: SSD predicts three trucks when only one should be detected. (b) Example error flagged by agree: SSD misses the car on the right and the LIDAR model predicts the truck on the left to be too large. LIDAR model boxes are in pink and SSD boxes are in green. Best viewed in color.

# B CLASSES OF MODEL ASSERTIONS

We present a non-exhaustive list of common classes of model assertions in Table 5 and below. Namely, we describe how one might look for assertions in other domains. Our taxonomization is not exact and several examples will contain features from several classes of model assertions. Prior work on schema validation (Polyzotis et al., 2019; Baylor et al., 2017) and data augmentation (Wang & Perez, 2017; Taylor & Nitschke, 2017) can be cast in the model assertion framework. As these have been studied, we do not focus on these classes of assertions in this work.

- Consistency / Multi-source: Model outputs from multiple sources should agree. Examples: verifying human labels (e.g., number of labelers that disagree); multiple models (e.g., number of models that disagree).
- Consistency / Multi-modal: Model outputs from multiple modes of data should agree. Examples: multiple sensors (e.g., number of disagreements from LIDAR and camera models); multiple data sources (e.g., text and images).
- Consistency / Multi-view: Model outputs from multiple views of the same data should agree. Examples: video analytics (e.g., results from overlapping views of different cameras should agree); medical imaging (e.g., different angles should agree).
- Domain knowledge / Physical: Physical constraints on model outputs. Examples: video analytics (e.g., cars should not flicker); earthquake detection (e.g., earthquakes should appear across sensors in physically consistent ways); protein-protein interaction (e.g., number of overlapping atoms).
- Domain knowledge / Unlikely scenario: Scenarios that are unlikely to occur. Examples: video analytics (e.g., maximum confidence of 3 vehicles that highly overlap); text generation (e.g., two of the same word should not appear sequentially).
- Perturbation / Insertion: Inserting certain types of data should not modify model outputs. Examples: visual analytics (e.g., synthetically adding a car to a frame of video should be detected as a car); LIDAR detection (similar to visual analytics).
- Perturbation / Similar: Replacing parts of the input with similar data should not modify model outputs. Examples: sentiment analysis (e.g., classification should not change with synonyms); object detection (e.g., painting objects different colors should not change the detection).
- Perturbation / Noise: Adding noise should not modify model outputs. Examples: image classification (e.g., small Gaussian noise should not affect classification); time series (e.g., small Gaussian noise should not affect time series classification).
- Input validation / Schema validation: Inputs should conform to a schema. Examples: boolean features should not have inputs that are not 0 or 1; all features should be present.

Table 5. Example of model assertions. We describe several assertion classes, sub-classes, and concrete instantiations of each class. In parentheses, we describe a potential severity score or an application.

Consistency assertions. An important class of model assertions checks the consistency across multiple models or sources of data. The multiple sources of data could be the output of multiple ML models on the same data, multiple sensors, or multiple views of the same data. The output from the various sources should agree and consistency model assertions specify this constraint. These assertions can be generated via our API as described in §4.
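To make the consistency class concrete, the sketch below shows how an identity-consistency check of the kind used in the TV news example could be written over tracked boxes. It is a minimal illustration, not the paper's actual API: the record format (the scene_id and identity fields) and the flag_error callback are assumptions introduced only for this example.

```python
from collections import defaultdict

def identity_consistency_assertion(boxes, flag_error):
    """Flag tracked objects whose 'identity' attribute disagrees across frames.

    `boxes` is assumed to be a list of dicts such as
    {"frame": 12, "scene_id": 3, "identity": "anchor_1"}.
    """
    identities_per_object = defaultdict(set)
    for box in boxes:
        identities_per_object[box["scene_id"]].add(box["identity"])

    for scene_id, identities in identities_per_object.items():
        # The same tracked object should carry a single identity across frames.
        if len(identities) > 1:
            flag_error(scene_id=scene_id,
                       reason=f"inconsistent identities: {identities}")

# Example usage with a trivial error handler.
if __name__ == "__main__":
    detections = [
        {"frame": 1, "scene_id": 7, "identity": "anchor_1"},
        {"frame": 2, "scene_id": 7, "identity": "anchor_2"},  # inconsistent
    ]
    identity_consistency_assertion(
        detections, flag_error=lambda **kw: print("assertion fired:", kw)
    )
```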
Domain knowledge assertions. In many physical domains, domain experts can express physical constraints or unlikely scenarios. As an example of a physical constraint, when predicting how proteins will interact, atoms should not physically overlap. As an example of an unlikely scenario, boxes of the visible part of cars should not highly overlap (Figure 7). In particular, model assertions of unlikely scenarios may not be 100% precise, i.e., will be soft assertions.

Perturbation assertions. Many domains contain input and output pairs that can be perturbed (perhaps jointly) such that the output does not change. These perturbations have been widely studied through the lens of data augmentation (Wang & Perez, 2017; Taylor & Nitschke, 2017) and adversarial examples (Goodfellow et al., 2015; Athalye et al., 2018).

Input validation assertions. Domains that contain schemas for the input data can have model assertions that validate the input data based on the schema (Polyzotis et al., 2019; Baylor et al., 2017). For example, boolean inputs that are encoded with integral values (i.e., 0 or 1) should never be negative. This class of assertions is an instance of preconditions for ML models.

# C HYPERPARAMETERS

Hyperparameters for active learning experiments. For night-street, we used 300,000 frames of one day of video for the training and unlabeled data. We sampled 100 frames per round for five rounds and used 25,000 frames of a different day of video for the test set. Due to the cost of obtaining labels, we ran each trial twice. For the NuScenes dataset, we used 350 scenes to bootstrap the LIDAR model, 175 scenes for unlabeled/training data for SSD, and 75 scenes for validation (out of the original 850 labeled scenes). We trained for one epoch at a learning rate of 5×10−5. We ran 8 trials. For the ECG dataset, we train for 5 rounds of active learning with 100 samples per round. We use a learning rate of 0.001 until the loss plateaus, as in the original training code.

# D FULL ACTIVE LEARNING FIGURES

We show active learning results for all rounds in Figure 9.

Figure 9. Performance of random sampling, uncertainty sampling, uniform sampling from model assertions, and BAL for active learning: (a) night-street, (b) NuScenes. The round is the round of data collection (see §3). As shown, BAL improves accuracy on unseen data and can achieve the same accuracy (62% mAP) as random sampling with 40% fewer labels for night-street. BAL also outperforms both baselines for the NuScenes dataset.

# E MODEL ASSERTIONS CAN IDENTIFY ERRORS IN HUMAN LABELS

We further asked whether model assertions could be used to identify errors in human-generated labels, i.e., where a human acts as the "ML model." While verification of human labels has been studied in the context of crowd-sourcing (Hirth et al., 2013; Tran-Thanh et al., 2013), several production labeling services (e.g., Scale (sca, 2019)) do not provide annotator identification, which is necessary to perform this verification.

We deployed a model assertion in which we tracked objects across frames of a video using an automated method and verified that the same object in different frames had the same label. We obtained labels for 1,000 random frames from night-street from Scale AI (sca, 2019), which is used by several autonomous vehicle companies. Table 6 summarizes our results. Scale returned 469 boxes, which we manually verified for correctness. There were no localization errors, but there were 32 classification errors, of which the model assertion caught 12.5%. Thus, we see that model assertions can also be used to verify human labels.

Table 6. Number of labels, errors, and errors caught from model assertions for Scale-annotated images for the video analytics task. As shown, model assertions caught 12.5% of the errors in this data.
- All labels: 469
- Errors: 32
- Errors caught: 4
{ "id": "1807.10875" }
2003.01200
Natural Language Processing Advancements By Deep Learning: A Survey
Natural Language Processing (NLP) helps empower intelligent machines by enhancing a better understanding of the human language for linguistic-based human-computer communication. Recent developments in computational power and the advent of large amounts of linguistic data have heightened the need and demand for automating semantic analysis using data-driven approaches. The utilization of data-driven strategies is pervasive now due to the significant improvements demonstrated through the usage of deep learning methods in areas such as Computer Vision, Automatic Speech Recognition, and in particular, NLP. This survey categorizes and addresses the different aspects and applications of NLP that have benefited from deep learning. It covers core NLP tasks and applications and describes how deep learning methods and models advance these areas. We further analyze and compare different approaches and state-of-the-art models.
http://arxiv.org/pdf/2003.01200
Amirsina Torfi, Rouzbeh A. Shirvani, Yaser Keneshloo, Nader Tavaf, Edward A. Fox
cs.CL, cs.AI, cs.LG
null
null
cs.CL
20200302
20210227
# Natural Language Processing Advancements By Deep Learning: A Survey

Amirsina Torfi, Member, IEEE, Rouzbeh A. Shirvani, Yaser Keneshloo, Nader Tavaf, and Edward A. Fox, Fellow, IEEE

(Author affiliations: Amirsina Torfi, Yaser Keneshloo, and Edward A. Fox were with the Department of Computer Science, Virginia Polytechnic Institute and State University, Blacksburg, VA, 24060 USA; e-mail: amirsina.torfi@gmail.com, [email protected], [email protected]. Rouzbeh A. Shirvani is an independent researcher; e-mail: [email protected]. Nader Tavaf was with the University of Minnesota Twin Cities, Minneapolis, MN, 55455 USA; e-mail: [email protected].)

Abstract—Natural Language Processing (NLP) helps empower intelligent machines by enhancing a better understanding of the human language for linguistic-based human-computer communication. Recent developments in computational power and the advent of large amounts of linguistic data have heightened the need and demand for automating semantic analysis using data-driven approaches. The utilization of data-driven strategies is pervasive now due to the significant improvements demonstrated through the usage of deep learning methods in areas such as Computer Vision, Automatic Speech Recognition, and in particular, NLP. This survey categorizes and addresses the different aspects and applications of NLP that have benefited from deep learning. It covers core NLP tasks and applications, and describes how deep learning methods and models advance these areas. We further analyze and compare different approaches and state-of-the-art models.

Index Terms—Natural Language Processing, Deep Learning, Artificial Intelligence

# I. INTRODUCTION

Natural Language Processing (NLP) is a sub-field of computer science providing a bridge between natural languages and computers. It helps empower machines to understand, process, and analyze human language [1]. NLP's significance as a tool aiding comprehension of human-generated data is a logical consequence of the context-dependency of data. Data becomes more meaningful through a deeper understanding of its context, which in turn facilitates text analysis and mining. NLP enables this with the communication structures and patterns of humans.

Development of NLP methods is increasingly reliant on data-driven approaches which help with building more powerful and robust models [2]–[4]. Recent advances in computational power, as well as greater availability of big data, enable deep learning, one of the most appealing approaches in the NLP domain [2], [3], [5], especially given that deep learning has already demonstrated superior performance in adjoining fields like Computer Vision [6]–[10] and Speech Recognition [11]–[13]. These developments led to a paradigm shift from traditional to novel data-driven approaches aimed at advancing NLP. The reason behind this shift was simple: new approaches are more promising regarding results, and are easier to engineer.

As a sequitur to remarkable progress achieved in adjacent disciplines utilizing deep learning methods, deep neural networks have been applied to various NLP tasks, including part-of-speech tagging [14]–[17], named entity recognition [18]–[21], and semantic role labeling [22]–[25]. Most of the research efforts in deep learning associated with NLP applications involve either supervised learning (learning from training data to predict the type of new unseen test examples by mapping them to known pre-defined labels) or unsupervised learning (making sense of data without sticking to specific tasks and supervisory signals).

This survey covers the emerging role of deep learning in the area of NLP, across a broad range of categories. The research presented in [26] is primarily focused on architectures, with little discussion of applications. More recent works [4], [27] are specific to certain applications or certain sub-fields of NLP [21]. Here we build on previous works by describing the challenges, opportunities, and evaluations of the impact of applying deep learning to NLP problems.

This survey has six sections, including this introduction. Section 2 lays out the theoretical dimensions of NLP and artificial intelligence, and looks at deep learning as an approach to solving real-world problems. It motivates this study by addressing the question: Why use deep learning in NLP? The third section discusses fundamental concepts necessary to understand NLP, covering exemplary issues in representation, frameworks, and machine learning. The fourth section summarizes benchmark datasets employed in the NLP domain. Section 5 focuses on some of the NLP applications where deep learning has demonstrated significant benefit. Finally, Section 6 provides a conclusion, also addressing some open problems and promising areas for improvement.

# II. BACKGROUND

NLP has long been viewed as one aspect of artificial intelligence (AI), since understanding and generating natural language are high-level indications of intelligence. Deep learning is an effective AI tool, so we next situate deep learning in the AI world. After that we explain motivations for applying deep learning to NLP.

A. Artificial Intelligence and Deep Learning

There have been "islands of success" where big data are processed via AI capabilities to produce information to achieve critical operational goals (e.g., fraud detection). Accordingly, scientists and consumers anticipate enhancement across a variety of applications. However, achieving this requires understanding of AI and its mechanisms and means (e.g., algorithms). Ted Greenwald, explaining AI to those who are not AI experts, comments: "Generally AI is anything a computer can do that formerly was considered a job for a human" [28].

An AI goal is to extend the capabilities of information technology (IT) from those to (1) generate, communicate, and store data, to also (2) process data into the knowledge that decision makers and others need [29]. One reason is that the available data volume is increasing so rapidly that it is now impossible for people to process all available data. This leaves two choices: (1) much or even most existing data must be ignored or (2) AI must be developed to process the vast volumes of available data into the essential pieces of information that decision-makers and others can comprehend. Deep learning is a bridge between the massive amounts of data and AI.

1) Definitions: Deep learning refers to applying deep neural networks to massive amounts of data to learn a procedure aimed at handling a task. The task can range from simple classification to complex reasoning. In other words, deep learning is a set of mechanisms ideally capable of deriving an optimum solution to any problem given a sufficiently extensive and relevant input dataset.
Loosely speaking, deep learning is detecting and analyzing important structures/features in the data aimed at formulating a solution to a given problem. Here, AI and deep learning meet. One version of the goal or ambition behind AI is enabling a machine to outperform what the human brain does. Deep learning is a means to this end. 2) Deep Learning Architectures: Numerous deep learning architectures have been developed in different research areas, e.g., in NLP applications employing recurrent neural networks (RNNs) [30], convolutional neural networks (CNNs) [31], and more recently, recursive neural networks [32]. We focus our discussion on a review of the essential models, explained in relevant seminal publications. Multi Layer Perceptron: A multilayer perceptron (MLP) has at least three layers (input, hidden, and output layers). A layer is simply a collection of neurons operating to transform information from the previous layer to the next layer. In the MLP architecture, the neurons in a layer do not communicate with each other. An MLP employs nonlinear activation func- tions. Every node in a layer connects to all nodes in the next layer, creating a fully connected network (Fig. 1). MLPs are the simplest type of Feed-Forward Neural Networks (FNNs). FNNs represent a general category of neural networks in which the connections between the nodes do not create any cycle, i.e., in a FNN there is no cycle of information flow. Convolutional Neural Networks: Convolutional neural networks (CNNs), whose architecture is inspired by the human visual cortex, are a subclass of feed-forward neural networks. CNNs are named after the underlying mathematical operation, convolution, which yields a measure of the interoperability of its input functions. Convolutional neural networks are usually employed in situations where data is or needs to be represented with a 2D or 3D data map. In the data map representation, the proximity of data points usually corresponds to their iHiddeni boo a4 | Output | Lo 4 | Input | Lo——-4 Fig. 1. The general architecture of a MLP. information correlation. In convolutional neural networks where the input is an image, the data map indicates that image pixels are highly cor- related to their neighboring pixels. Consequently, the convolu- tional layers have 3 dimensions: width, height, and depth. That assumption possibly explains why the majority of research efforts dedicated to CNNs are conducted in the Computer Vision field [33]. A CNN takes an image represented as an array of numeric values. After performing specific mathematical operations, it represents the image in a new output space. This operation is also called feature extraction, and helps to capture and rep- resent key image content. The extracted features can be used for further analysis, for different tasks. One example is image classification, which aims to categorize images according to some predefined classes. Other examples include determining which objects are present in an image and where they are located. See Fig. 2. In the case of utilizing CNNs for NLP, the inputs are sen- tences or documents represented as matrices. Each row of the matrix is associated with a language element such as a word or a character. The majority of CNN architectures learn word or sentence representations in their training phase. A variety of CNN architectures were used in various classification tasks such as Sentiment Analysis and Topic Categorization [31], [34]–[36]. 
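As a concrete illustration of the CNN-for-NLP setup described above, the sketch below treats a sentence as a matrix of word embeddings and applies one-dimensional convolutions with max-over-time pooling for sentence classification, in the spirit of the CNN classifiers cited in this section. The layer sizes, filter widths, and class names are illustrative assumptions, not the configurations used in the cited works.

```python
import torch
import torch.nn as nn

class SentenceCNN(nn.Module):
    """Sentence classifier: each row of the input matrix is a word embedding."""

    def __init__(self, vocab_size: int, num_classes: int,
                 embed_dim: int = 128, num_filters: int = 100,
                 kernel_sizes=(3, 4, 5)):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # One 1D convolution per window size, sliding over the word dimension.
        self.convs = nn.ModuleList(
            [nn.Conv1d(embed_dim, num_filters, k) for k in kernel_sizes]
        )
        self.classifier = nn.Linear(num_filters * len(kernel_sizes), num_classes)

    def forward(self, token_ids):             # token_ids: (batch, seq_len)
        x = self.embed(token_ids)             # (batch, seq_len, embed_dim)
        x = x.transpose(1, 2)                 # (batch, embed_dim, seq_len)
        # Max-over-time pooling keeps the strongest response of each filter.
        pooled = [conv(x).relu().max(dim=2).values for conv in self.convs]
        return self.classifier(torch.cat(pooled, dim=1))

# Example: score a batch of two padded sentences for 4 topic classes.
model = SentenceCNN(vocab_size=10_000, num_classes=4)
logits = model(torch.randint(0, 10_000, (2, 20)))
```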
CNNs were employed for Relation Extraction and Relation Classification as well [37], [38]. Recurrent Neural Network: If we line up a sequence of FNNs and feed the output of each FNN as an input to the next one, a recurrent neural network (RNN) will be constructed. Like FNNs, layers in an RNN can be categorized into input, hidden, and output layers. In discrete time frames, sequences of input vectors are fed as the input, one vector at a time, e.g., after inputting each batch of vectors, conducting some operations and updating the network weights, the next input batch will be fed to the network. Thus, as shown in Fig. 3, at each time step we make predictions and use parameters of the current hidden layer as input to the next time step. Hidden layers in recurrent neural networks can carry infor- mation from the past, in other words, memory. This character- istic makes them specifically useful for applications that deal 2 TORFI et. al., NLP ADVANCEMENTS BY DEEP LEARNING Ground-truth bounding box Predicted bounding box Output Feature Space Fig. 2. A typical CNN architecture for object detection. The network provides a feature representation with attention to the specific region of an image (example shown on the left) that contains the object of interest. Out of the multiple regions represented (see an ordering of the image blocks, giving image pixel intensity, on the right) by the network, the one with the highest score will be selected as the main candidate. a) 2 2 2 2 iF Fig. 3. Recurrent Neural Network (RNN), summarized on the left, expanded on the right, for N timesteps, with X indicating input, h hidden layer, and O output to sequence modeling (see Section III-B [39]. Fig. 4 illustrates the schematic of an Autoencoder. Since autoencoders are unsupervised, there is no label corresponding to each input. They aim to learn a code representation for each input. The encoder is like a feed-forward neural network in which the input gets encoded into a vector (code). The decoder operates similarly to the encoder, but i.e., constructing an output based on the encoded input. In data compression applications, we want the created output to be as close as possible to the original input. Autoencoders are lossy, meaning the output is an approximate reconstruction of the input. with a sequence of inputs such as language modeling [39], i.e., representing language in a way that the machine understands. This concept will be described later in detail. RNNs can carry rich information from the past. Consider the sentence: “Michael Jackson was a singer; some people consider him King of Pop.” It’s easy for a human to identify him as referring to Michael Jackson. The pronoun him happens seven words after Michael Jackson; capturing this dependency is one of the benefits of RNNs, where the hidden layers in an RNN act as memory units. Long Short Term Memory Network (LSTM) [40] is one of the most widely used classes of RNNs. LSTMs try to capture even long time dependencies between inputs from different time steps. Modern Machine Translation and Speech Recognition often rely on LSTMs. Real Faces C5 Random Noise Real Discrimination Network Fake | Fake Faces —© Generator Network @ Code @ inp |@ @ Q | Reconstructed Output —+ | —+ Encoder : Decoder Fig. 4. Schematic of an Autoencoder unsupervised methods in deep learning. They are widely used in dimension- ality reduction3 or NLP applications which consist of sequence Fig. 5. 
Generative Adversarial Networks Generative Adversarial Networks: Goodfellow [41] intro- duced Generative Adversarial Networks (GANs). As shown in Fig. 5, a GAN is a combination of two neural networks, a discriminator and a generator. The whole network is trained in an iterative process. First, the generator network generates a fake sample. Then the discriminator network tries to determine whether this sample (ex.: an input image) is real or fake, i.e., whether it came from the real training data (data used for building the model) or not. The goal of the generator is to fool the discriminator in a way that the discriminator believes the artificial (i.e., generated) samples synthesized by the generator are real. 3Dimensionality reduction is an unsupervised learning approach which is the process of reducing the number of variables that were used to represent the data by identifying the most crucial information. This iterative process continues until the generator produces samples that are indistinguishable by the discriminator. In 3 TORFI et. al., NLP ADVANCEMENTS BY DEEP LEARNING other words, the probability of classifying a sample as fake or real becomes like flipping a fair coin for the discriminator. The goal of the generative model is to capture the distribution of real data while the discriminator tries to identify the fake data. One of the interesting features of GANs (regarding being generative) is: once the training phase is finished, there is no need for the discrimination network, so we solely can work with the generation network. In other words, having access to the trained generative model is sufficient. Different forms of GANs has been introduced, e.g., Sim GAN [8], Wasserstein GAN [42], info GAN [43], and DC GAN [44]. In one of the most elegant GAN implementations [45], entirely artificial, yet almost perfect, celebrity faces are generated; the pictures are not real, but fake photos produced by the network. GAN’s have since received significant atten- tion in various applications and have generated astonishing result [46]. In the NLP domain, GANs often are used for text generation [47], [48]. B. Motivation for Deep Learning in NLP Deep learning applications are predicated on the choices of (1) feature representation and (2) deep learning algo- rithm alongside architecture. These are associated with data representation and learning structure, respectively. For data there usually is a disjunction representation, surprisingly, between what to be important for the task at hand, versus what representation actually yields good results. For instance, lexicon semantics, syntactic structure, and context are assumed by some linguists to be of primary significance. Nevertheless, previous studies based on the bag-of-words (BoW) model demonstrated acceptable performance [49]. The bag-of-words model [50], often viewed as the vector space model, involves a representation which accounts only for the words and their frequency of occurrence. BoW ignores the order and interaction of words, and treats each word as a unique feature. BoW disregards syntactic structure, yet provides decent results for what some would consider syntax-dependent applications. This observation suggests that simple representations, when coupled with large amounts of data, may work as well or better than more complex representations. These findings corroborate the argument in favor of the importance of deep learning algorithms and architectures. Often the progress of NLP is bound to effective language modeling. 
A goal of statistical language modeling is the prob- abilistic representation of word sequences in language, which is a complicated task due to the curse of dimensionality. The research presented in [51] was a breakthrough for language modeling with neural networks aimed at overcoming the curse of dimensionality by (1) learning a distributed representation of words and (2) providing a probability function for se- quences. A key challenge in NLP research, compared to other do- mains such as Computer Vision, seems to be the complexity of achieving an in-depth representation of language using statistical models. A primary task in NLP applications is to provide a representation of texts, such as documents. This in- 4 volves feature learning, i.e., extracting meaningful information to enable further processing and analysis of the raw data. Traditional methods begin with time-consuming hand- crafting of features, through careful human analysis of a specific application, and are followed by development of algorithms to extract and utilize instances of those features. On the other hand, deep supervised feature learning methods are highly data-driven and can be used in more general efforts aimed at providing a robust data representation. Due to the vast amounts of unlabeled data, unsupervised feature learning is considered to be a crucial task in NLP. Un- supervised feature learning is, in essence, learning the features from unlabeled data to provide a low-dimensional representa- tion of a high-dimensional data space. Several approaches such as K-means clustering and principal component analysis have been proposed and successfully implemented to this end. With the advent of deep learning and abundance of unlabeled data, unsupervised feature learning becomes a crucial task for representation learning, a precursor in NLP applications. Cur- rently, most of the NLP tasks rely on annotated data, while a preponderance of unannotated data further motivates research in leveraging deep data-driven unsupervised methods. Given the potential superiority of deep learning approaches in NLP applications, to perform a com- it seems crucial prehensive analysis of various deep learning methods and architectures with particular attention to NLP applications. # III. CORE CONCEPTS IN NLP A. Feature Representation Distributed representations are a series of compact, low dimensional representations of data, each representing some distinct informative property. For NLP systems, due to issues related to the atomic representation of the symbols, is imperative to learn word representations. let’s concentrate on how the features are rep- resented, and then we focus on different approaches for learning word representations. The encoded input features can be characters, words [32], sentences [52], or other linguistic elements. Generally, it is more desirable to provide a compact representation of the words than a sparse one. <eos> You are welcome O—--Oo— to stay <eos> It was long bet <eos> <eos> | bet Fig. 6. Considering a given sequence, the skip-thought model generates the surrounding sequences using the trained encoder. The assumption is that the surrounding sentences are closely related, contextually. How to select the structure and level of text representa- tion used to be an unresolved question. After proposing the word2vec approach [53], subsequently, doc2vec was proposed in [52] as an unsupervised algorithm and was called Paragraph TORFI et. al., NLP ADVANCEMENTS BY DEEP LEARNING Vector (PV). 
The goal behind PV is to learn fixed-length rep- resentations from variable-length text parts such as sentences and documents. One of the main objectives of doc2vec is to overcome the drawbacks of models such as BoW and to provide promising results for applications such as text classi- fication and sentiment analysis. A more recent approach is the skip-thought model which applies word2vec at the sentence- level [54]. By utilizing an encoder-decoder architecture, this model generates the surrounding sentences using the given sentence (Fig. 6). Next, let’s investigate different kinds of feature representation. In one-hot encoding, each unique element that needs to be represented has its dimen- sion which results in a very high dimensional, very sparse representation. Assume the words are represented with the one-hot encoding method. Regarding representation structure, there is no meaningful connection between different words in the feature space. For example, highly correlated words such as ‘ocean’ and ‘water’ will not be closer to each other (in the representation space) compared to less correlated pairs such as ‘ocean’ and ‘fire.’ Nevertheless, some research efforts present promising results using one-hot encoding [2]. 2) Continuous Bag of Words: Continuous Bag-of-Words model (CBOW) has frequently been used in NLP applica- tions. CBOW tries to predict a word given its surrounding context, which usually consists of a few nearby words [55]. CBOW is neither dependent on the sequential order of words nor necessarily on probabilistic characteristics. So it is not generally used for language modeling. This model is typi- cally trained to be utilized as a pre-trained model for more sophisticated tasks. An alternative to CBOW is the weighted CBOW (WCBOW) [56] in which different vectors get different weights reflective of relative importance in context. The sim- plest example can be document categorization where features are words and weights are TF-IDF scores [57] of the associated words. 3) Word-Level Embedding: Word embedding is a learned representation for context elements in which, ideally, words with related semantics become highly correlated in the rep- resentation space. One of the main incentives behind word embedding representations is the high generalization power as opposed to sparse, higher dimensional representations [58]. Unlike the traditional bag-of-words model in which different words have entirely different representations regardless of their usage or collocations, learning a distributed representation takes advantage of word usage in context to provide similar representations for semantically correlated words. There are different approaches to create word embeddings. Several re- search efforts, including [53], [55], used random initialization by uniformly sampling random numbers with the objective of training an efficient representation of the model on a large dataset. This setup is intuitively acceptable for initialization of the embedding for common features such as part-of-speech tags. However, this may not be the optimum method for rep- resentation of less frequent features such as individual words. For the latter, pre-trained models, trained in a supervised or unsupervised manner, are usually leveraged for increasing the performance. 4) Character-Level Embedding: The methods mentioned earlier are mostly at higher levels of representation. 
Lower-level representations such as character-level representation require special attention as well, due to their simplicity of representation and the potential for correction of unusual character combinations such as misspellings [2]. For generating character-level embeddings, CNNs have successfully been utilized [14]. Character-level embeddings have been used in different NLP applications [59]. One of the main advantages is the ability to use small model sizes and represent words with lower-level language elements [14]. Here, word embeddings are built by models utilizing CNNs over the characters. Another motivation for employing character-level embeddings is the out-of-vocabulary word (OOV) issue which is usually encountered when, for the given word, there is no equivalent vector in the word embedding. The character-level approach may significantly alleviate this problem. Nevertheless, this approach suffers from a weak correlation between characters and semantic and syntactic parts of the language. So, considering the aforementioned pros and cons of utilizing character-level embeddings, several research efforts tried to propose and implement higher-level approaches such as using sub-words [60] to create word embeddings for OOV instances as well as creating a semantic bridge between the correlated words [61].

# B. Seq2Seq Framework

Most underlying frameworks in NLP applications rely on sequence-to-sequence (seq2seq) models in which not only the input but also the output is represented as a sequence. These models are common in various applications including machine translation (where the input is a sequence of words from one language, e.g., English, and the output is its translation to another language, e.g., French), text summarization (where the input is a complete document and the output is a summary of it), speech-to-text (where the input is an audio recording of a speech and the output is the speech text), and text-to-speech applications.

The most common seq2seq framework is comprised of an encoder and a decoder. The encoder ingests the sequence of input data and generates a mid-level output which is subsequently consumed by the decoder to produce the series of final outputs. The encoder and decoder are usually implemented via a series of Recurrent Neural Networks or LSTM [40] cells.

The encoder takes a sequence of length T, X = {x1, x2, . . . , xT}, where xi ∈ V = {1, . . . , |V|} is the representation of a single input coming from the vocabulary V, and then generates the output state ht. Subsequently, the decoder takes the last state from the encoder, i.e., hT, and starts generating an output of size L, Y′ = {y′1, y′2, . . . , y′L}, based on its current state, st, and the ground-truth output yt. In different applications, the decoder could take advantage of more information such as a context vector [62] or intra-attention vectors to generate better outputs.

One of the most widely used training approaches for seq2seq models is called Teacher Forcing [64]. Let us define Y = {y1, y2, . . . , yL} as the ground-truth output sequence corresponding to a given input sequence X. The model training based on the maximum-likelihood criterion employs the following cross-entropy (CE) loss minimization:

$$\mathcal{L}_{CE} = -\sum_{t=1}^{L} \log p_{\theta}(y_t \mid y_{t-1}, s_t, X) \qquad (1)$$

where θ denotes the parameters of the model optimized during the training.

Once the model is optimized using the cross-entropy loss, it can generate an entire sequence as follows. Let ŷt denote the output generated by the model at time t. Then, the next output is generated by:

$$\hat{y}_t = \arg\max_{y} \, p_{\theta}(y \mid \hat{y}_{t-1}, s_t) \qquad (2)$$

In NLP applications, one can improve the output by using beam search to find a reasonably good output sequence [3]. During beam search, rather than using argmax for selecting the best output, we choose the top K outputs at each step, generate K different paths for the output sequence, and finally choose the one that provides better performance as the final output. Although there have been some recent studies [65], [66] on improving beam search by incorporating a similar mechanism during training of the model, studying this is outside the scope of this paper.

Given a series of the ground-truth output Y and the generated model output Ŷ, the model performance is evaluated using task-specific measures such as ROUGE [67], BLEU [68], and METEOR [69]. As an example, ROUGE-L, which is an evaluation metric in NLP tasks, uses the longest common subsequence between ground-truth Y and model output Ŷ to evaluate the generated output.

# C. Reinforcement Learning in NLP

Although the seq2seq models explained in Section III-B achieve great successes w.r.t. traditional methods, there are some issues with how these models are trained. Generally speaking, seq2seq models like the ones used in NLP applications face two issues: (1) exposure bias and (2) inconsistency between training time and test time measurements [70].

Most of the popular seq2seq models are minimizing cross-entropy loss as their optimization objective via Teacher Forcing (Section III-B). In teacher forcing, during the training of the model, the decoder utilizes two inputs, the former decoder output state st−1 and the ground-truth input yt, to determine its current output state st. Moreover, it employs them to create the next token, i.e., ŷt. However, at test time, the decoder fully relies on the previously created token from the model distribution. As the ground-truth data is not available, such a step is necessary to predict the next action. Henceforth, in training, the decoder input is coming from the ground truth, while, in the test phase, it relies on the previous prediction. This exposure bias [71] induces error growth through output creation at the test phase. One approach to remedy this problem is to remove the ground-truth dependency in training by solely relying on model distribution to minimize the cross-entropy loss. Scheduled sampling [64] is one popular method to handle this setback. During scheduled sampling, we first pre-train the model using cross-entropy loss and then slowly replace the ground-truth with samples the model generates.

The second obstacle with seq2seq models is that, when training is finished using the cross-entropy loss, the model is typically evaluated using non-differentiable measures such as ROUGE or METEOR. This will form an inconsistency between the training objective and the test evaluation metric. Recently, it has been demonstrated that both of these problems can be tackled by utilizing techniques from reinforcement learning [70]. Among most of the well-known models in reinforcement learning, policy gradient techniques [72] such as the REINFORCE algorithm [73] and actor-critic based models such as value-based iteration [74], and Q-learning [75], are among the most common techniques used in deep learning in NLP.
Using the model predictions (versus the ground-truth) for the sequence to sequence modeling and generation, at training time, was initially introduced by Daume et al. [76]. According to their approach, SEARN, the structured prediction can be characterized as one of the reinforcement learning cases as follows: The model employs its predictions to produce a sequence of actions (words sequences). Then, at each time step, a greedy search algorithm is employed to learn the optimal action, and the policy will be trained to predict that particular action. V » Actor Output (cround-truth) 4 {Actor Network (Encoder-Decoder Critic Network \___ Framework) lA 7 » ( ingutoata.) A / Critic NX” Feedback Fig. 7. A simple Actor-Critic framework. In Actor-Critic training, the actor is usually the same neural network used to generate the output, while the critic is a regression model that estimates how the actor performed on the input data. The actor later receives the feedback from the critic and improves its actions. Fig 7 shows this framework. It is worth noting that action in most of the NLP-related applications is like selecting the next output token while the state is the decoder output state at each stage of decoding. These models have mostly been used for robotic [77] and Atari games [78] due to the small action space in these applications. However, when we use them in NLP applications, they face multiple challenges. The action space in most of the NLP applications could be defined as the number of tokens in the vocabulary (usually between 50K to 150K tokens). Comparing this to the action space in a simple Atari game, which on average has less than 20 actions [78], shows why these Actor-Critic models face difficulties when applied to NLP applications. A major challenge is the massive action space in NLP applications, which not only causes difficulty 6 TORFI et. al., NLP ADVANCEMENTS BY DEEP LEARNING for the right action selection, but also will make the training process very slow. This makes the process of finding the best Actor-Critic model very complicated and model convergence usually requires a lot of tweaks to the models. IV. DATASETS Many different researchers for different tasks use bench- mark datasets, such as those discussed below. Benchmarking in machine learning refers to the assessment of methods and algorithms, comparing those regarding their capability to learn specific patterns. Benchmarking aids validation of a new approach or practice, relative to other existing methods. Benchmark datasets typically take one of three forms. 1) The first is real-world data, obtained from various real- world experiments. 2) The second is synthetic data, artificially generated to mimic real-world patterns. Synthetic data is generated for use instead of real data. Such datasets are of spe- cial interest in applications where the amount of data required is much larger than that which is available, or where privacy considerations are crucial and strict, such as in the healthcare domain. 3) The third type are toy datasets, used for demonstration and visualization purposes. Typically they are artificially generated; often there is no need to represent real-world data patterns. The foundation of Deep Learning utilization is the avail- ability of data to teach the system about pattern identification. The effectiveness of the model depends on the quality of the data. 
Despite the successful implementation of universal language modeling techniques such as BERT [79], however, such models can be used solely for pre-training the models. Afterward, the model needs to be trained on the data associated with the desired task. Henceforth, based on the everyday demands in different machine domains such as NLP, creating new datasets is crucial. On the other hand, creating new datasets is not usually an easy matter. Informally speaking, the newly created dataset should be: the right data to train on, sufficient for the eval- uation, and accurate to work on. Answering the questions of “what is the meaning of right and accurate data” is highly application-based. Basically, the data should have sufficient information, which depends on the quality and quantity of the data. To create a dataset, the first step is always asking “what are we trying to do and what problem do we need to solve?” and “what kind of data do we need and how much of it is required?” The next step is to create training and testing portions. The training data set is used to train a model to know how to find the connections between the inputs and the associated outputs. The test data set is used to assess the intelligence of the machine, i.e., how well the trained model can operate on the unseen test samples. Next, we must conduct data preparation to make sure the data and its format is simple and understandable for human experts. After that, the issue of data accessibility and ownership may arise. Distribution of data may need to have specific authorizations, especially if we are dealing with sensitive or private data. Given the aforementioned roadmap, creating proper datasets is complicated and of great importance. That’s why few datasets are frequently chosen by the researchers and develop- ers for benchmarking. A summary of widely used benchmark datasets is provided in Table I. # V. DEEP LEARNING FOR NLP TASKS This section describes NLP applications using deep learn- ing. Fig. 8 shows representative NLP tasks (and the categories they belong to). A fundamental question is: ”How can we evaluate an NLP algorithm, model, or system?” In [80], some of the most common evaluation metrics have been described. This reference explains the fundamental principles of evaluating NLP systems. A. Basic Tasks 1) Part-Of-Speech Tagging: Part-of-Speech tagging is one of the basic tasks in Natural Language Processing. It is the process of labeling words with their part of speech categories. Part of speech is leveraged for many crucial tasks such as named entity recognition. One commonly used dataset for Part-of-Speech tagging is the WSJ corpus7. This dataset contains over a million tokens and has been utilized widely as a benchmark dataset for the performance assessment of POS tagging systems. Traditional methods are still performing very well for this task [16]. However, neural network based methods have been proposed for Part-of-Speech tagging [81]. For example, the deep neural network architecture named CharWNN has been developed to join word-level and character-level representations using convolutional neural net- works for POS tagging [14]. The emphasis in [14] is the importance of character-level feature extraction as their exper- imental results show the necessity of employing hand-crafted features in the absence of character-level features for achieving the state-of-the-art. 
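To ground the discussion of neural sequence taggers that follows, here is a minimal bidirectional-LSTM part-of-speech tagger. It is a generic sketch: the embedding and hidden sizes, the tag inventory, and the absence of a CRF layer are all simplifying assumptions, and it is not a re-implementation of any specific system cited in this section.

```python
import torch
import torch.nn as nn

class BiLSTMTagger(nn.Module):
    """Predicts one POS tag per token with a bidirectional LSTM."""

    def __init__(self, vocab_size: int, num_tags: int,
                 embed_dim: int = 100, hidden_dim: int = 128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True,
                            bidirectional=True)
        # Forward and backward states are concatenated, hence 2 * hidden_dim.
        self.tag_head = nn.Linear(2 * hidden_dim, num_tags)

    def forward(self, token_ids):                 # (batch, seq_len)
        states, _ = self.lstm(self.embed(token_ids))
        return self.tag_head(states)              # (batch, seq_len, num_tags)

# Training-step sketch: per-token cross-entropy against gold tags
# (45 is roughly the size of the Penn Treebank tag set).
model = BiLSTMTagger(vocab_size=20_000, num_tags=45)
tokens = torch.randint(0, 20_000, (8, 30))
gold = torch.randint(0, 45, (8, 30))
loss = nn.CrossEntropyLoss()(model(tokens).reshape(-1, 45), gold.reshape(-1))
```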
In [82], a wide variety of neural-network-based models have been proposed for sequence tagging tasks, e.g., LSTM networks, bidirectional LSTM networks, LSTM networks with a CRF8 layer, etc. Sequence tagging itself includes part-of-speech tagging, chunking, and named entity recognition. Likewise, a globally normalized transition-based neural network architecture has been proposed for POS tagging [83]. State-of-the-art results are summarized in Table II. In [17], the authors propose a bidirectional LSTM for part-of-speech tagging and show that it performs better than conventional machine learning techniques on the same dataset. More recently, in [84], the authors use a pretrained BERT model in combination with a single bidirectional LSTM layer, train only the latter layer, and outperform the prior state-of-the-art POS architectures.

7 Penn Treebank Wall Street Journal (WSJ-PTB).
8 Conditional Random Field.

Fig. 8. NLP tasks investigated in this study: machine translation, named entity recognition, part-of-speech tagging, parsing, natural language generation, question answering, relationship extraction, sentiment analysis, semantic role labeling, automatic summarization, dialogue systems, and coreference resolution (grouped in the figure under semantics, discourse, and speech).

TABLE I
BENCHMARK DATASETS.

Task | Dataset | Link
Machine Translation | WMT 2014 EN-DE | http://www-lium.univ-lemans.fr/~schwenk/cslm_joint_paper/
 | WMT 2014 EN-FR | http://www-lium.univ-lemans.fr/~schwenk/cslm_joint_paper/
Text Summarization | CNN/DM | https://cs.nyu.edu/~kcho/DMQA/
 | Newsroom | https://summari.es/
 | DUC | https://www-nlpir.nist.gov/projects/duc/data.html
 | Gigaword | https://catalog.ldc.upenn.edu/LDC2012T21
Reading Comprehension / Question Answering / Question Generation | ARC | http://data.allenai.org/arc/
 | CliCR | http://aclweb.org/anthology/N18-1140
 | CNN/DM | https://cs.nyu.edu/~kcho/DMQA/
 | NewsQA | https://datasets.maluuba.com/NewsQA
 | RACE | http://www.qizhexie.com/data/RACE_leaderboard
 | SQuAD | https://rajpurkar.github.io/SQuAD-explorer/
 | Story Cloze Test | http://aclweb.org/anthology/W17-0906.pdf
 | NarrativeQA | https://github.com/deepmind/narrativeqa
 | Quasar | https://github.com/bdhingra/quasar
 | SearchQA | https://github.com/nyu-dl/SearchQA
Semantic Parsing | AMR parsing | https://amr.isi.edu/index.html
 | ATIS (SQL Parsing) | https://github.com/jkkummerfeld/text2sql-data/tree/master/data
 | WikiSQL (SQL Parsing) | https://github.com/salesforce/WikiSQL
Sentiment Analysis | IMDB Reviews | http://ai.stanford.edu/~amaas/data/sentiment/
 | SST | https://nlp.stanford.edu/sentiment/index.html
 | Yelp Reviews | https://www.yelp.com/dataset/challenge
 | Subjectivity Dataset | http://www.cs.cornell.edu/people/pabo/movie-review-data/
Text Classification | AG News | http://www.di.unipi.it/~gulli/AG_corpus_of_news_articles.html
 | DBpedia | https://wiki.dbpedia.org/Datasets
 | TREC | https://trec.nist.gov/data.html
 | 20 NewsGroup | http://qwone.com/~jason/20Newsgroups/
Natural Language Inference | SNLI Corpus | https://nlp.stanford.edu/projects/snli/
 | MultiNLI | https://www.nyu.edu/projects/bowman/multinli/
 | SciTail | http://data.allenai.org/scitail/
Semantic Role Labeling | Proposition Bank | http://propbank.github.io/
 | OntoNotes | https://catalog.ldc.upenn.edu/LDC2013T19

TABLE II
POS TAGGING STATE-OF-THE-ART MODELS EVALUATED ON THE WSJ-PTB DATASET.

Model | Accuracy
Character-aware neural language models [85] | 97.53
Transfer Learning + GRU [86] | 97.55
Bi-directional LSTM + CNNs + CRF [87] | 97.55
Adversarial Training + Bi-LSTM [88] | 97.59
Character Composition + Bi-LSTM [89] | 97.78
String Embedding + LSTM [90] | 97.85
Meta-BiLSTM [91] | 97.96

2) Parsing: Parsing is assigning a structure to a recognized string. There are different types of parsing. Constituency parsing refers in particular to assigning a syntactic structure to a sentence. A greedy parser introduced in [92] performs a syntactic and semantic summary of content using vector representations. To enhance the results achieved by [92], the approach proposed in [93] focuses on learning morphological embeddings. Recently, deep neural network models have outperformed traditional algorithms. State-of-the-art results are summarized in Table III.

TABLE III
CONSTITUENCY PARSING STATE-OF-THE-ART MODELS EVALUATED ON THE WSJ-PTB DATASET.

Model | Accuracy
Recurrent neural network grammars (RNNG) [94] | 93.6
In-order traversal over syntactic trees + LSTM [95] | 94.2
Model Combination and Reranking [96] | 94.6
Self-Attentive Encoder [97] | 95.1

Another type of parsing is called dependency parsing. Dependency structure shows the structural relationships between the words in a targeted sentence. In dependency parsing, phrasal elements and phrase-structure rules do not contribute to the process. Rather, the syntactic structure of the sentence is expressed only in terms of the words in the sentence and the associated relations between the words. Neural networks have shown their superiority regarding generalizability and reducing the feature computation cost. In [98], a novel neural-network-based approach was proposed for a transition-based dependency parser. Neural network based models that operate on task-specific transition systems have also been utilized for dependency parsing [83]. A regularized parser with bi-affine classifiers has been proposed for the prediction of arcs and labels [99]. Bidirectional LSTMs have been used in dependency parsers for feature representation [100]. A new control structure has been introduced for sequence-to-sequence neural networks based on the stack LSTM and has been used in transition-based parsing [101]. [102] presents a transition-based multilingual dependency parser which uses a bidirectional LSTM to adapt to target languages. In [103], the authors provide a comparison of state-of-the-art deep learning based parsing methods on a clinical text parsing task. More recently, in [104], a second-order TreeCRF extension was added to the biaffine [105] parser to demonstrate that structural learning can further improve parsing performance over the state-of-the-art bi-affine models.
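The biaffine parsers mentioned above ([99], [105]) score every head-dependent pair with a bilinear form over two MLP-projected views of the encoder states. Below is a minimal sketch of that arc-scoring idea in PyTorch; the class name, dimensions, and random inputs are illustrative assumptions, and label scoring, masking, and tree decoding are omitted.

```python
import torch
import torch.nn as nn

class BiaffineArcScorer(nn.Module):
    """Scores every (head, dependent) pair of a sentence with a biaffine form."""
    def __init__(self, hidden_dim=400, arc_dim=256):
        super().__init__()
        self.head_mlp = nn.Sequential(nn.Linear(hidden_dim, arc_dim), nn.ReLU())
        self.dep_mlp = nn.Sequential(nn.Linear(hidden_dim, arc_dim), nn.ReLU())
        self.W = nn.Parameter(0.01 * torch.randn(arc_dim, arc_dim))
        self.bias = nn.Parameter(torch.zeros(arc_dim))

    def forward(self, states):                        # (batch, seq_len, hidden_dim)
        heads = self.head_mlp(states)                 # (batch, n, arc_dim)
        deps = self.dep_mlp(states)                   # (batch, n, arc_dim)
        # score[b, d, h] = deps[b, d] @ W @ heads[b, h] + heads[b, h] @ bias
        bilinear = torch.einsum("bda,ac,bhc->bdh", deps, self.W, heads)
        linear = torch.einsum("bhc,c->bh", heads, self.bias).unsqueeze(1)
        return bilinear + linear                      # (batch, n_dependents, n_heads)

scorer = BiaffineArcScorer()
encoder_states = torch.randn(2, 10, 400)              # e.g., BiLSTM outputs for 10 tokens
arc_scores = scorer(encoder_states)
predicted_heads = arc_scores.argmax(dim=-1)            # greedy head choice per token
```

A full parser would additionally enforce that the predicted arcs form a tree, for example with a maximum-spanning-tree decoder.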
3) Semantic Role Labeling: Semantic Role Labeling (SRL) is the process of identification and classification of text arguments. It is aimed at the characterization of elements to determine "who" did "what" to "whom" as well as "how," "where," and "when." It identifies the predicate-argument structure of a sentence. The predicate, in essence, refers to "what," while the arguments consist of the associated participants and properties in the text. The goal of SRL is to extract the semantic relations between the predicate and the related arguments. Most of the previously reported research efforts are based on explicit representations of semantic roles.

Recently, deep learning approaches have achieved the SRL state of the art without taking the explicit syntax representation into consideration [106]. On the other hand, it is argued that the utilization of syntactic information can be leveraged to improve the performance of syntactic-agnostic9 models [107]. A linguistically-informed self-attention (LISA) model has been proposed to leverage both multi-task learning and self-attention for effective utilization of the syntactic information for SRL [108]. Current state-of-the-art methods employ joint prediction of predicates and arguments [109], novel word representation approaches [110], and self-attention models [111]; see Table IV. Researchers in [25] focus on syntax and contextualized word representations to present a unique multilingual SRL model based on a biaffine scorer, argument pruning, and bidirectional LSTMs (see also [112]).

9 Note that being syntactic-agnostic does not imply discarding syntactic information. It means they are not explicitly employed.

TABLE IV
SEMANTIC ROLE LABELING CURRENT STATE-OF-THE-ART MODELS EVALUATED ON THE ONTONOTES DATASET [113]. THE ACCURACY METRIC IS F1 SCORE.

Model | Accuracy (F1)
Self-Attention + RNN [111] | 83.9
Contextualized Word Representations [110] | 84.6
Argumented Representations + BiLSTM [109] | 85.3

B. Text Classification

The primary objective of text classification is to assign predefined categories to text parts (which could be a word, sentence, or whole document) for preliminary classification purposes and further organization and analysis. A simple example is the categorization of given documents as political or non-political news articles.

The use of CNNs for sentence classification, in which the model is trained on top of pretrained word vectors through fine-tuning, has resulted in considerable improvements in learning task-specific vectors [31]. Later, a Dynamic Convolutional Neural Network (DCNN) architecture – essentially a CNN with a dynamic k-max pooling method – was applied to capture the semantic modeling of sentences [114]. In addition to CNNs, RNNs have been used for text classification. An LSTM-RNN architecture has been utilized in [115] for sentence embedding, with particular superiority in a defined web search task. A Hierarchical Attention Network (HAN) has been utilized to capture the hierarchical structure of text, with word-level and sentence-level attention mechanisms [116]. Some models use a combination of RNNs and CNNs for text classification, such as [117]; this is a recurrent architecture in addition to max-pooling with an effective word representation method, and it demonstrates superiority compared to simple window-based neural network approaches. Another unified architecture is the C-LSTM proposed in [118] for sentence and document modeling in classification. Current state-of-the-art methods are summarized in Table V. A more recent review of deep learning based methods for text classification is provided in [119]; it focuses on the different architectures used for this task, including the most recent work on CNN-based models, RNN-based models, and graph neural networks. In [120], the authors provide a comparison between various deep learning methods for text classification, concluding that GRUs and LSTMs can actually perform better than CNN-based models.
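As a concrete illustration of the CNN-based sentence classifiers discussed above (in the spirit of [31], though not a reproduction of it), the sketch below applies parallel convolutions of several widths over word embeddings, followed by max-over-time pooling and a linear classifier; all names and sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TextCNN(nn.Module):
    """Parallel convolutions over word embeddings + max-over-time pooling."""
    def __init__(self, vocab_size, num_classes, emb_dim=128, num_filters=100, widths=(3, 4, 5)):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.convs = nn.ModuleList(
            [nn.Conv1d(emb_dim, num_filters, kernel_size=w) for w in widths])
        self.fc = nn.Linear(num_filters * len(widths), num_classes)

    def forward(self, token_ids):                        # (batch, seq_len)
        x = self.embed(token_ids).transpose(1, 2)         # (batch, emb_dim, seq_len)
        pooled = [F.relu(conv(x)).max(dim=2).values for conv in self.convs]
        return self.fc(torch.cat(pooled, dim=1))          # (batch, num_classes)

model = TextCNN(vocab_size=5000, num_classes=4)           # e.g., the four AG News classes
logits = model(torch.randint(1, 5000, (8, 40)))           # batch of 8 sentences, 40 tokens each
```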
TABLE V
THE CLASSIFICATION ACCURACY OF STATE-OF-THE-ART METHODS, EVALUATED ON THE AG NEWS CORPUS DATASET [2].

Model | Accuracy
CNN [121] | 91.33
Deep Pyramid CNN [122] | 93.13
CNN [123] | 93.43
Universal Language Model Fine-tuning (ULMFiT) [124] | 94.99

C. Information Extraction

Information extraction identifies structured information from "unstructured" data such as social media posts and online news. Deep learning has been utilized for information extraction subtasks such as Named Entity Recognition, Relation Extraction, Coreference Resolution, and Event Extraction.

1) Named Entity Recognition: Named Entity Recognition (NER) aims to locate and categorize named entities in context into pre-defined categories such as the names of people and places. The application of deep neural networks to NER has been investigated through the employment of CNN [125] and RNN architectures [126], as well as hybrid bidirectional LSTM and CNN architectures [19]. NeuroNER [127], a named-entity recognition tool, operates based on artificial neural networks. State-of-the-art models are reported in Table VI. [21] provides an extensive discussion of recent deep learning methods for named entity recognition and concludes that the work presented in [128] outperforms other recent models (with an F-score of 93.5 on the CoNLL03 dataset).

TABLE VI
STATE-OF-THE-ART MODELS FOR NAMED ENTITY RECOGNITION, EVALUATED ON THE CONLL-2003 SHARED TASK DATASET [129]. THE EVALUATION METRIC IS F1 SCORE.

Model | Accuracy
Semi-supervised Sequence Modeling [130] | 92.61
Google BERT [131] | 92.8
Contextual String Embeddings [90] | 93.09

2) Relation Extraction: Relation extraction aims to find the semantic relationships between entity pairs. The recursive neural network (RNN) model has been proposed for semantic relationship classification by learning compositional vector representations [132]. For relation classification, CNN architectures have been employed as well, by extracting lexical and sentence level features [37]. More recently, in [133], bidirectional tree-structured LSTMs were shown to perform well for relation extraction. [134] provides a more recent review on relation extraction.

3) Coreference Resolution: Coreference resolution is the identification of the mentions in a context that refer to the same entity. For instance, the mentions "car," "Camry," and "it" could all refer to the same entity. For the first time, in [135], Reinforcement Learning (RL) was applied to coreference resolution. Current widely used methods leverage an attention mechanism [136]. More recently, in [137], the authors adopt a reinforcement learning policy gradient approach to coreference resolution and provide state-of-the-art performance on the English OntoNotes v5.0 benchmark task. [138] reformulates coreference resolution as a span prediction task, as in question answering, and provides superior performance on the CoNLL-2012 benchmark task.

4) Event Extraction: A specific type of information extracted from text is an event. Such extraction may involve recognizing trigger words related to an event and assigning labels to entity mentions that represent event triggers. Convolutional neural networks have been utilized for event detection; they avoid problems of feature-based approaches, including exhaustive feature engineering and error propagation in feature generation [139]. In 2018, Nguyen and Grishman applied a graph-CNN (GCCN) in which the convolutional operations are applied to syntactically dependent words as well as consecutive words [140]; by adding entity information, their model matched the state of the art among CNN-based models.
[141] uses a novel inverse reinforcement learning approach based on generative adversarial networks (imitation learning) to tackle joint entity and event extraction. More recently, in [142], the authors proposed a model for document-level event extraction using a combined dependency-based GCN (for local context) and a hypergraph (as an aggregator for global context).

# D. Sentiment Analysis

The primary goal of sentiment analysis is the extraction of subjective information from text by contextual mining. Sentiment analysis is considered high-level reasoning based on source data. Sentiment analysis is sometimes called opinion mining, as its primary goal is to analyze human opinions, sentiments, and even emotions regarding products, problems, and varied subjects. Seminal works on sentiment analysis or opinion mining include [143], [144]. Since 2000, much attention has been given to sentiment analysis, due to its relation to a wide variety of applications [145], its associations with new research challenges, and the availability of abundant data. [146] provides a more recent review of sentiment analysis methods relying on deep learning and gives an insightful discussion of the drawbacks as well as the merits of deep learning methods for sentiment analysis.

A critical aspect of research in sentiment analysis is content granularity. Considering this criterion, sentiment analysis is generally divided into three categories/levels: document level, sentence level, and aspect level.

1) Document-level Sentiment Analysis: At the document level, the task is to determine whether the whole document reflects a positive or negative sentiment about exactly one entity. This differs from opinion mining regarding multiple entries. The Gated Recurrent Neural Network architecture has been utilized successfully for effectively encoding the sentences' relations in the semantic structure of the document [147]. Domain adaptation has been investigated as well, to deploy the trained model on unseen new sources [148]. More recently, in [149], the authors provide an LSTM-based model for document-level sentiment analysis that captures semantic relations between sentences. In [150], the authors use a CNN-bidirectional LSTM model to process long texts.

2) Sentence-level Sentiment Analysis: At the sentence level, sentiment analysis determines the positivity, negativity, or neutrality of an opinion expressed in a sentence. One general assumption for sentence-level sentiment classification is the existence of only one opinion from a single opinion holder in an expressed sentence. Recursive autoencoders have been employed for sentence-level sentiment label prediction by learning the vector space representations for phrases [151]. Long Short-Term Memory (LSTM) recurrent models have also been utilized for tweet sentiment prediction [152]. The Sentiment Treebank and Recursive Neural Tensor Networks [153] have shown promise for predicting fine-grained sentiment labels. [154] provides a cloud-based hybrid machine learning model for sentence-level sentiment analysis. More recently, [155] proposes ALDONAr (A Lexicalized Domain Ontology and a Regularized Neural Attention model) for sentence-level aspect-based sentiment analysis, which uses a CNN classification module with BERT word embeddings and achieves state-of-the-art results.
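For readers who want a quick, practical baseline for sentence-level sentiment, the snippet below uses the Hugging Face transformers pipeline with its default pretrained English sentiment model. This is an off-the-shelf illustration, not one of the models cited above, and it assumes the transformers package (and an internet connection for the first model download) is available.

```python
from transformers import pipeline

# Downloads a default pretrained English sentiment model on first use.
classifier = pipeline("sentiment-analysis")

reviews = [
    "The plot was predictable, but the acting saved the film.",
    "It started boring, but then it got interesting.",
]
for review, prediction in zip(reviews, classifier(reviews)):
    # Each prediction is a dict such as {"label": "POSITIVE", "score": 0.98}.
    print(prediction["label"], round(prediction["score"], 3), "-", review)
```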
3) Aspect-level Sentiment Analysis: Document-level and sentence-level sentiment analysis usually focus on the sentiment itself, not the target of the sentiment, e.g., a product. Aspect-level sentiment analysis directly targets an opinion, with the assumption of the existence of the sentiment and its target. A document or sentence may not have a generally positive or negative sentiment, but may have multiple subparts with different targets, each with a positive or negative sentiment. This can make aspect-level analysis even more challenging than other types of sentiment categorization.

Aspect-level sentiment analysis usually involves aspect sentiment classification and aspect extraction. The former determines opinions on different aspects (positive, neutral, or negative), while the latter identifies the target aspect for evaluation in context. As an example, consider the following sentence: "This car is old. It must be repaired and sold!". "This car" is the target of the evaluation (the aspect) and must be extracted first. Here, the opinion about this aspect is negative.

For aspect-level sentiment classification, attention-based LSTMs have been proposed to connect the aspect and the sentence content for sentiment classification [156]. For aspect extraction, deep learning has been successfully applied in opinion mining [157]. State-of-the-art methods rely on converting aspect-based sentiment analysis into sentence-pair classification tasks [79], post-training approaches [158] on the popular language model BERT [131], and the employment of pre-trained embeddings [159]. [160] provides a recent comparative review on aspect-based sentiment analysis. Also recently, [161] proposed a dual-attention model which tries to extract the implicit relation between the aspect and opinion terms. In [162], the authors propose a novel Aspect-Guided Deep Transition model for aspect-based sentiment analysis.

# E. Machine Translation

Machine Translation (MT) is one of the areas of NLP that has been profoundly affected by the advances in deep learning. The first subsection below explains methods used in the pre-deep-learning period, as explained in reference NLP textbooks such as "Speech and Language Processing" [163]. The remainder of this section is dedicated to recent innovations in MT based on neural networks, which started with [164]. [165], [166] provide reviews of the various deep learning architectures used for MT.

1) Traditional Machine Translation: One of the first demonstrations of machine translation happened in 1954 [167], in which the authors tried to translate from Russian to English. This translation system was based on six simple rules but had a very limited vocabulary. It was not until the 1990s that successful statistical implementations of machine translation emerged, as more bilingual corpora became available [163]. In [68], the BLEU score was introduced as a new evaluation metric, allowing more rapid improvement than when the only approach involved using human labor for evaluation.

2) Neural Machine Translation: It was after the success of neural networks in image classification tasks that researchers started to use neural networks in machine translation (NMT). Around 2013, research groups started to achieve breakthrough results in NMT. Unlike traditional statistical machine translation, NMT is based on an end-to-end neural network [168]. This implies that there is no need for extensive preprocessing and word alignments. Instead, the focus shifted toward the structure of the network.
Fig. 11 shows an example of an end-to-end recurrent neural network for machine translation. A sequence of input tokens is fed into the network. Once it reaches an end-of-sentence (EOS) token, it starts generating the output sequence. The output sequence is generated in the same recurrent manner as the input sequence until it reaches an end-of-sentence token. One major advantage of this approach is that there is no need to specify the length of the sequence; the network takes it into account automatically. In other words, the end-of-sentence token determines the length of the sequence. Networks implicitly learn that longer input sentences usually lead to longer output sentences of varying length, and that word ordering can change. For instance, the second example in Fig. 9 shows that adjectives generally come before nouns in English but after nouns in Spanish. There is no need to explicitly specify this, since the network can capture such properties. Moreover, the amount of memory used by NMT is just a fraction of the memory used in traditional statistical machine translation [169].

Fig. 9. Alignment in machine translation (e.g., "I like to play tennis" / "me gusta jugar tenis"; "I have a red hat" / "tengo un sombrero rojo"; "cheesecake" / "tarta de queso").

[164] was one of the early works that incorporated recurrent neural networks for machine translation. They were able to achieve a perplexity (a measure where lower values indicate better models) that was 43% lower than that of the state-of-the-art alignment-based translation models. Their Recurrent Continuous Translation Model (RCTM) is able to capture word ordering, syntax, and the meaning of the source sentence explicitly. It maps a source sentence into a probability distribution over sentences in the target language. RCTM estimates the probability $P(f \mid e)$ of translating a sentence $e = e_1 \dots e_k$ in the source language to the target-language sentence $f = f_1 \dots f_m$. RCTM estimates $P(f \mid e)$ by considering the source sentence $e$ as well as the preceding words in the target language $f_{1:i-1}$:

$$P(f \mid e) = \prod_{i=1}^{m} P(f_i \mid f_{1:i-1}, e) \qquad (3)$$

The representation generated by RCTM acts on n-grams in the lower layers and acts more on the whole sentence as one moves to the upper layers. This hierarchical representation is obtained by applying several layers of convolution. First, a continuous representation of each word is generated; i.e., if the sentence is $e = e_1 \dots e_k$, the representation of word $e_i$ will be $v(e_i) \in \mathbb{R}^{q \times 1}$. This results in a sentence matrix $E^e \in \mathbb{R}^{q \times k}$ in which $E^e_{:,i} = v(e_i)$. This matrix representation of the sentence is fed into a series of convolution layers in order to generate the final representation $e$ for the recurrent neural network. The approach is illustrated in Fig. 10. The equations for the pipeline are as follows:

$$s = S \, \mathrm{csm}(e) \qquad (4)$$
$$h_1 = \sigma(I \, v(f_1) + s) \qquad (5)$$
$$h_{i+1} = \sigma(R \, h_i + I \, v(f_{i+1}) + s) \qquad (6)$$
$$o_{i+1} = O \, h_i \qquad (7)$$

In order to take the sentence length into account, the authors introduced RCTM II, which estimates the length of the target sentence. RCTM II was able to achieve better perplexity on the WMT datasets (see the top portion of Table I) than other existing machine translation systems.

Fig. 10. Recurrent Continuous Translation Models (RCTM) [164].
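To make the RCTM recurrence of Eqs. (4)-(7) concrete, the sketch below runs it with randomly initialized matrices and accumulates the log-probability of Eq. (3) for a toy target sequence. The convolutional sentence model csm is replaced by a stub that simply averages source word vectors, and all sizes and names are illustrative assumptions rather than the authors' implementation.

```python
import torch

q, hidden, vocab = 64, 128, 1000                     # embedding, hidden, and vocabulary sizes
S = 0.1 * torch.randn(hidden, q)                     # maps the sentence representation, Eq. (4)
I = 0.1 * torch.randn(hidden, q)                     # input (target-word) matrix, Eqs. (5)-(6)
R = 0.1 * torch.randn(hidden, hidden)                # recurrent matrix, Eq. (6)
O = 0.1 * torch.randn(vocab, hidden)                 # output matrix, Eq. (7)
E_target = 0.1 * torch.randn(vocab, q)               # target-word embeddings v(f_i)

def csm(source_vectors):
    # Stub for the convolutional sentence model: here just an average of word vectors.
    return source_vectors.mean(dim=0)

source = torch.randn(7, q)                           # embeddings of a 7-word source sentence
target_ids = [12, 7, 431, 9]                         # indices of the target words f_1..f_m

s = S @ csm(source)                                  # Eq. (4): s = S csm(e)
h = torch.sigmoid(I @ E_target[target_ids[0]] + s)   # Eq. (5): h_1
log_prob = 0.0                                       # P(f_1 | e) is omitted here for brevity
for f_next in target_ids[1:]:
    o = O @ h                                        # Eq. (7): o_{i+1} = O h_i
    log_prob += torch.log_softmax(o, dim=0)[f_next]  # contribution of P(f_{i+1} | f_{1:i}, e)
    h = torch.sigmoid(R @ h + I @ E_target[f_next] + s)  # Eq. (6): h_{i+1}
print(float(log_prob))
```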
[170] presented an end-to-end sequence learning approach without heavy assumptions on the structure of the sequence. Their approach consists of two LSTMs: one for mapping the input to a vector of fixed dimension, and another LSTM for decoding the output sequence from that vector. Their model is able to handle long sentences as well as sentence representations that are sensitive to word order. As shown in Fig. 11, the model reads "ABC" as the input sequence and produces "WXYZ" as the output sequence. The <EOS> token indicates the end of prediction. The network is trained by maximizing the log probability of the translation ($\eta$) given the input sequence ($\zeta$). In other words, the objective function is:

$$\frac{1}{|D|} \sum_{(\eta,\zeta) \in D} \log P(\eta \mid \zeta) \qquad (8)$$

where $D$ is the training set and $|D|$ is its size. One of the novelties of their approach was reversing the word order of the source sentence. This helps the LSTM learn long-term dependencies.

Fig. 11. Sequence-to-sequence learning with LSTMs.

Having a fixed-length vector in the decoder phase is one of the bottlenecks of the encoder-decoder approach. [168] argues that a network will have a hard time compressing all the information from the input sentence into a fixed-size vector. They address this by allowing the network to search for segments of the source sentence that are useful for predicting the translation. Instead of representing the input sentence as a fixed-size vector, in [168] the input sentence is encoded as a sequence of vectors and a subset of them is chosen using a method called the attention mechanism, as shown in Fig. 12. In their approach, $P(y_i \mid y_1, \dots, y_{i-1}, X) = g(y_{i-1}, s_i, c_i)$, in which $s_i = f(s_{i-1}, y_{i-1}, c_i)$. While previously $c$ was the same for all time steps, here $c$ takes a different value, $c_i$, at each time step; this accounts for the attention mechanism (context vector) around that specific time step. $c_i$ is computed according to the following:

$$c_i = \sum_j \alpha_{ij} h_j, \qquad \alpha_{ij} = \frac{\exp(e_{ij})}{\sum_k \exp(e_{ik})}, \qquad e_{ij} = a(s_{i-1}, h_j).$$

Here $a$ is the alignment model, represented by a feed-forward neural network. Also, $h_j = [\overrightarrow{h}_j^{T}; \overleftarrow{h}_j^{T}]^{T}$, which is a way to include information about both preceding and following words in $h_j$. The model was able to outperform the simple encoder-decoder approach regardless of input sentence length.

Fig. 12. Attention mechanism for neural machine translation [168].
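The attention computation of [168] described above can be written in a few lines: a small feed-forward alignment model produces energies e_ij, a softmax turns them into weights alpha_ij, and the context vector c_i is the weighted sum of the annotations h_j. The PyTorch sketch below is a minimal illustration under assumed dimensions, not the original implementation.

```python
import torch
import torch.nn as nn

class AdditiveAttention(nn.Module):
    """Alignment model a(s_{i-1}, h_j) as a small feed-forward network, as in [168]."""
    def __init__(self, dec_dim=256, enc_dim=512, attn_dim=128):
        super().__init__()
        self.W_s = nn.Linear(dec_dim, attn_dim, bias=False)
        self.W_h = nn.Linear(enc_dim, attn_dim, bias=False)
        self.v = nn.Linear(attn_dim, 1, bias=False)

    def forward(self, s_prev, enc_states):            # (batch, dec_dim), (batch, n, enc_dim)
        energies = self.v(torch.tanh(
            self.W_s(s_prev).unsqueeze(1) + self.W_h(enc_states))).squeeze(-1)   # e_ij
        alphas = torch.softmax(energies, dim=1)                                  # alpha_ij
        context = torch.bmm(alphas.unsqueeze(1), enc_states).squeeze(1)          # c_i
        return context, alphas

attn = AdditiveAttention()
h = torch.randn(4, 12, 512)      # annotations h_j from a bidirectional encoder (12 source words)
s_prev = torch.randn(4, 256)     # previous decoder state s_{i-1}
c_i, alpha = attn(s_prev, h)     # context vector used to predict the next target word
```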
Improved machine translation models continue to emerge, driven in part by the growth in people's interest in, and need to understand, other languages. Most of them are variants of the end-to-end encoder-decoder approach. For example, [171] tries to deal with the problem of rare words. Their LSTM network consists of encoder and decoder layers using residual layers along with the attention mechanism. Their system was able to decrease training time, speed up inference, and handle the translation of rare words. Comparisons between some of the state-of-the-art neural machine translation models are summarized in Table VII.

TABLE VII
THE MACHINE TRANSLATION STATE-OF-THE-ART MODELS EVALUATED ON THE ENGLISH-GERMAN DATASET OF THE ACL 2014 NINTH WORKSHOP ON STATISTICAL MACHINE TRANSLATION. THE EVALUATION METRIC IS BLEU SCORE.

Model | Accuracy
Convolutional Seq-to-Seq [172] | 25.2
Attention Is All You Need [173] | 28.4
Weighted Transformer [174] | 28.9
Self Attention [175] | 29.2
DeepL Translation Machine10 | 33.3
Back-translation [176] | 35.0

More recently, [177] provides an interesting single-model implementation of massively multilingual NMT. In [178], the authors use BERT to extract contextual embeddings, combine BERT with an attention-based NMT model, and provide state-of-the-art results on various benchmark datasets. [179] proposes mBART, a sequence-to-sequence denoising autoencoder, and reports that using a pretrained, locked (i.e., unmodified) mBART improves performance in terms of BLEU points. [180] proposes an interesting adversarial framework for robustifying NMT against noisy inputs and reports performance gains over the Transformer model. [181] is also an insightful recent work in which the authors sample context words from the predicted sequence as well as the ground truth to try to reconcile the training and inference processes. Finally, [182] is a successful recent effort to prevent the forgetting that often accompanies transferring pre-trained language models to other NMT tasks; [182] achieves that aim primarily by using a dynamically gated model and asymptotic distillation.

# F. Question Answering

Question answering (QA) is a fine-grained version of Information Retrieval (IR). In IR, a desired set of information has to be retrieved from a set of documents; the desired information could be a specific document, text, image, etc. In QA, on the other hand, specific answers are sought, typically ones that can be inferred from available documents. Other areas of NLP, such as reading comprehension and dialogue systems, intersect with question answering.

Research in computerized question answering has proceeded since the 1960s. In this section, we present a general overview of the history of question answering systems and focus on the breakthroughs in the field. Like all other fields in NLP, question answering was also impacted by the advancement of deep learning [183], so we provide an overview of QA in deep learning contexts. We briefly visit visual question answering as well.

1) Rule-based Question Answering: Baseball [184] is one of the early works (1961) on QA, in which an effort was made to answer questions related to baseball games by using a game database. The Baseball system consists of (1) question read-in, (2) dictionary lookup for the words in the question, (3) syntactic (POS) analysis of the words in the question, (4) content analysis for extracting the input question, and (5) estimating relevance with regard to answering the input question.

IBM's statistical question answering system [185] consisted of four major components:
1) Question/Answer Type Classification
2) Query Expansion/Information Retrieval
3) Named Entity Marking
4) Answer Selection

Some QA systems fail when semantically equivalent relationships are phrased differently. [186] addressed this by proposing fuzzy relation matching based on mutual information and expectation maximization.

2) Question answering in the era of deep learning: Smartphone assistants (Siri, OK Google, Alexa, etc.) and virtual personal assistants are common examples of QA systems with which many interact on a daily basis. While earlier such systems employed rule-based methods, today their core algorithms are based on deep learning. Table VIII presents some questions and answers provided by Siri on an iPhone.

TABLE VIII
TYPICAL QUESTION ANSWERING PERFORMANCE BASED ON DEEP LEARNING.

Question | Answer
Who invented polio vaccine? | The answer I found is Jonas Salk
Who wrote Harry Potter? | J.K. Rowling wrote Harry Potter in 1997
When was Einstein born? | Albert Einstein was born March 14, 1879
[188] was one of the first machine-learning-based papers to report results on QA for a reading comprehension test. The system tries to pick the sentence in the database that contains the answer to a question, and a feature vector represents each question-sentence pair. The main contribution of [188] is a feature-vector representation framework aimed at providing information for learning the model. There are five classifiers (location, date, etc.), one for each type of question. They were able to achieve accuracy competitive with previous approaches.

As illustrated in Fig. 13, [187] uses convolutional neural networks to encode question-answer sentence pairs in the form of fixed-length vectors, regardless of the length of the input sentences. Instead of using distance measures like cosine correlation, they incorporate a non-linear tensor layer to match the relevance between the question and the answer. Equation (9) calculates the matching degree between a question q and its corresponding answer a:

$$s(q, a) = u^{T} f\!\left(v_q^{T} M^{[1:r]} v_a + V [v_q; v_a] + b\right) \qquad (9)$$

Here $f$ is a standard element-wise non-linearity function, $M^{[1:r]} \in \mathbb{R}^{n_s \times n_s \times r}$ is a tensor, $V \in \mathbb{R}^{r \times 2 n_s}$, $b \in \mathbb{R}^{r}$, and $u \in \mathbb{R}^{r}$. The model tries to capture the interaction between the question and the answer.

Inspired by findings in neuroscience, [81] incorporated episodic memory11 in their Dynamic Memory Network (DMN). By processing input sequences and questions, a DMN forms episodic memories to answer relevant questions. As illustrated in Fig. 14, the system is trained on raw input-question-answer triplets. The DMN consists of four modules that communicate with each other, as shown in Fig. 15. The input module encodes raw input text into a distributed vector representation; likewise, the question module encodes a question into its distributed vector representation. The episodic memory module uses the attention mechanism to focus on a specific part of the input module. Through an iterative process, this module produces a memory vector representation that takes into account the question as well as previous memory. The answer module uses the final memory vector to generate an answer. The model improved upon state-of-the-art results on tasks such as the ones shown in Fig. 14. The DMN is one of the architectures that could potentially be used for a variety of NLP applications, such as classification, question answering, and sequence modeling. [189] introduced the Dynamic Coattention Network (DCN) in order to address local maxima corresponding to incorrect answers; it is considered to be one of the best approaches to question answering.

11 A kind of long-term memory that includes conscious recall of previous activities together with their meaning.
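Returning to the question-answer matching function of Eq. (9), the sketch below evaluates it for randomly initialized parameters, taking f to be an element-wise tanh; the CNN encoders that would produce the sentence vectors v_q and v_a are not shown, and all sizes are illustrative assumptions.

```python
import torch

ns, r = 50, 5                                    # sentence-vector size and number of tensor slices
M = 0.1 * torch.randn(r, ns, ns)                 # M^{[1:r]}
V = 0.1 * torch.randn(r, 2 * ns)
b = 0.1 * torch.randn(r)
u = 0.1 * torch.randn(r)

def match_score(v_q, v_a):
    """Eq. (9): s(q, a) = u^T f(v_q^T M^{[1:r]} v_a + V [v_q; v_a] + b)."""
    bilinear = torch.einsum("i,rij,j->r", v_q, M, v_a)       # one value per tensor slice
    linear = V @ torch.cat([v_q, v_a]) + b
    return u @ torch.tanh(bilinear + linear)                  # f taken as element-wise tanh

v_question = torch.randn(ns)                     # stand-in for the CNN encoding of the question
v_answer = torch.randn(ns)                       # stand-in for the CNN encoding of a candidate answer
print(float(match_score(v_question, v_answer)))
```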
3) Visual Question Answering: Given an input image, Visual Question Answering (VQA) tries to answer a natural language question about the image [190]. VQA touches on multiple problems such as object detection, image segmentation, and sentiment analysis. [190] introduced the task of VQA by providing a dataset containing over 250K images, 760K questions, and around 10M answers. [191] proposed a neural-based approach to answer questions about input images. As illustrated in Fig. 16, Neural-Image-QA is a deep network consisting of a CNN and an LSTM. Since a question can have multiple answers, the problem is decomposed into predicting a set of answer words $a_{q,x} = \{a_1, a_2, \dots, a_{N(q,x)}\}$ from a finite vocabulary set $\nu$, where $N(q, x)$ represents the number of answer words for a given question.

Do humans and computers look at the same regions to answer questions about an image? [193] tries to answer this question by conducting large-scale studies on human attention in VQA. Their findings show that VQA models do not seem to be looking at the same regions as humans. Finally, [192] incorporates a spatial memory network for VQA. Fig. 17 shows the inference process of their model; as illustrated in the figure, the attention mechanism in their system can highlight areas of interest in the input image. [194] introduces BLOCK, a bilinear fusion model based on superdiagonal tensor decomposition, for the VQA task, with state-of-the-art performance and code made public on GitHub. To improve the generalization of existing models to test data from a different distribution, [195] introduces a self-critical training objective that helps find visual regions of prominent visual/textual correlation, with a focus on recognizing influential objects and on detecting and devaluing incorrect dominant answers.

Fig. 13. Fixed-length vector sentence representation for input questions and answers [187].
Fig. 14. Example Dynamic Memory Network (DMN) input-question-answer triplets (e.g., Input: "Jane went to the hallway. Mary walked to the bathroom. Sandra went to the garden. Daniel went back to the garden. Sandra took the milk there." Question: "Where is the milk?" Answer: "Garden"; Input: "It started boring, but then it got interesting." Question: "What's the sentiment?" Answer: "Positive"; Question: "POS tags?" Answer: "PRP VBD JJ, CC RB PRP VBD JJ.").
Fig. 15. Interaction between the four modules of the Dynamic Memory Network [78].
Fig. 16. Neural Image Question Answering [191].
Fig. 17. Spatial Memory Network for VQA; bright areas are regions the model is attending to (e.g., "What are the animals in this scene?" - "Giraffe"; "What is the person holding in his hand?" - "Fish") [192].

G. Document Summarization

Document summarization refers to a set of problems involving the generation of summary sentences given one or multiple documents as input. Generally, text summarization fits into two categories:

1) Extractive summarization, where the goal is to identify the most salient sentences in the document and return them as the summary.
2) Abstractive summarization, where the goal is to generate summary sentences from scratch; they may contain novel words that do not appear in the original document.

Each of these methods has its own advantages and disadvantages. Extractive summarization is prone to generating long and sometimes overlapping summary sentences; however, the result reflects the author's mode of expression. Abstractive methods generate shorter summaries, but they are harder to train. There is a vast amount of research on the topic of text summarization using extractive and abstractive methods.
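As a minimal illustration of the extractive paradigm described above, the sketch below scores each sentence by the cosine similarity of its word counts to the document centroid and returns the top-k sentences in document order. This simple frequency-based scorer is an assumption for illustration only; the neural extractive models cited in the following paragraphs learn such scores instead.

```python
from collections import Counter
import math

def sentence_scores(sentences):
    """Score each sentence by cosine similarity of its word counts to the document centroid."""
    sent_counts = [Counter(s.lower().split()) for s in sentences]
    centroid = Counter()
    for counts in sent_counts:
        centroid.update(counts)

    def cosine(a, b):
        dot = sum(a[w] * b[w] for w in a)
        norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
        return dot / norm if norm else 0.0

    return [cosine(counts, centroid) for counts in sent_counts]

def extractive_summary(sentences, k=2):
    scores = sentence_scores(sentences)
    ranked = sorted(range(len(sentences)), key=lambda i: scores[i], reverse=True)
    chosen = sorted(ranked[:k])                     # keep the original document order
    return " ".join(sentences[i] for i in chosen)

doc = ["The storm hit the coast on Monday.",
       "Thousands of homes lost power.",
       "Officials said repairs could take a week.",
       "Residents were urged to stay indoors."]
print(extractive_summary(doc, k=2))
```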
As one of the earliest works on using neural networks for extractive summarization, [196] proposed a framework that used a ranking technique to extract the most salient sentences in the input. This model was improved by [197], which used a document-level encoder to represent sentences and a classifier to rank them. In abstractive summarization, on the other hand, [198] was the first to use attention over a sequence-to-sequence (seq2seq) model for the problem of headline generation. However, since simple attention models perform worse than extractive models, more effective attention models such as graph-based attention [199] and transformers [173] have been proposed for this task.

To further improve abstractive text summarization models, [200] proposed the first pointer-generator model and applied it to the DeepMind QA dataset [201]. As a result of this work, the CNN/Daily Mail dataset emerged, which is now one of the most widely used datasets for the summarization task. A copy mechanism was also adopted by [202] for similar tasks, but their analysis reveals a key problem with attention-based encoder-decoder models: they often generate unusual summaries consisting of repeated phrases. Recently, [62] reached state-of-the-art results on abstractive text summarization using a similar framework; they alleviated the unnatural summaries by avoiding the generation of unknown tokens and replacing these words with tokens from the input article. Later, researchers shifted their focus to methods that use sentence embeddings to first select the most salient sentences in the document and then rewrite them to make them more abstractive [203], [204]. In these models, salient sentences are extracted first, and a paraphrasing model is then used to make them abstractive. The extractor employs a sentence classifier or ranker, while the abstractor tries to remove the extra information in a sentence and present it as a shorter summary. Fast-RL [203] is the first framework in this family of works. In Fast-RL, the extractor is pre-trained to select salient sentences and the abstractor is pre-trained using a pointer-generator model to generate paraphrases. Finally, to merge these two non-differentiable components, they propose using actor-critic Q-learning methods, in which the actor receives a single document and generates the output while the critic evaluates the output by comparing it with the ground-truth summary.

Though the standard way to evaluate the performance of summarization models is with ROUGE [67] and BLEU [68], there are major problems with such measures. For instance, the ROUGE measure focuses on the number of shared n-grams between two sentences. Such a method incorrectly assigns a low score to an abstractive summary that uses different words yet provides an excellent paraphrase that humans would rate highly. Clearly, better automated evaluation methods are needed in such cases. There are additional problems with current summarization models. Shi et al. [205] provide a comprehensive survey on text summarization, and [206] provides a recent survey of summarization methods. [207] provides an advanced composite deep learning model, based on LSTMs and a Restricted Boltzmann Machine, for multi-document opinion summarization. A very influential recent work, [208], introduces HIBERT (HIerarchical Bidirectional Encoder Representations from Transformers) as a pre-trained initialization for document summarization and reports state-of-the-art performance.
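The weakness of n-gram-overlap metrics noted above is easy to see with a toy ROUGE-N recall computation: a summary that reuses the reference wording scores highly, while a perfectly good paraphrase does not. The function below is a simplified sketch of ROUGE-N recall, not the official ROUGE toolkit.

```python
from collections import Counter

def rouge_n_recall(candidate, reference, n=1):
    """ROUGE-N recall: fraction of reference n-grams that also appear in the candidate."""
    def ngrams(text):
        tokens = text.lower().split()
        return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))
    cand, ref = ngrams(candidate), ngrams(reference)
    overlap = sum(min(count, cand[gram]) for gram, count in ref.items())
    return overlap / max(sum(ref.values()), 1)

reference = "the cat sat on the mat"
extractive = "the cat sat on a mat"            # reuses the reference wording
abstractive = "a feline rested on the rug"     # a valid paraphrase with different words
print(rouge_n_recall(extractive, reference))   # high score (about 0.83)
print(rouge_n_recall(abstractive, reference))  # low score (about 0.33) despite being a fine paraphrase
```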
H. Dialogue Systems

Dialogue systems are quickly becoming a principal instrument in human-computer interaction, due in part to their promising potential and commercial value [209]. One application is automated customer service, supporting both online and bricks-and-mortar businesses. Customers expect an ever-increasing level of speed, accuracy, and respect while dealing with companies and their services. Due to the high cost of knowledgeable human resources, companies frequently turn to intelligent conversational machines. Note that the phrases conversational machines and dialogue machines are often used interchangeably.

Dialogue systems are usually task-based or non-task-based (Fig. 18). Though there might be Automatic Speech Recognition (ASR) and Language-to-Speech (L2S) components in a dialogue system, the discussion in this section is solely about the linguistic components of dialogue systems; concepts associated with speech technology are ignored.

Despite the useful statistical models employed in the backend of dialogue systems (especially in language understanding modules), most deployed dialogue systems rely on expensive hand-crafted and manual features for operation. Furthermore, the generalizability of these manually engineered systems to other domains and functionalities is problematic. Hence, recent attention has focused on deep learning for the enhancement of performance, generalizability, and robustness. Deep learning facilitates the creation of end-to-end task-oriented dialogue systems, which enriches the framework to generalize conversations beyond annotated task-specific dialogue resources.

1) Task-based Systems: The structure of a task-based dialogue system usually consists of the following elements:
• Natural Language Understanding (NLU): This component deals with understanding and interpreting the user's spoken context by assigning a constituent structure to the spoken utterance (e.g., a sentence) and captures its syntactic representation and semantic interpretation, to allow the back-end operation/task. NLU is usually leveraged regardless of the dialogue context.
• Dialogue Manager (DM): The representation generated by the NLU is handled by the dialogue manager, which investigates the context and returns a reasonable semantically-related response.
• Natural Language Generation (NLG): The natural language generation (NLG) component produces an utterance based on the response provided by the DM component.

The general pipeline is as follows: the NLU module (i.e., the semantic decoder) transforms the output of the speech recognition module into dialogue elements; the DM then processes these dialogue elements and provides a suitable response, which is fed to the NLG for response generation. The main pipeline in NLU is to classify the user query domain and user intent, and to fill a set of slots to create a semantic frame. It is customary to perform the intent prediction and the slot filling simultaneously [210] (a minimal sketch of such a joint model is shown below). Most task-oriented dialogue systems employ slot-filling approaches to classify user intent in the specific domain of the conversation. For this aim, having predefined tasks is required; this depends on manually crafted states with different associated slots. Consequently, a dialogue system designed in this way would be of limited or no use for other tasks.

Fig. 18. The framework of a dialogue system (speech recognition, language understanding, dialogue manager, language generation, and language-to-speech components). A dialogue system can be task-oriented or used for natural language generation based on the user input, in which case it is also known as a chatbot.
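As referenced above, intent prediction and slot filling are often trained jointly over a shared encoder [210]. The PyTorch sketch below illustrates that pattern with one utterance-level head and one token-level head; the class name, sizes, and random inputs are illustrative assumptions, not a published system.

```python
import torch
import torch.nn as nn

class JointNLU(nn.Module):
    """Shared encoder with two heads: utterance-level intent and token-level slot tags."""
    def __init__(self, vocab_size, num_intents, num_slots, emb_dim=100, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.encoder = nn.LSTM(emb_dim, hidden_dim, batch_first=True, bidirectional=True)
        self.intent_head = nn.Linear(2 * hidden_dim, num_intents)
        self.slot_head = nn.Linear(2 * hidden_dim, num_slots)

    def forward(self, token_ids):
        states, _ = self.encoder(self.embed(token_ids))       # (batch, seq_len, 2*hidden)
        intent_logits = self.intent_head(states.mean(dim=1))  # pooled utterance representation
        slot_logits = self.slot_head(states)                  # one tag distribution per token
        return intent_logits, slot_logits

model = JointNLU(vocab_size=2000, num_intents=8, num_slots=20)
utterance = torch.randint(1, 2000, (1, 9))   # e.g., token ids for "please schedule a meeting next week"
intent_logits, slot_logits = model(utterance)
# Training would sum a cross-entropy loss over intents and another over slot tags.
```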
Recent task-oriented dialogue systems have been designed based on deep reinforcement learning, which has provided promising results regarding performance [211], domain adaptation [212], and dialogue generation [213]. This was due to a shift towards end-to-end trainable frameworks for designing and deploying task-oriented dialogue systems. Instead of the traditionally utilized pipeline, an end-to-end framework incorporates and uses a single module that deals with external databases. Despite the tractability of end-to-end dialogue systems (i.e., they are easy to train and simple to engineer), their need for interoperability with external databases via queries means they are not well suited for task-oriented settings. Some approaches to this challenge include converting the user input into internal representations [214], combining supervised and reinforced learning [215], and extending the memory network approach [216] for question answering to a dialogue system [217].

2) Non-task-based Systems: As opposed to task-based dialogue systems, the goal behind designing and deploying non-task-based dialogue systems is to empower a machine with the ability to have a natural conversation with humans [218]. Typically, chatbots are of one of the following types: retrieval-based methods and generative methods. Retrieval-based models have access to information resources and can provide more concise, fluent, and accurate responses. However, they are limited regarding the variety of responses they can provide, due to their dependency on backend data resources. Generative models, on the other hand, have the advantage of being able to produce suitable responses when such responses are not in the corpus. However, as opposed to retrieval-based models, they are more prone to grammatical and conceptual mistakes arising from their generative models.

Retrieval-based methods select an appropriate response from the candidate responses. Therefore, the key element is the query-response operation. In general, this problem has been formulated as a search problem and uses IR techniques for task completion [219]. Retrieval-based methods usually employ either single-turn response matching or multi-turn response matching. In the first type, the current query (message) alone is used to select a suitable response [220]. The latter type takes the current message and previous utterances as the system input and retrieves a response based on both the instant and the temporal information; the model tries to choose a response which considers the whole context, to guarantee conversation consistency. An LSTM-based model has been proposed [221] for creating the context and response vectors. In [222], various features and multiple data inputs have been incorporated and ingested using a deep learning framework. Current base models for retrieval-based chatbots rely on multi-turn response selection augmented by an attention mechanism and sequence matching [223].

Generative models do not assume the availability of predefined responses. New responses are produced from scratch and are based on the trained model.
Generative models are typically based on sequence-to-sequence models and map an input query to a target element as the response. In general, designing and implementing a dialogue agent that is able to converse at the human level is very challenging. The typical approach usually consists of learning from and imitating human conversation. For this goal, the machine is generally trained on large corpora of conversations. However, this does not directly remedy the issue of encountering out-of-corpus conversation. The question is: how can an agent be taught to generate proper responses to conversations that it has never seen? It must handle content that is not exactly available in the data corpus that the machine has been trained on, due to the lack of content matching between the query and the corresponding response, which results from the wide range of plausible queries that humans can provide.

To tackle the aforementioned general problem, some fundamental questions must be answered: (1) What are the core characteristics of a natural conversation? (2) How can these characteristics be measured? (3) How can we incorporate this knowledge in a machine, i.e., the dialogue system? Effective integration of these three elements determines the intelligence of a machine. A qualitative criterion is to observe whether the generated utterances can be distinguished from natural human dialogues. For quantitative evaluation, adversarial evaluation was initially used for quality assessment of sentence generation [224] and was later employed for the quality evaluation of dialogue systems [225]. Recent advancements in sequence-to-sequence modeling have encouraged many research efforts regarding natural language generation [226]. Furthermore, deep reinforcement learning yields promising performance in natural language generation [213].

3) Final note on dialogue systems: Despite remarkable advancements in AI and much attention dedicated to dialogue systems, in reality, successful commercial tools, such as Apple's Siri and Amazon's Alexa, still rely heavily on handcrafted features. It is still very challenging to design and train data-driven dialogue machines, given the complexity of natural language, the difficulties in framework design, and the complex nature of the available data sources.

VI. CONCLUSION

In this article, we presented a comprehensive survey of the most distinguished works in Natural Language Processing using deep learning. We provided a categorized context for introducing different NLP core concepts, aspects, and applications, and emphasized the most significant research efforts in each associated category. Deep learning and NLP are two of the most rapidly developing research topics nowadays. Due to this rapid progress, it is hoped that new, effective models will soon supersede the current state-of-the-art approaches. This may cause some of the references provided in this survey to become dated, but those are likely to be cited by new publications that describe improved methods. Nevertheless, one of the essential characteristics of this survey is its educational aspect, which provides a precise understanding of the critical elements of this field and explains the most notable research works. Hopefully, this survey will provide students and researchers with essential resources, both to learn what is necessary to know and to advance further the integration of NLP with deep learning.

# REFERENCES

[1] C. D. Manning and H.
Sch¨utze, Foundations of statistical natural language processing. MIT Press, 1999. [2] X. Zhang, J. Zhao, and Y. LeCun, “Character-level convolutional networks for text classification,” in Advances in neural information processing systems, pp. 649–657, 2015. [3] K. Cho, B. Van Merri¨enboer, C. Gulcehre, D. Bahdanau, F. Bougares, H. Schwenk, and Y. Bengio, “Learning phrase representations us- ing RNN encoder-decoder for statistical machine translation,” arXiv preprint arXiv:1406.1078, 2014. [4] S. Wu, K. Roberts, S. Datta, J. Du, Z. Ji, Y. Si, S. Soni, Q. Wang, Q. Wei, Y. Xiang, B. Zhao, and H. Xu, “Deep learning in clinical the natural American Medical Informatics Association, vol. 27, pp. 457–470, mar 2020. [5] R. Collobert and J. Weston, “A unified architecture for natural lan- guage processing: Deep neural networks with multitask learning,” in Proceedings of the 25th international conference on Machine learning, pp. 160–167, ACM, 2008. [6] A. Karpathy, G. Toderici, S. Shetty, T. Leung, R. Sukthankar, and L. Fei-Fei, “Large-scale video classification with convolutional neural networks,” in Proceedings of the IEEE conference on Computer Vision and Pattern Recognition, pp. 1725–1732, 2014. [7] M. Oquab, L. Bottou, I. Laptev, and J. Sivic, “Learning and transferring mid-level image representations using convolutional neural networks,” in Proceedings of the IEEE conference on Computer Vision and Pattern Recognition, pp. 1717–1724, 2014. [8] A. Shrivastava, T. Pfister, O. Tuzel, J. Susskind, W. Wang, and R. Webb, “Learning from simulated and unsupervised images through adversarial training,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2107–2116, 2017. [9] A. Voulodimos, N. Doulamis, A. Doulamis, and E. Protopapadakis, “Deep Learning for Computer Vision: A Brief Review,” Computational Intelligence and Neuroscience, Feb 2018. [10] N. O’Mahony, S. Campbell, A. Carvalho, S. Harapanahalli, G. V. Hernandez, L. Krpalkova, D. Riordan, and J. Walsh, “Deep learning vs. traditional computer vision,” in Advances in Computer Vision (K. Arai and S. Kapoor, eds.), (Cham), pp. 128–144, Springer International Publishing, 2020. [11] A. Graves and N. Jaitly, “Towards end-to-end speech recognition with recurrent neural networks,” in International Conference on Machine Learning, pp. 1764–1772, 2014. [12] D. Amodei, S. Ananthanarayanan, R. Anubhai, J. Bai, E. Battenberg, C. Case, J. Casper, B. Catanzaro, Q. Cheng, G. Chen, et al., “Deep speech 2: End-to-end speech recognition in English and Mandarin,” in ICML, pp. 173–182, 2016. [13] U. Kamath, J. Liu, and J. Whitaker, Deep learning for NLP and speech recognition, vol. 84. Springer, 2019. [14] C. D. Santos and B. Zadrozny, “Learning character-level representa- tions for part-of-speech tagging,” in Proceedings of the 31st Interna- tional Conference on Machine Learning (ICML-14), pp. 1818–1826, 2014. [15] B. Plank, A. Søgaard, and Y. Goldberg, “Multilingual part-of-speech tagging with bidirectional long short-term memory models and auxil- iary loss,” arXiv preprint arXiv:1604.05529, 2016. [16] C. D. Manning, “Part-of-speech tagging from 97% to 100%: is it time for some linguistics?,” in International Conference on Intelligent Text Processing and Computational Linguistics, pp. 171–189, Springer, 2011. [17] R. D. Deshmukh and A. Kiwelekar, “Deep learning techniques for language processing,” in 2020 part of speech tagging by natural 2nd International Conference on Innovative Mechanisms for Industry Applications (ICIMIA), pp. 
{ "id": "1511.08308" }
2002.11833
Policy Evaluation Networks
Many reinforcement learning algorithms use value functions to guide the search for better policies. These methods estimate the value of a single policy while generalizing across many states. The core idea of this paper is to flip this convention and estimate the value of many policies, for a single set of states. This approach opens up the possibility of performing direct gradient ascent in policy space without seeing any new data. The main challenge for this approach is finding a way to represent complex policies that facilitates learning and generalization. To address this problem, we introduce a scalable, differentiable fingerprinting mechanism that retains essential policy information in a concise embedding. Our empirical results demonstrate that combining these three elements (learned Policy Evaluation Network, policy fingerprints, gradient ascent) can produce policies that outperform those that generated the training data, in zero-shot manner.
http://arxiv.org/pdf/2002.11833
Jean Harb, Tom Schaul, Doina Precup, Pierre-Luc Bacon
cs.LG, cs.AI, stat.ML
12 pages, 11 figures
null
cs.LG
20200226
20200226
# Policy Evaluation Networks

# Jean Harb 1 2 Tom Schaul 2 Doina Precup 2 Pierre-Luc Bacon 3

# Abstract

Many reinforcement learning algorithms use value functions to guide the search for better policies. These methods estimate the value of a single policy while generalizing across many states. The core idea of this paper is to flip this convention and estimate the value of many policies, for a single set of states. This approach opens up the possibility of performing direct gradient ascent in policy space without seeing any new data. The main challenge for this approach is finding a way to represent complex policies that facilitates learning and generalization. To address this problem, we introduce a scalable, differentiable fingerprinting mechanism that retains essential policy information in a concise embedding. Our empirical results demonstrate that combining these three elements (learned Policy Evaluation Network, policy fingerprints, gradient ascent) can produce policies that outperform those that generated the training data, in a zero-shot manner.

# 1. Introduction

Value functions are core quantities estimated by most reinforcement learning (RL) algorithms, and used to inform the search for good policies. Usually, value functions receive as input a description of the state (or state-action pair) and estimate the expected return for some distribution of inputs, conditioned on some behavior, or policy. When the policy changes, value estimates have to keep up.

One way to view this process is that value functions are trained using large amounts of transitions, coming from the set of policies that have been used by the agent in the past, without seeing from which policy each transition came. As the policy changes, the value function estimate is still influenced by previously seen policies, possibly in an unpredictable way, because the policy is not typically represented as an input. Our goal is to explore the idea of pushing the agent to generalize its value representation among different policies, by providing a policy description as input. We hypothesize that an agent trained in this way could predict the value of a new, unseen policy with sufficient accuracy to guide further search through policy space. In particular, if we were to train a network to estimate the value of a policy from an initial start-state distribution, we would have access to the gradient of the expected return with respect to the policy's parameters and could use it to improve the policy. The value function would not need to take states as input at all, relying instead on an encoding of the policy itself in order to predict the expected return of the entire episode, from the starting state distribution.

Our first contribution is to introduce the Policy Evaluation Network (PVN), a network that learns to predict the expected return of a policy in a fully differentiable way. Using a dataset of policy networks along with their returns, the PVN is trained with supervised learning. However, it is not trivial to embed a policy in a way that allows the embedding to be sufficiently informative for the value function, yet not too large. For example, the naive approach of flattening a policy network into a large vector loses information on the dependencies between layers, while the vector can still become intractably large.

The second major contribution of this paper is a new method to fingerprint a policy network in order to create an embedding that is sufficiently small, yet retains information about the network structure. Finally, we introduce a novel policy gradient algorithm, which performs gradient ascent through a learned PVN in order to obtain, in a zero-shot manner, policies that are superior to any of those evaluated during training. We present small-scale experiments to illustrate the behavior of our approach, then present experiments that show how fingerprinting allows us to evaluate not only linear policy networks, but also multi-layered ones.

1 Mila - McGill University, 2 DeepMind, 3 Mila - University of Montreal. Correspondence to: Jean Harb <[email protected]>.

# 2. Background

In RL, an agent's goal is to learn how to maximize its return by interacting with an environment, which is usually modelled as a Markov Decision Process (MDP) (S, A, P, r, γ), where S is a set of states, A is a set of actions, P : S × A × S → [0, 1] represents the environment dynamics, γ ∈ [0, 1] is a discount factor and r : S × A → R is the reward function. A randomized policy π : S × A → [0, 1] is a distribution over actions, conditioned on states.

In this paper, we consider the problem of finding the parameters θ ∈ R^k of a stochastic policy πθ which maximize the expected discounted return from a distribution over initial states d0: $J(\theta) = \mathbb{E}\big[\sum_{t=0}^{\infty} \gamma^t r(S_t, A_t) \,\big|\, S_0 \sim d_0\big]$. The policy gradient theorem (Sutton et al., 2000) shows that the gradient of this objective is given by:

$$\frac{\partial J(\theta)}{\partial \theta} = \sum_{s \in \mathcal{S}} d_{\pi_\theta}(s) \sum_{a \in \mathcal{A}} \frac{\partial \pi_\theta(a \mid s)}{\partial \theta}\, Q_{\pi_\theta}(s, a), \qquad (1)$$

where $Q_{\pi_\theta}(s, a)$ is the action-value function corresponding to policy πθ and $d_{\pi_\theta}$ is a discounted weighting of states:

$$d_{\pi_\theta}(s) = \sum_{s_0 \in \mathcal{S}} d_0(s_0) \sum_{t=0}^{\infty} \gamma^t\, P_{\pi_\theta}(S_t = s \mid S_0 = s_0).$$

Here, $P_{\pi_\theta}(S_t = s \mid S_0 = s_0)$ is the probability of reaching s from s0 at time step t, given the environment dynamics and the fact that actions are chosen from πθ.

The gradient (1) is typically estimated with samples taken under the undiscounted distribution induced by πθ in the MDP (Thomas, 2014). In actor-critic architectures (Sutton, 1984), a learned estimate of the action-value function Qπθ is usually maintained in a two-timescale manner (Konda & Tsitsiklis, 2000), i.e., by allowing the iterates of Qπθ to converge faster than the policy parameters. This requirement is crucial for the stability of such methods in practice (Fujimoto et al., 2018).

In this paper, we propose a new gradient-based optimization method which does not rely on maintaining a value function estimate in this two-timescale fashion. Our key observation is that rather than using a stochastic gradient of the performance measure, we can learn an estimate of J(θ) directly (as a neural network) and compute a deterministic gradient from it using any automatic differentiation package. We will now introduce the core idea of this approach.

# 3. Policy Evaluation Networks

By definition, a value function represents the expected return associated with a given policy. So, one could expect, as in regression methods, that training could be done on some policies and generalize to others. But value function approximation methods are not typically designed with the goal of leveraging any information about the policy itself in order to generalize to new policies. Conceptually, though, there exists a function capable of computing the expected return for any policy in a zero-shot fashion: the performance measure itself. In vector notation, we can write the performance measure explicitly as:

$$J(\theta) = d_0^\top \left(I - \gamma P_{\pi_\theta}\right)^{-1} r_{\pi_\theta}, \qquad (2)$$

where $P_{\pi_\theta}$ and $r_{\pi_\theta}$ are the transition matrix and reward model induced by πθ. Hence, there is a function of θ that can compute the value of πθ. Policy Evaluation Networks aim to approximate this function J, so that the gradient of a parameterized policy can be obtained instantaneously. Furthermore, we show that this can be achieved in a completely model-free fashion, without having to estimate Pπθ and rπθ, or to form any of the matrices in (2).

Distributional Predictions. While approximating J could be directly cast as a regression problem, we view it instead as a classification problem: that of predicting the bucket index corresponding to the estimated return of a given input policy. This strategy has proven to be effective in the training of deep neural networks, both in the supervised learning regime (van den Oord et al., 2016) and in reinforcement learning (Bellemare et al., 2017a). Predicting buckets instead of exact real values provides a regularization effect, similar to the idea of learning with auxiliary tasks (Caruana, 1997; Sutton et al., 2011; Jaderberg et al., 2017c; Lyle et al., 2019). Inspired by Bellemare et al. (2017b), we discretize the set of sampled returns into buckets of the same size. The PVN then outputs a probability distribution over the set of buckets, and the loss we use is the KL-divergence between the predicted and target distributions. In practice, the target is determined by rolling out multiple episodes per policy and discretizing the resulting returns.

Maximizing Undiscounted Returns. RL algorithms have traditionally been viewed as optimizing for the discounted return, with some given discount factor γ. However, the practice of policy gradient methods has evolved towards the use of discounting as a knob to control the bias-variance tradeoff (Xi-Ren Cao & Yat-Wah Wan, 1998; Baxter & Bartlett, 2001). Controlling the variance of the gradient estimator is of paramount importance, because it can grow exponentially in the horizon (Glynn & Olvera-Cravioto, 2019). Hence, rather than seeing the discount factor as being part of the task specification, practitioners tend to view it as a hyperparameter to be optimized (Schulman et al., 2016). We are ultimately interested in the undiscounted performance of a policy, as in Thomas (2014). Because Policy Evaluation Networks form a deterministic gradient of an approximated loss, they sidestep the need to pick a discount factor and optimize a proxy objective. Our experiments show that our approach is capable of optimizing for the undiscounted performance directly, and outperforms state-of-the-art discounted methods.

Learning Objective for the PVN. When the PVN (denoted ψ) provides a categorical output over return bins, we need to specify how a scalar expected return estimate is obtained from this output. A straightforward approach consists in using the midpoint of each interval as a representative of the return, which we then weight by the predicted probability (denoted by ψi(θ)) for this interval:

$$\hat{J}(\theta) := \sum_{i=0}^{m-1} \psi_i(\theta) \left(G_{\min} + \frac{h}{2} + i\,h\right), \qquad h := \frac{1}{m}\left(G_{\max} - G_{\min}\right),$$

where m is the number of bins, h is the width of each bin, and Gmin and Gmax are the minimum and maximum returns observed in the dataset.
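As a concrete illustration of this discretization and of the midpoint-weighted estimate, here is a minimal NumPy sketch; the function names are ours, not from the paper, and the 41 buckets mirror the Cart Pole experiments described later.

```python
import numpy as np

def empirical_return_pmf(returns, g_min, g_max, m):
    """Discretize sampled Monte-Carlo returns into m equal-width buckets: the target pmf."""
    counts, _ = np.histogram(returns, bins=m, range=(g_min, g_max))
    return counts / max(counts.sum(), 1)

def scalar_estimate(psi, g_min, g_max):
    """Midpoint-weighted scalar estimate J_hat(theta) from a categorical PVN output psi."""
    m = len(psi)
    h = (g_max - g_min) / m                      # width of each bucket
    midpoints = g_min + h / 2 + h * np.arange(m)
    return float(np.dot(psi, midpoints))

# Example: 100 Monte-Carlo returns for one policy, 41 buckets over the [0, 100] return range.
returns = np.random.default_rng(0).uniform(0, 100, size=100)
target = empirical_return_pmf(returns, g_min=0.0, g_max=100.0, m=41)
print(scalar_estimate(target, 0.0, 100.0))       # close to the empirical mean return
```

Feeding a PVN output through the same midpoint sum recovers a scalar prediction whenever one is needed, while training itself operates on the full bucket distribution.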
However, rather than minimizing the L2 distance between Ĵ(θ) and samples of J(θ), we use a classification loss: the KL-divergence between the output of the PVN and an empirical probability mass function over the discretized returns. We then view a PVN as a function of the form ψ(θ; w), where w are the parameters of the network itself and θ are the policy network parameters fed as input. We then use stochastic gradient descent to minimize the expected KL loss:

$$\min_{w} \; \mathbb{E}\left[ D_{\mathrm{KL}}\!\left( \hat{P}_\theta \,\big\|\, \psi(\theta; w) \right) \right],$$

where the expectation is taken under a given distribution over policy network weights (random in our experiments) and P̂θ is obtained from a histogram of the returns induced by a policy πθ.

# 4. Network Fingerprinting

While the performance measure J is by definition a function of the policy network parameters, we need to figure out how to provide the policy as an input to a PVN, in a way which does not require too much space and also leads to good generalization among policies. If we try to naively flatten the policy into a vector input, we run into several issues. First, the number of weights in the policy network can be very large, which requires a weight matrix in the input layer of the PVN of at least the same size. Second, the dependencies between layers contain valuable information, which is lost by a direct concatenation of the parameters.

Figure 1. Diagram of the complete Policy Evaluation Network setup, including Network Fingerprinting (in gray). The blue color of the probing states and PVN indicates that they can be seen as one set of weights, trained in unison.

To address these issues, we propose network fingerprinting: a methodology which allows us to learn a neural network embedding in a fully differentiable way, independently of the number of parameters in the policy network. The intuitive idea is to characterize the response of a policy network by feeding it a set of n learned probing states as input φ = [s1, . . . , sn] ∈ R^{n×k}, where k is the dimensionality of a state vector. φ is a synthetic input to the policy network whose sole purpose is to elicit a representative output. This output can be a distribution over discrete actions or continuous action vectors, which we then concatenate and use as input to the PVN. In discrete action spaces, we can view a PVN as a function ψ : R^{n|A|} → R^m, receiving an n|A|-dimensional vector of probabilities over actions and returning a distribution over m categories corresponding to the discretized return. The output of the PVN is then computed through the composition ψ(πθ(· | φ)).

Motivation for Network Fingerprinting. To see why this might be a viable method, we can show an equivalence between using n probing states and having n hidden units in the first layer of a PVN which evaluates linear policies. Consider a linear, deterministic policy in a 1D action space, defined by $\pi_\theta(s) = \theta^\top s \in \mathbb{R}$. Probing this policy in n states φ produces the fingerprint vector $y = \pi_\theta(\phi) = \theta^\top \phi \in \mathbb{R}^n$. On the other hand, feeding the full policy weights θ as input to a PVN with n hidden units in the first layer will produce the hidden activations $y = \theta^\top W \in \mathbb{R}^n$ at that layer. Clearly then, there is a choice of probing states φ and network initialisation W that produces exactly equivalent results.

Choosing Probing States. The choice of probing states is important. One possible approach is to randomly generate states: since we know the dimensionality of inputs to the policy, one could simply choose a sampling distribution and create fake states made of noise. Alternatively, we could also sample states from the environment, because we are interested in learning to evaluate the performance of the policy on states that we actually observe. Because the entire PVN architecture is differentiable, we can also use backpropagation to learn the probing states, as we can see in Fig. 1. These states can be viewed as weights of the PVN, helping extract information from the policy to improve prediction accuracy. In this paper, we adopt a hybrid approach. First, we initialize the probing states by sampling random noise. Then, we refine those probing states throughout learning, while learning the PVN weights jointly. When using the classification loss, our minimization problem becomes:

$$\min_{w, \phi} \; \mathbb{E}\left[ D_{\mathrm{KL}}\!\left( \hat{P}_\theta \,\big\|\, \psi\big(\pi_\theta(\cdot \mid \phi); w\big) \right) \right],$$

which we optimize by stochastic gradient descent.

Algorithm 1 Train PVN with fingerprinting
  Initialize PVN parameters w, φ; choose learning rate α
  for step s = 1 to S do
    Sample training batch {(πθi, Ri)}_{i=1..B} ∼ D
    L(w, φ) ← DKL(hist(RB) || ψ(πθB(·|φ); w))
    Update parameters w ← w − adam(α, ∇w L(w, φ))
    Update parameters φ ← φ − adam(α, ∇φ L(w, φ))
  end for

# 5. Policy Improvement By Gradient Ascent

Using a trained PVN, it is possible to do gradient ascent in the space of parameterized policies without having to interact with the environment. Because the PVN is an approximation to the real performance measure in functional form, we can directly apply automatic differentiation to obtain an exact deterministic gradient. For example, when using network fingerprinting inside a PVN ψ parameterized by w, our gradient ascent procedure computes the iterates:

$$\theta_{t+1} = \theta_t + \eta_t \nabla_{\theta_t} \psi\big(\pi_{\theta_t}(\cdot \mid \phi); w\big),$$

where ηt is the learning rate at time t and φ are the learned probe states.

Algorithm 2 Gradient Ascent Through a Trained PVN
  Θ ← ∅
  for ascent policies i = 1 to A do
    Initialize policy πi parameters θi using Glorot initialization
    D ← ∅
    for ascent steps j = 1 to T do
      θi ← θi + β ∇θi Ĵ(θi)
      Sample Monte-Carlo return GMC using policy πθi
      D ← D ∪ (θi, GMC)
    end for
    Θ ← Θ ∪ (argmaxθi(D), max(D))
  end for
  return argmaxθ(Θ)

A benefit of not having to interact with the environment to get fresh gradient estimates for every new policy is that we can do gradient ascent in parallel via our learned PVN. This feature is particularly useful to escape local maxima, as many concurrent solutions can be maintained simultaneously, as in (Jaderberg et al., 2017b). Statistical active learning techniques (Cohn et al., 1995) could make this process more efficient, but would require PVNs with epistemic uncertainty.

Because the gradient ascent procedure may lead us to regions of the policy space where fewer samples have been obtained to train the PVN, being able to maintain multiple candidate solutions in parallel is particularly useful. To further avoid falling too quickly into the out-of-distribution regime, we can also limit the number of gradient steps and periodically verify that the performance is still increasing.
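Putting Sections 4 and 5 together, the following is a rough PyTorch sketch of a fingerprinting PVN, one training step in the spirit of Algorithm 1, and one ascent iterate. The paper does not provide code, so layer sizes, helper names, and the assumption that the policy is an nn.Module mapping a batch of states to action probabilities are all ours; only the 20 probing states and 41 return buckets echo the experiments reported below.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PVN(nn.Module):
    """Policy Evaluation Network with learned probing states (network fingerprinting)."""
    def __init__(self, state_dim, n_actions, n_probes=20, n_bins=41, hidden=128):
        super().__init__()
        # Probing states are ordinary PVN parameters, trained jointly with the head (Fig. 1).
        self.probes = nn.Parameter(torch.randn(n_probes, state_dim))
        self.head = nn.Sequential(
            nn.Linear(n_probes * n_actions, hidden), nn.ReLU(),
            nn.Linear(hidden, n_bins),
        )

    def forward(self, policy):
        # Fingerprint: the policy's action probabilities on the probing states, concatenated.
        fingerprint = policy(self.probes).reshape(-1)
        return F.softmax(self.head(fingerprint), dim=-1)   # distribution over return buckets

def kl(p, q, eps=1e-8):
    """D_KL(p || q) for two categorical distributions."""
    p, q = p.clamp_min(eps), q.clamp_min(eps)
    return (p * (p.log() - q.log())).sum()

def pvn_train_step(pvn, optimizer, batch):
    """One training step: `batch` holds (policy, target_pmf) pairs, where target_pmf is the
    empirical histogram of that policy's Monte-Carlo returns. Gradients reach both the head
    weights w and the probing states phi; gradients into the policies are simply unused."""
    optimizer.zero_grad()
    loss = torch.stack([kl(target, pvn(policy)) for policy, target in batch]).mean()
    loss.backward()
    optimizer.step()
    return loss.item()

def ascent_step(pvn, policy, bin_midpoints, lr=1e-3):
    """One gradient-ascent iterate: theta <- theta + lr * d/dtheta J_hat(theta).
    `bin_midpoints` is a tensor of bucket midpoints, so the dot product is J_hat(theta)."""
    j_hat = (pvn(policy) * bin_midpoints).sum()
    grads = torch.autograd.grad(j_hat, list(policy.parameters()))
    with torch.no_grad():
        for p, g in zip(policy.parameters(), grads):
            p.add_(lr * g)
    return j_hat.item()
```

Since Algorithm 1 updates both w and φ with Adam, the optimizer here would typically be torch.optim.Adam over pvn.parameters(), which already contains the probing states.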
Experiments In order to test our proposed method, we first create a set of policies by choosing a neural network architecture and initializing it a number of times. Second, we obtain the expected returns of these policies by averaging returns from a number of Monte-Carlo rollouts. This results in a dataset, where the policies are the inputs and the expected returns are the targets. Finally, we create a PVN and regress on the dataset. We ran experiments on a variety of environments of differ- ent complexities. We start with a simple 2-state MDP with # 6.1. Polytope For the first set of experiments, we built a 2-state, 2-action MDP (details described in Appendix), for which we know the transition matrix and reward function. Having this in- formation allows us to calculate exact expected returns and policy gradients for any policy, ∇θJ(θ). Moreover, be- cause the MDP is so small, we can visualize the value polytope (Dadashi et al., 2019), as seen in Figure 2a. A value polytope maps the space of policies to value space, meaning that any one point in the polytope is a policy and its coordinates are its values in states 1 and 2. Note that the corners of the polytope represent deterministic policies. In this environment, a policy can be represented as a simple 2- dimensional vector, π = [P (a1|s1), P (a1|s2)], from which we can infer P (a2|s) = 1 − P (a1|s). To run our experiments, we first obtain a random set of 40 policies by sampling from a uniform distribution over the policy space. Then we evaluate them, using eq. (2), to get their values, and split the data into a training and test set. Policy Evaluation Networks 1.25 1.00 0.75 0.0 v"(s2) v"(s2) 0.25 0.00 0.25 08 06 00 08 v"sa) ta 02 L2 loss 04 2500 S000 7500 1000 12800 15000 17800 20000, Training Steps 6 v™(si) a (a) This polytope represents the space of policies (in green) in value space. The cor- ners (in orange) are deterministic policies. (b) Sampled training set (in blue) and test set (in red) policies in the value polytope. (c) Training curves on training and test sets. Figure 2. Visualization of a value polytope and a sampled dataset of policies. Training curves show a PVN can learn to generalize and predict the points in the test set. Fr ae rs Paras re Peres pees (oevery leererey Fe ae Saree area Pane apes Serres Saran Parra fal rer ery Preyer yy vs a ee “i a a tl hoe Fe ee Oe Pe Oe Se Se 3" ay a E Sj. y ey bw hw kOe éjÂ¥ + * be ke woe ft go ee Se SS ss ons or 2 vey ye ey a0 wily voy wey yD wk Yow eee ENN os ° maa|si) “ta 06 vrsi) “ta 02 D ) qi "a maalsi. (a) Exact gradient field of the value in policy space. (b) Approximated gradient field of the value in policy space, calculated from a trained PVN. (c) Comparison of gradient ascent through the exact and approximated value functions. Figure 3. Comparison of gradient fields of the exact and approximated value functions. The two axes in Figures 3a and 3b are the policy spaces in each of the two states, and the arrows represent the gradient ∂J(θ) . The blue and red dots are steps of the gradient ∂θ ascent process, mapped onto the polytope in Figure 3c. Both ascents were run for 100 steps. The two sets of policies can be seen in Figure 2b. Finally, we train a 2-layered neural network on this regression task, where the network takes policy vectors as input and outputs their expected return. is noisier, they still both converge to the optimal solution. Figure 3c compares the two paths in polytope space, to give a different perspective. 
Both ascents performed a series of 100 policy gradient steps. Policy Search. Once we have a learned PVN, we can per- form gradient ascent through the network to search for the optimal policy. First, we can compare the exact policy gra- dients ∂J(θ) and those calculated from the learned PVN ∂θ ∂ψ(θ) ∂θ . In Figures 3a and 3b, we show the difference in the discretized gradient fields of the true policy gradients and the learned ones. As we can see, the gradient fields are quite similar, with only a few differences around the edges of the policy space. Finally, to perform gradient ascent, we start with an arbitrary policy, in this case [0.5, 0], calculate its gradient through the learned PVN, take a small step in that direction and repeat the process with the new policy. In Figures 3a and 3b, we compare performing gradient ascent with the exact policy gradient and with our approximated one. The dots represent the paths taken by the gradient ascent process, and we can see that while the path given by the learned function These results indicate that PVNs can both approximate pol- icy values and be used to calculate policy gradients in a practical manner. # 6.2. Cart Pole In the next set of experiments, we move to the Cart Pole environment with policy functions that map states to action distributions. The environment was run using ’CartPole-v0’ in OpenAI Gym (Brockman et al., 2016). This environment has an observation space of 4 features and 2 discrete actions, accelerating to the left and right. The main goal of this section is to show that Network Fingerprinting is crucial to scaling PVNs to larger networks. Linear Policy. First we start with the simplest case, a policy network consisting of a single linear layer with a softmax over the logits. Policy Evaluation Networks (a) Linear Policy (b) MLP Policy Figure 4. Plots showing histograms of training policies’ expected returns and the performance of gradient ascent through a learned PVN. The effects of Network Fingerprinting are drastic when using MLP policies. To generate a dataset in this setting, we start by creating a set of randomly generated linear policy networks, and getting a set of Monte-Carlo returns for each network. As the exact expected returns cannot be known, we instead have to approximate them from samples. Of course, with a large enough number of rollouts, the approximation can become accurate. This is now our dataset, where the policy networks are the inputs and the discretized distributions of sampled returns are the desired outputs to the PVN. In these experiments, we generate a set of 1000 randomly initialized policies and run each policy for 100 episodes. The return distributions are discretized into 41 evenly sized bins. When generating randomly initialized networks, some of the networks can have near optimal performance, making it difficult to show that our method can lead to policies better than the training set. In order to deal with this, we filter out of the training set any policy with an expected return above a specified level. This allows us to see if the PVN generalizes outside of the distribution of policies seen in training and whether the gradient ascent can be effective and find substantially better policies. In our experiments, the training set consists of randomly initialized policies with an expected undiscounted return no higher than 30, while the maximum possible return is 100. Once we have a trained PVN, we then perform gradient ascent for 100 steps on 5 randomly sampled starting policies. 
When comparing methods, the same 5 policies are given to all. This allows for a fairer comparison, as one policy might be easier to improve than another.

The remaining figures are designed in two parts. The left subplot is a histogram showing the distribution of expected returns achieved by the training set policies. In green is the training data used by the PVN to train. In red is the data that was initially generated but then thrown away, as it was above our set expected return limit of 30. This allows us to show the performance achievable by gradient ascent relative to the training data. The right subplot is the performance of policies as they perform a number of gradient steps, in a lighter color. The darker line is the average policy ascent path. The starting policies are randomly selected by network initialization. Finally, the red dashed line marks the training set performance limit of 30.

We compare training PVNs in 2 ways: with and without Network Fingerprinting. As discussed in Section 4, performing network fingerprinting on linear networks is equivalent to flattening the network's weights and feeding them directly as input to a PVN. As expected, both the flattened weights and network fingerprinting worked similarly. Furthermore, gradient ascent using either approach led to policies with an expected return of around 70, well above the training set limit of 30.

MLP Policy. The main challenge of building PVNs was to build a scalable mechanism allowing us to give a multi-layered policy network (MLP) as input to a PVN. In these experiments we will show that network fingerprinting allows us to do so. We built the network fingerprinting with 20 probing states, leading to a policy fingerprint of 40 dimensions. These probing states were randomly initialized and trained jointly with the PVN. We performed 400 steps of gradient ascent. The rest of the training procedure is exactly as described for the linear policy.

In Figure 4b, we can clearly see that giving a flattened neural net as input to a PVN does not work. On the other hand, network fingerprinting sees no scalability issues and allows the policy to improve up to the optimal policy. Furthermore, gradient ascent was consistently successful across the different starting policies, improving to near optimal performance from all starting points.

Comparing the histograms from Figures 4a and 4b, we can notice that the distribution of generated linear policies has a long tail, with some randomly generated policies achieving near optimal performance. On the other hand, the distribution of randomly generated MLP policies has a much lower ceiling of performance. As networks become larger, it becomes more difficult to randomly generate good policies.

# 6.3. MuJoCo - Swimmer

Our last set of experiments is on Swimmer, a MuJoCo (Todorov et al., 2012) continuous control environment where the agent is a small worm that has to move forward by moving its joints. We test our approach on this task to show whether PVNs can scale to larger experiments. Also, as explained in Section 3, state of the art RL algorithms tend to do poorly on Swimmer, compared to other MuJoCo tasks, because of the discounting. Agents optimizing for the discounted return learn to act in a myopic way, sacrificing long term gains, due to the discount. Since our approach can be used without discounting, we can avoid this problem by optimizing for the true objective directly.

In these experiments, we trained PVNs with Network Fingerprinting on a dataset of 2000 deterministic policies and 500 rollouts each to estimate their returns. The return distributions are discretized into 51 evenly sized bins. Once the PVN is trained, we do gradient ascent with 5 randomly initialized starting policies for 1000 steps each. The rest of the algorithmic details are in the Appendix.

In Figure 5, we compare the performance of the 5 policy ascents with 3 baselines, DDPG (Lillicrap et al., 2015), SAC (Haarnoja et al., 2018) and TD3 (Fujimoto et al., 2018), which are state of the art model-free RL algorithms. We can see that the set of policies all finished with an expected return above all baselines. The best of the curves achieved expected returns around 250, substantially outperforming other algorithms.

Figure 5. Gradient ascent performed on Swimmer, compared against baselines (horizontal dashed lines); their scores were taken from https://spinningup.openai.com/en/latest/spinningup/bench.html. Left: histogram of training set values. Right: performance through gradient ascent.

# 7. Related Work

Methods that aim to solve RL problems by searching directly in policy space have a long history, and often different terminology. Sometimes they are characterized as black-box optimization, as they treat the mapping from policy parameters to return (or "fitness") as a black box, sometimes as evolutionary algorithms, with recent incarnations in (Salimans et al., 2017; Such et al., 2017; Mania et al., 2018). They are related to this work in two dimensions: a number of black-box methods also pursue a form of policy gradient ascent (Spall et al., 1992; Peters et al., 2010; Wierstra et al., 2014), generally by employing a noisy form of finite differences. In theory, one could use finite differences to compute the exact gradient; however, there are too many parameters to make this a tractable solution. Our method, on the other hand, is a low-variance but biased estimate of a policy's expected return, which in turn gives us a biased gradient. The second dimension of similarity is the analogue of our PVNs, also called 'surrogate models' or 'response surfaces', which aim to generalize across the fitness landscape, for example (Booker et al., 1998; Box & Wilson, 1951; Moore & Schneider, 1996; Ong et al., 2003; Loshchilov et al., 2012); see (Jones, 2001) for an overview. In contrast to our approach, which explicitly introduces an inductive bias to make suitable generalizations across policies, these methods make fewer assumptions and model only (often local) surface-level regularities of the fitness.

Our work has some similarities to synthetic gradients (Jaderberg et al., 2017a), which are networks that learn to output another network's gradients. However, our networks are never trained to output gradients; these are available to us as a byproduct of the architecture. Generalized policy improvement (Barreto et al., 2017) finds a better policy as a mixture of a set of policies, which is a similar objective to ours, but the methodology is very different, as it relies on having Q-values for each policy.

Universal Value Function Approximators (UVFA, Schaul et al., 2015) are value functions that generalize across both states and goals. More specifically, since many policies can achieve the same goals, UVFAs output the value of the optimal policy for a certain goal. In contrast, our method generalizes across policies, regardless of their optimality, and is less complex because it does not depend on state.

Finally, our work can be considered a case of off-policy learning (Sutton & Barto (2018), Precup (2000), Munos et al. (2016)), a class of algorithms which allow one to evaluate and improve policies given data generated from a set of different policies. One major difference is that our method only looks at expected returns of policies, as opposed to all transitions generated by the set of policies, as is usually done.

# 8. Conclusion and future work

We introduced a network that can generalize in policy space, by taking policy fingerprints as inputs. These fingerprints are differentiable policy embeddings obtained by inspecting the policy's behaviour in a set of key states. We also described a novel policy gradient algorithm which is performed directly through the Policy Evaluation Network, allowing the computation of a policy gradient estimate for any policy, even if it has never been seen by the network.

Extension to value functions. Until now, we have only looked at Policy Evaluation Networks which output the expected return of a policy, from the initial state distribution. While this has benefits over the usual way of doing policy gradient, there are also disadvantages. Traditional RL algorithms can usually learn on-line, that is, learn as more samples are seen, whereas our method requires entire trajectories before learning. This, however, does not have to be the case. Our method is extendable to the state-dependent value function setting V(s, θ). This can give rise to zero-shot policy evaluation for a variety of algorithms. In actor-critic algorithms for instance, when the policy updates, the value function is lagging behind and requires samples from the new policy before it becomes accurate. Our method would allow a value function to generalize to unseen policies, meaning when the policy is updated, the value function would immediately update as well. This has the potential to improve data efficiency, as policies would not have to wait for value functions to catch up.

Inductive Biases. We designed PVNs in the simplest way possible, as feed-forward neural networks. However, the structure of the network is very important. Inductive biases incorporated into the architecture can substantially improve data efficiency and generalization in policy space (Wolpert & Macready, 1997). There is much structure in MDPs that can be leveraged. Instead of simply building an MLP, one can build a state transition model and use this information to make value predictions. Other works, such as TreeQN (Farquhar et al., 2018), the Predictron (Silver et al., 2017) and Value Prediction Networks (Oh et al., 2017) are examples of value functions built with inductive biases, looking to improve generalization.

# ACKNOWLEDGMENTS

We'd like to acknowledge Simon Osindero, Kory Mathewson, Tyler Jackson, Chantal Remillard and Sasha Vezhnevets for useful discussions and feedback on the paper. Most importantly, we'd like to thank Emma Brunskill for inviting Jean Harb to her lab, where this work was started. Pierre-Luc Bacon is supported by the Facebook CIFAR AI chair program and IVADO. Doina Precup is a CIFAR fellow and is supported by the Canada CIFAR chair program.

# References

Barreto, A., Dabney, W., Munos, R., Hunt, J. J., Schaul, T., van Hasselt, H. P., and Silver, D. Successor features for transfer in reinforcement learning. In Advances in neural information processing systems, pp. 4055–4065, 2017.

Baxter, J. and Bartlett, P. L. Infinite-horizon policy-gradient estimation. J. Artif. Int. Res., 15(1):319–350, November 2001. ISSN 1076-9757.

Bellemare, M. G., Dabney, W., and Munos, R. A distributional perspective on reinforcement learning. In Proceedings of the 34th International Conference on Machine Learning, ICML 2017, Sydney, NSW, Australia, 6-11 August 2017, pp. 449–458, 2017a.

Bellemare, M. G., Dabney, W., and Munos, R. A distributional perspective on reinforcement learning. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pp. 449–458. JMLR.org, 2017b.

Booker, A., Dennis, J. J., Frank, P., and Serafini, D. Optimization using surrogate objectives on a helicopter test example. Computational Methods in Optimal Design and Control, 1998.

Box, G. E. P. and Wilson, K. B. On the experimental attainment of optimum conditions. Journal of the Royal Statistical Society, 13(1):1–45, 1951.

Brockman, G., Cheung, V., Pettersson, L., Schneider, J., Schulman, J., Tang, J., and Zaremba, W. OpenAI gym. arXiv preprint arXiv:1606.01540, 2016.

Caruana, R. Multitask learning. Machine Learning, 28(1):41–75, Jul 1997.

Cohn, D. A., Ghahramani, Z., and Jordan, M. I. Active learning with statistical models. Journal of Artificial Intelligence Research, 4:129–145, 1995.

Dadashi, R., Taiga, A. A., Roux, N. L., Schuurmans, D., and Bellemare, M. G. The value function polytope in reinforcement learning. arXiv preprint arXiv:1901.11524, 2019.

Farquhar, G., Rocktäschel, T., Igl, M., and Whiteson, S. TreeQN and ATreeC: Differentiable tree planning for deep reinforcement learning. In International Conference on Learning Representations, 2018.

Fujimoto, S., van Hoof, H., and Meger, D. Addressing function approximation error in actor-critic methods. In Proceedings of the 35th International Conference on Machine Learning, ICML 2018, Stockholmsmässan, Stockholm, Sweden, July 10-15, 2018, pp. 1582–1591, 2018.

Glynn, P. W. and Olvera-Cravioto, M. Likelihood ratio gradient estimation for steady-state parameters. Stochastic Systems, 9(2):83–100, June 2019.

Munos, R., Stepleton, T., Harutyunyan, A., and Bellemare, M. Safe and efficient off-policy reinforcement learning. In Advances in Neural Information Processing Systems, pp. 1054–1062, 2016.

Haarnoja, T., Zhou, A., Abbeel, P., and Levine, S. Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor. arXiv preprint arXiv:1801.01290, 2018.

Oh, J., Singh, S., and Lee, H. Value prediction network. In Advances in Neural Information Processing Systems, pp. 6118–6128, 2017.

Jaderberg, M., Czarnecki, W. M., Osindero, S., Vinyals, O., Graves, A., Silver, D., and Kavukcuoglu, K. Decoupled neural interfaces using synthetic gradients. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pp. 1627–1635. JMLR.org, 2017a.

Jaderberg, M., Dalibard, V., Osindero, S., Czarnecki, W. M., Donahue, J., Razavi, A., Vinyals, O., Green, T., Dunning, I., Simonyan, K., et al. Population based training of neural networks. arXiv preprint arXiv:1711.09846, 2017b.

Ong, Y. S., Nair, P. B., and Keane, A. J. Evolutionary optimization of computationally expensive problems via surrogate modeling. AIAA journal, 41(4):687–696, 2003.

Peters, J., Mulling, K., and Altun, Y. Relative entropy policy search.
In Twenty-Fourth AAAI Conference on Artificial Intelligence, 2010. Precup, D. Eligibility traces for off-policy policy evalua- tion. Computer Science Department Faculty Publication Series, pp. 80, 2000. Jaderberg, M., Mnih, V., Czarnecki, W. M., Schaul, T., Leibo, J. Z., Silver, D., and Kavukcuoglu, K. Reinforce- ment learning with unsupervised auxiliary tasks. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Confer- ence Track Proceedings, 2017c. Jones, D. R. A taxonomy of global optimization methods based on response surfaces. Journal of Global Optimiza- tion, 21:345–383, 2001. Konda, V. R. and Tsitsiklis, J. N. Actor-critic algorithms. In NIPS, pp. 1008–1014, 2000. Salimans, T., Ho, J., Chen, X., Sidor, S., and Sutskever, I. Evolution strategies as a scalable alternative to rein- forcement learning. arXiv preprint arXiv:1703.03864, 2017. Schaul, T., Horgan, D., Gregor, K., and Silver, D. Uni- In International versal value function approximators. Conference on Machine Learning, pp. 1312–1320, 2015. Schulman, J., Moritz, P., Levine, S., Jordan, M. I., and Abbeel, P. High-dimensional continuous control using generalized advantage estimation. In International Con- ference on Learning Representations (ICLR), 2016. Lillicrap, T. P., Hunt, J. J., Pritzel, A., Heess, N., Erez, T., Tassa, Y., Silver, D., and Wierstra, D. Continuous control with deep reinforcement learning. arXiv preprint arXiv:1509.02971, 2015. Loshchilov, I., Schoenauer, M., and Sebag, M. Self-adaptive surrogate-assisted covariance matrix adaptation evolution strategy. In Proceedings of the 14th annual conference on Genetic and evolutionary computation, pp. 321–328, 2012. Silver, D., van Hasselt, H., Hessel, M., Schaul, T., Guez, A., Harley, T., Dulac-Arnold, G., Reichert, D., Rabinowitz, N., Barreto, A., et al. The predictron: End-to-end learning and planning. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pp. 3191– 3199. JMLR. org, 2017. Spall, J. C. et al. Multivariate stochastic approximation using a simultaneous perturbation gradient approximation. IEEE transactions on automatic control, 37(3):332–341, 1992. Lyle, C., Bellemare, M. G., and Castro, P. S. A compara- tive analysis of expected and distributional reinforcement In Proceedings of the AAAI Conference on learning. Artificial Intelligence, volume 33, pp. 4504–4511, 2019. Such, F. P., Madhavan, V., Conti, E., Lehman, J., Stanley, K. O., and Clune, J. Deep neuroevolution: Genetic algo- rithms are a competitive alternative for training deep neu- ral networks for reinforcement learning. arXiv preprint arXiv:1712.06567, 2017. Mania, H., Guy, A., and Recht, B. Simple random search provides a competitive approach to reinforcement learn- ing. arXiv preprint arXiv:1803.07055, 2018. Moore, A. W. and Schneider, J. Memory-based stochastic optimization. In Advances in Neural Information Pro- cessing Systems, 1996. Sutton, R. S. Temporal credit assignment in reinforce- ment learning. PhD thesis, University of Massachusetts Amherst, 1984. Sutton, R. S. and Barto, A. G. Reinforcement learning: An introduction. MIT press, 2018. Policy Evaluation Networks Sutton, R. S., McAllester, D. A., Singh, S. P., and Mansour, Y. Policy gradient methods for reinforcement learning with function approximation. In Advances in neural in- formation processing systems, pp. 1057–1063, 2000. Sutton, R. S., Modayil, J., Delp, M., Degris, T., Pilarski, P. M., White, A., and Precup, D. 
Horde: a scalable real-time architecture for learning knowledge from unsupervised sensorimotor interaction. In 10th International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2011), Taipei, Taiwan, May 2-6, 2011, Volume 1-3, pp. 761–768, 2011.

Thomas, P. Bias in natural actor-critic algorithms. In Proceedings of the 31st International Conference on Machine Learning, ICML 2014, Beijing, China, 21-26 June 2014, pp. 441–448, 2014.

Todorov, E., Erez, T., and Tassa, Y. MuJoCo: A physics engine for model-based control. In 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 5026–5033. IEEE, 2012.

van den Oord, A., Kalchbrenner, N., and Kavukcuoglu, K. Pixel recurrent neural networks. In Proceedings of the 33rd International Conference on Machine Learning, ICML 2016, New York City, NY, USA, June 19-24, 2016, pp. 1747–1756, 2016.

Wierstra, D., Schaul, T., Glasmachers, T., Sun, Y., Peters, J., and Schmidhuber, J. Natural evolution strategies. The Journal of Machine Learning Research, 15(1):949–980, 2014.

Wolpert, D. H. and Macready, W. G. No free lunch theorems for optimization. IEEE Transactions on Evolutionary Computation, 1(1):67–82, 1997.

Xi-Ren Cao and Yat-Wah Wan. Algorithms for sensitivity analysis of Markov systems through potentials and perturbation realization. IEEE Transactions on Control Systems Technology, 6(4):482–494, July 1998. ISSN 2374-0159.

# Supplementary Material

# A. Polytope Experiment Details

# A.1. Markov Decision Process Specifications

To describe the 2-state MDP used in our polytope experiments, we use the following convention (Dadashi et al., 2019):

r(s_i, a_j) = r̂[i × |A| + j]
P(s_k | s_i, a_j) = P̂[i × |A| + j][k]

where our MDP has the following properties:

|A| = 2, γ = 0.8
r̂ = [−0.45, −0.1, 0.5, 0.5]
P̂ = [[0.6, 0.4], [0.99, 0.01], [0.2, 0.8], [0.99, 0.01]]

# A.2. Visualization of Predictions

In the polytope experiments, 40 points were randomly sampled and split into equally sized training and test sets of 20 points. In Figure A.1a, we show the sampled points projected in the value polytope. In contrast, Figure A.1b shows the values of the points predicted by a trained PVN.

Figure A.1. Results of training a network to predict policy values, projected in a value polytope. (a) Exact policy values. (b) Policy values predicted by the trained PVN. The training set is in blue and the test set is in red.

# B. Hyperparameters

Table B.1. Hyperparameters used in various experiments. The last three rows are only applicable when using probing states.

| Parameter | Polytope | Cartpole Linear | Cartpole MLP | Swimmer |
|---|---|---|---|---|
| Policy architecture (hidden layer sizes) | N/A | [] | [30] | [30] |
| PVN architecture (hidden layer sizes) | [50] | [80] | [80] | [80] |
| NN activations | ReLU | ReLU | ReLU | ReLU |
| Temperature | N/A | 3 | 3 | 3 |
| # Bins | N/A | 41 | 41 | 51 |
| Grad ascent learning rate | 0.1 | 0.001 | 0.001 | 0.002 |
| Grad ascent optimizer | SGD | Adam | Adam | Adam |
| Grad ascent steps | 100 | 100 | 400 | 1000 |
| Discount factor (γ) | 0.8 | 1 | 1 | 1 |
| Batch size | 32 | 32 | 32 | 32 |
| PVN learning rate | 0.01 | 0.003 | 0.003 | 0.003 |
| PVN optimizer | RMSProp | Adam | Adam | Adam |
| Training steps | 20000 | 3000 | 3000 | 5000 |
| # Policies | 20 | 1000 | 1000 | 2000 |
| # Returns per policy | N/A | 100 | 100 | 500 |
| Training set performance limit | N/A | 30 | 30 | N/A |
| Train probing states | N/A | True | True | True |
| Randomly generate probing states | N/A | True | True | True |
| # Probing states | N/A | 20 | 20 | 20 |
# C. Dataset collection algorithm

# Algorithm 3 Collect dataset of policies for PVN training

    Choose number of policies K and rollouts B
    D ← ∅
    for policy i = 1 to K do
        Initialize policy πi parameters θi using Glorot Initialization
        R ← ∅
        for rollout b = 1 to B do
            Sample Monte-Carlo return G^MC using policy πθi
            R ← R ∪ G^MC
        end for
        D ← D ∪ (θi, R)
    end for
    return D
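A possible Python rendering of Algorithm 3 is sketched below (not the authors' code). The Gymnasium-style reset/step API, the binning range, and the linear policy in the usage comment are our assumptions.

```python
import numpy as np
import torch
import torch.nn as nn

def glorot_linear(n_in, n_out):
    """A linear policy layer with Glorot (Xavier) initialization."""
    layer = nn.Linear(n_in, n_out)
    nn.init.xavier_uniform_(layer.weight)
    nn.init.zeros_(layer.bias)
    return layer

def monte_carlo_return(env, policy, gamma=1.0):
    """Run one episode and return its (undiscounted, by default) return.
    Uses the Gymnasium 5-tuple step API."""
    obs, _ = env.reset()
    done, total, discount = False, 0.0, 1.0
    while not done:
        with torch.no_grad():
            probs = torch.softmax(policy(torch.as_tensor(obs, dtype=torch.float32)), dim=-1)
        action = int(torch.multinomial(probs, 1))
        obs, reward, terminated, truncated, _ = env.step(action)
        done = terminated or truncated
        total += discount * reward
        discount *= gamma
    return total

def collect_dataset(env, make_policy, K=1000, B=100, n_bins=41, max_return=100.0):
    """Algorithm 3: K randomly initialized policies, B rollouts each,
    stored as (policy parameters, binned return distribution) pairs."""
    dataset = []
    bin_edges = np.linspace(0.0, max_return, n_bins + 1)
    for _ in range(K):
        policy = make_policy()                         # fresh Glorot-initialized policy
        returns = [monte_carlo_return(env, policy) for _ in range(B)]
        hist, _ = np.histogram(returns, bins=bin_edges)
        dataset.append((policy.state_dict(), hist / hist.sum()))
    return dataset

# Example usage (hypothetical):
#   import gymnasium as gym
#   env = gym.make("CartPole-v1")
#   data = collect_dataset(env, make_policy=lambda: glorot_linear(4, 2), K=10, B=5)
```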
{ "id": "1703.03864" }
2002.11794
Train Large, Then Compress: Rethinking Model Size for Efficient Training and Inference of Transformers
Since hardware resources are limited, the objective of training deep learning models is typically to maximize accuracy subject to the time and memory constraints of training and inference. We study the impact of model size in this setting, focusing on Transformer models for NLP tasks that are limited by compute: self-supervised pretraining and high-resource machine translation. We first show that even though smaller Transformer models execute faster per iteration, wider and deeper models converge in significantly fewer steps. Moreover, this acceleration in convergence typically outpaces the additional computational overhead of using larger models. Therefore, the most compute-efficient training strategy is to counterintuitively train extremely large models but stop after a small number of iterations. This leads to an apparent trade-off between the training efficiency of large Transformer models and the inference efficiency of small Transformer models. However, we show that large models are more robust to compression techniques such as quantization and pruning than small models. Consequently, one can get the best of both worlds: heavily compressed, large models achieve higher accuracy than lightly compressed, small models.
http://arxiv.org/pdf/2002.11794
Zhuohan Li, Eric Wallace, Sheng Shen, Kevin Lin, Kurt Keutzer, Dan Klein, Joseph E. Gonzalez
cs.CL, cs.LG
ICML 2020
null
cs.CL
20200226
20200623
# Train Large, Then Compress: Rethinking Model Size for Efficient Training and Inference of Transformers

# Zhuohan Li * 1 Eric Wallace * 1 Sheng Shen * 1 Kevin Lin * 1 Kurt Keutzer 1 Dan Klein 1 Joseph E. Gonzalez 1

# Abstract

Since hardware resources are limited, the objective of training deep learning models is typically to maximize accuracy subject to the time and memory constraints of training and inference. We study the impact of model size in this setting, focusing on Transformer models for NLP tasks that are limited by compute: self-supervised pretraining and high-resource machine translation. We first show that even though smaller Transformer models execute faster per iteration, wider and deeper models converge in significantly fewer steps. Moreover, this acceleration in convergence typically outpaces the additional computational overhead of using larger models. Therefore, the most compute-efficient training strategy is to counterintuitively train extremely large models but stop after a small number of iterations. This leads to an apparent trade-off between the training efficiency of large Transformer models and the inference efficiency of small Transformer models. However, we show that large models are more robust to compression techniques such as quantization and pruning than small models. Consequently, one can get the best of both worlds: heavily compressed, large models achieve higher accuracy than lightly compressed, small models.

Figure 1. Under the usual presumption that models are trained to convergence, only small models that are fast-to-execute are feasible in resource-constrained settings. Our work shows that the most compute-efficient training scheme is instead to train very large models, stop them well short of convergence, and then heavily compress them to meet test-time constraints. (Diagram: common practice is to train a small model, stop training when converged, and lightly compress; the optimal scheme is to train a large model, stop training early, and heavily compress.)

# 1. Introduction

In the current deep learning paradigm, using more compute (e.g., increasing model size, dataset size, or training steps) typically leads to higher model accuracy (Brock et al., 2019; Raffel et al., 2019). This phenomenon is exacerbated by the recent success of self-supervised pretraining (Devlin et al., 2019; Hénaff et al., 2019), which allows training to scale to massive amounts of unlabeled data and very large neural models. Consequently, computational resources are increasingly the critical constraint on improving model accuracy. This constraint causes the (often implicit) goal of model training to be maximizing compute efficiency: how to achieve the highest model accuracy given a fixed amount of hardware and training time.

Maximizing compute efficiency requires rethinking common assumptions about model training. In particular, there is typically an implicit assumption that models must be trained until convergence, which makes larger models appear less viable for limited compute budgets. We challenge this assumption by demonstrating the opportunity to increase model size at the cost of convergence. Concretely, we show that the fastest way to train Transformer models (Vaswani et al., 2017) is to substantially increase model size but stop training very early.

*Equal contribution 1UC Berkeley. Correspondence to: Zhuohan Li <[email protected]>.
Proceedings of the 37 th International Conference on Machine Learning, Vienna, Austria, PMLR 119, 2020. Copyright 2020 by the author(s). In our experiments, we vary the width and depth of Trans- former models and evaluate their training time and accu- racy on self-supervised pretraining (ROBERTA (Liu et al., 2019b) trained on Wikipedia and BookCorpus) and machine translation (WMT14 English→French). For these tasks, we first show that larger models converge to lower validation error in fewer gradient updates than smaller models (Sec- tion 3). Moreover, this increase in convergence outpaces the additional computational overhead of using larger models— the most compute-efficient models are extremely large and stopped well short of convergence (e.g., Figure 2, left). We Rethinking Model Size for Efficient Training and Inference of Transformers Effect of ROBERTa Depth on Training = r=) Model Depth —3 Layers zz —6 Layers 3 =12 Layers >. —18 Layers oa 8 —24 Layers oa < 2 & 3 Ss 6 = a = 4 250000 500000 750000 1000000 Wall Clock (Seconds) 0 MNLI Validation Accuracy Effect of ROBERTa Depth on Pruning 0.85 0.80 Original Size -~= 3 Layers 6 Layers 15 -e 12 Layers +18 Layers -= 24 Layers 50 100 150 Number of Parameters (Millions) S N a 200 (a) (b) Figure 2. Increasing Transformer model size results in lower validation error as a function of wall-clock time and better test-time accuracy for a given inference budget. (a) demonstrates the training speedup for ROBERTA models of different sizes on the masked language modeling pretraining task. In (b), we take ROBERTA checkpoints that have been pretrained for the same amount of wall-clock time and finetune them on a downstream dataset (MNLI). We then iteratively prune model weights to zero and find that the best models for a given test-time memory budget are ones which are trained large and then heavily compressed. also show that this acceleration in wall-clock convergence is largely a function of parameter count and only weakly influenced by model width, depth, and batch size. Although larger models train faster, they also increase the computational and memory requirements of inference. This increased cost is especially problematic in real-world appli- cations, where the cost of inference dominates the cost of training (Jouppi et al., 2017; Crankshaw et al., 2017; Metz, 2017). However, we show that for ROBERTA, this apparent trade-off can be reconciled with compression: large models are considerably more robust to compression as compared to small models (Section 4). Thus, large, heavily compressed models outperform small, lightly compressed models using comparable inference costs (e.g., Figure 2, right). We finally analyze when and why large models train fast and compress well (Section 5). We show that the optimal model size is closely linked to the dataset size. In particular, large models perform favorably in big data settings where overfitting is a limited concern. We then analyze why larger models are more compressible by measuring the difference in weights when using quantized or sparse weight matrices. This error decreases as model size increases, i.e., greater overparameterization leads to easy-to-compress weights. # 2. Experimental Setup # 2.1. Tasks, Models, and Datasets We train state-of-the-art models for two NLP tasks: self- supervised pretraining using masked language modeling and high-resource machine translation. 
We chose these tasks because accuracy continues to improve as models are made larger (Shazeer et al., 2018), trained for more steps (Liu et al., 2019b), and trained using larger batches (Raffel et al., 2019). Thus, a critical factor in improving accuracy for these tasks is to maximize the compute efficiency of training. Self-supervised Pretraining (MLM) We closely follow the pretraining setup and model from ROBERTA (Liu et al., 2019b) with a few minor exceptions. We move the model’s layer normalization layers (Ba et al., 2016) to the input of every sub-layer (often called pre-norm). This slightly improves results and stabilizes training (Wang et al., 2019b). We also use an input sequence length of 128 and a batch size of 8192, unless otherwise noted. For ROBERTA, we vary the depth in {3, 6, 12, 18, 24}, and the hidden size in {256, 512, 768, 1024, 1536}. The dataset for pretraining ROBERTA is not publicly avail- able. We instead follow BERT (Devlin et al., 2019) and con- catenate the BookCorpus (Zhu et al., 2015) and a Wikipedia dump to use for training. Since the BookCorpus is no longer publicly available, we follow Devlin et al. (2019) and crawl http://smashwords.com. Our final dataset is roughly 3.4 billion words in total. We hold out a random 0.5% of the data for validation and report the masked language model- ing (MLM) perplexity on this data. We also evaluate the model by finetuning on MNLI (Williams et al., 2018) and SST-2 (Socher et al., 2013). We found the variance in accu- racy for these two tasks to be lower than the other GLUE tasks (Wang et al., 2019a). Rethinking Model Size for Efficient Training and Inference of Transformers Machine Translation For machine translation (MT) we train the standard Transformer architecture and hyperpa- rameters on the WMT14 English→French dataset. We use the standard dataset splits: 36M sentences for train- ing, newstest2013 for validation, and newstest2014 for testing. We follow standard practice and report tokenized case-sensitive BLEU (Papineni et al., 2002) with compound splitting (Vaswani et al., 2017). We vary the model depth in {2, 6, 8} and hidden size in {128, 256, 512, 1024, 2048}. # 2.2. Evaluation Metrics: FLOPs and Wall-Clock Time Recent work on resource-constrained training uses the total number of training steps (Li et al., 2020) or the total num- ber of training FLOPs (Schwartz et al., 2019; Clark et al., 2020) as the main evaluation metric. These metrics do not adequately capture the true training time. In particular, re- porting gradient steps does not account for the cost of using bigger batches or models. Moreover, although reporting FLOPs is useful for comparison as it is hardware-agnostic, it neglects the fact that parallel operations are significantly cheaper than sequential operations on modern hardware. We instead directly report wall-clock time as our main eval- uation metric.1 Since the runtime varies across machines (the hardware setups are different, the jobs are not isolated, etc.), we use a single machine to benchmark the time per gradient step for each model size. In particular, we train models and wait for the time per gradient step to stabi- lize, and then we use the average time over 100 steps to calculate the training duration. We conduct the timing on one NVIDIA 16GB V100 GPU and use gradient accumu- lation to fit larger models and batches. In order to be fair to smaller models, we increase the batch size to the largest size that fits in memory. 
This means that smaller models use fewer gradient accumulation steps and thus take less time per gradient step (which we confirmed empirically). We use Tensor2Tensor (Vaswani et al., 2018) for MT and fairseq (Ott et al., 2019) for RoBERTa. We train using a mix of v3-8 TPUs and 8xV100 GPUs for both tasks. # 3. Larger Models Train Faster Effect of ROBERTa Depth 10 Model Depth —3 Layers = —6 Layers 3 —12 Layers Ss —18 Layers @ 8- —24 Layers ao = =! = S = 6 Zz oo > 4 120000 0 40000 80000 Number of Gradient Steps Figure 3. Deeper ROBERTA models converge faster than shallow models with respect to the gradient steps (wall-clock time shown in Figure 2, left). Increase Model Width and Sometimes Depth For the masked language modeling task, the validation perplexity weakly depends on the shape of the model. Instead, the total number of model parameters is the key determiner of the convergence rate. Thus, increasing either the width or the depth is effective at accelerating model training. On the other hand, the preferred way to scale models for MT is to increase their width as wider models usually outperform deep models in final performance (Vaswani et al., 2017; Shazeer et al., 2018). Increase Model Size, Not Batch Size Another factor that affects the training efficiency is the batch size. In particu- lar, there is a trade-off between using fast-to-execute small batches and slow-but-accurate large batches. We study the effect of scaling batch size because it provides an alternative to scaling model size. In particular, what if we use gradi- ent accumulation to increase the batch size rather than the model size? We vary the batch size for the 12 layer, 768H model and increase the learning rate as is common prac- tice (Goyal et al., 2017; Liu et al., 2019b). We report the best found learning rate values in Table 1 in Appendix A. Wider and deeper Transformer models are more sample- efficient than small models: they reach the same level of performance using fewer gradient steps (Figures 3–5). More- over, this increase in convergence outpaces the additional computational overhead from increasing model size, even though we need to use more steps of gradient accumulation. Consequently, after adjusting for wall-clock time, the larger models are faster to train than smaller models (Figures 4–5). 1We also report selected learning curves as a function of FLOPs in Appendix A.1. These curves show that our conclusion that larger models are faster to train is not specific to our hardware setup. We show the training curves in Figure 13 in Appendix A. Bigger batch sizes cause the model to converge in fewer steps. However, when adjusting for wall-clock time, in- creasing the batch size beyond a certain point only provides marginal improvements.2 In particular, varying the batch size has little impact when training with a batch size in the 2Note that our timing is done by accumulating gradients on a single GPU machine. For multi-GPU setups, the cost of accumu- lating gradients is lower as it naturally helps to balance out uneven runtimes across workers (Ott et al., 2018). In this setup, the wall- clock improvements from increasing batch sizes by accumulating gradients may be slightly larger. 
Rethinking Model Size for Efficient Training and Inference of Transformers Effect of ROBERTa Hidden Size Hidden Size =256H > =512H = =768H ® 10 =1024H 2 =1536H 5 a = = 27.5 c s s Zz S5 120000 0 40000 80000 Number of Gradient Steps Effect of ROBERTa Hidden Size Hidden Size =256H 2 =512H a4 —768H 2 10 —1024H fog —1536H 5 a = = 275 = 2 5 Zz $5 0 250000 1000000 500000 Wall Clock (Seconds) 750000 Figure 4. Wider models converge faster than narrower models as function of both gradient steps (left plot) and wall-clock time (right plot). range from 2048–16384. This aligns with the findings of McCandlish et al. (2018): training efficiency is maximized when models are trained near some critical batch size. An additional downside of increasing the batch size is that it requires simultaneously tuning the learning rate. On the other hand, scaling model size provides improvements in training efficiency without adjusting any hyperparameters. Overall, our results show that one should increase the batch size (and learning rate) until the critical batch size region is reached and then to focus on increasing model size. # 4. Larger Models Compress Better Although the most compute-efficient training scheme is to use larger models, this results in models which are less infer- ence efficient. Here, we demonstrate how to get the best of both worlds. In particular, we show that since large models are more compressible than small models, they can outper- form small models while using similar inference costs. # 4.1. Compression Methodology and Evaluation Larger Models Are Not Harder to Finetune Although the larger models minimize validation MLM perplexity faster, one concern is that they may not minimize down- stream task error faster. For instance, larger models may overfit on small downstream datasets. We investigate this by training ROBERTA models of different sizes and stopping them when they reach the same MLM perplexity (the larger models have been trained for less wall-clock time). We then finetune each model using the ROBERTA finetuning hyperparameters (Liu et al., 2019b) on MNLI and SST-2. We report the model accuracies in Table 2 in Appendix B. All models reach comparable accuracies (in fact, the larger models typically outperform the smaller ones), which shows that larger models are not more difficult to finetune. Returns Diminish As Size Increases For both RoBERTa and MT, the largest models have reached the point where they stop improving convergence with respect to wall-clock time. For example, the largest model for MT (6L, 2048H) starts to converge slower with respect to wall-clock time than the second-largest model (6L, 1024H). These diminishing returns occur because (1) the per-step convergence improve- ments from using larger models decreases as the model gets larger and (2) the computational overhead increases as our hardware becomes increasingly compute-bound. We further analyze when and why returns diminish in Section 5. Compression Methods Model compression methods re- duce the inference costs of trained models. For example, model compression can reduce inference latency to enable real-time applications like simultaneous MT (See et al., 2016) or reduce memory usage to save energy for mobile de- vices (Han et al., 2016). 
We focus on compression methods which are fast to perform—methods which require significant amounts of compute will negate the speedup from using larger models.3 In particular, we consider two compression techniques: quantization (Section 4.2) and pruning (Section 4.3), as well as their combination.4 Quantization stores model weights in low precision formats to (1) accelerate operations when using hardware with reduced precision support and (2) reduce overall memory footprint (Han et al., 2016; Dong et al., 2019). Pruning sets neural network weights to zero to (1) remove operations and (2) reduce the memory footprint when models are stored in sparse matrix formats (LeCun et al., 1990; Han et al., 2015). We apply both quantization and pruning post-hoc to the finetuned models to limit the additional computational overhead.

3For example, we avoid using model distillation methods because they can add a significant computational overhead (Sanh et al., 2019; Turc et al., 2019) or cause a significant degradation in accuracy (Liu et al., 2019a; Sun et al., 2019).

4We also experiment with parameter sharing (Lan et al., 2020; Dehghani et al., 2019)—tying the weights of the Transformer layers together—and find that it slows convergence (see Appendix C).

Figure 5. BLEU Scores on the English→French validation set (newstest2013) using models of different sizes. Larger models typically converge faster as a function of both iterations (left plot) and wall-clock time (right plot). When models become too large (2048H, 6L), they converge faster per iteration but their overhead on our limited hardware negates their convergence improvements.

Finetuning Setup and Compression Evaluation We focus on compressing the finetuned ROBERTA models as a case study. We train models of different sizes for 1,000,000 seconds,5 finetune them on MNLI/SST-2, and then apply quantization/pruning. For evaluation, even though pruning and quantization will improve inference latency/throughput, quantifying these improvements is challenging because they are highly hardware-dependent. Instead, we follow past work and report the memory needed to store the model parameters (Thakker et al., 2019; Shen et al., 2020).

5We expect similar conclusions to hold for other budgets.

# 4.2. Larger Models Are More Robust to Quantization

We quantize every parameter, including the embedding matrix, but keep the model activations at full precision. We use floating point precisions in {4, 6, 8, 32} bits (using lower than 4-bits resulted in severe accuracy loss). We apply quantization post-hoc which adds no additional time.

We quantize uniformly: the range of floats is equally split and represented by unsigned integers in {0, . . . , 2^k − 1}, where k is the precision. We accomplish this by quantizing the weights W as:

W′ = Clamp(W, q_0, q_{2^k−1}),
W^I = ⌊(W′ − q_0) / Δ⌉, where Δ = (q_{2^k−1} − q_0) / (2^k − 1),
Quantize(W) = Δ W^I + q_0,

where Clamp() clamps all elements to the min/max range, W^I is a set of integer indices, ⌊·⌉ is the round operator, Δ is the distance between two adjacent quantized points, and [q_0, q_{2^k−1}] indicates the quantization range.

Results The quantization results for MNLI are shown on the left of Figure 6 (SST-2 results are in Appendix D). We plot each model's accuracy at different quantization levels as a function of its total memory usage. The larger models are more robust to quantization than the smaller models (the accuracy drop is smaller when the precision is reduced). Hence, the models which are trained using large parameter counts and then heavily quantized achieve the highest accuracy for almost all memory budgets.

# 4.3. Larger Models Are More Robust to Pruning

We use iterative magnitude pruning (Ström, 1997; Han et al., 2016): we iteratively zero out the smallest magnitude parameters and continue finetuning the model on the downstream task to recover lost accuracy. Concretely, we consider models with sparsity levels of 15%, 30%, 45%, 60%, 75%, and 90%. We first find the 15% of weights with the smallest magnitude and set them to zero.6 We then finetune the model on the downstream task until it reaches within 99.5% of its original validation accuracy or until we reach one training epoch. We then repeat this process—we prune another 15% of the smallest magnitude weights and finetune—stopping when we reach the desired sparsity level. The additional training overhead from this iterative process is small because the model typically recovers its accuracy in significantly less than one epoch (sometimes it does not require any retraining to maintain 99.5%). For example, pruning to 45% can be done with one or two additional epochs of finetuning on MNLI.

6It also may be possible to remove entire attention heads in addition to zeroing out weights (Michel et al., 2019; Voita et al., 2019). This may further improve our compression results.

Figure 6. We first pretrain ROBERTA models of different sizes for the same total wall-clock time (larger models are trained for fewer steps). We then finetune each model on MNLI and compress them using quantization (left) and pruning (right). For most budgets (x-axis), the highest accuracy models are the ones which are trained large and then heavily compressed. The labels above each point indicate the compression amount (e.g., 4-bit quantization or 45% sparsity); we omit cluttered labels. SST-2 results are shown in Appendix D.
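As a concrete illustration of the two compression operations described in Sections 4.2 and 4.3 above, here is a minimal NumPy sketch (ours, not the authors' code). The choice of the per-tensor min/max as the quantization range [q_0, q_{2^k−1}] and the single-shot pruning round are assumptions for the sketch.

```python
import numpy as np

def uniform_quantize(W, k=8):
    """Simulated k-bit uniform quantization of a weight tensor W.
    Assumes the quantization range is the per-tensor min/max."""
    q0, q_max = W.min(), W.max()
    delta = (q_max - q0) / (2 ** k - 1)           # distance between adjacent quantized points
    W_clamped = np.clip(W, q0, q_max)
    W_idx = np.rint((W_clamped - q0) / delta)     # integer indices in {0, ..., 2^k - 1}
    return delta * W_idx + q0                     # dequantized weights used at inference

def magnitude_prune(W, sparsity=0.6):
    """One round of magnitude pruning: zero out the smallest-magnitude fraction of weights."""
    threshold = np.quantile(np.abs(W), sparsity)
    return W * (np.abs(W) >= threshold)

# Example: lower precision increases the quantization error; pruning keeps the largest weights.
W = np.random.randn(768, 768).astype(np.float32)
print("4-bit error:", np.abs(uniform_quantize(W, k=4) - W).mean())
print("8-bit error:", np.abs(uniform_quantize(W, k=8) - W).mean())
print("60% sparse nonzeros:", int((magnitude_prune(W) != 0).sum()))
```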
ing ROBERTA models starting from different pretraining checkpoints (e.g., 3 epochs, 6 epochs, etc.) on MNLI. We then quantize the models to 4-bits. Figure 8 shows the results. Quantization is hardly affected by pretraining convergence—the drop in accuracy between the full precision and the 4-bit precision MNLI models is comparable as the pretrained model becomes more con- verged. Instead, the factor that determines compressibility is model size—the drop in accuracy is very large when compressing smaller models and vice versa. Combining Quantization and Pruning Results Pruning and quantization are complementary techniques for com- pressing Transformer models. We first prune models to various sparsity levels (e.g., 15%, 30%, etc.) and then apply varying amounts of quantization (e.g., 8-bit, 4-bit, etc.) to each model. In Figure 7 we plot combinations of pruning and quantization that lie at or near the Pareto frontier. Large models that are heavily compressed still provide the best trade-off between accuracy and efficiency when leveraging both pruning and quantization. A particularly strong com- pression method is to prune 30-40% of the weights and then quantize the model to 6-8 bits. # 4.4. Convergence Does Not Affect Compressibility # 5. When and Why Are Larger Models Better? This section presents results and discussion on why larger Transformer models train faster and compress better. # 5.1. Better Sample Efficiency With Larger Models For larger models to train faster, they must converge faster test error) per iteration. While there is a robust (w.r.t. literature studying why larger models achieve better final test accuracy,8 there is considerably less work exploring if and why larger models converge faster. One initial step in this direction is Arora et al. (2018a), who show that for deep linear neural networks, increasing depth can promote movement along directions already taken by the optimizer. Although larger Transformer models are more compress- ible, there is a confounding factor that our larger models are also less converged on the pretraining task. Is it the larger model size or the lack of convergence that causes the enhanced compressibility? We investigate this by finetun- 7Since the reduction in memory from storing sparse matrices is highly dependent on the data structure used, we follow past work and report the number of nonzero model parameters (Luo et al., 2017; Li et al., 2017). 8Chiefly, this work seeks to reconcile the conflict between modern deep learning practice and the classical bias-variance trade- off. For instance, it studies forms of implicit regularization (Zhang et al., 2017; Belkin et al., 2018), characterizes the expressivity of deep models (Raghu et al., 2017; Lu et al., 2017), and bounds the neural network generalization error (Du et al., 2019; Arora et al., 2018b). Rethinking Model Size for Efficient Training and Inference of Transformers RoBERTa Quantization + Pruning 0.85 By iJ 5 8 0.80 Ss Original Size £ od Layers, 768H os 0.75 6 Layers, 768H x= = 12 Layers, 768H > 18 Layers, 768H a 24 Layers, 768H z 12 Layers, 256H = 0.70 © 12 Layers, 512H 12 Layers, 1024H +12 Layers, 1536H 0 50 100 150 Bits x Parameter Count (Bits x Millions) Effect of Pretraining Convergence on Quantization => 30.84 5 oO oO =< 50.80 = oO Zz s&s Model Size . > 0.76 24 Layers, 32-bit ay. 24 Layers, 4-bit = 12 Layers, 32-bit 12 Layers, 4-bit 6 Layers, 32-bit 0.72 ~~6 Layers, 4-bit 30000 60000 90000 Number of Pretraining Gradient Steps Figure 7. 
We combine pruning and quantization and find their gains to be complementary. The models which are trained large and then compressed are the best performing for each test-time budget. Fast Minimization and the Role of Overfitting One em- pirical reason for the acceleration in convergence is that larger Transformer models minimize the training error faster. And, since the generalization gap is small for our tasks due to very large training sets, the larger models also converge faster w.r.t test error. In fact, the challenge in the MLM task is not overfitting, but instead, it is fitting the data—even 8 billion parameter models do not overfit to large pretraining corpora (Shoeybi et al., 2019). When overfitting is a concern, larger models start to con- verge slower (w.r.t test error). We demonstrate this by ran- domly subsampling our pretraining dataset to 5% and 1% of its original size and training ROBERTA models of var- ious sizes. When subsampling the data to 5% (top row of Figure 14 in Appendix A), the largest models do not im- prove on the training time of the smaller models (e.g., 12 layer ROBERTA trains just as fast as a 24 layer ROBERTA). Moreover, when the data is subsampled to 1% (bottom row of Figure 14), the largest models are worse in terms of perplexity due to overfitting. Thus, although our main con- clusion that increasing model size accelerates convergence still holds for the smaller models (e.g., the 12 layer model outperforms the 3 layer one), overfitting causes it to break down for the largest models. # 5.2. Manageable Compute Costs for Large Models Figure 8. We disentangle whether model size or pretraining con- vergence causes the enhanced compressibility of larger models. We finetune ROBERTA models starting from different pretrain- ing checkpoints on MNLI. We then quantize the models to 4-bits. Quantization is hardly affected by convergence—the drop in MNLI accuracy due to quantization is comparable as the pretrained model becomes more converged. Instead, the factor that determines com- pressibility is model size—the drop in accuracy is very large when compressing smaller models and vice versa. et al., 2017), language modeling (Kitaev et al., 2020), and other tasks (Jain et al., 2020). Thus, larger models will more fully utilize the available compute, causing their slow- down to be sublinear. Moreover, when larger models cause hardware to run out of memory, gradient accumulation can trade-off memory for compute while still preserving the gains of large models, as shown in our experiments. # 5.3. Smaller Compression Error for Larger Models Large transformer models are more compressible than small transformer models.9 Here, we present initial experiments to better understand why this occurs. Quantization Error is Smaller for Larger Models We first measure the quantization error—the difference between the full-precision and low-precision weights—for the 4-bit ROBERTA models. On the left of Figure 9, we plot this value for models of varying depths (6, 12, and 24 layers) averaged across different Transformer modules (e.g., in- projection matrix of the self-attention). The mean and vari- ance of the quantization error are smaller for deeper models. For larger models to train faster with respect to wall-clock time, their convergence improvements must not be negated by their slowdown in per-iteration time. Fortunately, par- allel hardware (e.g., GPUs, TPUs) is usually not compute bound when training deep learning models. 
Instead, mem- ory storage/movement is the limiting factor in image classi- fication (Gomez et al., 2017), semantic segmentation (Chen Pruning Error is Smaller for Larger Models Similarly, we measure the pruning error—the difference between the 9Similar findings hold for large but sparse audio synthesis models (Kalchbrenner et al., 2018) and convolutional models for computer vision (Zhu & Gupta, 2018; Elsen et al., 2019; Evci et al., 2020; Kusupati et al., 2020). Rethinking Model Size for Efficient Training and Inference of Transformers RoBERTa Quantization Error 2.5- mM 6Layers 3 mm 2Layers 5 mm 2A Layers £20 = z 2 — ois 2 s T 3 2 7 ES e410 =e + 3 : & 4 £ 2 AE ma i fo fos — == Self-attention ‘Self-attention Feed-forward Feed-forward in Projection Out Projection In Projection Out Projection RoBERTa Pruning Error Em 6 Layers + Tm 12 Layers Tm 24 Layers 25 Average Floating Point Difference Self-attention In Projection Self-attention Out Projection Feed-forward In Projection Feed-forward Out Projection Figure 9. We finetune ROBERTA models of different sizes (6 layers, 12 layers, and 24 layers) on MNLI. We then quantize models to 4-bits or prune models to 60% sparsity. We plot the difference between the weights of the original and the quantized/pruned models averaged across different modules in the Transformer. The mean and variance of the weight difference after quantization (left) is consistently lower for the deeper models compared to the shallower models. The same holds for the difference after pruning (right). This shows that the larger model’s weights are naturally easier to approximate with low-precision / sparse matrices than smaller models. original weights and the sparse weights—for the 60% sparse ROBERTA models. The mean and variance of the pruning error are smaller for deeper models (Figure 9, right). These two results show that the larger model’s weights are more easily approximated by low-precision or sparse matri- ces. Interestingly, this phenomenon naturally occurs without directly optimizing for it; an area for future work is to study why these weight patterns emerge in larger models. Connection to the Lottery Ticket Hypothesis Our com- pression findings have deep connections to recent conjec- tures such as the lottery ticket hypothesis (Frankle & Carbin, 2019). The lottery ticket hypothesis argues that larger mod- els are preferable as they have a higher chance of finding a lucky initialization in one of their subnetworks. Our work shows that, for certain accuracies, as models become in- creasingly large, they contain increasingly small subnet- works which achieve that accuracy. lenges (Goyal et al., 2017; Ott et al., 2018; You et al., 2020). Our work instead looks to choose the optimal model size for a fixed (small) hardware budget. Future work can study whether our conclusion that large models are more compute- efficient also holds in this highly-distributed setting, where the “budget” is extremely large. Hyperparameter Tuning and AutoML In our work, we have an initial setting for the hyperparameters and optimize the model size. However, good initial models and hyper- parameters are unknown when approaching new problems. For these cases, the optimal training strategy must consider the cost of experimenting with different architectures and hyperparameters; future work can study the effect of model size in this setting. 
Connection to the Lottery Ticket Hypothesis Our compression findings have deep connections to recent conjectures such as the lottery ticket hypothesis (Frankle & Carbin, 2019). The lottery ticket hypothesis argues that larger models are preferable as they have a higher chance of finding a lucky initialization in one of their subnetworks. Our work shows that, for certain accuracies, as models become increasingly large, they contain increasingly small subnetworks which achieve that accuracy.

# 6. Related Work

Improving Training Speed and Efficiency There is a large body of work on accelerating model training, traditionally accomplished via improved optimizers (Nesterov, 1983; Kingma & Ba, 2015). More recent work improves training efficiency by modifying loss functions (Clark et al., 2020), model structures/sparsities (Louizos et al., 2018; Gong et al., 2019; Tan & Le, 2019), backpropagation storage requirements (Gruslys et al., 2016), or learning rate schedules (Loshchilov & Hutter, 2017; Li et al., 2020). We study the impact of model size, which is largely orthogonal to these other training efficiency improvements.

Training Efficiency of Large Models Recent and concurrent work also considers the impact of model size on the compute efficiency of training. Raffel et al. (2019) show that training a 4x larger Transformer model is a good usage of 4x more compute. Ardalani et al. (2019) show that larger RNN models take fewer gradient iterations to converge but do not consider that larger models are faster when adjusting for wall-clock time. In concurrent work, Kaplan et al. (2020) study the impact of model size on the training efficiency of Transformer language models. They make similar conclusions that large, undertrained models are superior to small, well-trained models. Our work differs in that we study machine translation and the impact of training large models on downstream tasks (model finetuning and compression).

Scaling Model Training Another line of work scales model training to large amounts of distributed hardware and addresses the associated systems and machine learning challenges (Goyal et al., 2017; Ott et al., 2018; You et al., 2020). Our work instead looks to choose the optimal model size for a fixed (small) hardware budget. Future work can study whether our conclusion that large models are more compute-efficient also holds in this highly-distributed setting, where the "budget" is extremely large.

Hyperparameter Tuning and AutoML In our work, we have an initial setting for the hyperparameters and optimize the model size. However, good initial models and hyperparameters are unknown when approaching new problems. For these cases, the optimal training strategy must consider the cost of experimenting with different architectures and hyperparameters; future work can study the effect of model size in this setting. More generally, our findings may impact the design of automated methods for solving/optimizing machine learning problems (Feurer et al., 2015; Zoph & Le, 2017; Jaderberg et al., 2017). In particular, the compute-efficiency of these methods may improve by following our train large, then compress methodology.

# 7. Conclusion and Future Work

We studied the impact of Transformer model size on the efficiency of training and inference. We show that increasing model width and depth accelerates convergence in terms of both gradient steps and wall-clock time. Moreover, even though large models appear less efficient during inference, we demonstrate that they are more robust to compression. Therefore, we conclude that the best strategy for resource-constrained training is to train large models and then heavily compress them.

In the future, we will examine these conclusions on more domains such as computer vision. Moreover, we look to answer the questions that are raised by our results: why do larger transformer models train fast and compress well, how does model size impact overfitting and hyperparameter tuning, and more generally, what other common design decisions should be rethought in the compute-efficient setting?

Clark, K., Luong, M.-T., Le, Q. V., and Manning, C. D. ELECTRA: Pre-training text encoders as discriminators rather than generators. In ICLR, 2020.

Crankshaw, D., Wang, X., Zhou, G., Franklin, M. J., Gonzalez, J. E., and Stoica, I. Clipper: A low-latency online prediction serving system. In NSDI, 2017.

Dehghani, M., Gouws, S., Vinyals, O., Uszkoreit, J., and Kaiser, Ł. Universal transformers. In ICLR, 2019.

Devlin, J., Chang, M.-W., Lee, K., and Toutanova, K. BERT: Pre-training of deep bidirectional transformers for language understanding. In NAACL, 2019.

Dong, Z., Yao, Z., Gholami, A., Mahoney, M.
W., and Keutzer, K. HAWQ: Hessian aware quantization of neural networks with mixed-precision. In ICCV, 2019. # Acknowledgements Du, S. S., Zhai, X., Poczos, B., and Singh, A. Gradient descent provably optimizes over-parameterized neural networks. In ICLR, 2019. This research was supported by the Berkeley RISE Lab. We would like to thank the Google Cloud TPU team for their hardware support. We are also grateful to Shi Feng, Yang Liu, Suchin Gururangan, Nelson Liu, the members of Berkeley NLP, and the members of the Berkeley RISE Lab for their valuable feedback. Elsen, E., Dukhan, M., Gale, T., and Simonyan, K. Fast sparse convnets. arXiv preprint arXiv:1911.09723, 2019. Evci, U., Gale, T., Menick, J., Castro, P. S., and Elsen, E. Rigging the lottery: Making all tickets winners. In ICML, 2020. # References Ardalani, N., Hestness, J., and Diamos, G. Empirically char- acterizing overparameterization impact on convergence. OpenReview: S1lPShAqFm, 2019. Arora, S., Cohen, N., and Hazan, E. On the optimization of deep networks: Implicit acceleration by overparameteri- zation. In ICML, 2018a. Arora, S., Ge, R., Neyshabur, B., and Zhang, Y. Stronger generalization bounds for deep nets via a compression approach. In ICML, 2018b. Ba, J. L., Kiros, J. R., and Hinton, G. E. Layer normalization. In NeurIPS, 2016. Feurer, M., Klein, A., Eggensperger, K., Springenberg, J., Blum, M., and Hutter, F. Efficient and robust automated machine learning. In NeurIPS, 2015. Frankle, J. and Carbin, M. The lottery ticket hypothesis: Finding sparse, trainable neural networks. In ICLR, 2019. Gomez, A. N., Ren, M., Urtasun, R., and Grosse, R. B. The reversible residual network: Backpropagation without storing activations. In NeurIPS, 2017. Gong, L., He, D., Li, Z., Qin, T., Wang, L., and Liu, T. Efficient training of BERT by progressively stacking. In ICML, 2019. Belkin, M., Hsu, D., Ma, S., and Mandal, S. Reconciling modern machine learning and the bias-variance trade-off. In PNAS, 2018. Goyal, P., Doll´ar, P., Girshick, R., Noordhuis, P., Wesolowski, L., Kyrola, A., Tulloch, A., Jia, Y., and He, K. Accurate, large minibatch SGD: Training ImageNet in 1 hour. arXiv preprint arXiv:1706.02677, 2017. Brock, A., Donahue, J., and Simonyan, K. Large scale GAN training for high fidelity natural image synthesis. In ICLR, 2019. Gruslys, A., Munos, R., Danihelka, I., Lanctot, M., and Graves, A. Memory-efficient backpropagation through time. In NeurIPS, 2016. Chen, L.-C., Papandreou, G., Kokkinos, I., Murphy, K., and Yuille, A. L. DeepLab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected CRFs. In TPAMI, 2017. Han, S., Pool, J., Tran, J., and Dally, W. Learning both weights and connections for efficient neural network. In NeurIPS, 2015. Rethinking Model Size for Efficient Training and Inference of Transformers Han, S., Mao, H., and Dally, W. J. Deep compression: Compressing deep neural networks with pruning, trained quantization and huffman coding. In ICLR, 2016. Liu, L., Wang, H., Lin, J., Socher, R., and Xiong, C. Atten- tive student meets multi-task teacher: Improved knowl- edge distillation for pretrained models. arXiv preprint arXiv:1911.03588, 2019a. Hnaff, O. J., Srinivas, A., Fauw, J. D., Razavi, A., Doer- sch, C., Eslami, S. M. A., and van den Oord, A. Data- efficient image recognition with contrastive predictive coding. arXiv preprint arXiv:1905.09272, 2019. Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., and Stoyanov, V. 
RoBERTa: A robustly optimized BERT pretraining ap- proach. arXiv preprint arXiv:1907.11692, 2019b. Jaderberg, M., Dalibard, V., Osindero, S., Czarnecki, W. M., Donahue, J., Razavi, A., Vinyals, O., Green, T., Dunning, I., Simonyan, K., et al. Population based training of neural networks. arXiv preprint arXiv:1711.09846, 2017. Loshchilov, I. and Hutter, F. SGDR: Stochastic gradient descent with warm restarts. In ICLR, 2017. Jain, P., Jain, A., Nrusimha, A., Gholami, A., Abbeel, P., Keutzer, K., Stoica, I., and Gonzalez, J. E. Checkmate: Breaking the memory wall with optimal tensor remateri- alization. In MLSys, 2020. Jouppi, N. P., Young, C., Patil, N., Patterson, D., Agrawal, G., Bajwa, R., Bates, S., Bhatia, S., Boden, N., Borchers, A., et al. In-datacenter performance analysis of a tensor processing unit. In ISCA, 2017. Louizos, C., Welling, M., and Kingma, D. P. Learning sparse neural networks through L0 regularization. In ICLR, 2018. Lu, Z., Pu, H., Wang, F., Hu, Z., and Wang, L. The expres- sive power of neural networks: A view from the width. In NeurIPS, 2017. Luo, J.-H., Wu, J., and Lin, W. ThiNet: A filter level pruning method for deep neural network compression. In ICCV, 2017. Kalchbrenner, N., Elsen, E., Simonyan, K., Noury, S., Casagrande, N., Lockhart, E., Stimberg, F., Oord, A. v. d., Dieleman, S., and Kavukcuoglu, K. Efficient neural audio synthesis. In ICML, 2018. McCandlish, S., Kaplan, J., Amodei, D., and Team, O. D. arXiv An empirical model of large-batch training. preprint arXiv:1812.06162, 2018. Kaplan, J., McCandlish, S., Henighan, T., Brown, T. B., Chess, B., Child, R., Gray, S., Radford, A., Wu, J., and Amodei, D. Scaling laws for neural language models. arXiv preprint arXiv:2001.08361, 2020. Metz, C. Building an AI chip saved Google from building a dozen new data centers. Wired, 2017. Michel, P., Levy, O., and Neubig, G. Are sixteen heads really better than one? In NeurIPS, 2019. Kingma, D. P. and Ba, J. Adam: A method for stochastic optimization. In ICLR, 2015. Nesterov, Y. A method of solving a convex programming problem with convergence rate O(1/k2). In Soviet Math- ematics Doklady, 1983. Kitaev, N., Kaiser, L., and Levskaya, A. Reformer: The efficient transformer. In ICLR, 2020. Ott, M., Edunov, S., Grangier, D., and Auli, M. Scaling neural machine translation. In WMT, 2018. Kusupati, A., Ramanujan, V., Somani, R., Wortsman, M., Jain, P., Kakade, S., and Farhadi, A. Soft threshold weight reparameterization for learnable sparsity. In ICML, 2020. Ott, M., Edunov, S., Baevski, A., Fan, A., Gross, S., Ng, N., Grangier, D., and Auli, M. Fairseq: A fast, extensible toolkit for sequence modeling. In NAACL Demo, 2019. Lan, Z., Chen, M., Goodman, S., Gimpel, K., Sharma, P., and Soricut, R. ALBERT: A lite BERT for self-supervised learning of language representations. In ICLR, 2020. Papineni, K., Roukos, S., Ward, T., and Zhu, W.-J. BLEU: a method for automatic evaluation of machine translation. In ACL, 2002. LeCun, Y., Denker, J. S., and Solla, S. A. Optimal brain damage. In NeurIPS, 1990. Li, H., Kadav, A., Durdanovic, I., Samet, H., and Graf, H. P. Pruning filters for efficient convnets. In ICLR, 2017. Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., and Liu, P. J. Exploring the limits of transfer learning with a unified text-to-text transformer. arXiv preprint arXiv:1910.10683, 2019. Li, M., Yumer, E., and Ramanan, D. Budgeted training: Rethinking deep neural network training under resource constraints. In ICLR, 2020. 
Raghu, M., Poole, B., Kleinberg, J., Ganguli, S., and Dick- stein, J. S. On the expressive power of deep neural net- works. In ICML, 2017. Rethinking Model Size for Efficient Training and Inference of Transformers Sanh, V., Debut, L., Chaumond, J., and Wolf, T. Distilbert, a distilled version of BERT: smaller, faster, cheaper and lighter. In NeurIPS EM C 2 Workshop, 2019. Voita, E., Talbot, D., Moiseev, F., Sennrich, R., and Titov, I. Analyzing multi-head self-attention: Specialized heads do the heavy lifting, the rest can be pruned. In ACL, 2019. Schwartz, R., Dodge, J., Smith, N. A., and Etzioni, O. Green AI. arXiv preprint arXiv:1907.10597, 2019. See, A., Luong, M.-T., and Manning, C. D. Compression of neural machine translation models via pruning. In CoNLL, 2016. Shazeer, N., Cheng, Y., Parmar, N., Tran, D., Vaswani, A., Koanantakool, P., Hawkins, P., Lee, H., Hong, M., Young, C., et al. Mesh-TensorFlow: Deep learning for supercomputers. In NeurIPS, 2018. Shen, S., Dong, Z., Ye, J., Ma, L., Yao, Z., Gholami, A., Mahoney, M. W., and Keutzer, K. Q-BERT: Hessian based ultra low precision quantization of BERT. In AAAI, 2020. Wang, A., Singh, A., Michael, J., Hill, F., Levy, O., and Bowman, S. R. GLUE: A multi-task benchmark and analysis platform for natural language understanding. In ICLR, 2019a. Wang, Q., Li, B., Xiao, T., Zhu, J., Li, C., Wong, D. F., and Chao, L. S. Learning deep transformer models for machine translation. In ACL, 2019b. Williams, A., Nangia, N., and Bowman, S. R. A broad- coverage challenge corpus for sentence understanding through inference. In NAACL, 2018. You, Y., Li, J., Reddi, S., Hseu, J., Kumar, S., Bhojanapalli, S., Song, X., Demmel, J., Keutzer, K., and Hsieh, C.- J. Large batch optimization for deep learning: Training BERT in 76 minutes. In ICLR, 2020. Shoeybi, M., Patwary, M., Puri, R., LeGresley, P., Casper, J., and Catanzaro, B. Megatron-LM: Training multi-billion parameter language models using GPU model parallelism. arXiv preprint arXiv:1909.08053, 2019. Zhang, C., Bengio, S., Hardt, M., Recht, B., and Vinyals, O. Understanding deep learning requires rethinking general- ization. In ICLR, 2017. Socher, R., Perelygin, A., Wu, J., Chuang, J., Manning, C. D., Ng, A., and Potts, C. Recursive deep models for semantic compositionality over a sentiment treebank. In EMNLP, 2013. Zhu, M. and Gupta, S. To prune, or not to prune: exploring the efficacy of pruning for model compression. In ICLR Workshop Track, 2018. Str¨om, N. Sparse connection and pruning in large dynamic artificial neural networks. In EUROSPEECH, 1997. Sun, S., Cheng, Y., Gan, Z., and Liu, J. Patient knowledge distillation for BERT model compression. In EMNLP, 2019. Zhu, Y., Kiros, R., Zemel, R., Salakhutdinov, R., Urta- sun, R., Torralba, A., and Fidler, S. Aligning books and movies: Towards story-like visual explanations by watch- ing movies and reading books. In CVPR, 2015. Zoph, B. and Le, Q. V. Neural architecture search with reinforcement learning. In ICLR, 2017. Tan, M. and Le, Q. V. EfficientNet: Rethinking model scal- ing for convolutional neural networks. In ICML, 2019. Thakker, U., Beu, J., Gope, D., Zhou, C., Fedorov, I., Dasika, G., and Mattina, M. Compressing RNNs for IOT de- vices by 15-38x using kronecker products. arXiv preprint arXiv:1906.02876, 2019. Turc, I., Chang, M.-W., Lee, K., and Toutanova, K. Well- read students learn better: The impact of student ini- tialization on knowledge distillation. arXiv preprint arXiv:1908.08962, 2019. 
Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, Ł., and Polosukhin, I. Attention is all you need. In NeurIPS, 2017.

Vaswani, A., Bengio, S., Brevdo, E., Chollet, F., Gomez, A. N., Gouws, S., Jones, L., Kaiser, Ł., Kalchbrenner, N., Parmar, N., et al. Tensor2Tensor for neural machine translation. In AMTA, 2018.

# A. Additional Training Curves

# A.1. Training Cost Using FLOPs

In Figure 10, we plot selected learning curves from the main text as a function of FLOPs rather than seconds. We compute FLOPs using the code provided by Clark et al. (2020).

Figure 10. Floating Point Operations. We show Figures 2, 4, and 13 in terms of exaFLOPs instead of wall-clock time. Bigger models achieve better results than smaller models using the same number of floating point operations.

# A.2. The Impact of Batch Size

Figure 13 shows the learning curves associated with different batch sizes. Table 1 shows the learning rates associated with each batch size. We use the hyperparameters from Liu et al. (2019b) as a starting point and then lightly tune them.

Batch Size    Learning Rate
256           .0002
2048          .001
4096          .00125
8192          .0015
16384         .001875

Table 1. The learning rate for each batch size in Figure 13.

Figure 13. Increasing the batch size and the associated learning rate accelerates convergence in terms of gradient steps. However, increasing the batch size beyond 2048 provides only marginal improvements with respect to wall-clock time. Note that the wall-clock time includes the cost of accumulating gradients on a single machine (see Section 2.2). In other words, beyond a certain point increasing the batch size only provides speedups when additional hardware is available. The 256 batch size result is far to the right in the left plot.

# A.3. The Impact of Dataset Size

Figure 14 shows the learning curves for models trained using 5% and 1% of the training data.

# B. Finetuning Models of Different Sizes

Table 2 shows that models with more parameters are not harder to finetune.

Model              Perplexity    MNLI    SST-2
12-layer, 768H     4.3           84.3    93.0
18-layer, 768H     4.1           85.4    92.6
24-layer, 768H     4.0           85.2    93.1
12-layer, 768H     4.3           84.3    93.0
12-layer, 1024H    3.9           85.5    93.2
12-layer, 1536H    4.3           85.1    93.8

Table 2. We train RoBERTa models of different sizes and stop them at roughly the same pretraining perplexity (the bigger models are trained for less wall-clock time). We then finetune each model on MNLI and SST-2. All models reach comparable accuracies (in fact, the big models often outperform small ones), which shows that larger models are not harder to finetune.

# C. Negative Results: Layer Sharing

Sharing weights across transformer layers can provide a small or negligible degradation in final performance (Lan et al., 2020; Dehghani et al., 2019) while providing a reduction in memory consumption. In addition, models with shared layers are slightly faster to execute because they require less memory movement and reduced inter-device communication. Similar to Lan et al. (2020), we experiment with two types of layer sharing: sharing all layers and sharing only the attention layers.

Sharing layers reduces the maximum memory requirements, especially for small batch sizes. For example, sharing all the layers of a RoBERTa model with batch size 32 reduces total memory usage by 41%. However, both forms of sharing lead to slower training convergence and thus worse performance in the resource-constrained setting (Figure 11). Consequently, we do not recommend sharing layers for compute-efficient training or inference of transformers.

Figure 11. Sharing attention layers reduces the maximum memory consumption of RoBERTa but causes slower convergence and worse final accuracy.

# D. Compression Results for SST-2

We follow Liu et al. (2019b) and report results on SST-2 (Socher et al., 2013) in addition to MNLI. Since the SST-2 dataset is smaller than MNLI it requires a more significant tuning of the finetuning hyperparameters. We tune the batch size in {16, 32, 64}, the learning rate in {5e−4, 3e−4, 1e−4}, the seed which controls the classifier initialization and training data shuffling in {100, 300, 500}, and the dropout in {0.1, 0.2, 0.3}. We choose the best value using the validation set for each model size. We then perform quantization, pruning, and quantization and pruning on all finetuned models. Similar to MNLI, the bigger models provide the highest accuracy for a given test budget (Figure 12).

Figure 12. Compression for SST-2. For most budgets (x-axis), the highest accuracy SST-2 models are the ones which are trained large and then heavily compressed. We show results for quantization (left), pruning (center), and quantization and pruning (right).
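The sweep described above is a plain grid search over the listed values; a minimal sketch of that selection loop follows. The `finetune_and_evaluate` helper and its signature are hypothetical stand-ins for the actual finetuning pipeline, not part of any released code.

```python
# Illustrative sketch of the Appendix D grid search (not the authors' code).
# `finetune_and_evaluate` is a hypothetical helper that finetunes a model with
# the given hyperparameters and returns its validation accuracy on SST-2.
from itertools import product

GRID = {
    "batch_size": [16, 32, 64],
    "learning_rate": [5e-4, 3e-4, 1e-4],
    "seed": [100, 300, 500],
    "dropout": [0.1, 0.2, 0.3],
}

def select_hyperparameters(finetune_and_evaluate):
    best_acc, best_config = float("-inf"), None
    keys = list(GRID)
    for values in product(*(GRID[k] for k in keys)):
        config = dict(zip(keys, values))
        acc = finetune_and_evaluate(**config)  # validation accuracy
        if acc > best_acc:
            best_acc, best_config = acc, config
    return best_config, best_acc
```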
[Figure 14 panels: validation perplexity vs. number of gradient steps and vs. wall-clock time (seconds), for 3-, 12-, and 24-layer RoBERTa models trained on 5% of the data (top row) and 1% of the data (bottom row).]

Figure 14. Effect of Smaller Datasets. In our experiments on the full dataset (see main text), the largest models we trained are always faster in terms of wall-clock time. However, when subsampling the data to 5% (top row), the biggest models do not improve on the speed of the smaller models (e.g., compare 24 layer RoBERTa and 12 layer RoBERTa). When the data is subsampled to 1% (bottom row), the bigger models are worse in terms of perplexity due to overfitting. This illustrates that the optimal model size depends on the dataset size.
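Section 5.2 and the note in the Figure 13 caption rely on gradient accumulation to emulate large batches on a single machine. The following is a generic sketch of that pattern (a common convention, not code from the paper); the model, optimizer, and data loader are placeholders, and scaling the loss by the number of accumulation steps is one usual choice rather than a detail taken from the paper.

```python
# Generic gradient accumulation loop (a common pattern, not the authors' code).
# Accumulating over `accum_steps` micro-batches emulates a batch that is
# `accum_steps` times larger while keeping per-step memory roughly constant.
import torch

def train_epoch(model, optimizer, data_loader, accum_steps: int = 8):
    model.train()
    optimizer.zero_grad()
    for step, (inputs, labels) in enumerate(data_loader):
        loss = torch.nn.functional.cross_entropy(model(inputs), labels)
        (loss / accum_steps).backward()   # scale so accumulated gradients average
        if (step + 1) % accum_steps == 0:
            optimizer.step()              # one update per accumulated batch
            optimizer.zero_grad()
```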
{ "id": "2001.08361" }
2002.12880
Generalizing Convolutional Neural Networks for Equivariance to Lie Groups on Arbitrary Continuous Data
The translation equivariance of convolutional layers enables convolutional neural networks to generalize well on image problems. While translation equivariance provides a powerful inductive bias for images, we often additionally desire equivariance to other transformations, such as rotations, especially for non-image data. We propose a general method to construct a convolutional layer that is equivariant to transformations from any specified Lie group with a surjective exponential map. Incorporating equivariance to a new group requires implementing only the group exponential and logarithm maps, enabling rapid prototyping. Showcasing the simplicity and generality of our method, we apply the same model architecture to images, ball-and-stick molecular data, and Hamiltonian dynamical systems. For Hamiltonian systems, the equivariance of our models is especially impactful, leading to exact conservation of linear and angular momentum.
http://arxiv.org/pdf/2002.12880
Marc Finzi, Samuel Stanton, Pavel Izmailov, Andrew Gordon Wilson
stat.ML, cs.LG
ICML 2020. Code available at https://github.com/mfinzi/LieConv
null
stat.ML
20200225
20200924
arXiv:2002.12880v3 [stat.ML] 24 Sep 2020

# Generalizing Convolutional Neural Networks for Equivariance to Lie Groups on Arbitrary Continuous Data

Marc Finzi 1, Samuel Stanton 1, Pavel Izmailov 1, Andrew Gordon Wilson 1

1New York University. Correspondence to: Marc Finzi <[email protected]>, Samuel Stanton <[email protected]>, Pavel Izmailov <[email protected]>, Andrew Gordon Wilson <[email protected]>.

# Abstract

The translation equivariance of convolutional layers enables convolutional neural networks to generalize well on image problems. While translation equivariance provides a powerful inductive bias for images, we often additionally desire equivariance to other transformations, such as rotations, especially for non-image data. We propose a general method to construct a convolutional layer that is equivariant to transformations from any specified Lie group with a surjective exponential map. Incorporating equivariance to a new group requires implementing only the group exponential and logarithm maps, enabling rapid prototyping. Showcasing the simplicity and generality of our method, we apply the same model architecture to images, ball-and-stick molecular data, and Hamiltonian dynamical systems. For Hamiltonian systems, the equivariance of our models is especially impactful, leading to exact conservation of linear and angular momentum.

Figure 1. Many modalities of spatial data do not lie on a grid, but still possess important symmetries. We propose a single model to learn from continuous spatial data that can be specialized to respect a given continuous symmetry group.

# 1. Introduction

Symmetry pervades the natural world. The same law of gravitation governs a game of catch, the orbits of our planets, and the formation of galaxies. It is precisely because of the order of the universe that we can hope to understand it. Once we started to understand the symmetries inherent in physical laws, we could predict behavior in galaxies billions of light-years away by studying our own local region of time and space. For statistical models to achieve their full potential, it is essential to incorporate our knowledge of naturally occurring symmetries into the design of algorithms and architectures. An example of this principle is the translation equivariance of convolutional layers in neural networks (LeCun et al., 1995): when an input (e.g. an image) is translated, the output of a convolutional layer is translated in the same way.

Group theory provides a mechanism to reason about symmetry and equivariance. Convolutional layers are equivariant to translations, and are a special case of group convolution. A group convolution is a general linear transformation equivariant to a given group, used in group equivariant convolutional networks (Cohen and Welling, 2016a).

In this paper, we develop a general framework for equivariant models on arbitrary continuous (spatial) data represented as coordinates and values {(x_i, f_i)}_{i=1}^N. Spatial data is a broad category, including ball-and-stick representations of molecules, the coordinates of a dynamical system, and images (shown in Figure 1). When the inputs or group elements lie on a grid (e.g., image data) one can simply enumerate the values of the convolutional kernel at each group element. But in order to extend to continuous data, we define the convolutional kernel as a continuous function on the group parameterized by a neural network.
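To make the last point concrete, namely a convolutional kernel defined as a continuous function parameterized by a neural network, here is a minimal PyTorch sketch. It illustrates the general idea rather than reproducing the authors' LieConv implementation; the hidden width and depth of the MLP are arbitrary choices for the sketch, while the Swish (SiLU) nonlinearity follows the kernel description in Section 4.2.

```python
# Minimal sketch of a convolutional kernel parameterized by a neural network
# (an illustration, not the authors' LieConv code). The MLP maps a group
# "offset" (e.g., a Lie-algebra vector) to a c_out x c_in filter matrix.
# Hidden width 32 and two hidden layers are arbitrary choices for this sketch.
import torch
import torch.nn as nn

class ContinuousKernel(nn.Module):
    def __init__(self, offset_dim: int, c_in: int, c_out: int, hidden: int = 32):
        super().__init__()
        self.c_in, self.c_out = c_in, c_out
        self.net = nn.Sequential(
            nn.Linear(offset_dim, hidden), nn.SiLU(),   # SiLU == Swish
            nn.Linear(hidden, hidden), nn.SiLU(),
            nn.Linear(hidden, c_out * c_in),
        )

    def forward(self, offsets: torch.Tensor) -> torch.Tensor:
        # offsets: (..., offset_dim) -> filters: (..., c_out, c_in)
        return self.net(offsets).view(*offsets.shape[:-1], self.c_out, self.c_in)
```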
Proceedings of the 37 th International Conference on Machine Learning, Online, PMLR 119, 2020. Copyright 2020 by the au- thor(s). We consider the large class of continuous groups known as Lie groups. In most cases, Lie groups can be parameterized in terms of a vector space of infinitesimal generators (the Lie algebra) via the logarithm and exponential maps. Many use- ful transformations are Lie groups, including translations, rotations, and scalings. We propose LieConv, a convolu- Generalizing Convolutional Neural Networks for Equivariance to Lie Groups on Arbitrary Continuous Data tional layer that can be made equivariant to a given Lie group by defining exp and log maps. We demonstrate the expressivity and generality of LieConv with experiments on images, molecular data, and dynamical systems. We emphasize that we use the same network architecture for all transformation groups and data types. LieConv achieves state-of-the-art performance in these domains, even com- pared to domain-specific architectures. In short, the main contributions of this work are as follows: We propose LieConv, a new convolutional layer equiv- ariant to transformations from Lie groups. Models composed with LieConv layers can be applied to non- homogeneous spaces and arbitrary spatial data. We evaluate LieConv on the image classification bench- mark dataset rotMNIST (Larochelle et al., 2007), and the regression benchmark dataset QM9 (Blum and Rey- mond, 2009; Rupp et al., 2012). LieConv outperforms state-of-the-art methods on some tasks in QM9, and in all cases achieves competitive results. data types like spherical images (Esteves et al., 2018; Co- hen et al., 2018), voxel data (Weiler et al., 2018), and point clouds (Thomas et al., 2018; Anderson et al., 2019), the requirement of working out the representation theory for the group can be cumbersome and is limited to compact groups. Our approach reduces the amount of work to implement equivariance to a new group, enabling rapid prototyping. There is also work applying Lie group theory to deep neural networks. Huang et al. (2017) define a network where the intermediate activations of the network are 3D rotations representing skeletal poses and embed elements into the Lie algebra using the log map. Bekkers (2019) use the log map to express an equivariant convolution kernel through the use of B-splines, which they evaluate on a grid and apply to image problems. While similar in motivation, their method is not readily applicable to point data and can only be used when the equivariance group acts transitively on the input space. Both of these issues are addressed by our work. # 3. Background We apply LieConv to modeling the Hamiltonian of physical systems, where equivariance corresponds to the preservation of physical quantities (energy, angular momentum, etc.). LieConv outperforms state-of-the- art methods for the modeling of dynamical systems. We make code available at https://github.com/mfinzi/LieConv # 2. Related Work # 3.1. Equivariance A mapping h( ) is equivariant to a set of transformations · G if when we apply any transformation g to the input of h, the output is also transformed by g. The most common example of equivariance in deep learning is the translation equivariance of convolutional layers: if we translate the input image by an integer number of pixels in x and y, the output is also translated by the same amount (ignoring the regions close to the boundary of the image). 
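This property is easy to verify numerically. The toy check below is an illustration, not taken from the paper; it uses circular padding so that the equality holds exactly rather than only away from the image boundary.

```python
# Quick illustrative check: a convolution with circular padding commutes with
# integer translations of its input, i.e. shift(conv(x)) == conv(shift(x))
# up to floating-point precision.
import torch
import torch.nn as nn

torch.manual_seed(0)
conv = nn.Conv1d(1, 1, kernel_size=3, padding=1, padding_mode="circular", bias=False)
x = torch.randn(1, 1, 16)

shift = lambda t, s: torch.roll(t, shifts=s, dims=-1)
out_then_shift = shift(conv(x), 3)
shift_then_out = conv(shift(x, 3))
print(torch.allclose(out_then_shift, shift_then_out, atol=1e-6))  # True
```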
Formally, if A, and G is a set of transformations acting on A, h : A we say h is equivariant to G if One approach to constructing equivariant CNNs, first in- troduced in Cohen and Welling (2016a), is to use standard convolutional kernels and transform them or the feature maps for each of the elements in the group. For discrete groups this approach leads to exact equivariance and uses the so-called regular representation of the group (Cohen et al., 2019). This approach is easy to implement, and has also been used when the feature maps are vector fields (Zhou et al., 2017; Marcos et al., 2017), and with other representa- tions (Cohen and Welling, 2016b), but only on image data where locations are discrete and the group cardinality is small. This approach has the disadvantage that the compu- tation grows quickly with the size of the group, and some groups like 3D rotations cannot be easily discretized onto a lattice that is also a subgroup. ∀ ∈ # g ∀ ∈ h(ga) = gh(a). (1) The continuous convolution of a function f : R the kernel k : R sense that Lt(k ∗ function by t: Ltf (x) = f (x − It is easy to construct invariant functions, where transfor- mations on the input do not affect the output, by simply discarding information. Strict invariance unnecessarily lim- its the expressive power by discarding relevant information, and instead it is necessary to use equivariant transformations that preserve the information. # 3.2. Groups of Transformations and Lie Groups Another approach, drawing on harmonic analysis, finds a ba- sis of equivariant functions and parametrizes convolutional kernels in that basis (Worrall et al., 2017; Weiler and Cesa, 2019; Jacobsen et al., 2017). These kernels can be used to construct networks that are exactly equivariant to continu- ous groups. While the approach has been applied on general Many important sets of transformations form a group. To form a group the set must be closed under composition, include an identity transformation, each element must have an inverse, and composition must be associative. The set of 2D rotations, SO(2), is a simple and instructive example. Composing two rotations r1 and r2, yields another rotation Generalizing Convolutional Neural Networks for Equivariance to Lie Groups on Arbitrary Continuous Data G that maps r = r2 ◦ every point in R2 to itself (i.e., rotation by a zero angle). 1 And for every rotation r, there exists an inverse rotation r− r = id. Finally, the composition such that r ◦ of rotations is an associative operation: (r1 ◦ r3 = r3). Satisfying these conditions, SO(2) is indeed (r2 ◦ r1 ◦ a group. on G at u is given by h(u) = (k ∗ f )(u) = k(v− 1u)f (v)dµ(v). (2) (Kondor and Trivedi, 2018; Cohen et al., 2019) We can also adopt a more familiar view of SO(2) in terms 2 is of angles, where a rotation matrix R : R parametrized as R(θ) = exp(Jθ). J is the antisymmet- R2 × → 1 − 0 ric matrix J = (an infinitesimal generator of the oO 1 is group) and exp is the matrix exponential. Note that θ is totally unconstrained. Using R(θ) we can add and subtract 1R(θ2) = rotations. Given θ1, θ2 we can compute R(θ1)− θ1). R(θ) = exp(Jθ) is an exp( example of the Lie algebra parametrization of a group, and SO(2) forms a Lie group. More generally, a Lie group is a group whose elements form a smooth manifold. Since G is not necessarily a vector space, we cannot add or subtract group elements. 
However, the Lie algebra of G, the tangent space at the identity, g = TidG, is a vector space and can be understood informally as a space of infinitesimal transformations from the group. As a vector space, one can readily expand elements in a basis A = The exponential map exp : g G gives a mapping from the Lie algebra to the Lie group, converting infinitesimal transformations to group elements. In many cases, the image of the exponential map covers the group, and an inverse g can be defined. For matrix groups the mapping log : G exp map coincides with the matrix exponential (exp(A) = I + A + A2/2! + ... ), and the log map with the matrix logarithm. Matrix groups are particularly amenable to our method because in many cases the exp and log maps can be computed in closed form. For example, there are analytic solutions for the translation group T(d), the 3D rotation group SO(3), the translation and rotation group SE(d) for d = 2, 3, the rotation-scale group R∗ SO(2), and many others (Eade, 2014). In the event that an analytic solution is not available there are reliable numerical methods at our disposal (Moler and Van Loan, 2003). # 3.3. Group Convolutions # 3.4. PointConv Trick In order to extend learnable convolution layers to point clouds, not having the regular grid structure in images, Dai et al. (2017), Simonovsky and Komodakis (2017), and Wu et al. (2019) go back to the continuous definition of a con- volution for a single channel between a learned function cin and an in- ) : Rd (convolutional filter) kθ( × · Rcin yielding the function ) : Rd put feature map f ( · Rcout , h( ) : Rd · → h(x) = (kθ ∗ f )(x) = kθ(x − y)f (y)dy. (3) We approximate the integral using a discretization: h(xi) = (V /n) kθ(xi − xj)f (xj) . (4) Here V is the volume of the space integrated over and n is 3 convolutional the number of quadrature points. In a 3 layer for images, where points fall on a uniform square grid, the filter kθ has independent parameters for each of the inputs ( 1, 0), . . . , (1, 1). In order to accom- modate points that are not on a regular grid, kθ can be parametrized as a small neural network, mapping input off- sets to filter matrices, explored with MLPs in Simonovsky and Komodakis (2017). The compute and memory costs has severely limited this approach, for typical CIFAR-10 images with batchsize = 32, N = 32 32, cin = cout = 3, evaluating a single layer requires computing 256, n = 3 × 20 billion values for k. In PointConv, Wu et al. (2019) develop a trick where clever reordering of the computation cuts memory and computa- tional requirements by 2 orders of magnitude, allowing them to scale to the point cloud classification, segmenta- tion datasets ModelNet40 and ShapeNet, and the image dataset CIFAR-10. We review and generalize the Efficient- PointConv trick in Appendix A.1, which we will use to accelerate our method. Adopting the convention of left equivariance, one can define a group convolution between two functions on the group, which generalizes the translation equivariance of convolu- tion to other groups: Definition 1. Let k, f : G measure on G. For any u ) be the Haar R, and µ( → · G, the convolution of k and f ∈ # 4. Convolutional Layers on Lie Groups We now introduce LieConv, a new convolutional layer that can be made equivariant to a given Lie group. Models with LieConv layers can act on arbitrary collections of coordi- N nates and values and fi ∈ V { is usually a low where V is a vector space. 
The domain # X Generalizing Convolutional Neural Networks for Equivariance to Lie Groups on Arbitrary Continuous Data (a) Data (b) Trivial (c) T (2) (d) SO(2) (e) R ∗ × SO(2) (f) SE(2) Figure 2. Visualization of the lifting procedure. Panel (a) shows a point x in the original input space X . In panels (b)–(f) we illustrate the lifted embeddings for different groups in the form [u, q], where u ∈ G is an element of the group and q ∈ X /G identifies the orbit (see Section 4.5). For SE(2) the lifting is multi-valued. dimensional domain like R2 or R3 such as for molecules, point clouds, the configurations of a mechanical system, images, time series, videos, geostatistics, and other kinds of spatial data. We begin with a high-level overview of the method. In Section 4.1 we discuss transforming raw inputs xi into group elements ui on which we can perform group convolution. We refer to this process as lifting. Section 4.2 addresses the irregular and varied arrangements of group elements that result from lifting arbitrary continuous input data by parametrizing the convolutional kernel k as a neural network. In Section 4.3, we show how to enforce the locality of the kernel by defining an invariant distance on the group. In Section 4.4, we define a Monte Carlo estimator for the group convolution integral in Eq. (2) and show that this estimator is equivariant in distribution. In Section 4.5, we extend the procedure to cases where the group does not act transitively on the input space (when we cannot map any point to any other point with a transformation from the group). Additionally, in Appendix A.2, we show that our method generalizes coordinate transform equivariance when G is Abelian. At the end of Section 4.5 we provide a concise algorithmic description of the lifting procedure and our new convolution layer. # 4.1. Lifting from to G # X If is a homogeneous space of G, then every two elements are connected by an element in G, and one can lift ele- in ments by simply picking an origin o and defining Lift(x) = : all elements in the group that map u { the origin to x. This procedure enables lifting tuples of co- N,K i=1,k=1, ordinates and features (uik, fi) } with up to K group elements for each input.1 To find all the elements , one simply needs to find } one element ux and use the elements in the stabilizer of 1When fi = f (xi), lifting in this way is equivalent to defining f ↑(u) = f (uo) as in Kondor and Trivedi (2018). , to generate the rest the origin H = } with Lift(x) = . For continuous groups ∈ the stabilizer may be infinite, and in these cases we sample uniformly using the Haar measure µ which is described in Appendix C.2. We visualize the lifting procedure for different groups in Figure 2. # 4.2. Parameterization of the Kernel The conventional method for implementing an equivariant convolutional network (Cohen and Welling, 2016a) requires ) over the elements of the enumerating the values of k( · group, with separate parameters for each element. This procedure is infeasible for irregularly sampled data and problematic even for a discretization because there is no generalization between different group elements. Instead of having a discrete mapping from each group element to the kernel values, we parametrize the convolutional kernel as a continuous function kθ using a fully connected neural network with Swish activations, varying smoothly over the elements in the Lie group. 
However, as neural networks are best suited to learn on euclidean data and G does not form a vector space, we propose to model k by mapping onto the Lie Algebra g, which is a vector space, and expanding in a basis for the space. To do so, we restrict our attention in this paper to Lie groups whose exponential maps are surjective, where every element has a logarithm. This means defining kθ(u) = exp)θ is the function (k ◦ parametrized by an MLP, ˜kθ : g Rcout cin. Surjectivity → of the exp map guarantees that exp log = id, although not ◦ in the other order. # 4.3. Enforcing Locality Important both to the inductive biases of convolutional neu- ral networks and their computational efficiency is the fact that convolutional filters are local, kg(u; — uj) = 0 for |u; — uy|| > r. In order to quantify locality on matrix Generalizing Convolutional Neural Networks for Equivariance to Lie Groups on Arbitrary Continuous Data 717 1>]- the Monte Carlo estimator for (1) as h(ui) = (k ® f)(ui) == > k(v; us) f(v,), (7) * jenbhd(i) # jenbhd(i) ∗× SO(2), Figure 3. A visualization of the local neighborhood for R in terms of the points in the input space. For the computation of h at the point in orange, elements are sampled from colored region. Notice that the same points enter the calculation when the image is transformed by a rotation and scaling. We visualize the neighborhoods for other groups in Appendix C.6. the number of points sampled in each nbhd(i) | | where ni = neighborhood. For vj ∼ |nbhd(u)( µ equivariant (in distribution). ), · the Monte Carlo estimator is groups, we introduce the function: Proof: Recalling that we can absorb the local neigh- borhood into the definition of kθ using an indicator function, we have d(u, v) := log(u− 1v) (5) where log is the matrix logarithm, and F' is the Frobenius norm. The function is left invariant, since d(wu, wv) = || log(u-tw-twe)||- = d(u,v), and is a semi-metric (it does not necessarily satisfy the triangle inequality). In Ap- pendix A.3 we show the conditions under which d(w, v) is additionally the distance along the geodesic connect- ing u,v , a generalization of the well known formula for the geodesic distance between rotations ||log(R7 R2)||r (Kuffner, 2004). To enforce that our learned convolutional filter k is local, we can use our definition of distance to only evaluate the sum 1u) = 0 outside a for d(u, v) < r, implicitly setting kθ(v− , local neighborhood nbhd(u) = v : d(u, v) { ≤ } h(u) = kθ(v− 1u)f (v)dµ(v). nbhd(u) (6) ∈ This restriction to a local neighborhood does not break invariant. equivariance precisely because d( , · 1u, id) this restriction is equiva- Since d(u, v) = d(v− lent to multiplying by the indicator function kθ(v− → 1u. kθ(v− Note that equivariance would have been broken if we used neighborhoods that depend on fixed regions in the input space like the square 3 3 region. Figure 3 shows what these neighborhoods look like in terms of the input space. (k ® Lwf)(ui) = (1/ns) DMF = (1/ni) SMe £63 Nw" lui) f(w*v;) ui) f (@;) ui) = Lu (k Â¥ f)(ui). and the last line follows from the fact = WU 5, that the random variables wv; £ vj are equal in distribu- tion because they are sampled from the Haar measure with property du(wv) = dy(v). The equivariance also holds de- terministically when the sampling locations are transformed along with the function vj — wv;. 
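For matrix groups, the invariant distance d(u, v) = ||log(u^{-1}v)||_F used to define the local neighborhoods above can be computed directly with a matrix logarithm. The sketch below is illustrative only: it relies on scipy's general-purpose logm, whereas the closed-form log maps for specific groups discussed in Section 3.2 would be preferable in practice for speed and stability.

```python
# Sketch of the left-invariant (semi-)distance d(u, v) = ||log(u^{-1} v)||_F
# for matrix groups (illustrative; closed-form log maps for groups such as
# SE(2) or SO(3) are what one would use in a real implementation).
import numpy as np
from scipy.linalg import logm

def group_distance(u: np.ndarray, v: np.ndarray) -> float:
    """Frobenius norm of the matrix logarithm of u^{-1} v."""
    rel = np.linalg.solve(u, v)          # u^{-1} v without forming the inverse
    return float(np.linalg.norm(logm(rel), "fro"))

def neighborhood(center: np.ndarray, elements: list, radius: float) -> list:
    """Indices of group elements within distance `radius` of `center` (cf. Eq. 6)."""
    return [j for j, g in enumerate(elements)
            if group_distance(center, g) <= radius]
```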
Now that we have the discretization hy = (1/ni) Y ;enbnaci) ko(log(v; ui) fis we can accelerate this computation using the Efficient- PointConv trick, with the argument of aj; = log(v; tus) for the MLP. See Appendix A.1 for more details. Note that we can also apply this discretization of the convolution when the inputs are not functions f; = f(x;), but sim- ply coordinates and values {(2;, f;)}*,, and the mapping {(ui, fi}, 2 {(ui,hi)}%, is still equivariant, which we also demonstrate empirically in Table B.1. We also de- tail two methods for equivariantly subsampling the elements to further reduce the cost in Appendix A.4. Here 0; # 4.5. More Than One Orbit? # 4.4. Discretization of the Integral Assuming that we have a collection of quadrature points N j=1 as input and the function fj = f (vj) evaluated vj} { at these points, we can judiciously choose to evaluate the N convolution at another set of group elements i=1, so as to have a set of quadrature points to approximate an integral in a subsequent layer. Because we have restricted the integral (6) to the compact neighbourhood nbhd(u), we can define a proper sampling distribution µ |nbhd(u) to estimate the integral, unlike for the possibly unbounded G. Computing the outputs only at these target points, we use S) 2. Figure 4. Orbits of SO(2) and T(1)y containing input points in R Unlike T(2) and SE(2), not all points are not contained in a single orbit of these small groups. Generalizing Convolutional Neural Networks for Equivariance to Lie Groups on Arbitrary Continuous Data In this paper, we consider groups both large and small, and we require the ability to enable or disable equivariances like translations. To achieve this functionality, we need to go beyond the usual setting of homogeneous spaces considered in the literature, where every pair of elements in are related by an element in G. Instead, we consider the quotient /G, consisting of the distinct orbits of G in space Q = Q ∈ X is a homogeneous space of the group, and when is a homogeneous space of G then there is only a single orbit. But in general, there will be many distinct orbits, and lifting should preserve the information on which orbit each point is on. Discretizing (8) as we did in (7), we get hi = 1 ni nbhd(i) ˜kθ(log(v− 1 j ui), qi, qj)fj, (9) # jenbhd(i which again can be accelerated with the Efficient-PointConv 1 trick by feeding in aij = Concat([log(v− j ui), qi, qj]) as input to the MLP. If we want the filter to be local over orbits also, we can extend the distance d((ui, qi), (vj, qj))2 = 2, which need not be invariant to d(ui, vj)2 + α transformations on q. To the best of our knowledge, we are the first to systematically address equivariances of this kind, where # X Since the most general equivariant mappings will use this orbit information, throughout the network the space of el- ements should not be G but rather G ∈ X is lifted to the tuples (u, q) for u Q. This mapping may be one-to-one or one-to-many depending on the size of H, but will preserve the information in x as uoq = x where oq is the chosen origin for each orbit. Gen- eral equivariant linear transforms can depend on both the input and output orbit, and equivariance only constrains the dependence on group elements and not the orbits. When the space of orbits Q is continuous we can write the equivariant integral transform as noua) =f Betwaa sealiduteyaa’.— @) To recap, Algorithms 1 and 2 give a concise overview of our lifting procedure and our new convolution layer respectively. 
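As a companion to Algorithms 1 and 2, here is a deliberately unoptimized sketch of the Monte Carlo estimator in Eq. (7). It loops over neighborhoods explicitly instead of using the Efficient-PointConv trick, and it assumes a user-supplied `log_map` for the chosen group together with a kernel MLP like the one sketched earlier; both assumptions are for illustration only.

```python
# Unoptimized sketch of the Monte Carlo LieConv estimator in Eq. (7)
# (illustrative; the paper's implementation uses the Efficient-PointConv trick
# rather than this explicit double loop). `log_map` returns Lie-algebra
# coordinates of a group element and is assumed to be provided for the group.
import torch

def lie_conv(us, fs, kernel, log_map, radius):
    """us: list of (d, d) group-element matrices; fs: (N, c_in) feature rows."""
    outputs = []
    for i in range(len(us)):
        acc, count = None, 0
        for j in range(len(us)):
            a_ij = log_map(torch.linalg.inv(us[j]) @ us[i])  # log(v_j^{-1} u_i)
            if a_ij.norm() <= radius:                        # local neighborhood
                contrib = kernel(a_ij) @ fs[j]               # (c_out, c_in) @ (c_in,)
                acc = contrib if acc is None else acc + contrib
                count += 1
        outputs.append(acc / count)   # j == i is always included, so count >= 1
    return torch.stack(outputs)       # (N, c_out)
```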
Please consult Appendix C.1 for additional implementation details. Algorithm 1 Lifting from Algorithm 1 Lifting from to G /G × X N i=1 (xi ∈ X } { X (xi, fi) { Rcin ). , fi ∈ N K j=1. (uj, qj, fj) } Inputs: spatial data Returns: matrix-orbit-value tuples For each orbit q ∈ X For each oq, compute its stabilizer Hq. for i = 1, . . . , N do /G, choose an origin oq. Find the orbit qi ∈ X Sample j=1, where vj ∼ vj} { Compute an element ui ∈ K j=1. (uivj, qi, fi) Zi = } /G , s.t. xi ∈ qi. K µ(Hqi) (see C.2). G s.t. uioq = xi. { end return Z When G is the trivial group {id}, this equation simplifies to the integral transform h(x) = f k(x, 2") f(2")dax" where each element in Â¥ is in its own orbit. X In general, even if is a smooth manifold and G is a Lie /G is a manifold (Kono group it is not guaranteed that and Ishitoya, 1987). However in practice this is not an issue as we will only have a finite number of orbits present in the data. All we need is an invertible way of embedding the orbit information into a vector space to be fed into kθ. One option is to use an embedding of the orbit origin oq, or simply find enough invariants of the group to identify the orbit. To give a few examples: 1. Â¥ # X = Rd and G = SO(d) : Embed(q(x)) = ||| Algorithm 2 The Lie Group Convolution Layer Inputs: matrix-orbit-value tuples {(u;, qj, fj) } 741 Returns: convolved matrix-orbit-values {(u;, qi, hi) }™4 fori =1,...,mdo uy = exp(~log(u,)). nbhd; = {7 : d((ui, a), (uy, qj)) < r)}- | aij = Concat([log(u; bu;), gi. qj))- end hy = (1/ni) DV jenbha, ko (aij) fj (see A.1). end return (u, q, h) 2. X = Rd and G = R∗ : Embed(q(x)) = x x 3. # X # X = Rd and G = T(k) : Embed(q(x)) = x[k+1:d] ?When Â¥ is a homogeneous space and the quantity of interest is the quotient with the stabilizer of the origin H: G/H ~ X, which has been examined extensively in the literature. Here we concerned with the separate quotient space Q = Â¥/G, relevant when Â¥ is not a homogeneous space. # 5. Applications to Image and Molecular Data First, we evaluate LieConv on two types of problems: clas- sification on image data and regression on molecular data. With LieConv as the convolution layers, we implement a bottleneck ResNet architecture with a final global pooling layer (Figure 5). For a detailed architecture description, Generalizing Convolutional Neural Networks for Equivariance to Lie Groups on Arbitrary Continuous Data Table 1. Classification Error (%) on RotMNIST dataset for LieConv with different group equivariances and baselines: G-CNN (Cohen and Welling, 2016a), H-Net (Worrall et al., 2017), ORN (Zhou et al., 2017), TI-Pooling (Laptev et al., 2016), RotEqNet (Marcos et al., 2017), E(2)-Steerable CNNs (Weiler and Cesa, 2019) . Baseline Methods LieConv (Ours) G-CNN H-NET ORN TI-Pooling RotEqNet 2.28 1.69 1.54 1.2 1.09 E(2)-Steerable 0.68 Trivial 1.58 T(1)y 1.49 T(2) 1.44 SO(2) 1.42 ∗ SO(2)×R 1.27 SE(2) 1.24 Table 2. QM9 Molecular Property Mean Absolute Error Task Units ∆ε bohr3 meV α εHOMO meV εLUMO meV R2 µ U0 D cal/mol K meV meV bohr2 meV meV Cν G H U ZPVE meV NMP SchNet Cormorant LieConv(T3) .092 .235 .085 .084 69 63 61 49 43 41 34 30 38 34 38 25 .030 .033 .038 .032 .040 .033 .026 .038 19 14 20 22 17 14 21 24 .180 .073 .961 .800 20 19 21 19 20 14 22 19 1.500 1.700 2.027 2.280 see Appendix C.3. We use the same model architecture for all tasks and achieve performance competitive with task- specific specialized methods. # 5.1. 
Image Equivariance Benchmark The RotMNIST dataset consists of 12k randomly rotated MNIST digits with rotations sampled uniformly from SO(2), separated into 10k for training and 2k for validation. This commonly used dataset has been a standard benchmark for equivariant CNNs on image data. To apply LieConv to image data we interpret each input image as a collection = R2 with associated binary 28 points on of N = 28 784 values: i=1 to which we apply a circular center } crop. We note that LieConv is broadly targeting generic spatial data, and more practical equivariant methods exist specialized to images (e.g. Weiler and Cesa (2019)). How- ever, as we demonstrate in Table 1, we are able to easily incorporate equivariance to a variety of different groups without changes to the method or the architecture of the network, while achieving performance competitive with methods that are not applicable beyond image data. Linear Â¥ : xL BatchNorm BatchNorm Downsample v SHEN | sect Linear Loe Linear Figure 5. A visual overview of the LieConv model architecture, which is composed of L LieConv bottleneck blocks that couple the values at different group elements together. The BottleBlock is a residual block with a LieConv layer between two linear layers. # 5.2. Molecular Data Now we apply LieConv to the QM9 molecular property learning task (Wu et al., 2018). The QM9 regression dataset consists of small inorganic molecules encoded as a collec- tion of 3D spatial coordinates for each of the atoms, and their atomic charges. The labels consist of various properties of the molecules such as heat capacity. This is a challenging task as there is no canonical origin or orientation for each molecule, and the target distribution is invariant to E(3) (translation, rotation, and reflection) transformations of the coordinates. Successful models must generalize across dif- ferent spatial locations and orientations. We first perform an ablation study on the Homo problem of predicting the energy of the highest occupied molecular orbital for the molecules. We apply LieConv with different equivariance groups, combined with SO(3) data augmen- tation. The results are reported in Table 5.2. Of the three groups, our SE(3) network performs the best. We then apply T(3)-equivariant LieConv layers to the full range of tasks in the QM9 dataset and report the results in Table 2. We per- form competitively with state-of-the-art methods (Gilmer et al., 2017; Schütt et al., 2018; Anderson et al., 2019), with lowest MAE on several of the tasks. See B.1 for a demon- stration of the equivariance property and efficiency with Generalizing Convolutional Neural Networks for Equivariance to Lie Groups on Arbitrary Continuous Data Cormorant Trivial SO(3) T(3) SE(3) 34 31.7 65.4 29.6 26.8 K(p) + V (q). The dynamics can also be written compactly as @= JVzH for J = ", al: − Table 3. LieConv performance (Mean Absolute Error in meV) for different groups on the HOMO regression problem. limited data. — truth =--. HOGN — truth <=) HLieConv-T2 As shown in Greydanus et al. (2019), a neural network parametrizing ˆ Hθ(z, t) can be learned directly from tra- jectory data, providing substantial benefits in generaliza- tion over directly modeling Fθ(z, t), and with better en- ergy conservation. We follow the approach of Sanchez- Gonzalez et al. (2019) and Zhong et al. (2019). 
Given an z ˆ initial condition z0 and Fθ(z, t) = J Hθ, we employ a twice-differentiable model architecture and a differentiable ODE solver (Chen et al., 2018) to compute predicted states (ˆz1, . . . , ˆzT ) = ODESolve(z0, Fθ, (t1, t2, ..., tT )). The pa- rameters of the Hamiltonian model ˆ Hθ can be trained di- rectly through the L2 loss, L(θ) = 1 T T ˆzt − || 2 zt || 2. (11) t=1 # 6.2. Exact Conservation of Momentum Figure 6. A qualitative example of the trajectory predictions over 100 time steps on the 2D spring problem given a set of initial con- ditions. We see that HLieConv (right) yields predictions that are accurate over a longer time than HOGN (left), a SOTA architecture for modeling interacting physical systems. # 6. Modeling Dynamical Systems Accurate transition models for macroscopic physical sys- tems are critical components in control systems (Lenz et al., 2015; Kamthe and Deisenroth, 2017; Chua et al., 2018) and data-efficient reinforcement learning algorithms (Nagabandi et al., 2018; Janner et al., 2019). In this section we show how to enforce conservation of quantities such as linear and angular momentum in the modeling of Hamiltonian systems through LieConv symmetries. While equivariance is broadly useful as an inductive bias, it has a very special implication for the modeling of Hamil- tonian systems. Noether’s Hamiltonian theorem states that each continuous symmetry in the Hamiltonian of a dynami- cal system has a corresponding conserved quantity (Noether, 1971; Butterfield, 2006). Symmetry with respect to the con- tinuous transformations of translations and rotations lead directly to conservation of the total linear and angular mo- mentum of the system, an extremely valuable property for modeling dynamical systems. In fact, all models that ex- actly conserve linear and angular momentum must have a corresponding translational and rotational symmetry. See Appendix A.5 for a primer on Hamiltonian symmetries, Noether’s theorem, and the implications in the current set- ting. # 6.1. Predicting Trajectories with Hamiltonian Mechanics For dynamical systems, the equations of motion can be writ- ten in terms of the state z and time t: ˙z = F (z, t). Many physically occurring systems have Hamiltonian structure, meaning that the state can be split into generalized coordi- nates and momenta z = (q, p), and the dynamics can be written as As showed in Section 4, we can construct models that are equivariant to a large variety of continuous Lie Group sym- metries, and therefore we can exactly conserve associated quantities like linear and angular momentum. Figure 7(a) shows that using LieConv layers with a given T(2) and/or SO(2) symmetry, the model trajectories conserve linear and/or angular momentum with relative error close to ma- chine epsilon, determined by the integrator tolerance. As there is no corresponding Noether conservation for discrete symmetry groups, discrete approaches to enforcing symme- try (Cohen and Welling, 2016a; Marcos et al., 2017) would not be nearly as effective. d q dt = ∂ H ∂ p d p dt = − ∂ H ∂ q (10) for some choice of scalar Hamiltonian is often the total energy of the system, and can sometimes (q, p) = be split into kinetic and potential energy terms # H # 6.3. Results For evaluation, we compare a fully-connected (FC) Neural- ODE model (Chen et al., 2018), ODE graph networks Generalizing Convolutional Neural Networks for Equivariance to Lie Groups on Arbitrary Continuous Data (a) (b) (c) Figure 7. 
# 6.2. Exact Conservation of Momentum

As shown in Section 4, we can construct models that are equivariant to a large variety of continuous Lie group symmetries, and therefore we can exactly conserve associated quantities like linear and angular momentum. Figure 7(a) shows that using LieConv layers with a given T(2) and/or SO(2) symmetry, the model trajectories conserve linear and/or angular momentum with relative error close to machine epsilon, determined by the integrator tolerance. As there is no corresponding Noether conservation for discrete symmetry groups, discrete approaches to enforcing symmetry (Cohen and Welling, 2016a; Marcos et al., 2017) would not be nearly as effective.

# 6.3. Results

For evaluation, we compare a fully-connected (FC) Neural-ODE model (Chen et al., 2018), ODE graph networks (OGN) (Battaglia et al., 2016), Hamiltonian ODE graph networks (HOGN) (Sanchez-Gonzalez et al., 2019), and our own LieConv architecture on predicting the motion of point particles connected by springs as described in Sanchez-Gonzalez et al. (2019). Figure 6 shows example rollout trajectories, and our quantitative results are presented in Figure 7.

Figure 6. A qualitative example of the trajectory predictions over 100 time steps on the 2D spring problem given a set of initial conditions. We see that HLieConv (right) yields predictions that are accurate over a longer time than HOGN (left), a SOTA architecture for modeling interacting physical systems.

Figure 7. Left: We can directly control whether linear and angular momentum is conserved by changing the model's symmetries. The components of linear and angular momentum of the integrated trajectories are conserved up to integrator tolerance. Middle: Momentum along the rollout trajectories for the LieConv models with different imposed symmetries. Right: Our method outperforms HOGN, a state-of-the-art model, on both state rollout error and system energy conservation.

In the spring problem, N bodies with masses m_1, . . . , m_N interact through pairwise spring forces with constants k_1, . . . , k_N. The system preserves energy, linear momentum, and angular momentum. The behavior of the system depends on both the values of the system parameters (s = (k, m)) and the initial conditions z0. The dynamics model must learn not only to predict trajectories across a broad range of initial conditions, but also to infer the dependence on varied system parameters, which are additional inputs to the model. We compare models that attempt to learn the dynamics Fθ(z, t) = dz/dt directly against models that learn the Hamiltonian as described in Section 6.1.

In Figures 7(a) and 7(b) we show that by changing the invariance of our Hamiltonian models, we have direct control over the conservation of linear and angular momentum in the predicted trajectories. Figure 7(c) demonstrates that our method outperforms HOGN, a SOTA architecture for dynamics problems, and achieves significant improvement over the naïve fully-connected (FC) model. We summarize the various models and their symmetries in Table 6. Finally, in Figure 8 we evaluate test MSE of the different models over a range of training dataset sizes, highlighting the additive improvements in generalization from the Hamiltonian, Graph-Network, and equivariance inductive biases successively.

Figure 8. Test MSE as a function of the number of examples in the training dataset, N. As the inductive biases of Hamiltonian, Graph-Network, and LieConv equivariance are added, generalization performance improves. LieConv outperforms the other methods across all dataset sizes. The shaded region corresponds to a 95% confidence interval, estimated across 3 trials.

# 7. Discussion

We presented a convolutional layer to build networks that can handle a wide variety of data types, and flexibly swap out the equivariance of the model. While the image, molecular, and dynamics experiments demonstrate the generality of our method, there are many exciting application domains (e.g. time-series, geostats, audio, mesh) and directions for future work. We also believe that it will be possible to benefit from the inductive biases of HLieConv models even for systems that do not exactly preserve energy or momentum, such as those found in control systems and reinforcement learning.

The success of convolutional neural networks on images has highlighted the power of encoding symmetries in models for learning from raw sensory data. But the variety and complexity of other modalities of data is a significant challenge in further developing this approach.
More general data may not be on a grid, it may possess other kinds of symmetries, or it may contain quantities that cannot be easily combined. We believe that central to solving this problem is a decou- pling of convenient computational representations of data as dense arrays from the set of geometrically sensible opera- Generalizing Convolutional Neural Networks for Equivariance to Lie Groups on Arbitrary Continuous Data tions they may have. We hope to move towards models that can ‘see’ molecules, dynamical systems, multi-scale objects, heterogeneous measurements, and higher mathematical ob- jects, in the way that convolutional neural networks perceive images. Taco S Cohen and Max Welling. Steerable cnns. arXiv preprint arXiv:1612.08498, 2016b. Taco S Cohen, Mario Geiger, Max Welling. arXiv:1801.10130, 2018. Spherical cnns. Jonas Köhler, and arXiv preprint # Acknowledgements MF, SS, PI and AGW are supported by an Amazon Research Award, Amazon Machine Learning Research Award, Face- book Research, NSF I-DISRE 193471, NIH R01 DA048764- 01A1, NSF IIS-1910266, NSF 1922658 NRT-HDR: FU- TURE Foundations, Translation, and Responsibility for Data Science, and by the United States Department of De- fense through the National Defense Science & Engineering Graduate (NDSEG) Fellowship Program. We thank Alex Wang for helpful comments. Taco S Cohen, Mario Geiger, and Maurice Weiler. A gen- eral theory of equivariant cnns on homogeneous spaces. In Advances in Neural Information Processing Systems, pages 9142–9153, 2019. Jifeng Dai, Haozhi Qi, Yuwen Xiong, Yi Li, Guodong Zhang, Han Hu, and Yichen Wei. Deformable convolu- tional networks. In Proceedings of the IEEE international conference on computer vision, pages 764–773, 2017. Ethan Eade. Lie groups for computer vision. Cambridge Univ., Cam-bridge, UK, Tech. Rep, 2014. # References Brandon Anderson, Truong Son Hy, and Risi Kondor. Cor- morant: Covariant molecular neural networks. In Ad- vances in Neural Information Processing Systems, pages 14510–14519, 2019. Peter Battaglia, Razvan Pascanu, Matthew Lai, Interaction net- Danilo Jimenez Rezende, et al. works for learning about objects, relations and physics. In Advances in neural information processing systems, pages 4502–4510, 2016. Erik J Bekkers. B-spline cnns on lie groups. arXiv preprint arXiv:1909.12057, 2019. Carlos Esteves, Christine Allen-Blanchette, Xiaowei Zhou, and Kostas Daniilidis. Polar transformer networks. arXiv preprint arXiv:1709.01889, 2017. Carlos Esteves, Christine Allen-Blanchette, Ameesh Maka- dia, and Kostas Daniilidis. Learning so (3) equivariant representations with spherical cnns. In Proceedings of the European Conference on Computer Vision (ECCV), pages 52–68, 2018. Justin Gilmer, Samuel S Schoenholz, Patrick F Riley, Oriol Vinyals, and George E Dahl. Neural message passing for quantum chemistry. In Proceedings of the 34th In- ternational Conference on Machine Learning-Volume 70, pages 1263–1272. JMLR. org, 2017. L. C. Blum and J.-L. Reymond. 970 million druglike small molecules for virtual screening in the chemical universe database GDB-13. J. Am. Chem. Soc., 131:8732, 2009. Samuel Greydanus, Misko Dzamba, and Jason Yosinski. In Advances in Neural Hamiltonian neural networks. Information Processing Systems, pages 15353–15363, 2019. Jeremy Butterfield. On symmetry and conserved quanti- ties in classical mechanics. In Physical theory and its interpretation, pages 43–100. Springer, 2006. Tian Qi Chen, Yulia Rubanova, Jesse Bettencourt, and David K Duvenaud. 
Neural ordinary differential equa- tions. In Advances in neural information processing sys- tems, pages 6571–6583, 2018. Kurtland Chua, Roberto Calandra, Rowan McAllister, and Sergey Levine. Deep reinforcement learning in a handful In Ad- of trials using probabilistic dynamics models. vances in Neural Information Processing Systems, pages 4754–4765, 2018. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Pro- ceedings of the IEEE conference on computer vision and pattern recognition, pages 770–778, 2016. Zhiwu Huang, Chengde Wan, Thomas Probst, and Luc Van Gool. Deep learning on lie groups for skeleton-based action recognition. In Proceedings of the IEEE confer- ence on computer vision and pattern recognition, pages 6099–6108, 2017. Jörn-Henrik Jacobsen, Bert De Brabandere, and Arnold WM Smeulders. Dynamic steerable blocks in deep residual networks. arXiv preprint arXiv:1706.00598, 2017. Taco Cohen and Max Welling. Group equivariant convolu- tional networks. In International conference on machine learning, pages 2990–2999, 2016a. Michael Janner, Justin Fu, Marvin Zhang, and Sergey Levine. When to trust your model: Model-based policy optimization. arXiv preprint arXiv:1906.08253, 2019. Generalizing Convolutional Neural Networks for Equivariance to Lie Groups on Arbitrary Continuous Data Sanket Kamthe and Marc Peter Deisenroth. Data-efficient reinforcement learning with probabilistic model predic- tive control. arXiv preprint arXiv:1706.06491, 2017. Robotics and Automation (ICRA), pages 7559–7566. IEEE, 2018. Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014. Risi Kondor and Shubhendu Trivedi. On the general- ization of equivariance and convolution in neural net- works to the action of compact groups. arXiv preprint arXiv:1802.03690, 2018. Akira Kono and Kiminao Ishitoya. Squaring operations in mod 2 cohomology of quotients of compact lie groups by maximal tori. In Algebraic Topology Barcelona 1986, pages 192–206. Springer, 1987. Emmy Noether. Invariant variation problems. Transport Theory and Statistical Physics, 1(3):186–207, 1971. Prajit Ramachandran, Barret Zoph, and Quoc V Le. arXiv preprint Searching for activation functions. arXiv:1710.05941, 2017. M. Rupp, A. Tkatchenko, K.-R. Müller, and O. A. von Lilienfeld. Fast and accurate modeling of molecular atom- ization energies with machine learning. Physical Review Letters, 108:058301, 2012. Alvaro Sanchez-Gonzalez, Victor Bapst, Kyle Cranmer, and Peter Battaglia. Hamiltonian graph networks with ode integrators. arXiv preprint arXiv:1909.12790, 2019. James J Kuffner. Effective sampling and distance metrics for 3d rigid body path planning. In IEEE International Conference on Robotics and Automation, 2004. Proceed- ings. ICRA’04. 2004, volume 4, pages 3993–3998. IEEE, 2004. Kristof T Schütt, Huziel E Sauceda, P-J Kindermans, Alexandre Tkatchenko, and K-R Müller. Schnet–a deep learning architecture for molecules and materials. The Journal of Chemical Physics, 148(24):241722, 2018. Dmitry Laptev, Nikolay Savinov, Joachim M Buhmann, and Marc Pollefeys. Ti-pooling: transformation-invariant pooling for feature learning in convolutional neural net- works. In Proceedings of the IEEE Conference on Com- puter Vision and Pattern Recognition, pages 289–297, 2016. Hugo Larochelle, Dumitru Erhan, Aaron Courville, James Bergstra, and Yoshua Bengio. An empirical evaluation of deep architectures on problems with many factors of vari- ation. 
In Proceedings of the 24th international conference on Machine learning, pages 473–480, 2007. Martin Simonovsky and Nikos Komodakis. Dynamic edge- conditioned filters in convolutional neural networks on graphs. In CVPR, 2017. Nathaniel Thomas, Tess Smidt, Steven Kearnes, Lusann Yang, Li Li, Kai Kohlhoff, and Patrick Riley. Ten- sor field networks: Rotation-and translation-equivariant neural networks for 3d point clouds. arXiv preprint arXiv:1802.08219, 2018. General e (2)- equivariant steerable cnns. In Advances in Neural Infor- mation Processing Systems, pages 14334–14345, 2019. Yann LeCun, Yoshua Bengio, et al. Convolutional networks for images, speech, and time series. The handbook of brain theory and neural networks, 3361(10):1995, 1995. Ian Lenz, Ross A Knepper, and Ashutosh Saxena. Deepmpc: Learning deep latent features for model predictive control. In Robotics: Science and Systems. Rome, Italy, 2015. Diego Marcos, Michele Volpi, Nikos Komodakis, and Devis Tuia. Rotation equivariant vector field networks. In Proceedings of the IEEE International Conference on Computer Vision, pages 5048–5057, 2017. Cleve Moler and Charles Van Loan. Nineteen dubious ways to compute the exponential of a matrix, twenty-five years later. SIAM review, 45(1):3–49, 2003. Maurice Weiler, Mario Geiger, Max Welling, Wouter Boomsma, and Taco S Cohen. 3d steerable cnns: Learn- ing rotationally equivariant features in volumetric data. In Advances in Neural Information Processing Systems, pages 10381–10392, 2018. Benjamin Willson. Reiter nets for semidirect products of amenable groups and semigroups. Proceedings of the American Mathematical Society, 137(11):3823–3832, 2009. Daniel E Worrall, Stephan J Garbin, Daniyar Turmukham- betov, and Gabriel J Brostow. Harmonic networks: Deep translation and rotation equivariance. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5028–5037, 2017. Anusha Nagabandi, Gregory Kahn, Ronald S Fearing, and Sergey Levine. Neural network dynamics for model- based deep reinforcement learning with model-free fine- In 2018 IEEE International Conference on tuning. Wenxuan Wu, Zhongang Qi, and Li Fuxin. Pointconv: Deep convolutional networks on 3d point clouds. In Proceed- ings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 9621–9630, 2019. Generalizing Convolutional Neural Networks for Equivariance to Lie Groups on Arbitrary Continuous Data Zhenqin Wu, Bharath Ramsundar, Evan N Feinberg, Joseph Gomes, Caleb Geniesse, Aneesh S Pappu, Karl Leswing, and Vijay Pande. Moleculenet: a benchmark for molecu- lar machine learning. Chemical science, 9(2):513–530, 2018. Sergey Zagoruyko and Nikos Komodakis. Wide residual networks. arXiv preprint arXiv:1605.07146, 2016. Yaofeng Desmond Zhong, Biswadip Dey, and Amit Chakraborty. Symplectic ode-net: Learning hamiltonian dynamics with control. arXiv preprint arXiv:1909.12077, 2019. Yanzhao Zhou, Qixiang Ye, Qiang Qiu, and Jianbin Jiao. Oriented response networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 519–528, 2017. Generalizing Convolutional Neural Networks for Equivariance to Lie Groups on Arbitrary Continuous Data # Appendices # A. Derivations and Additional Methodology # A.1. Generalized PointConv Trick at indices i and j is also necessary when there are differ- ent numbers of points per minibatch but batched together using zero padding. 
The generalized PointConv trick can thus be applied in batch mode when there may be varied number of points per example and varied number of points per neighborhood. The matrix notation becomes very cumbersome for manipu- lating these higher order n-dimensional arrays, so we will instead use index notation with Latin indices i, j, k index- ing points, Greek indices α, β, γ indexing feature channels, and c indexing the coordinate dimensions of which there are d = 3 for PointConv and d = dim(G) + 2 dim(Q) for LieConv.3 As the objects are not geometric tensors but simply n-dimensional arrays, we will make no distinction between upper and lower indices. After expanding into in- dices, it should be assumed that all values are scalars, and that any free indices can range over all of the values. Let kα,β ac ij} { as input and acts independently over the locations i, j. For xc PointConv, the input ac ij = xc j and for LieConv the 1 j ui), qi, qj])c. input ac We wish to compute hα i = ij f β kα,β j . (12) # A.2. Abelian G and Coordinate Transforms in a single orbit, the For Abelian groups that cover computation is very similar to ordinary Euclidean convo- lution. Defining ai = log(ui), bj = log(vj), and using bj eai = eai the fact that e− j ui) = − bj). Defining ˜f = f exp; exp)(ai − (log ◦ we get 1 n ˜h(ai) = (˜kθ ◦ bj) ˜f (bj), proj)(ai − nbhd(i) (15) jenbhd(é) where proj = logo exp projects to the image of the loga- rithm map. Apart from a projection and a change to logarith- mic coordinates, this is equivalent to Euclidean convolution in a vector space with dimensionality of the group. When the group is Abelian and Â¥ is a homogeneous space, then the dimension of the group is the dimension of the input. In these cases we have a trivial stabilizer group H and single origin 0, so we can view f and h as acting on the input Xi = Ujo. In Wu et al. (2019), it was observed that since kee is the output of an MLP, kee =>, wees i; for some final weight matrix W and ‘penultimate activations 8) (s} 5 simply the result of the MLP after the last nontiaeerity). With this in mind, we can rewrite (12) This directly generalizes some of the existing coordinate transform methods for achieving equivariance from the liter- ature such as log polar coordinates for rotation and scaling equivariance (Esteves et al., 2017), and using hyperbolic coordinates for squeeze and scaling equivariance. hα i = W α,β γ sγ i,j f β j (13) s},f;) = j (14) In practice, the intermediate number of channels is much less than the product of cin and cout: and so this reordering of the computation leads to a massive reduction in both memory and compute. Furthermore, bγ,β j sγ j can be implemented with regular ma- i = trix multiplication and hα β,γ W α,β can be also ε W α,εbε by flattening (β, γ) into a single axis ε: hα i . The sum over index j can be restricted to a subset j(i) (such as a chosen neighborhood) by computing f β ) at each of the ( · required indices and padding to the size of the maximum subset with zeros, and computing bγ,β j(i) us- ing dense matrix multiplication. Masking out of the values Log Polar Coordinates: Consider the Abelian Lie group SO(2) acting of positive scalings and rotations: G = R∗ on R2. Elements of the group M G can be expressed as a 2 × r cos(θ) r sin(θ) r sin(θ) − r cos(θ) M (r, θ) = R. 
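Stepping back to the channel reordering of Eqs. (12)–(14) in A.1 above, the two contraction orders can be written with `einsum`. This is a sketch under assumed, illustrative shapes (not the released implementation); `N`, `n`, `c_in`, `c_out`, and `c_mid` are placeholder sizes and `W` stands in for the final linear layer of the kernel MLP.

```python
import torch

# Assumed shapes: N query points, n neighbors each, c_in input channels,
# c_out output channels, c_mid penultimate kernel channels (c_mid << c_in * c_out).
N, n, c_in, c_out, c_mid = 100, 32, 64, 64, 16
s = torch.randn(N, n, c_mid)              # s^gamma_{ij}: penultimate MLP activations
f = torch.randn(N, n, c_in)               # f^beta_j: features of the neighbors of each point
W = torch.randn(c_out, c_mid, c_in)       # final weight matrix of the kernel MLP

# Naive evaluation: materialize k^{alpha,beta}(a_ij), memory ~ N * n * c_out * c_in.
k = torch.einsum("agb,ijg->ijab", W, s)   # (N, n, c_out, c_in)
h_naive = torch.einsum("ijab,ijb->ia", k, f)

# Reordered evaluation of Eqs. (13)-(14): contract s with f first, memory ~ N * c_mid * c_in.
b = torch.einsum("ijg,ijb->igb", s, f)    # b^{gamma,beta}_i
h_fast = torch.einsum("agb,igb->ia", W, b)

assert torch.allclose(h_naive, h_fast, rtol=1e-3, atol=1e-2)
```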
The matrix logarithm is* for r R+ and θ ∈ ∈ θ mod 2π log(r) r cos(θ) r sin(θ) r sin(θ) − r cos(θ) log(r) θ mod 2π log = − or more compactly log(M(r,4)) = log(r)I+(@ mod 27) J, which is [log(7’), 9 mod 27’ in the basis for the Lie algebra [I, J]. It is clear that proj = logo exp is simply mod 27 on the J component. As R2 is a homogeneous space of G, one can choose the R2. A little algebra shows that global origin o = [1, 0] ∈ 3dim(Q) is the dimension of the space into which Q, the orbit identifiers, are embedded. 4Here θ mod 2π is defined to mean θ + 2πn for the integer n such that the value is in (−π, π), consistent with the principal matrix logarithm. (θ + π)%2π − π in programming notation. , Generalizing Convolutional Neural Networks for Equivariance to Lie Groups on Arbitrary Continuous Data lifting to the group yields the transformation u; = M (rj, 4;) for each point pj; = ujo, where r = \/a2?2+y?, and 0 = atan2(y, x) are the polar coordinates of the point p;. Observe that the logarithm of vj uj has a simple expression highlighting the fact that it is invariant to scale and rotational transformations of the elements, # A.3. Sufficient Conditions for Geodesic Distance In general, the function d(u, v) = || log(v~*u)|| 7, defined on the domain of GL(d) covered by the exponential map, satisfies the first three conditions of a distance metric but not the triangle inequality, making it a semi-metric: 1 log(v− j ui) = log(M (rj, θj)− 1M (ri, θi)) θj mod 2π)J. = log(ri/rj)I + (θi − 1. d(u, v) 0 ≥ 2. d(u, v) = 0 3. d(u, v) = 1v) = 0 log(u− ⇔ log(v− = u = v ⇔ log(u− = d(v, u). Now writing out our Monte Carlo estimation of the integral: h(pi) = 1 n ˜kθ(log(ri/rj), θi − θj mod 2π)f (pj), # j which is a discretization of the log polar convolution from Esteves et al. (2017). This can be trivially extended to encompass cylindrical coordinates with the group T (1) R∗ However for certain subgroups of GL(d) with additional structure, the triangle inequality holds and the function is the distance along geodesics connecting group elements u and v according to the metric tensor (A,B), == Tr(ATu-Tu7'B), (16) (A,B), T denotes inverse and transpose. − × Hyperbolic coordinates: For another nontrivial example, SQ consider the group of scalings and squeezes G = R∗ × R2 : x > acting on the positive orthant . Elements of the group can be expressed as the 0, y > 0 } product of a squeeze mapping and a scaling mon Jb J-E a] R*. As the group is abelian, the logarithm for any r, s splits nicely in terms of the two generators I and A: m((5 f])-oonl fmol 8) Again Â¥ is a homogeneous space of G’, and we choose a single origin o = [1,1]. With a little algebra, it is clear that M(ri,si)o = pi where r = \/zy and s = \/x/y are the hyperbolic coordinates of p;. Specifically, if the subgroup G is in the image of the exp : g G map and each infinitesmal generator commutes with its transpose: [A, AT ] = 0 for g, then d(u, v) = → A ∈ 1u) log(v− is the geodesic distance between u,v. Geodesic Equation: Geodesics of (16) satisfying V4) = 0 can equivalently be derived by minimizing the energy functional 1 using the calculus of variations. Minimizing curves y(t), connecting elements u and v in G (7(0) = v,y(1) = uw) satisfy 1 0 = δE = δ Tr( ˙γT γ− T γ− 1 ˙γ)dt 0 Noting that δ(γ− trace, 1) = − 1δγγ− 1 and the linearity of the Expressed in the basis we see that B = [I, A] for the Lie algebra above, 1 log(v− j ui) = log(ri/rj)I + log(si/sj)A 2 1 Tr( ˙γT γ− T γ− 1δ ˙γ) − Tr( ˙γT γ− T γ− 1δγγ− 1 ˙γ)dt = 0. 
0 yielding the expression for convolution Using the cyclic property of the trace and integrating by parts, we have that h(pi) = 1 n ˜kθ(log(ri/rj), log(si/sj))f (pj), # j which is equivariant to squeezes and scalings. As demonstrated, equivariance to groups that contain the input space in a single orbit and are abelian can be achieved with a simple coordinate transform; however our approach generalizes to groups that are both ’larger’ and ’smaller’ than the input space, including coordinate transform equivariance as a special case. 1 dip _-7.- 1::T.-T.- 2 [ m((Z0% Pa Nay yyty ty ‘)on Jar=o, 0 1 0 T γ− 1δγ) vanishes since where the boundary term Tr( ˙γγ− (δγ)(0) = (δγ)(1) = 0. As δγ may be chosen to vary arbitrarily along the path, γ must satisfy the geodesic equation: d dt ( ˙γT γ− T γ− 1) + γ− 1 ˙γ ˙γT γ− T γ− 1 = 0. (17) Generalizing Convolutional Neural Networks for Equivariance to Lie Groups on Arbitrary Continuous Data 1u) satisfies [A, AT ] = 0, Solutions: When A = log(v− 1u)) is a solution to the the curve γ(t) = v exp(t log(v− geodesic equation (17). Clearly γ connects u and v, γ(0) = v and γ(1) = u. Plugging in ˙γ = γA into the left hand side of equation (17), we have the subsampled set because the distances are left invariant d(ui, uj) = d(wui, wuj). Now we can use either of these methods for Subp( ) to · equivariantly subsample the quadrature points in each neigh- borhood used to estimate the integral to a fixed number p, = d dt (AT γ− 1 1) + AAT γ− AT γ− = = [A, AT ]γ− 1 ˙γγ− 1 1 + AAT γ− − 1 = 0 Length of γ: The length of the curve γ connecting u and v is ~ [ Tr(AP A)dt = ||Al|r = ||log(v'u) || hi = 1 p Subp(nbhd(ui)) 1 kθ(v− j ui)fj. (19) # j €Subp Doing so has reduced the cost of estimating the convolution from O(N 2) to O(pN ), ignoring the cost of computing Subp and # N i=1. nbhd(ui) } { # A.5. Review and Implications of Noether’s Theorem In the Hamiltonian setting, Noether’s theorem relates the continuous symmetries of the Hamiltonian of a system with conserved quantities, and has been deeply impactful in the understanding of classical physics. We give a review of Noether’s theorem, loosely following Butterfield (2006). Of the Lie Groups that we consider in this paper, all of which have a single connected component, the groups G = SQ satisfy this property SO(d), R∗ T (d), SO(d), R∗ that [g, gT ] = 0; however, the SE(d) groups do not. # A.4. Equivariant Subsampling Even if all distances and neighborhoods are precomputed, the cost of computing equation (6) for i = 1, ..., N is still quadratic, O(nN ) = O(N 2), because the number of points in each neighborhood n grows linearly with N as f is more densely evaluated. So that our method can scale to handle a large number of points, we show two ways two equivariantly subsample the group elements, which we can use both for the locations at which we evaluate the convolution and the locations that we use for the Monte Carlo estimator. Since the elements are spaced irregularly, we cannot readily use the coset pooling method described in (Cohen and Welling, 2016a), instead we can perform: # More on Hamiltonian Dynamics As introduced earlier, the Hamiltonian is a function acting on the state H(z) = H(q, p), (we will ignore time depen- dence for now) can be viewed more formally as a function on the cotangent bundle (q, p) = z M = T ∗C where C is the coordinate configuration space, and this is the setting for Hamiltonian dynamics. 
In general, on a manifold M as an assignment of a directional derivative along each point z coordinate charts X = acts on functions f by X(f ) = each of the components X α are functions of z. In Hamiltonian mechanics, for two functions on M , there is the Poisson bracket which can be written in terms of the canonical coordinates qi, pi, 5 Random Selection: Randomly selecting a subset of p points from the original n preserves the original sampling distribution, so it can be used. f, g { } = ∂f ∂pi ∂g ∂qi − ∂f ∂qi ∂g ∂pi . Farthest Point Sampling: Given a set of group elements G, we can select a subset S∗p of size p by S = ui} maximizes the minimum distance between any two elements in that subset, Subp(S) := S∗p = arg max Sp S u,v min Sp:u ∈ =v d(u, v), (18) ⊂ farthest point sampling on the group. Acting on a set S∗p , the farthest point sub- of elements, Subp : S sampling is equivariant Subp(wS) = wSubp(S) for any G. Meaning that applying a group element to each w of the elements does not change the chosen indices in The Poisson bracket can be used to associate each function f to a vector field Xf = f, { ·} = ∂f ∂pi ∂ ∂qi − ∂f ∂qi ∂ ∂pi , which specifies, by its action on another function g, the di- rectional derivative of g along Xf : Xf (g) = . Vector fields that can be written in this way are known as Hamil- tonian vector fields, and the Hamiltonian dynamics of the 5Here we take the definition of the Poisson bracket to be nega- tive of the usual definition in order to streamline notation. Generalizing Convolutional Neural Networks for Equivariance to Lie Groups on Arbitrary Continuous Data system is a special example XH = . This vector field in canonical coordinates z = (p, q) is the vector field the symplectic gradient, as ∇zH (i.e. XH = F (z) = J discussed in Section 6.1). Making this connection clear, a given scalar quantity evolves through time as ˙f = . } But this bracket can be used to evaluate the rate of change of a scalar quantity along the flows of vector fields other than the dynamics, such as the flows of continuous symmetries. # Noether’s Theorem The flow φX R of a vector field X is the set of integral curves, the unique solution to the system of ODEs ˙zα = X α with initial condition z and at parameter value λ, or more abstractly the iterated application of X: φX λ = exp(λX). Continuous symmetries transformation are the transformations that can be written as the flow φX λ of a vector field. The directional derivative characterizes how a function such as the Hamiltonian changes along the flow of X and is a special case of the Lie Derivative # L LX H = d dλ (H ◦ φX λ ) λ=0 = X(H) A scalar function is invariant to the flow of a vector field if and only if the Lie Derivative is zero This implication goes both ways, if f is conserved then φXf λ is necessarily a symmetry of the Hamiltonian, and if φXf λ is a symmetry of the Hamiltonian then f is conserved. # Hamiltonian vs Dynamical Symmetries So far we have been discussing Hamiltonian symmetries, in- variances of the Hamiltonian. But in the study of dynamical systems there is a related concept of dynamical symmetries, symmetries of the equations of motion. This notion is also captured by the Lie Derivative, but between vector fields. A dynamical system ˙z = F (z), has a continuous dynami- cal symmetry φX λ if the flow along the dynamical system commutes with the symmetry: λ (φF φX t (z)) = φF t (φX λ (z)). 
(20) Meaning that applying the symmetry transformation to the state and then flowing along the dynamical system is equiv- alent to flowing first and then applying the symmetry trans- formation. Equation (20) is satisfied if and only if the Lie Derivative is zero: # where [ · LX F = [X, F ] = 0, ] is the Lie bracket on vector fields.7 · H(φX λ (z)) = H(z) ⇔ LX H = 0. For all transformations that respect the Poisson Bracket6, which we add as a requirement for a symmetry, the vector field X is (locally) Hamiltonian and there exists a function If M is a contractible f such that X = Xf = f, { domain such as R2n, then f is globally defined. For every continuous symmetry φXf λ , For Hamiltonian systems, every Hamiltonian symmetry is also a dynamical symmetry. In fact, it is not hard to show that the Lie and Poisson brackets are related, [Xf , Xg] = X { f,g } and this directly shows the implication. If Xf is a Hamilto- = 0, and then nian symmetry, } [Xf , F ] = [Xf , XH ] = X { { = 0. f,H } = H, f = XH (f ), # LXf H = Xf (H) = # f, H { } −{ } − by the antisymmetry of the Poisson bracket. So if φX λ is a symmetry of H, then X = Xf for some function f , and H(φXf λ (z)) = H(z) implies LXf H = 0 f (φXH τ ⇔ LXH f = 0 (z)) = f (z) ⇔ or in other words f (z(t+τ )) = f (z(t)) and f is a conserved quantity of the dynamics. However, the converse is not true, dynamical symmetries of a Hamiltonian system are not necessarily Hamiltonian symmetries and thus might not correspond to conserved quantities. Furthermore even if the system has a dynamical symmetry which is the flow along a Hamiltonian vector field φX , but the dynamics F are not Hamil- λ , X = Xf = tonian, then the dynamics will not conserve f in general. Both the symmetry and the dynamics must be Hamiltonian for the conservation laws. ®More precisely, the Poisson Bracket can be formulated in a coordinate free manner in terms of a symplectic two form w, {f,g} = w(X>,X,). In the original coordinates w = )>; dpi A dq‘, and this coordinate basis, w is represented by the matrix J from earlier. The dynamics Xy are determined by dH = w(Xu,:) = Lx ,,W. Transformations which respect the Poisson Bracket are symplectic, £.xw = 0. With Cartan’s magic formula, this implies that d(uxw) = 0. Because the form 1xw is closed, Poincare’s Lemma implies that locally (.xw) = df) for some function f and hence X = Xf is (locally) a Hamiltonian vector field. For more details see Butterfield (2006). This fact is demonstrated by Figure 9, where the dynamics of the (non-Hamiltonian) equivariant LieConv-T(2) model has a T(2) dynamical symmetry with the generators ∂x, ∂y which are Hamiltonian vector fields for f = px, f = py, and yet linear momentum is not conserved by the model. 7The Lie bracket on vector fields produces another vector field and is defined by how it acts on functions, for any smooth function g: [X, F ](g) = X(F (g)) − F (X(g)) Generalizing Convolutional Neural Networks for Equivariance to Lie Groups on Arbitrary Continuous Data Ener 100 mosey 50 0 50 100 150 200 250 25 Linear Momentum 2.0 15 0 50 100 t 150 200 250 s— LieConv-T2 == HLieConv-Trivial === HLieConv-T2 ----~ Truth ˙qm = Aq, ˙pm = Aq which have the solution φXnL θ (qm, pm) = (eθAqm, eθApm) = (Rθqm, Rθpm), where Rθ is a rotation about the axis n by the angle θ, which follows from the Rodriguez rotation formula. Therefore, the flow of the Hamiltonian vector field of angular momentum along a given axis is a global rotation of the position and momentum of each particle about that axis. 
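As a small numerical counterpart to these conservation statements (an illustrative check, not code from the paper), the drift of the total linear momentum Σ_m p_m and, in 2D, the scalar angular momentum Σ_m (q_m × p_m) can be measured directly along a predicted rollout:

```python
import torch

def momentum_drift(q, p):
    """q, p: (T, N, 2) positions and momenta of N bodies over T timesteps.
    Returns the worst-case relative drift of total linear and angular momentum."""
    P = p.sum(dim=1)                                                   # (T, 2) total linear momentum
    L = (q[..., 0] * p[..., 1] - q[..., 1] * p[..., 0]).sum(dim=1)     # (T,) z-component of q x p
    lin_drift = (P - P[0]).norm(dim=-1).max() / (P[0].norm() + 1e-12)
    ang_drift = (L - L[0]).abs().max() / (L[0].abs() + 1e-12)
    return lin_drift, ang_drift

# e.g. on a rollout z of shape (T, N, 4), split into positions and momenta:
z = torch.randn(500, 6, 4)          # placeholder trajectory; a real one comes from the integrator
q, p = z[..., :2], z[..., 2:]
print(momentum_drift(q, p))         # near machine epsilon for a T(2)/SO(2)-invariant Hamiltonian
```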
Again, the dynamics of a neural network modeling a Hamiltonian con- serve total angular momentum if and only if the network is invariant to simultaneous rotation of all particle positions and momenta. Figure 9. Equivariance alone is not sufficient, for conservation we need both to model H and incorporate the given symmmetry. For comparison, LieConv-T(2) is T(2)-equivariant but models F , and HLieConv-Trivial models H but is not T(2)-equivariant. Only HLieConv-T(2) conserves linear momentum. # B. Additional Experiments # B.1. Equivariance Demo # Conserving Linear and Angular Momentum Consider a system of N interacting particles described in Eu- clidean coordinates with position and momentum qim, pim, such as the multi-body spring problem. Here the first index i = 1, 2, 3 indexes the spatial coordinates and the second m = 1, 2, ..., N indexes the particles. We will use the bolded notation qm, pm to suppress the spatial indices, but still indexing the particles m as in Section 6.1. The total linear momentum along a given direction n is n m pm). Expanding the Poisson bracket, the Hamiltonian vector field 6) Yim ∂ ∂qm n P, = n XnP = = ni { · ·} · # m im m which has the flow OX"? (Gms Pm) = (dm + An, Pm), a translation of all particles by An. So our model of the Hamiltonian conserves linear momentum if and only if it is invariant to a global translation of all particles, (e.g. T(2) invariance for a 2D spring system). While (7) shows that the convolution estimator is equivari- ant, we have conducted the ablation study below examining the equivariance of the network empirically. We trained LieConv (Trivial, T(3), SO(3), SE(3)) models on a limited subset of 20k training examples (out of 100k) of the HOMO task on QM9 without any data augmentation. We then evalu- ate these models on a series of modified test sets where each example has been randomly transformed by an element of the given group (the test translations in T(3) and SE(3) are sampled from a normal with stddev 0.5). In table B.1 the rows are the models configured with a given group equiv- ariance and the columns N/G denote no augmentation at training time and transformations from G applied to the test set (test translations in T(3) and SE(3) are sampled from a normal with stddev 0.5). Model N/N N/T(3) N/SO(3) N/SE(3) Trivial T(3) SO(3) SE(3) 173 113 159 62 183 113 238 62 239 133 160 63 243 133 240 62 The total angular momentum along a given axis n is Table 4. Test MAE (in meV) on HOMO test set randomly trans- formed by elements of G. Despite no data augmentation (N), G equivariant models perform as well on G transformed test data. n-L=n- » Am XPm = m ijkym m > €igkNidjmPkm = > PmAdm n · m ijkym m , where €;;;, is the Levi-Civita symbol and we have defined the antisymmetric matrix A by Ay; = 0; €ijnni- # €ijnni- # mAqm Notably, the performance of the LieConv-G models do not degrade when random G transformations are applied to the test set. Also, in this low data regime, the added equivari- ances are especially important. (2) (a) Xout = {n-L,-}= Arjdjm=— — Ajr —— nL = { a} >») a ae ikPim 6) Xt T AT T 4T ¢ m= Do (ana + Pn GS) m # m where the second line follows from the antisymmetry of A. We can find the flow of XnL from the differential equations # B.2. RotMNIST Comparison While the RotMNIST dataset consists of 12k rotated MNIST digits, it is standard to separate out 10k to be used for train- ing and 2k for validation. 
However, in Ti-Pooling and E(2)- Steerable CNNs, it appears that after hyperparameters were tuned the validation set is folded back into the training set Generalizing Convolutional Neural Networks for Equivariance to Lie Groups on Arbitrary Continuous Data to be used as additional training data, a common approach used on other datasets. Although in table 1 we only use 10k training points, in the table below we report the perfor- mance with and without augmentation trained on the full 12k examples. In this paper, the groups we use in which the lifting map is multi-valued are SE(2), SO(3), and SE(3). The process is especially straightforward for SE(2) and SE(3) as these groups can be expressed as a semi-direct product of two groupsG = H «x N, dµG(h, n) = δ(h)dµH (h)dµN (n), (21) Aug SO(2) None Trivial 1.44 1.60 Ty 1.35 2.64 T(2) 1.32 2.34 SO(2) 1.27 1.26 ∗ SO(2)×R 1.13 1.25 SE(2) 1.13 1.15 where d(h) = ati (Willson, 2009). For G = SE(d) = SO(d) «x T(d), 6(h) = 1 since the Lebesgue measure djp(a)(x) = dA(x) = dz is invariant to rotations. So simply dusgia)(R, x) = duso(a)(R)dz. Table 5. Classification Error (%) on RotMNIST dataset for LieConv with different group equivariances and baselines: # C. Implementation Details # C.1. Practical Considerations While the high-level summary of the lifting procedure (Al- gorithm 1) and the LieConv layer (Algorithm 2) provides a useful conceptual understanding of our method, there are some additional details that are important for a practical implementation. 1. According to Algorithm 2, aij is computed in every LieConv layer, which is both highly redundant and costly. In practice, we precompute aij once after lift- ing and feed it through the network with layers op- instead of erating on the state aij} fi} , { { N i=1. Doing so requires fixing the group (ui, qi, fi) { } elements that will be used at each layer for a given forwards pass. 2. In practice only p elements of nbhdi are sampled (ran- domly) for computing the Monte Carlo estimator in order to limit the computational burden (see Appendix A.4). So lifts of a point x to SE(d) consistent with the µ are just TxR, the multiplication of a translation by x and randomly sampled rotations R ). There are · multiple easy methods to sample uniformly from SO(d) given in (Kuffner, 2004), for example sampling uniformly from SO(3) can be done by sampling a unit quaternion from the 3-sphere, and identifying it with the corresponding rotation matrix. # C.3. Model Architecture We employ a ResNet-style architecture (He et al., 2016), using bottleneck blocks (Zagoruyko and Komodakis, 2016), and replacing ReLUs with Swish activations (Ramachan- dran et al., 2017). The convolutional kernel gθ internal to each LieConv layer is parametrized by a 3-layer MLP with 32 hidden units, batch norm, and Swish nonlinearities. Not only do the Swish activations improve performance slightly, but unlike ReLUs they are twice differentiable which is a requirement for backpropagating through the Hamiltonian dynamics. The stack of elementwise linear and bottleneck blocks is followed by a global pooling layer that computes the average over all elements, but not over channels. Like for regular image bottleneck blocks, the channels for the convolutional layer in the middle are smaller by a factor of 4 for increased parameter and computational efficiency. 3. We use the analytic forms for the exponential and loga- rithm maps of the various groups as described in Eade (2014). # C.2. 
Sampling from the Haar Measure for Various groups When the lifting map from /G is multi-valued, X → × X we need to sample elements of u G that project down to x: uo = x in a way consistent with the Haar measure µ( ). · In other words, since the restriction µ( |nbhd is a distribu- ) · tion, then we must sample from the conditional distribution |nbhd. In general this can be done by u parametrizing the distribution of µ as a collection of random variables that includes x, and then sampling the remaining variables. Downsampling: As is traditional for image data, we in- crease the number of channels and the receptive field at every downsampling step. The downsampling is performed with the farthest point downsampling method described in Appendix A.4. For a downsampling by a factor of s < 1, 1/2 and the radius of the neighborhood is scaled up by s− 1/2. When an image is the channels are scaled up by s− downsampled with s = (1/2)2 that is typical in a CNN, this results in 2x more channels and a radius or dilation of 2x. In the bottleneck block, the downsampling operation is fused with the LieConv layer, so that the convolution is only evaluated at the downsampled query locations. We perform downsampling only on the image datasets, which have more points. BatchNorm: In order to handle the varied number of group elements per example and within each neighborhood, we Generalizing Convolutional Neural Networks for Equivariance to Lie Groups on Arbitrary Continuous Data use a modified batchnorm that computes statistics only over elements from a given mask. The batch norm is computed per channel, with statistics averaged over the batch size and each of the valid locations. # C.4. Details for Hamiltonian Models # Model Symmetries: As the position vectors are mean centered in the model for- ward pass q’ = q; — q, HOGN and HLieConv-SO2* have additional T(2) invariance, yielding SE(2) invariance for HLieConv-SO2*. We also experimented with a HLieCony- SE2 equivariant model, but found that the exponential map for SE2 (involving taylor expands and masking) was not numerically stable enough for for second derivatives, re- quired for optimizing through the Hamiltonian dynamics. So instead we benchmark the HLieConv-SO2 (without cen- tering) and the HLieConv-SO2* (with centering) models separately. Layer equivariance is preferable for not prema- turely discarding useful information and for better modeling performance, but invariance alone is sufficient for the con- servation laws. Additionally, since we know a priori that the spring problem has Euclidean coordinates, we need not model the kinetic energy K(p,m) = S07, || pj ||?/m; and instead focus on modeling the potential V(q, k). We observe that this additional inductive bias of Euclidean co- ordinates improves model performance. Table 6 shows the invariance and equivariance properties of the relevant mod- els and baselines. For Noether conservation, we need both to model the Hamiltonian and have the symmetry property. i, the position and momentum of body j were distributed as q(i) (0, 0.36I). Using the analytic form of the Hamiltonian for the spring problem, (q, p) = K(p, m) + V (q, k), we use the RK4 numerical H integration scheme to generate 5 second ground truth tra- jectories broken up into 500 evaluation timesteps. We use a fixed step size scheme for RK4 chosen automatically (as implemented in Chen et al. (2018)) with a relative tolerance of 1e-8 in double precision arithmetic. 
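A sketch of this data-generation step follows. It is our illustration rather than the released code: the exact potential is an assumption (zero-rest-length Hooke springs with pairwise constants k_ij = k_i k_j), and `sample_system` / `spring_hamiltonian` are placeholder names.

```python
import torch

torch.set_default_dtype(torch.float64)          # ground truth is generated in double precision

def sample_system(N=6):
    """Sample masses, spring constants and an initial state for one 2D spring system."""
    m = torch.rand(N) * 3.0 + 0.1                # m_i ~ U(0.1, 3.1)
    k = torch.rand(N) * 5.0                      # k_i ~ U(0, 5), with k_ij = k_i k_j
    z0 = 0.6 * torch.randn(4 * N)                # q, p ~ N(0, 0.36 I), flattened as (q, p)
    return m, k, z0

def spring_hamiltonian(z, m, k):
    """H(q, p) = sum_i |p_i|^2 / (2 m_i) + sum_{i<j} k_i k_j |q_i - q_j|^2 / 2
    (zero-rest-length Hooke springs are assumed here for illustration)."""
    N = m.shape[0]
    q, p = z[: 2 * N].view(N, 2), z[2 * N:].view(N, 2)
    kinetic = (p.pow(2).sum(dim=-1) / (2 * m)).sum()
    diff = q[:, None, :] - q[None, :, :]         # (N, N, 2) pairwise displacements
    pair = k[:, None] * k[None, :] * diff.pow(2).sum(-1)
    potential = 0.5 * torch.triu(pair, diagonal=1).sum()
    return kinetic + potential

m, k, z0 = sample_system()
print(float(spring_hamiltonian(z0, m, k)))       # the trajectory is then obtained by RK4 on dz/dt = J grad H
```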
We then randomly se- lected a single segment for each trajectory, consisting of an initial state zt and τ = 4 transition states: (z(i) t+τ ). Training: All models were trained in single precision arith- metic (double precision did not make any appreciable differ- ence) with an integrator tolerance of 1e-4. We use a cosine decay for the learning rate schedule and perform early stop- ping over the validation MSE. We trained with a minibatch size of 200 and for 100 epochs each using the Adam opti- mizer (Kingma and Ba, 2014) without batch normalization. With 3k training examples, the HLieConv model takes about 20 minutes to train on one 1080Ti. For the examination of performance over the range of dataset sizes in 8, we cap the validation set to the size of the training set to make the setting more realistic, and we also scale the number of training epochs up as the size of the dataset shrinks (epochs = 100(,/10°/D)) which we found to be sufficient to fit the training set. For D < 200 we use the full dataset in each minibatch. # Hyperparameters: Dataset Generation: To generate the spring dynam- ics datasets we generated D systems each with N = 6 particles connected by springs. The system param- eters, mass and spring constant, are set by sampling m(i) i=1, m(i) (0.1, 3.1), { k(i) (0, 5). Following Sanchez-Gonzalez et al. (2019), j ∼ U we set the spring constants as kij = kikj. For each system channels layers lr (H)FC (H)OGN (H)LieConv 256 256 384 4 1 4 1e-2 1e-2 1e-3 Hyperparameter tuning: Model hyperparameters were tuned by grid search over channel width, number of layers, and learning rate. The models were tuned with training, validation, and test datasets consisting of 3000, 2000, and 2000 trajectory segments respectively. F(a,t) | H(z,t) | T(2) | SO(2) FC e OGN e HOGN e * LieConv-T(2) e 6 HLieConv-Trivial e HLieConv-T(2) e x) HLieConv-SO(2) e 6 HLieConv-SO(2)* e * o • • • Table 6. Model characteristics. Models with layers invariant to G are denoted with *, and those with equivariant layers with ©. # C.5. Details for Image and Molecular Experiments RotMNIST Hyperparameters: For RotMNIST we train each model for 500 epochs using the Adam optimizer with learning rate 3e-3 and batch size 25. The first linear layer maps the 1-channel grayscale input to k = 128 channels, and the number of channels in the bottleneck blocks follow the scaling law from Appendix C.3 as the group elements are downsampled. We use 6 bottleneck blocks, and the total downsampling factor S = 1/10 is split geometrically between the blocks as s = (1/10)1/6 per block. The initial radius r of the local neighborhoods in the first layer is set so Generalizing Convolutional Neural Networks for Equivariance to Lie Groups on Arbitrary Continuous Data as to include 1/15 of the total number of elements in each neighborhood and is scaled accordingly. The subsampled neighborhood used to compute the Monte Carlo convolution estimator uses p = 25 elements. The models take less than 12 hours to train on a 1080Ti. QM9 Hyperparameters: For the QM9 molecular data, we use the featurization from Anderson et al. (2019), where the input features fi are determined by the atom type (C,H,N,O,F) and the atomic charge. The coordinates xi are simply the raw atomic coordinates measured in angstroms. A separate model is trained for each prediction task, all using the same hyperparameters and early stopping on the validation MAE. We use the same train, validation, test split as Anderson et al. 
(2019), with 100k molecules for train, 10% for test and the remaining for validation. Like with the other experiments, we use a cosine learning rate decay schedule. Each model is trained using the Adam optimizer for 1000 epochs with a learning rate of 3e-3 and batch size of 100. We use SO(3) data augmentation, 6 bottleneck blocks, each with k = 1536 channels. The radius of the local neighborhood is set to r = to include all elements. The model takes about 48 hours to train on a single 1080Ti. # C.6. Local Neighborhood Visualizations In Figure 10 we visualize the local neighborhood used with different groups under three different types of transforma- tions: translations, rotations and scaling. The distance and neighborhood are defined for the tuples of group elements and orbit. For Trivial, T(2), SO(2), R SO(2) the corre- spondence between points and these tuples is one-to-one and we can identify the neighborhood in terms of the input points. For SE(2) each point is mapped to multiple tuples, each of which defines its own neighborhood in terms of other tuples. In the Figure, for SE(2) for a given point we vi- sualize the distribution of points that enter the computation of the convolution at a specific tuple. Generalizing Convolutional Neural Networks for Equivariance to Lie Groups on Arbitrary Continuous Data (a) Trivial (b) T (2) (c) SO(2) (d) R ∗ × SO(2) (e) SE(2) Figure 10. A visualization of the local neighborhood for different groups, in terms of the points in the input space. For the computation of the convolution at the point in red, elements are sampled from colored region. In each panel, the top row shows translations, middle row shows rotations and bottom row shows scalings of the same image. For SE(2) we visualize the distribution of points entering the computation of the convolution over multiple lift samples. For each of the equivariant models that respects a given symmetry, the points that enter into the computation are not affected by the transformation.
{ "id": "1906.08253" }
2002.09402
Addressing Some Limitations of Transformers with Feedback Memory
Transformers have been successfully applied to sequential, auto-regressive tasks despite being feedforward networks. Unlike recurrent neural networks, Transformers use attention to capture temporal relations while processing input tokens in parallel. While this parallelization makes them computationally efficient, it restricts the model from fully exploiting the sequential nature of the input. The representation at a given layer can only access representations from lower layers, rather than the higher level representations already available. In this work, we propose the Feedback Transformer architecture that exposes all previous representations to all future representations, meaning the lowest representation of the current timestep is formed from the highest-level abstract representation of the past. We demonstrate on a variety of benchmarks in language modeling, machine translation, and reinforcement learning that the increased representation capacity can create small, shallow models with much stronger performance than comparable Transformers.
http://arxiv.org/pdf/2002.09402
Angela Fan, Thibaut Lavril, Edouard Grave, Armand Joulin, Sainbayar Sukhbaatar
cs.LG, cs.CL, stat.ML
null
null
cs.LG
20200221
20210125
1 2 0 2 n a J 5 2 ] G L . s c [ 3 v 2 0 4 9 0 . 2 0 0 2 : v i X r a # ADDRESSING SOME LIMITATIONS OF TRANSFORMERS WITH FEEDBACK MEMORY Angela Fan†, Thibaut Lavril, Edouard Grave, Armand Joulin, Sainbayar Sukhbaatar Facebook AI Research, †LORIA {angelafan,thibautlav,egrave,ajoulin,sainbar}@fb.com # ABSTRACT Transformers have been successfully applied to sequential, auto-regressive tasks de- spite being feedforward networks. Unlike recurrent neural networks, Transformers use attention to capture temporal relations while processing input tokens in parallel. While this parallelization makes them computationally efficient, it restricts the model from fully exploiting the sequential nature of the input. The representation at a given layer can only access representations from lower layers, rather than the higher level representations already available. In this work, we propose the Feedback Transformer architecture that exposes all previous representations to all future representations, meaning the lowest representation of the current timestep is formed from the highest-level abstract representation of the past. We demon- strate on a variety of benchmarks in language modeling, machine translation, and reinforcement learning that the increased representation capacity can create small, shallow models with much stronger performance than comparable Transformers. # INTRODUCTION In recent years, the Transformer architecture (Vaswani et al., 2017) has brought large improvements to a wide range of Natural Language Processing tasks such as machine translation, sentence rep- resentation (Devlin et al., 2019), and summarization (Edunov et al., 2019). Transformers are also successfully used as an autoregressive model on sequential tasks such as language modeling (Dai et al., 2019; Rae et al., 2020) and reinforcement learning (Parisotto et al., 2019). Unlike more traditional recurrent architectures such as RNNs and LSTMs, the Transformer architecture processes a sequence in parallel in an order-invariant way. Techniques such as position embeddings (Sukhbaatar et al., 2015; Shaw et al., 2018) and attention masking are required to capture input order information. In this work, we focus on several limitations of the Transformer architecture as an autoregressive model and present a straightforward solution — Feedback memory. These limitations and our proposed solution target sequential token prediction tasks, such as language modeling or other auto-regressive generative tasks. The feedforward nature of Transformers makes them efficient on modern hardware, but restricts the Transformer from taking full advantage of the input’s sequential property. In particular, the current hidden representation of a Transformer only accesses the past representations of lower layers, even though higher level representations of the past have already been computed as an autoregressive model. At generation, the Transformer generates only one token at a time, so it could access these representations for better performance, but does not exploit these at training time due to parallelization. However, if these past higher level representations could be used at training time, they would enrich future lower level representations, enabling shallower models to have the same representation power. Another inherent limitation of Transformers on sequential tasks is the lack of recursive computa- tion (Dehghani et al., 2018), and the number of transformations possible on the input is bounded by the model depth. 
Such disadvantages have impact on tasks that require careful tracking of a world state or modeling hierarchical structures (Tran et al., 2018; Hahn, 2020). On the other hand, while RNNs can maintain an internal state for an unbounded time while accumulating more computations upon it, the size of this internal state is limited by the dimension of the hidden state. In this work, we propose a novel autoregressive model, the Feedback Transformer, that makes all previous hidden representations accessible to the computation of a representation at any depth — 1 the model feeds back previous computations to itself. The feedback allows the model to perform recursive computation, building stronger representations iteratively upon previous states. To achieve this, we modify self-attention to attend to higher level representations rather than lower ones. As shown in Figure 1, the Feedback Transformer merges the hidden states from all layers into a single vector for every time step and stores them in a memory. Instead of self-attention, all subsequent layers attend to this memory, which means every previously computed representation is accessible by all future layers, mediated by the memory. This allows Feedback Transformers to recursively compute and transform an input as many times as the input length, which is something Transformers cannot achieve. While RNNs can perform recursive computation, the amount of information that Feedback Transformers can maintain is not limited by the number of layers. There are computational benefits to this straightforward modification. First, it uses less memory because all the layers share a single Feedback memory, thus reducing the memory size by L times, where L is the number of layers. There is also less computation because we share the key and value projections during attention computation, which increases the speed of the attention over the Feedback Memory. Further, the GPU memory usage is reduced due to the memory sharing — the overall model is 2x smaller — allowing the batch size to be increased for computational efficiency. During inference, the increased batch size contributes to substantially faster decoding speeds. In summary, our main contributions are: (1) The Feedback Transformer architecture, which com- pletely changes the way a Transformer works to access available higher level representations im- mediately. (2) We show the Feedback Transformer can achieve state of the art results with smaller, shallower models that have faster decoding speed and smaller memory footprint. (3) The Feedback Transformer uses substantially less memory during training and inference time. # 2 RELATED WORK Several previous works have analyzed the limitations of Transformer architectures, such as the inability to process input sequentially (Dehghani et al., 2018) or represent hierarchical structure (Tran et al., 2018). Hahn (2020) demonstrate that Transformers cannot model structures involving bounded recursion, such as closing parentheses. Pérez et al. (2019) study Transformers in the context of Turing machines, where they must produce unbounded numbers of decoding steps. Various work in probing Transformers identified several limitations where Transformers may not have the computational capacity of recurrent architecture like an LSTM (Hahn, 2020). From the architectural perspective, our work shares similarities with recurrent networks augmented with external shared memories (Graves et al., 2014; Joulin & Mikolov, 2015; Sukhbaatar et al., 2015). 
For example, the stack augmented RNN of Joulin & Mikolov (2015) adds an external memory to a recurrent network to keep long term dependencies. Closer to our work, the Neural Turing Machine of Graves et al. (2014) models an unconstrained memory that resembles the self-attention layer of a Transformer. Further improvements to recurrent networks, such as the Gated Feedback RNN (Chung et al., 2015), are based on better controlling the signal from different layers, and have been extended to feedback through multiple pathways (Jin et al., 2017). These works are built on recurrent networks with additional components to store long term dependencies.

Other works have studied modifications to the Transformer architecture by enriching its structure with components inspired by recurrent networks. For example, Wang et al. (2019) propose adding a local recurrent sublayer to the Transformer layer to remove the need for position embeddings in the multi-head self-attention layers. The Universal Transformer (Dehghani et al., 2018) shares the parameters between the layers of a Transformer, leading to a recurrent network in depth. Hao et al. (2019) and Chen et al. (2018) augment Transformers with a second, recurrent encoder. As opposed to our work, these prior investigations do not change the computational path in a Transformer to reduce the discrepancy between training and inference time. Closer to our work, Merity (2019) proposes adding a self-attention layer on top of the past outputs from an LSTM cell. However, this approach keeps the recurrent and the self-attention mechanisms decoupled, as opposed to ours, which makes the attention mechanism recurrent. In particular, the LSTM layer of Merity (2019) still intrinsically has a bottleneck corresponding to the dimension of the hidden layer.

Figure 1: The Feedback Transformer merges past hidden representations from all layers into a single vector and stores it in memory.

Figure 2: Difference between Feedback and Transformer. t indicates the timestep and l indicates the layer.

# 3 METHOD

In this section, we propose the Feedback Transformer, which provides capacity to build richer representations of each timestep t of a sequential modeling task.

3.1 TRANSFORMER ARCHITECTURES

We briefly describe the Transformer (Vaswani et al., 2017). Each layer is composed of a multi-head self-attention sublayer (Attn) followed by a feedforward sublayer (FF), and each sublayer is followed by an add-norm operation that combines a skip-connection (He et al., 2016) and layer normalization (Lei Ba et al., 2016).

The l-th layer of a Transformer processes an input sequence of vectors X^l = (x^l_t) into a sequence of vectors of the same length. First, the self-attention sublayer computes a representation for each time step t by taking its related input vector x^l_t along with its past context {x^l_{t−τ}, . . . , x^l_{t−1}}:

z^l_t = Attn(x^l_t, {x^l_{t−τ}, . . . , x^l_{t−1}}).

Within the self-attention sublayer, x^l_t is used to form query vectors while its context is used to compute key and value vectors, forming a memory of the past information. Then the feedforward sublayer processes each vector z^l_t independently, i.e., x^{l+1}_t = FF(z^l_t), so that the Transformer layer transforms its input sequence into an output sequence X^{l+1} = FF(Attn(X^l)). In practice, a block of steps {x^l_{t−M+1}, . . . , x^l_t} is computed in parallel during training, where M can be seen as the backpropagation through time (BPTT) length. A minimal sketch of these sublayers is given below.
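The sketch below is our single-head illustration with assumed dimensions, not the paper's implementation; it makes the parallel, causally masked computation of the standard Transformer layer explicit.

```python
import math
import torch
import torch.nn as nn

class TransformerLayer(nn.Module):
    """Single-head causal layer: z_t = Attn(x_t, past context), then FF, each with add-norm."""
    def __init__(self, d):
        super().__init__()
        self.q = nn.Linear(d, d)
        self.k = nn.Linear(d, d)
        self.v = nn.Linear(d, d)
        self.ff = nn.Sequential(nn.Linear(d, 4 * d), nn.ReLU(), nn.Linear(4 * d, d))
        self.norm1 = nn.LayerNorm(d)
        self.norm2 = nn.LayerNorm(d)

    def forward(self, x):                   # x: (batch, M, d), one block of M steps
        M = x.size(1)
        scores = self.q(x) @ self.k(x).transpose(1, 2) / math.sqrt(x.size(-1))
        causal = torch.ones(M, M, device=x.device).triu(1).bool()
        scores = scores.masked_fill(causal, float("-inf"))   # step t attends to steps <= t
        z = self.norm1(x + torch.softmax(scores, dim=-1) @ self.v(x))
        return self.norm2(z + self.ff(z))                    # X^{l+1} = FF(Attn(X^l))

x = torch.randn(2, 16, 64)                  # (batch, M, d)
layers = nn.ModuleList([TransformerLayer(64) for _ in range(4)])
for layer in layers:                        # all M positions are processed in parallel
    x = layer(x)
```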
This block-parallel computation makes training Transformers efficient on hardware such as GPUs. However, to operate on sequences of unbounded length, Transformers require modifications such as caching and relative position embeddings (Dai et al., 2019; Sukhbaatar et al., 2019).

3.2 LIMITATIONS OF TRANSFORMERS

Previous work has analyzed the impact of several limitations of the Transformer architecture, such as the inability to track long sequences and process hierarchical inputs (Hahn, 2020). In this work, we focus on two major limitations of Transformer architectures.

Limited Access to Higher Level Representations. Layer by layer, Transformers build more abstract, high level representations of the input sequence. At each layer, the representations for the input sequence are treated in parallel. As a consequence, a Transformer does not leverage the highest level representations from the past to compute the current representation, even though, in the autoregressive setting, these highest level representations have already been computed.

Maintaining a Belief State. Many sequential tasks require models to maintain an internal state for two main purposes. First, internal states act as memory for recalling past inputs, a role in which Transformers excel because their past representations x^l_t remain directly accessible through attention. The second role of an internal state is to act as a belief state that tracks aspects of the world state that are not directly observable in the inputs. For example, when the inputs are actions taken in a Markov Decision Process, an internal state can apply those actions to the current belief state and correctly predict the outcome. As feedforward models, Transformers have inherent limitations in this area — only a fixed number of transformations can be applied to their internal states. Since both the Attn and FF sublayers contain a fixed number of transformations and there are L layers of them, the total number of transformations between the input and output is limited by the depth. This means Transformers cannot maintain an internal state for a long time if it has to be frequently updated.

3.3 FEEDBACK TRANSFORMER

We propose to change the Transformer architecture by using the most abstract representations from the past directly as inputs for the current timestep. This means that the model does not form its representations in parallel, but sequentially, token by token. More precisely, we replace the context inputs to the attention modules with memory vectors that are computed over the past, i.e.,

z^l_t = Attn(x^l_t, {m_{t−τ}, . . . , m_{t−1}}),

where the memory vectors m_t are computed by summing the representations of all layers at time step t:

m_t = Σ_{l=0}^{L} softmax(w^l) x^l_t,   (1)

where the w^l are learnable scalar parameters. Note that these scalars are the only new parameters introduced by our change; everything else is the same as in the standard Transformer. Here l = 0 corresponds to the token embeddings. The weighting of the different layers by a softmax output gives the model more flexibility, as it can average them or select one of them.

This modification of the self-attention input adapts the computation of the Transformer from parallel to sequential, as summarized in Figure 2. Indeed, it provides the ability to form the representation x^l_{t+1} based on past representations from any layer l′, while in a standard Transformer this is only possible for l′ < l. This change can be viewed as exposing all previous computations to all future computations, providing better representations of the input. Such capacity would allow much shallower models to capture the same level of abstraction as a deeper architecture.
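A rough sketch of one decoding step under this scheme is shown below: all layer outputs at step t are merged into a single memory vector as in Eq. (1), and every layer attends to the same memory of past vectors. This is an illustration only; it assumes a single attention head, omits the add-norm operations, and shares one set of query/key/value projections across layers purely for brevity (the key/value sharing anticipates the optimization described in the next paragraphs). The parameter names are not taken from any actual implementation.

```python
import numpy as np

def softmax(a):
    e = np.exp(a - a.max())
    return e / e.sum()

def feedback_step(x_t, memory, layer_weights, layers, Wq, Wk, Wv):
    """One sequential step of a Feedback-style layer stack (simplified sketch).

    memory        : list of memory vectors m_{t-tau}, ..., m_{t-1}
    layer_weights : L+1 learnable scalars w^0, ..., w^L from Eq. (1)
    layers        : list of dicts with feedforward weights "W1" and "W2"
    """
    reprs = [x_t]                                   # l = 0 is the token embedding
    if memory:
        M = np.stack(memory)                        # (tau, d) past memory vectors
        K, V = M @ Wk, M @ Wv                       # computed once, reused by every layer
    h = x_t
    for layer in layers:
        if memory:                                  # attention over the Feedback memory
            attn = softmax((h @ Wq) @ K.T / np.sqrt(h.size))
            h = h + attn @ V
        h = h + np.maximum(h @ layer["W1"], 0.0) @ layer["W2"]   # feedforward sublayer
        reprs.append(h)
    w = softmax(np.array(layer_weights))            # Eq. (1): weight and merge all layers
    m_t = sum(wi * r for wi, r in zip(w, reprs))
    memory.append(m_t)                              # visible to every future timestep
    return h, memory
```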
Building representations sequentially in this way has several practical advantages: shallower models have a reduced memory footprint and increased decoding speed. An alternative view of this architectural modification is that it provides the capacity for recursive computation — outputs from a sublayer can feed back to the same sublayer through the memory. The model can then maintain an internal state for an unbounded time. This is a clear advantage over Transformers, in which a submodule never looks at its own output. While an RNN can also repeat its computation on its internal state, its internal state has a limited capacity determined by the number of layers and their hidden dimension. In contrast, the internal state of a Feedback Transformer is its whole memory, which can grow with the input length. This allows the model to keep track of a large number of things within its internal state.

While our modification requires sequential computation, we significantly improve training speed by sharing the key and value projections W_k and W_v across all layers. This sharing reduces computation because we need to compute the key and value vectors only once instead of once per layer:

k^l_t = k_t = W_k m_t,   v^l_t = v_t = W_v m_t.

For the same reason, the memory footprint is smaller than that of a standard Transformer because only one set of k_t, v_t needs to be stored. To be more precise, the memory requirement for processing a single token is reduced from O(L × T) to O(T), where L is the number of layers and T is the context size. Further, the reduced memory usage allows the batch size to be increased to recover some of the lost parallelism, which improves training speed. Thus, the Feedback Transformer is not much slower than the standard Transformer. Note that the same sharing of projections would not make the standard Transformer efficient, because there the projections are applied to different representations at each layer (the key and value vectors would not be the same for all layers). Lastly, we note that the sequential nature of the Feedback Transformer does not affect speed during generation, where one needs to compute one step at a time anyway. The same is true for online reinforcement learning, where the input must be processed sequentially even during training.

Task                            Transformer   Feedback Trans.
Copy (Char / Seq)               59.1 / 6.2    76.2 / 23.6
Reverse (Char / Seq)            50.2 / 5.9    74.8 / 29.2
Counting (Len 50 / Len 1K)      99.6 / 82.4   99.7 / 95.3
Random Walk                     68            100
Algorithmic (3 vars / 5 vars)   33.7 / 37.5   99.1 / 92.6

Table 1: Accuracy on toy tasks. Char is character accuracy, Seq is sequence accuracy.

Figure 3: Results on the Corridor task. The Transformer degrades as the memory size decreases, but the Feedback Transformer maintains performance.

# 4 EXPERIMENTS

We explore different sequential input tasks in natural language processing and reinforcement learning. First, we demonstrate the downsides of the standard Transformer architecture on tasks where the Transformer performs poorly, and show that the Feedback Transformer is able to overcome these challenges and retain long memory. Next, we highlight the strength of the Feedback architecture in building complex, high level representations even with shallow models. We demonstrate that the Feedback model can achieve significantly stronger results than Transformer models, an effect that is exaggerated as models get smaller.
Finally, we compare the Feedback architecture to the Transformer architecture with other work on standard long-context language modeling tasks. In experiments on large datasets, we use the shared key-value projections to improve training time. Additional experimental details and results can be found in the appendix. 4.1 LIMITATIONS OF TRANSFORMER: ILLUSTRATIVE TASKS 4.1.1 LIMITED ACCESS TO LONG MEMORY First, we examine the Transformer’s limited access to long memory on several simple, straightforward tasks that illustrate this. Unlike the standard Transformer, the Feedback architecture is able to remember information over many timesteps. Walking down a Corridor. In this reinforcement learning task, each agent is placed at the start of a long corridor with either a blue or green object. The agent must look at the object’s color, walk down the corridor, and go through the corresponding colored door at the end. The only task is to remember the color and not become distracted by walking down the very long hallway. Results are shown in Figure 3 and show that the performance of the Transformer degrades quickly as the memory size shrinks, but the Feedback Transformer maintains strong performance at all memory sizes. Copy and Reverse. We experiment next on two algorithmic tasks, copy and reverse (Kaiser & Sutskever, 2015). We train on sequences of length 40 consisting of integers 0 through 9, and test on sequences of length 400. Models read the input and then either copy or reverse, which requires memory over the sequence and the ability to track position, as well as generalization capability as the train and test settings are different lengths. We consider two variations of copying and reversing: either at the character level or at the sequence level. Results are shown in Table 1. The Feedback architecture has large improvements in accuracy, indicating improved memory and positional tracking. Counting. Finally, we experiment on a counting task, where models have a sequence of A’s in a row, and must output the corresponding quantity of the letter B. The model must count the number of the A’s to output the correct number of B’s. We consider two settings: training on short sequences of 5 lengths up to 50 and training on long sequences of lengths up to 1000. We show results in Table 1, where we demonstrate the Feedback model is much better at counting over long sequences. 4.1.2 LIMITED STATE UPDATES The complexity of the representations the Transformer is able to formulate is strictly dependent on the depth, as each layer of the Transformer allows for additional nonlinearity. The Transformer, then, can only update its state the same number of times as it has layers. We demonstrate that the Feedback Transformer does not have this limitation — in tasks where the model must carefully track and update its state, the Feedback architecture is able to update its state at each timestep. Random Walk. We consider a random walk in a small grid where actions are: go forward 1 step, left turn, and right turn. Given a history of actions and the agent’s initial position, it is strictly possible to calculate the current position. The task is trivial because a human could write down the current location and direction and keep updating with each action. However, Transformers cannot do this because they lack a storage that can be updated with each input. Its hidden state can store this information, but with each update, that information has to go up one layer. 
An alternative approach to this task is to solve it all at once given the sequence of actions, which is feasible for Transformers since they can access all inputs with their attention. However, this approach is challenging because the effect of each action depends on the agent's direction at that point and on whether the agent is at an edge of the grid, which are themselves not yet known. This can be seen in Table 1, where the Transformer struggles and only reaches 68% accuracy. In contrast, the Feedback Transformer achieves 100% accuracy, which indicates the ability to track state for a long period of time. Both models are trained on 10K sequences, each containing 100 random actions and positions.

Algorithmic task. A more complex setting where tracking and updating a state is crucial is code execution. A model needs to keep track of all variable values and update them when necessary. To demonstrate this, we create a simple algorithmic task that consists of the following simple statements: assignments (e.g. x=5), increments and decrements (e.g. y--), conditionals (e.g. if x==4: y++), and print commands (e.g. print(x)). Each task consists of 100 randomly selected statements. We consider two settings with 3 and 5 different variables. Processing each statement in parallel will not work because conditional statements cannot be executed without knowing the current variable value, which itself can depend on another conditional.

As shown in Table 1, Transformers cannot solve this task: every time a variable is incremented or decremented, its updated value can only be found one layer up in the model, and it is eventually lost. Doubling their layers from 4 to 8 helps a little, bringing the accuracy to 47.4% on the 3-variable version and 29.1% on the 5-variable version, but their performance remains far from perfect. A recurrent model like an LSTM is capable of storing a variable value while updating it, and thus performs well on the 3-variable version with an accuracy of 82.8%. However, its performance drops to 32.1% when there are more variables, because it has to store all of their values in a single vector. The Feedback Transformer does not have this bottleneck and can access updated variable values from the lowest layer, so it gives strong performance on this task.

4.2 ADVANTAGES OF FEEDBACK ARCHITECTURE

We examined two limitations of standard Transformers: limited memory span and limited ability to update state. The Feedback model improves on both, and we now analyze its performance on practical tasks including translation and reinforcement learning.

4.2.1 STRONG PERFORMANCE WITH SMALL, SHALLOW MODELS

The Feedback Transformer is able to create higher level, more abstract representations with fewer layers and less capacity, as a layer can use all of the most recently created representations of previous timesteps. We demonstrate on neural machine translation that the Feedback model performs much better than Transformers at small, shallow sizes. Note that for sequence to sequence, we use Feedback Transformers only in the decoder, because the encoder inputs are available simultaneously.

Figure 4: (left) Machine translation on WMT14 En-De, test set BLEU and decoding speed in words-per-second for varying decoder depths. (right) Maze navigation in Gridworld. We display the average reward, comparing the Feedback Transformer to standard Transformers.
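The decoding-speed axis in Figure 4 is a wall-clock measurement over sequential generation; the sketch below is a hypothetical illustration of how such a number can be obtained. The `decode_step` callable and its interface are placeholders, not part of the paper's code.

```python
import time
import torch

@torch.no_grad()
def decoding_speed(decode_step, max_steps=1000):
    """Rough tokens-per-second measurement for an autoregressive decoder."""
    tokens = []
    start = time.time()
    for _ in range(max_steps):
        tokens.append(decode_step(tokens))   # one sequential decoding step
    if torch.cuda.is_available():
        torch.cuda.synchronize()             # wait for queued GPU work to finish
    return len(tokens) / (time.time() - start)
```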
We evaluate the performance of the Feedback Transformer on the WMT14 En-De machine translation benchmark of 4.5 million sentence pairs. We follow Vaswani et al. (2017) and train on WMT16, using newstest2013 as the dev set and newstest2014 as the test set. We learn 32K joint byte pair encodings (Sennrich et al., 2016) and generate with beam size 5, tuning a length penalty on the dev set. We average the last 10 checkpoints, apply compound splitting, and compute tokenized BLEU.

In Figure 4 (left), we display results when making only the decoder shallower — layers are removed from the decoder of the Feedback Transformer and of the standard Transformer. As the decoder becomes shallower, the gap in performance between the two architectures widens. While the 1-layer Transformer model can only reach 27.3 BLEU, the Feedback Transformer reaches 28.3 BLEU. Shallow decoders are critical to fast inference — reducing to 1 layer improves decoding speed by 4.2x, while losing only 1 BLEU with the Feedback architecture. Such results are useful for practical applications, where the speed of producing a translation is very important. We report decoding speed in tokens per second on 1 GPU.

We further experiment with a large encoder and a shallow decoder. The Feedback Transformer achieves 29.0 BLEU with a 12-layer encoder and a 2-layer decoder. As the encoder is parallelized even during inference, the increased size of the encoder has a negligible impact on decoding speed. To stabilize the training of deeper models, we use LayerDrop (Fan et al., 2019).

4.2.2 LONG MEMORY TRACKS STATE

We apply the Feedback architecture to a reinforcement learning maze task that requires long memory to solve optimally, because agents have limited vision. Note that in such reinforcement learning tasks the models are trained online using A2C, so the input must be processed sequentially even during training. Thus, the non-parallelized nature of the Feedback Transformer is not a drawback, and training Feedback Transformers is as fast as training Transformers.

The goal is to navigate a procedurally generated random maze in which colored objects are placed. One of the colors is randomly selected as a target, and the agent has to reach it to receive a reward and a new target. For optimal performance, the agent must remember the maze and the object locations. In addition, the agent has turn actions as in the Random Walk task, which makes it necessary to keep track of its location and orientation. As shown in Figure 4 (right), the Feedback Transformer converges to a higher average reward than the Transformer. Results are averaged over 10 trials.

4.3 COMPARISON TO OTHER ARCHITECTURES

In this section, we first compare the Feedback Transformer to recurrent architectures such as LSTMs, as well as to hybrid RNN-Transformer architectures, and show that Feedback is more powerful than recurrence alone. Next, we compare our construction of the Feedback memory with other possible compositions. Lastly, we compare to other Transformer architectures on competitive benchmarks.

Model                                  Test
Recurrent architectures
  DenseNMT (Shen et al., 2018)         25.5
  RNMT+ (Chen et al., 2018)            28.5
Hybrid architectures
  BiARN (Hao et al., 2019)             28.9
  SRU (Lei et al., 2017)               28.4
Transformer architectures
  Transformer (Vaswani et al., 2017)   28.4
  Transformer (Ott et al., 2018)       29.3
  Feedback Transformer                 29.5

Table 2: Results on WMT En-De comparing the Feedback Transformer to recurrent architectures, hybrid recurrent-Transformer models, and standard Transformers.
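The checkpoint averaging step mentioned in the WMT14 setup above ("we average the last 10 checkpoints") can be sketched as follows. This is a generic illustration assuming PyTorch-style state dicts saved to disk, not the exact script used for these experiments.

```python
import torch

def average_checkpoints(paths):
    """Average the parameters of several saved checkpoints (e.g. the last 10).

    Assumes every file stores a state dict of tensors with identical keys.
    """
    avg = None
    for path in paths:
        state = torch.load(path, map_location="cpu")
        if avg is None:
            avg = {k: v.clone().float() for k, v in state.items()}
        else:
            for k, v in state.items():
                avg[k] += v.float()
    return {k: v / len(paths) for k, v in avg.items()}
```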
Figure 5: Comparison of different memory composition strategies on char-PTB. The recurrent connection alone is not as effective as feedback connections from a higher layer.

4.3.1 COMPARISON TO RECURRENT ARCHITECTURES

We compare the Feedback Transformer architecture to recurrent architectures like LSTMs, as well as to hybrid RNN-Transformer architectures. In Table 2, we show that the Feedback Transformer has stronger performance than the Transformer, RNN, and RNN-Transformer hybrid models. We note that recurrent models address some limitations of Transformer architectures, but the Feedback mechanism goes beyond that. By allowing all past representations to be immediately available for the computation of future representations, Feedback is stronger than recurrence alone — recurrent models can only see representations from the previous layer (as depicted in Table 2).

# 4.3.2 MEMORY COMPOSITION

We next investigate the importance of the specific memory mechanism of the Feedback architecture on char-PTB. The Feedback architecture uses all layers when creating the memory, motivated by providing access to the entire past of all computations, but other ways of creating the memory are possible. For example, recurrent architectures have a different memory structure. In multi-layer RNNs, each layer has recurrent connections to the same layer, but not to higher layers. This is an advantage of Feedback architectures — even the highest level abstractions are immediately available.

In Figure 5, we examine the construction of the Feedback memory, comparing our choice of making all computation accessible with a recurrent memory that can access all previous layers plus the same layer, and a top-only memory that can attend only to the topmost layer. The Feedback Transformer has the best performance, closely matched by top-only memory. This indicates the importance of high level representations (see Appendix 6.4 for further analysis of this). Note that recurrence alone is not enough for good performance, and thus the Feedback memory provides richer representations beyond the capacity of recurrent networks.

4.3.3 COMPARISON TO OTHER TRANSFORMER ARCHITECTURES

We examine the performance of the Feedback Transformer on long-context language modeling benchmarks. We use caching (Dai et al., 2019) and relative position embeddings. Mechanisms applied at inference time (Khandelwal et al., 2019; Krause et al., 2019) can further improve all models, so we do not focus on these.

Model                              Params   Test
Best existing (Roy et al., 2020)   —        15.8
Trans-XL (Dai et al., 2019)        257M     18.3
Our Transformer                    140M     19.9
Feedback Transformer               126M     18.3

Table 3: Results on WikiText-103. We report perplexity on test.

Model                              Params   Test
Best existing (Rae et al., 2020)   277M     0.97
Trans-XL (Dai et al., 2019)        277M     0.99
Feedback Transformer               77M      0.96

Table 4: Results on Enwiki8. We report bit-per-byte on test.

Wikitext-103. We evaluate on word-level language modeling on Wikitext-103 (Merity et al., 2017). Our Feedback architecture takes 3.5 days to train, compared to 1.2 days for the Transformer. We train a small Feedback model, about half the size of Transformer-XL, and find that it can match the performance of Transformer-XL, as shown in Table 3. This indicates the additional representational capacity of Feedback memory. If we train a standard Transformer that is approximately the same size as our Feedback Transformer, we find it has worse performance
(19.9 PPL rather than 18.3). Further, mechanisms like the Routing Transformer can be added to the Feedback Transformer as well; we focus on starting with Transformer-XL as a baseline and showing that we can match its performance with a much smaller model.

Enwiki8. Finally, we test our model on character-level language modeling on Enwiki8 (Mahoney, 2011), containing 100M unprocessed bytes from Wikipedia. We train a relatively small 12-layer model that is one third of the size of the Transformer-XL baseline. Since the task requires very long context, we use the adaptive attention span (Sukhbaatar et al., 2019). As shown in Table 4, the Feedback Transformer model achieves a new state-of-the-art performance of 0.96 bit-per-byte despite its small size.

4.3.4 TRAINING AND INFERENCE SPEED

Finally, we compare the training and inference speed of the Feedback Transformer with standard Transformer architectures. Results are shown in Table 5. The Feedback Transformer has faster inference, because the key-value projection sharing substantially reduces the memory footprint of the model and reduces computation. Further, shallow Feedback models perform well, so the batch size can be increased. In language modeling, for example, sharing the key and value projections provides almost a 3x inference speed improvement; the shallower model size provides the remaining 10% of the speed improvement at inference time. Finally, note that for certain problems (such as in RL) the data must be processed strictly sequentially anyway, and there the Feedback Transformer is not any slower.

Task                     Model                  Training speed   Inference speed
Language modeling        Transformer            296K             592
                         Feedback Transformer   84.4K            2176
Translation              Transformer            280K             3190
                         Feedback Transformer   126K             5410
Reinforcement learning   Transformer            22.3K            —
                         Feedback Transformer   22.3K            —

Table 5: Results comparing training and inference speed for three different tasks. For language modeling, we measure words-per-second on Wikitext-103, fixing the model size and attention span. For translation, we measure words-per-second on WMT En-De, with both models using a 6-layer encoder and a 2-layer decoder. For RL, we measure the training frames-per-second on maze navigation (with 20 CPU cores and 1 GPU). All inference speeds are reported on 1 GPU.

# 5 CONCLUSION

We propose a novel reformulation of the Transformer that fully exploits sequential input — the increased representation power and recursive computation of the Feedback Transformer allow shallow and small models to have much stronger performance than a Transformer of the same size. This architecture addresses two fundamental limitations of Transformers as autoregressive models — limited access to long memory and limited ability to update state. We demonstrate the advantages of the Feedback architecture on a variety of tasks to illustrate the strong performance of this straightforward modification.

# REFERENCES

Alexei Baevski and Michael Auli. Adaptive input representations for neural language modeling. In ICLR, 2019.

Mia Xu Chen, Orhan Firat, Ankur Bapna, Melvin Johnson, Wolfgang Macherey, George Foster, Llion Jones, Niki Parmar, Mike Schuster, Zhifeng Chen, et al. The best of both worlds: Combining recent advances in neural machine translation. arXiv preprint arXiv:1804.09849, 2018.

Rewon Child, Scott Gray, Alec Radford, and Ilya Sutskever. Generating long sequences with sparse transformers. arXiv preprint arXiv:1904.10509, 2019.

Junyoung Chung, Caglar Gulcehre, Kyunghyun Cho, and Yoshua Bengio. Gated feedback recurrent neural networks.
In International conference on machine learning, pp. 2067–2075, 2015. Zihang Dai, Zhilin Yang, Yiming Yang, William W Cohen, Jaime Carbonell, Quoc V Le, and Ruslan Salakhutdinov. Transformer-xl: Attentive language models beyond a fixed-length context. arXiv preprint arXiv:1901.02860, 2019. Mostafa Dehghani, Stephan Gouws, Oriol Vinyals, Jakob Uszkoreit, and Łukasz Kaiser. Universal transformers. arXiv preprint arXiv:1807.03819, 2018. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: pre-training of deep bidirectional transformers for language understanding. In NAACL-HLT (1), 2019. Sergey Edunov, Alexei Baevski, and Michael Auli. Pre-trained language model representations for language generation. arXiv preprint arXiv:1903.09722, 2019. Angela Fan, David Grangier, and Michael Auli. Controllable abstractive summarization. arXiv preprint arXiv:1711.05217, 2017. Angela Fan, Edouard Grave, and Armand Joulin. Reducing transformer depth on demand with structured dropout. arXiv preprint arXiv:1909.11556, 2019. Edouard Grave, Armand Joulin, Moustapha Cissé, and Hervé Jégou. Efficient softmax approximation for gpus. In ICML, 2017. Alex Graves, Greg Wayne, and Ivo Danihelka. Neural turing machines. arXiv preprint arXiv:1410.5401, 2014. Michael Hahn. Theoretical limitations of self-attention in neural sequence models. Transactions of the Association for Computational Linguistics, 8:156–171, 2020. Jie Hao, Xing Wang, Baosong Yang, Longyue Wang, Jinfeng Zhang, and Zhaopeng Tu. Modeling recurrence for transformer. arXiv preprint arXiv:1904.03092, 2019. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770–778, 2016. Karl Moritz Hermann, Tomáš Koˇciský, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. Teaching machines to read and comprehend. In Proc. of NIPS, 2015. Xiaojie Jin, Yunpeng Chen, Zequn Jie, Jiashi Feng, and Shuicheng Yan. Multi-path feedback recurrent neural networks for scene parsing. In Thirty-First AAAI Conference on Artificial Intelligence, 2017. Armand Joulin and Tomas Mikolov. Inferring algorithmic patterns with stack-augmented recurrent nets. In Advances in neural information processing systems, pp. 190–198, 2015. Łukasz Kaiser and Ilya Sutskever. Neural gpus learn algorithms. arXiv preprint arXiv:1511.08228, 2015. Urvashi Khandelwal, Omer Levy, Dan Jurafsky, Luke Zettlemoyer, and Mike Lewis. Generalization through memorization: Nearest neighbor language models. arXiv preprint arXiv:1911.00172, 2019. 10 Ben Krause, Emmanuel Kahembwe, Iain Murray, and Steve Renals. Dynamic evaluation of trans- former language models. arXiv preprint arXiv:1904.08378, 2019. Tao Lei, Yu Zhang, Sida I Wang, Hui Dai, and Yoav Artzi. Simple recurrent units for highly parallelizable recurrence. arXiv preprint arXiv:1709.02755, 2017. Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E Hinton. Layer normalization. arXiv preprint arXiv:1607.06450, 2016. Chin-Yew Lin. Rouge: A package for automatic evaluation of summaries. In Text summarization branches out, pp. 74–81, 2004. Matt Mahoney. Large text compression benchmark. URL: http://www. mattmahoney. net/text/text. html, 2011. Stephen Merity. Single headed attention rnn: Stop thinking with your head. arXiv preprint arXiv:1911.11423, 2019. Stephen Merity, Nitish Shirish Keskar, and Richard Socher. Regularizing and optimizing lstm language models. 
arXiv preprint arXiv:1708.02182, 2017. Richard GM Morris. Spatial localization does not require the presence of local cues. Learning and motivation, 12(2):239–260, 1981. Myle Ott, Sergey Edunov, David Grangier, and Michael Auli. Scaling neural machine translation. arXiv preprint arXiv:1806.00187, 2018. Emilio Parisotto, H. Song, Jack W. Rae, Razvan Pascanu, Çaglar Gülçehre, Siddhant M. Jayakumar, Max Jaderberg, Raphael Lopez Kaufman, A. Clark, Seb Noury, M. Botvinick, N. Heess, and Raia Hadsell. Stabilizing transformers for reinforcement learning. ArXiv, abs/1910.06764, 2019. Jorge Pérez, Javier Marinkovi´c, and Pablo Barceló. On the turing completeness of modern neural network architectures. arXiv preprint arXiv:1901.03429, 2019. Jack W. Rae, Anna Potapenko, Siddhant M. Jayakumar, Chloe Hillier, and Timothy P. Lillicrap. Com- pressive transformers for long-range sequence modelling. In International Conference on Learning Representations, 2020. URL https://openreview.net/forum?id=SylKikSYDH. Aurko Roy, Mohammad Saffar, Ashish Vaswani, and David Grangier. Efficient content-based sparse attention with routing transformers. arXiv preprint arXiv:2003.05997, 2020. Abigail See, Peter J Liu, and Christopher D Manning. Get to the point: Summarization with pointer-generator networks. arXiv preprint arXiv:1704.04368, 2017. Rico Sennrich, Barry Haddow, and Alexandra Birch. Neural machine translation of rare words with subword units. In ACL (1), 2016. Peter Shaw, Jakob Uszkoreit, and Ashish Vaswani. Self-attention with relative position representations. In NAACL-HLT (2), 2018. Yanyao Shen, Xu Tan, Di He, Tao Qin, and Tie-Yan Liu. Dense information flow for neural machine translation. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pp. 1294–1303, 2018. Sainbayar Sukhbaatar, Arthur Szlam, Jason Weston, and Rob Fergus. End-to-end memory networks. In NIPS, 2015. Sainbayar Sukhbaatar, Edouard Grave, Piotr Bojanowski, and Armand Joulin. Adaptive attention span in transformers. In ACL, 2019. Ke Tran, Arianna Bisazza, and Christof Monz. The importance of being recurrent for modeling hierarchical structure. arXiv preprint arXiv:1803.03585, 2018. 11 Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. In NIPS, 2017. Zhiwei Wang, Yao Ma, Zitao Liu, and Jiliang Tang. R-transformer: Recurrent neural network enhanced transformer. arXiv preprint arXiv:1907.05572, 2019. 12 6 ADDITIONAL RESULTS 6.1 REINFORCEMENT LEARNING Maze Navigation Easy. We experiment with a slightly different version of the Maze Navigation task. Instead of an agent with forward, turn-left and turn-right actions, the agent has no orientation and there are only 4 movement actions corresponding to 4 cardinal directions. This makes navigation easier because the agent do not need to keep track of its orientation. Further, it is much easier to compute relative locations given a history of actions. This might explain why standard Transformers are not far behind Feedback Transformers in performance as shown in Figure 6 (left). We also compare to LSTMs, which performs much worse. See Section 7.2 for more implementation details. Water Maze. We modify the Morris Water Maze task (Morris, 1981) to make it more challenging. 
The maze is defined by a goal position and a mapping of cells to IDs — these remain fixed within an episode but change between episodes. The agent receives as an observation the cell IDs of its current location and of the target cell. When the agent finds the target, it receives a +1 reward and is randomly teleported. During the same episode, if the agent reaches a previously seen cell, it needs to remember how it reached the target from there in order to go back. Results are averaged over 10 trials (the reward is reported averaged over the last 500 episodes of each trial). As shown in Figure 6 (right), the Feedback Transformer converges to a higher average reward.

Figure 6: Averaged cumulative reward during training on the (left) Maze Navigation Easy and (right) Water Maze tasks.

6.2 IWSLT DE-EN

We additionally evaluate the Feedback Transformer on IWSLT De-En, a small machine translation dataset. We train a small Transformer model with 6 layers. For generation, we use beam size 5 without checkpoint averaging. Model quality is evaluated using tokenized BLEU. Results are shown in Figure 7 (left) and show that for shallower models, the Feedback Transformer has better performance than the standard Transformer.

6.3 SUMMARIZATION ON CNN-DAILYMAIL

We evaluate on the CNN-Dailymail multi-sentence summarization benchmark of 280K news articles (Hermann et al., 2015), modeling the first 400 words of the article (See et al., 2017). We evaluate using ROUGE (Lin, 2004), and use 3-gram blocking and tune the generation length (Fan et al., 2017). Figure 7 (right) displays the performance of the Feedback Transformer as the number of decoder layers is reduced, making only the decoder shallower. For all model depths, the Feedback architecture maintains a consistent improvement in ROUGE compared to the standard Transformer. Compared to sentence-level tasks such as translation, this summarization benchmark requires multi-sentence generation, and the increased capacity of the Feedback architecture is beneficial.

Figure 7: Results on (left) the IWSLT De-En dataset and (right) summarization on CNN-Dailymail, test set ROUGE-L for varying decoder depths.

Figure 8: Ablation results on char-PTB: instead of a weighted sum of all layers as the Feedback memory, only a single layer is used as the memory for all layers. We also include a setting where the average of all layers is used.

6.4 ABLATION STUDIES ON LANGUAGE MODELS

We investigate which layer of a model has the best representation to be used as a Feedback memory. In Feedback Transformers, a weighted sum of all layers is used as the memory and is fed to all layers. An alternative approach is to manually select one of the layers as the memory and let all layers attend to it. In Figure 8, we explore this approach, using the same 6-layer char-PTB models as in Section 4.3.2 (the top-only memory there corresponds to using the 6th and last layer as memory). We can see that representations from higher layers work better as memory, confirming our assumption of the importance of higher level representations. Simply averaging all layers together also works reasonably well.
Interestingly, when all layers attend to the first layer's output, the model works about as well as the standard Transformer. The weighted-sum approach matches the best performance because it can adapt to select any of the layers.

Here we study how different techniques affect model performance on WikiText-103. The results shown in Table 6 indicate:

• Pre-normalization combined with higher learning rates helps the performance, particularly for the standard Transformer.
• Increasing the context size with the adaptive span further improves the performance of both models.
• The technique of increasing the BPTT length during training for efficiency does not affect the final performance.
• The gap between the two models is consistent across these variations.

Model         Pre-norm + higher LR   Adapt. span   Increase BPTT   dev ppl
Transformer   no                     no            no              22.9
Transformer   no                     no            yes             22.9
Transformer   yes                    no            yes             21.0
Transformer   yes                    yes           no              20.6
Feedback      no                     no            no              19.7
Feedback      no                     no            yes             19.9
Feedback      yes                    no            yes             19.6
Feedback      yes                    yes           yes             19.0

Table 6: Ablation on WikiText-103 of various modeling choices. Results are shown without finetuning.

Next, we examine the effect of the model depth on performance on char-PTB and WikiText-103. This time, we keep the total number of parameters constant and only vary the number of layers, to isolate the effect of depth. This is achieved by proportionally increasing the head dimension and the ReLU layer size when we decrease the number of layers. The results in Figure 9 demonstrate that the standard Transformer improves as the depth increases. In contrast, the Feedback architecture is much more robust to reduced depth, even achieving its best performance on char-PTB with only two layers.

Figure 9: The performance on (left) char-PTB and (right) Wikitext-103 as a function of the model depth. The number of parameters is kept constant by increasing the width.

7 ADDITIONAL IMPLEMENTATION DETAILS

7.1 RANDOM WALK TASK DETAILS

We provide additional details for the random walk toy task we explore. The agent starts at a fixed position on an 8×8 grid. The available actions are 1) move one step forward, 2) turn left, and 3) turn right. At every time step, the agent randomly picks one of the three actions and executes it. An action is ignored if it cannot be executed, such as moving off the grid. After 100 actions, the agent is reset back to the initial position. The input to the model is the sequence of actions taken by the agent, plus a special symbol if there was a reset. The output is a sequence of location symbols corresponding to the agent's location after each action. We generate 10k training episodes, totalling 1M tokens. We use the same setup as in our language modeling experiments, except that the model now predicts separate output tokens rather than the next token. We concatenate all the episodes and feed them to the model as a single sequence. Training is done with the negative log-likelihood loss. See Table 9 for the hyperparameters used in the experiment. The attention span is set to 100, so that the models can attend to all the information they need to solve the task.

Figure 10: (left) Maze Navigation task and (right) Water Maze task.
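Based on the description in Section 7.1, a data generator for one random-walk episode might look like the sketch below; the exact action and location encodings used in the paper are not specified, so the symbols here are illustrative.

```python
import random

def random_walk_episode(size=8, length=100, seed=None):
    """Generate one episode of the random-walk task (Section 7.1, sketch).

    Returns (actions, locations): the action taken at each step and the
    agent's cell index after executing it. Moves off the grid are ignored.
    """
    rng = random.Random(seed)
    x, y, heading = 0, 0, 0                     # fixed start; heading 0=E, 1=S, 2=W, 3=N
    dx, dy = [1, 0, -1, 0], [0, 1, 0, -1]
    actions, locations = [], []
    for _ in range(length):
        a = rng.choice(["forward", "left", "right"])
        if a == "left":
            heading = (heading - 1) % 4
        elif a == "right":
            heading = (heading + 1) % 4
        else:
            nx, ny = x + dx[heading], y + dy[heading]
            if 0 <= nx < size and 0 <= ny < size:
                x, y = nx, ny                   # only move if it stays on the grid
        actions.append(a)
        locations.append(y * size + x)          # target symbol: current cell id
    return actions, locations
```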
x = 1 ; print x ; x ++ ; print x ; z = 8 ; print z ; print z ; x -- ; if x > z : z -- ; z ++ ; print z ; print x ; print x ; if z < x : z ++ ; x ++ ; z -- ; x -- ; if z > x : z -- ; z ++ ; if x > z : z ++ ; if z < 5 : y = 7 ; print x ; if x > z : z ++ ; x ++ ; y = 7 ; if x > 10 : x -- ; y -- ; x ++ ; z ++ ; print z ; y -- ; print x ; print x ; z ++ ; y ++ ; y ++ ; if z < 3 : y ++ ; if x > 4 : x ++ ; z -- ; x -- ; x -- ; print x ; y ++ ; z ++ ; y -- ; if x > z : z -- ; x ++ ; z -- ; print x ; z ++ ; print y ; y ++ ; y -- ; x -- ; print x ; y ++ ; print y ; y -- ; if z < x : x ++ ; if z > 4 : y -- ; z -- ; x ++ ; if y < x : y ++ ; print y ; print z ; z -- ; y -- ; x ++ ; y -- ; y ++ ; if y > 3 : z -- ; y ++ ; if z < 10 : z ++ ; z ++ ; y -- ; z ++ ; print z ; x -- ; y -- ; x -- ; x ++ ; if x < 4 : y -- ; print y ; print z ; if z > x : y -- ; print z ; if y < x : x -- ; print x ; print z ; if x < 4 : z -- ; if z < y : z ++ ; z -- ; x -- ; print x ; if z < x : y ++ ; print x ; print z ; y -- ; if z < 6 : x ++ ; z -- ; END Table 7: An example program from the algorithmic task with 3 variables. 7.2 MAZE NAVIGATION DETAILS We generate random 9 9 mazes using Kruskal’s algorithm. Dead ends are eliminated by randomly removing one of the blocks surrounding them. We randomly place 8 target objects with different colors as shown in Figure 10 (left). The agent is given a randomly selected color as a target. If the agent manages to reach the correct target, it gets a reward of +1 and a new target color is sampled. An episode ends after 200 steps. The observation includes the 3 3 area around the agent and target color. We train 2-layer Transformers with a hidden size 256 and 4 heads. We set the BPTT to 100 and the batch size to 1024. The reward discount rate is 0.99. The attention span is 200 so the agent can keep an entire episode in memory. All agents were trained using A2C with Adam with a learning rate of 0.0003 and a entropy cost of 0.0005. For the easy version of the task, we use RMSprop with a batch size of 128 and a learning rate of 0.0003. The RMSProp epsilon regularization parameter is set to 0.01 The LSTM model is a 3-layer LSTM with a hidden size of 256. 7.3 WATER MAZE DETAILS 15. The water maze task we designed is depicted visually in Figure 10 (right). The grid size is 15 To help exploration, the agent can see if the goal is within a 3 3 area around it. An episode ends after 200 steps. We train for 500M steps (2.5M episodes). We use 2-layer Transformers with hidden size of 64 and 1 head. The attention span is 200 so the agent can put an entire episode in memory. All agents where trained using A2C with RMSprop with entropy cost of 0.0001, RMSProp epsilon regularisation parameter of 0.01, batch size of 64, and BPTT 200. Feedback Transformer and Transformer baseline were trained with a learning rate of 0.0003. LSTM model is a 2-layer LSTM with hidden size of 64. For LSTM model we used a learning rate of 0.0004. 7.4 ALGORITHMIC TASK DETAILS In this task, each program consists of 100 simple statements that should be sequentially executed. The available statement types are: 1. Initialization. Assign an initial value to a variable like x=3. A variable can only be initialized once in each program. 16 Hyperparameter Summarization WMT En-De IWSLT De-En Encoder Layers Decoder Layers FFN Size Attention Heads Dropout Hidden Size Learning Rate 6 6 2048 8 0.3 512 0.0005 6 6 4096 16 0.3 1024 0.001 6 6 1024 4 0.3 512 0.0005 Table 8: Hyperparamers for sequence to sequence experiments. 
Hyperparameter       Random Walk / Algorithmic   char-PTB   Enwik8    WikiText-103 (small)   WikiText-103 (large)
Layers               4                           6          12        4                      8
Hidden size (d)      256                         384        512       512                    1024
FF size              4d                          4d         8d        8d                     4d
Head count (h)       4                           4          8         8                      8
Head dim             d/h                         d/h        2d/h      2d/h                   d/h
Attention span       100                         512        8192*     512                    512, 2048*
Dropout rate         0.2                         0.5        0.5       0.1                    0.3
Embed. dropout       -                           -          -         0.1                    0.2
BPTT len (M)         64                          128        128       256                    256
Batch size (B)       512                         2048       1024      512                    512
Learning rate        0.0001                      0.0015     0.0015    0.0007                 0.0007
Gradient clip        0.1                         1.0        0.1       0.1                    0.1
LR warm-up steps     1k                          1k         8k        8k                     8k
Parameters           3.2M                        10.7M      77M       44M                    139M

Table 9: Hyperparameters for language modeling experiments. Here * indicates the adaptive span.

2. Increment and decrement. Increment or decrement a variable's value by 1, like x++ or y--.

3. Print. Output the value of a certain variable, like print(y). Only this statement requires the model to make a prediction.

4. Conditional. Execute the nested statement only if a variable has a certain value, e.g., if x==4: y--.

Note that conditional and print statements cannot be nested. A program is generated by randomly choosing one statement after another, subject to the following conditions: a variable must be initialized before being used, and a variable's value has to stay between 1 and 10. The training data contains 10k such programs concatenated with a special separator keyword. We generate two versions of the data, with 3 and 5 different variables. An example program is shown in Table 7. We used the same hyperparameters as for the random walk task, as shown in Table 9.

7.5 MACHINE TRANSLATION AND SUMMARIZATION

We detail the hyperparameters in Table 8. Summarization experiments are done with the Transformer base architecture size, and WMT En-De experiments are done with the Transformer big architecture size. As IWSLT De-En is a smaller dataset, we use a smaller model. For all sequence to sequence experiments, only the decoder is modified to have the Feedback Transformer architecture.

# 7.6 LANGUAGE MODELING

In the language modeling experiments, we added several improvements on top of the original Transformer (Vaswani et al., 2017) to better adapt it to unbounded sequences:

• Hidden representation caching (Dai et al., 2019): Since the input to the model is an unbounded sequence and the model needs to process it in small blocks, hidden representations from previous blocks are kept in a cache so that any token in the current block will have the same context length regardless of its position in the block.

• Relative position embedding (Shaw et al., 2018): Relative position embeddings allow each token in a block to be processed in the same way regardless of its absolute position in the block. We found adding shared embeddings to the key vectors at every layer to be effective.

• Adaptive attention span (Sukhbaatar et al., 2019): Language modeling requires a model to have a very long attention span, which is computationally expensive. The adaptive span mechanism allows each attention head to learn a different attention span for efficiency.

• Pre-normalization (Child et al., 2019): We observed that pre-normalization makes training more stable for Transformers, which allowed us to use larger batch sizes for better parallelization.

Dropouts are applied to the attention and ReLU activations. In the WikiText-103 models, additional dropouts are added to the embedding layer output and the last sublayer output. In Table 9, we present the hyperparameter values used for our experiments. We use the same hyperparameters for both Transformers and Feedback Transformers, and optimize them with Adam.
The final performances are obtained by finetuning the models with a 10x smaller learning rate.

Details on the char-PTB experiments. We trained the models for 15k updates (or fewer if the validation loss stopped decreasing), and finetuned them for 1k steps. We varied the depth of the models while keeping the number of parameters constant. This is achieved by changing the FF size and the head dimension inversely proportionally to the depth.

Details on the enwik8 experiments. We used an adaptive span limited to 8192 tokens with a loss of 0.0000005. The training is done for 100k updates and another 10k steps are used for finetuning. The BPTT warm-up is used to speed up training: the BPTT length is decreased to 64 for the first half of training.

Details for training on WikiText-103. We employed the adaptive input (Baevski & Auli, 2019) and adaptive softmax (Grave et al., 2017) techniques to reduce the number of parameters in the word embeddings. The models are trained for 200k steps and then finetuned for an additional 10k steps. While most of the models have a fixed attention span of 512, the best performance is achieved by extending the attention span to 2048 with an adaptive span loss of 0.00001. After training our models, we noticed that our tokenization method differed from others by omitting end-of-line (EOL) symbols. Since our dictionary already contained the EOL token, we were able to finetune our trained models on the data with EOL tokens, rather than training them from scratch. This change alone brought an improvement of about 1 perplexity point.
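As a rough illustration of the hidden-representation caching described in Section 7.6, the sketch below shows how a fixed-size cache of per-layer states could be carried across blocks so that every token sees the same context length; the interface (`model_layers`, `blocks`, `span`) is hypothetical and far simpler than an actual implementation.

```python
import torch

def process_stream(model_layers, blocks, span):
    """Process a long sequence block by block, caching per-layer hidden states.

    model_layers : callables layer(x, context) -> hidden states for the block
    blocks       : iterable of tensors of shape (M, d)
    span         : maximum number of past positions kept in the cache
    """
    caches = [None] * len(model_layers)
    for x in blocks:
        for i, layer in enumerate(model_layers):
            context = x if caches[i] is None else torch.cat([caches[i], x], dim=0)
            caches[i] = context[-span:].detach()   # keep only the last `span` states
            x = layer(x, context)                  # attend over cached + current states
        yield x
```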
{ "id": "1807.03819" }
2002.09277
Kernel and Rich Regimes in Overparametrized Models
A recent line of work studies overparametrized neural networks in the "kernel regime," i.e. when the network behaves during training as a kernelized linear predictor, and thus training with gradient descent has the effect of finding the minimum RKHS norm solution. This stands in contrast to other studies which demonstrate how gradient descent on overparametrized multilayer networks can induce rich implicit biases that are not RKHS norms. Building on an observation by Chizat and Bach, we show how the scale of the initialization controls the transition between the "kernel" (aka lazy) and "rich" (aka active) regimes and affects generalization properties in multilayer homogeneous models. We also highlight an interesting role for the width of a model in the case that the predictor is not identically zero at initialization. We provide a complete and detailed analysis for a family of simple depth-$D$ models that already exhibit an interesting and meaningful transition between the kernel and rich regimes, and we also demonstrate this transition empirically for more complex matrix factorization models and multilayer non-linear networks.
http://arxiv.org/pdf/2002.09277
Blake Woodworth, Suriya Gunasekar, Jason D. Lee, Edward Moroshko, Pedro Savarese, Itay Golan, Daniel Soudry, Nathan Srebro
cs.LG, stat.ML
This updates and significantly extends a previous article (arXiv:1906.05827), Sections 6 and 7 are the most major additions. 31 pages. arXiv admin note: text overlap with arXiv:1906.05827
null
cs.LG
20200220
20200727
0 2 0 2 l u J 7 2 ] G L . s c [ 3 v 7 7 2 9 0 . 2 0 0 2 : v i X r a # Kernel and Rich Regimes in Overparametrized Models Blake Woodworth Toyota Technological Institute at Chicago [email protected] Suriya Gunasekar Microsoft Research [email protected] Jason D. Lee Princton University [email protected] Edward Moroshko Technion [email protected] Pedro Savarese Toyota Technological Institute at Chicago [email protected] Itay Golan Technion [email protected] Daniel Soudry Technion daniel.soudry@ technion.ac.il Nathan Srebro Toyota Technological Institute at Chicago [email protected] # Abstract A recent line of work studies overparametrized neural networks in the “kernel regime,” i.e., when during training the network behaves as a kernelized linear predictor, and thus, training with gradient descent has the effect of finding the corresponding minimum RKHS norm solution. This stands in contrast to other studies which demonstrate how gradient descent on overparametrized networks can induce rich implicit biases that are not RKHS norms. Building on an observation by Chizat et al. [6], we show how the scale of the initialization controls the transition between the “kernel” (aka lazy) and “rich” (aka active) regimes and affects generalization properties in multilayer homogeneous models. We provide a complete and detailed analysis for a family of simple depth-D linear networks that exhibit an interesting and meaningful transition between the kernel and rich regimes, and highlight an interesting role for the width of the models. We further demonstrate this transition empirically for matrix factorization and multilayer non-linear networks. 1 # Introduction A string of recent papers study neural networks trained with gradient descent in the “kernel regime.” They observe that, in a certain regime, networks trained with gradient descent behave as kernel methods [7, 15, 24]. This allows one to prove convergence to zero error solutions in overparametrized settings [1, 2, 4, 6, 8, 9, 16, 25]. This also implies that the learned function is the the minimum norm solution in the corresponding RKHS [4, 6, 19], and more generally that models inherit the inductive bias and generalization behavior of the RKHS. This suggests that, in a certain regime, deep models can be equivalently replaced by kernel methods with the “right” kernel, and deep learning boils down to a kernel method with a fixed kernel determined by the architecture and initialization, and thus it can only learn problems learnable by appropriate kernel. This contrasts with other recent results that show how in deep models, including infinitely overparametrized networks, training with gradient descent induces an inductive bias that cannot be represented as an RKHS norm. For example, analytic and/or empirical results suggest that gradient descent on deep linear convo- utional networks implicitly biases toward minimizing the L, bridge penalty, for p = 2/depth < 1, in the frequency domain [13]; on an infinite width single input ReLU network infinitesimal weight decay biases owards minimizing the second order total variations f|f’(«)|dx of the learned function [21], further, em- irically it has been observed that this bias is implicitly induced by gradient descent without explicit weight decay [21, 23]; and gradient descent on a overparametrized matrix factorization, which can be thought of as a two layer linear network, induces nuclear norm minimization of the learned matrix and can ensure low rank matrix recovery [3, 12, 17]. 
of these natural inductive biases are Hilbert norms, and therefore hey cannot be captured by any kernel. This suggests that training deep models with gradient descent can ehave very differently from kernel methods, and have richer inductive biases. So, does the kernel approximation indeed capture the behavior of deep learning in a relevant and inter- esting regime, or does the success of deep learning come from escaping this regime to have richer inductive biases that exploits the multilayer nature of neural networks? In order to understand this, we must first understand when each of these regimes hold, and how the transition between the “kernel regime” and the “rich regime” happens. 1 Early investigations of the kernel regime emphasize the number of parameters (“width”) going to infinity as leading to this regime (see e.g., [7, 15, 24]). However, Chizat et al. [6] identified the scale of the model at initialization as a quantity controlling entry into the kernel regime. Their results suggest that for any number of parameters (any width), a homogeneous model can be approximated by a kernel when its scale at initialization goes to infinity (see the survey in Section 3). Considering models with increasing (or infinite) width, the relevant regime (kernel or rich) is determined by how the scaling at initialization behaves as the width goes to infinity. In this paper we elaborate and expand of this view, carefully studying how the scale of initialization affects the model behaviour for D-homogeneous models. Our Contributions In Section 4 we analyze in detail a simple 2-homogeneous model for which we can exactly characterize the implicit bias of training with gradient descent as a function of the scale, a, of initialization. We show: (a) the implicit bias transitions from the ¢: norm in the a —> oo limit to ¢; in the a — 0 limit; (b) consequently, for certain problems e.g., high dimensional sparse regression, using a small initialization can be necessary for good generalization; and (c) we highlight how the “shape” of the initialization, 7.e., the relative scale of the parameters, affects the a — oo bias but not the a > 0 bias. In Section 5 we extend this analysis to analogous D-homogeneous models, showing that the order of homogeneity or the “depth” of the model hastens the transition into the ¢; regime. In Section 6, we analyze asymmetric matrix factorization models, and show that the “width” (i.e., the inner dimension of the factorization) has an interesting role to play in controlling the transition between kernel and rich behavior which is distinct from the scale. In Section 7, we show qualitatively similar behavior for deep ReLU networks. # 2 Setup and preliminaries We consider models f : R? x & — R which map parameters w € R? and examples x € Â¥ to predictions f(w,x) € R. We denote the predictor implemented by the parameters w as F(w) € {f : Â¥ > R}, such that F(w)(x) = f(w,x). Much of our focus will be on models, such a linear networks, which are linear in x (but not in the parameters w), in which case F(w) is a linear functional in the dual space Y* and can be represented as a vector Bw with f(w,x) = (Bw, x). Such models are essentially alternate parametrizations of linear models, but as we shall see that the specific parametrization is crucial. We focus on models that are D-positive homogeneous in the parameters w, for some integer D ≥ 1, meaning that for any c ∈ R+, F (c · w) = cDF (w). We refer to such models simply as D-homogeneous. 
Many interesting model classes have this property, including multi-layer ReLU networks with fully connected and convolutional layers, layered linear networks, and matrix factorization, where D corresponds to the depth of the network. n=1(f (w, xn) − yn)2 to denote the squared loss of the model over a training set (x1, y1), . . . , (xN , yN ). We consider minimizing the loss L(w) using gradient descent with infinitesimally small stepsize, i.e., gradient flow dynamics ˙w(t) = −∇L(w(t)). (1) We are particularly interested in the scale of initialization and capture it through a scalar parameter a € Ry. For scale a, we will denote by Wa,w, (t) the gradient flow path (1) with the initial condition Waw,(0) = awo. We consider underdetermined/overparameterized models (typically N < p), where there are many global minimizers of L(w) with L(w) = 0. Often, the dynamics of gradient flow converge to global minimizers of L(w) which perfectly fits the data—this is often observed empirically in large neural network learning, though proving this is challenging and is not our focus. Rather, we want to understand which of the many minimizers gradient flow converges to, i.€., Wow, ?= limy+oo Wa,wo(t) or, more importantly, the predictor F(wew,) reached by gradient flow depending on the scale a. # α,w0 # 3 The Kernel Regime Locally, gradient descent/flow depends solely on the first-order approximation w.r.t. w: F(w,x) = f(w(t), ©) + (w — w(t), Vw £(w(t),x)) + O(||w — w(t)|)?). (2) 2 That is, gradient flow operates on the model as if it were an affine model f(w, x) © fo(x)+(w, dw(1)(x)) with feature map ¢w(1)(x) = Vw (w(t), x), corresponding to the tangent kernel Kw)(x,x') = (Vwf (w(t), x), Vwf(w(t),x’)). Of particular interest is the tangent kernel at initialization, Kyo) {15, 24]. Previous work uses “kernel regime” to describe a situation in which the tangent kernel Ky 1) does not change over the course of optimization or, less formally, where it does not change significantly, z.e., where Vt, Kwt) © Kwo). For D homogeneous models with initialization wo(0) = awo, Kw.) = ar(P-)) Ko, where we denote Ky = Ky,. Thus, in the kernel regime, training the model f(w,x) is exactly equivalent to training an affine model fx (w,x) = a? f(w(0),x) + (dwo)(x), w — w(0)) with kernelized gradient de- scent /flow with the kernel Kyo) and a “bias term” of f(w(0),x). Minimizing the loss of this affine model using gradient flow reaches the solution nearest to the initialization where distance is measured with respect to the RKHS norm determined by Ko. That is, F(wS°) = argmin,||h — F(awo)||«, s.t. h(X) = y. To avoid handling this bias term, and in particular its large scale as a increases, Chizat et al. [6] suggest using “unbiased” initializations such that F(wo) = 0, so that the bias term vanishes. This is often achieved by replicating units with opposite signs at initialization (see, e.g., Section 4). But when does the kernel regime happen? Chizat et al. [6] showed that for any homogeneous1 model satisfying some technical conditions, the kernel regime is reached when α → ∞. That is, as we increase the scale of initialization, the dynamics converge to the kernel gradient flow dynamics for the initial kernel K0. In Sections 4 and 5, for our specific models, we prove this limit as a special case of our more general analysis for all α > 0, and we also demonstrate it empirically for matrix factorization and deep networks in Sections 6 and 7. 
In Section 6, we additionally show how increasing the "width" of certain asymmetric matrix factorization models can also lead to the kernel regime, even when the initial scale α goes to zero at an appropriately slow rate.

In contrast to the kernel regime, and as we shall see in later sections, the α → 0 small-initialization limit often leads to very different and rich inductive biases, e.g., inducing sparsity or low-rank structure [12, 13, 17], that allow for generalization in settings where kernel methods would not. We will refer to the limit of this distinctly non-kernel behavior as the "rich limit." This regime is also called the "active," "adaptive," or "feature-learning" regime, since the tangent kernel K_{w(t)} changes over the course of training, in a sense adapting to the data. We argue that this rich limit is the one that truly allows us to exploit the power of depth, and thus is the more relevant regime for understanding the success of deep learning.

# 4 Detailed Study of a Simple Depth-2 Model

Consider the class of linear functions over X = R^d, with squared parameterization as follows:

f(w, x) = Σ_{i=1}^d (w_{+,i}^2 − w_{−,i}^2) x_i = ⟨β_w, x⟩,   w = [w_+; w_−] ∈ R^{2d},   and   β_w = w_+^2 − w_−^2,   (3)

where z^2 for z ∈ R^d denotes elementwise squaring. The model can be thought of as a "diagonal" linear neural network (i.e., one where the weight matrices have diagonal structure) with 2d units. A "standard" diagonal linear network would have d units, with each unit connected to just a single input unit with weight u_i and to the output with weight v_i, thus implementing the model f((u, v), x) = Σ_i u_i v_i x_i, which is illustrated in Figure 9(a) in Appendix B. However, we also show in Appendix B that if |u_i| = |v_i| at initialization, then their magnitudes remain equal and their signs do not flip throughout training. Therefore, we can equivalently parametrize the model in terms of a single shared input and output weight w_i for each hidden unit, yielding the model f(w, x) = ⟨w^2, x⟩.

The reason for using an "unbiased model" with two sets of weights w_+ and w_− (i.e., 2d units; see the illustration in Figure 9(b) in Appendix B) is two-fold. First, it ensures that the image of F(w) is all (signed) linear functions, and thus the model is truly equivalent to standard linear regression. Second, it allows for initialization at F(α w_0) = 0 (by choosing w_+(0) = w_−(0)) without this being a saddle point from which gradient flow will never escape.^2

1 Chizat et al. did not consider only homogeneous models, and instead of studying the scale of initialization they studied scaling the output of the model. For homogeneous models, the dynamics obtained by scaling the initialization are equivalent to those obtained by scaling the output, and so here we focus on homogeneous models and on scaling the initialization.

2 Our results can be generalized to "biased" initialization (i.e., where w_− ≠ w_+ at initialization), or to the asymmetric parametrization f((u, v), x) = Σ_i u_i v_i x_i; however, this complicates the presentation without adding much insight.

(a) Generalization   (b) Norms of solution   (c) Sample complexity

Figure 1: In (a), the population error of the gradient flow solution vs. α in the sparse regression problem described in Section 4. In (b), we plot ||β^∞_{α,1}||_1 − ||β*_{ℓ1}||_1 in blue and ||β^∞_{α,1}||_2 − ||β*_{ℓ2}||_2 in red vs. α. In (c), for each sample size N, the largest α such that β^∞_{α,1} achieves population error at most 0.025 is shown. The dashed line indicates the number of samples needed by β*_{ℓ1}.
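A small simulation in the spirit of Figure 1 can be written in a few lines. The sketch below is our own illustration (the problem sizes, noise level, step size, and iteration count are arbitrary choices, and small-step gradient descent stands in for gradient flow): it trains the squared-parameterization model (3) on a sparse regression instance for several scales α and reports the norms of the resulting interpolant and its distance to the planted β*.

```python
import numpy as np

rng = np.random.default_rng(0)
d, N, r_star = 100, 40, 5
beta_star = np.zeros(d)
beta_star[:r_star] = 1.0 / np.sqrt(r_star)        # r*-sparse planted predictor
X = rng.standard_normal((N, d))
y = X @ beta_star + 0.1 * rng.standard_normal(N)  # y_n ~ N(<beta*, x_n>, 0.01)

def train(alpha, eta=1e-4, steps=200_000):
    # gradient descent on L(w) = ||X(w_+^2 - w_-^2) - y||^2 from w_+(0) = w_-(0) = alpha * 1
    wp = alpha * np.ones(d)
    wm = alpha * np.ones(d)
    for _ in range(steps):
        g = X.T @ (X @ (wp ** 2 - wm ** 2) - y)   # X^T r, with residual r = X beta - y
        wp -= eta * 4 * g * wp                    # dL/dw_+ =  4 (X^T r) * w_+
        wm += eta * 4 * g * wm                    # dL/dw_- = -4 (X^T r) * w_-
    return wp ** 2 - wm ** 2

for alpha in [1e-3, 1e-2, 1e-1, 1.0]:
    b = train(alpha)
    print(f"alpha={alpha:6.3f}  ||b||_1={np.linalg.norm(b, 1):6.2f}  "
          f"||b||_2={np.linalg.norm(b, 2):5.2f}  ||b - beta*||={np.linalg.norm(b - beta_star):.3f}")
```

Smaller α should yield interpolants with smaller ℓ1 norm and smaller distance to β*, while larger α yields the denser, ℓ2-flavored solutions discussed below.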
The model (3) is perhaps the simplest non-trivial D-homogeneous model for D > 1, and we chose it for studying the role of the scale of initialization because it already exhibits distinct and interesting kernel and rich behaviors, and because we can completely understand both the implicit regularization and the transition between regimes analytically. We study the underdetermined N ≪ d case, where there are many possible solutions Xβ = y. We will use β^∞_{α,w_0} to denote the solution reached by gradient flow when initialized at w_+(0) = w_−(0) = α w_0.

We will start by focusing on the special case where w_0 = 1. In this case, the tangent kernel at initialization is K_{w(0)}(x, x') = 8α^2 ⟨x, x'⟩, which is just a scaling of the standard inner product kernel, so ||·||_{K_{w(0)}} ∝ ||·||_2. Thus, in the kernel regime, β^∞_{α,1} will be the minimum ℓ2 norm solution, β*_{ℓ2} := arg min_{Xβ=y} ||β||_2. Following Chizat et al. [6] and the discussion in Section 3, we thus expect that lim_{α→∞} β^∞_{α,1} = β*_{ℓ2}. In contrast, from Corollary 2 of Gunasekar et al. [12], as α → 0, gradient flow leads instead to a rich limit of ℓ1 minimization, i.e., lim_{α→0} β^∞_{α,1} = β*_{ℓ1} := arg min_{Xβ=y} ||β||_1. Comparing this with the kernel regime, we already see two distinct behaviors and, in high dimensions, two very different inductive biases. In particular, the rich limit ℓ1 bias is not an RKHS norm for any choice of kernel.

We have now described the asymptotic regimes where α → 0 or α → ∞, but can we characterize and understand the transition between the two regimes as α scales from very small to very large? The following theorem does just that.

Theorem 1 (Special case: w_0 = 1). For any 0 < α < ∞, if the gradient flow solution β^∞_{α,1} for the squared-parameterization model in eq. (3) satisfies Xβ^∞_{α,1} = y, then

β^∞_{α,1} = arg min_β Q_α(β)   s.t.   Xβ = y,   (4)

where Q_α(β) = α^2 Σ_{i=1}^d q(β_i / α^2) and q(z) = ∫_0^z arcsinh(u/2) du = 2 − √(4 + z^2) + z arcsinh(z/2).

A General Approach for Deriving the Implicit Bias Once given an expression for Q_α, it is straightforward to analyze the dynamics of β_{α,1} and show that it is the minimum-Q_α solution to Xβ = y. However, a key contribution of this work is in developing a method for determining what the implicit bias is when we do not already have a good guess. First, we analyze the gradient flow dynamics and show that if Xβ^∞_{α,1} = y, then β^∞_{α,1} = b_α(X^T ν) for a certain function b_α and vector ν. It is not necessary to be able to calculate ν, which would be very difficult even for our simple examples. Next, we suppose that there is some function Q_α such that (4) holds. The KKT optimality conditions for (4) are Xβ* = y and ∃ν s.t. ∇Q_α(β*) = X^T ν. Therefore, if indeed β^∞_{α,1} = β* and Xβ^∞_{α,1} = y, then ∇Q_α(β^∞_{α,1}) = ∇Q_α(b_α(X^T ν)) = X^T ν. We solve the differential equation ∇Q_α = b_α^{−1} to yield Q_α. Theorem 1 is proven in Appendix C using this method.

In light of Theorem 1, the function Q_α (referred to as the "hypentropy" function by Ghai et al. [11]) can be understood as an implicit regularizer which biases the gradient flow solution towards one particular zero-error solution out of the many possibilities. As α ranges from 0 to ∞, the Q_α regularizer interpolates between the ℓ1 and ℓ2 norms, as illustrated in Figure 3(a) (the line labelled D = 2 depicts the coordinate function q). As α → ∞ we have β_i/α^2 → 0, and the behaviour of Q_α(β) is governed by q(z) = Θ(z^2) around z = 0, so that Q_α(β) ∝ Σ_i β_i^2. On the other hand, as α → 0 we have |β_i/α^2| → ∞, and the behaviour is determined by q(z) = Θ(|z| log|z|) as |z| → ∞. In this regime,

(1 / log(1/α^2)) Q_α(β) = ||β||_1 + (1 / log(1/α^2)) Σ_i |β_i| log|β_i| + O(1 / log(1/α^2)) → ||β||_1.
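The KKT-based derivation above is easy to check numerically. The following sketch (our own; the dimensions, α, step size, and iteration count are arbitrary, and small-step gradient descent approximates gradient flow) trains the model, then verifies that ∇Q_α(β^∞_{α,1}) = arcsinh(β^∞_{α,1}/(2α^2)) lies (approximately) in the row space of X, as Theorem 1 requires.

```python
import numpy as np

rng = np.random.default_rng(0)
d, N = 30, 10
X = rng.standard_normal((N, d))
y = rng.standard_normal(N)
alpha, eta, steps = 0.05, 1e-4, 400_000

# small-step gradient descent on w_+, w_- (a stand-in for gradient flow)
wp = alpha * np.ones(d)
wm = alpha * np.ones(d)
for _ in range(steps):
    g = X.T @ (X @ (wp ** 2 - wm ** 2) - y)
    wp -= eta * 4 * g * wp
    wm += eta * 4 * g * wm
beta = wp ** 2 - wm ** 2
print("training residual:", np.linalg.norm(X @ beta - y))

# KKT check for beta = argmin Q_alpha(beta) s.t. X beta = y:
# grad Q_alpha(beta)_i = arcsinh(beta_i / (2 alpha^2)) should equal (X^T nu)_i for some nu
target = np.arcsinh(beta / (2 * alpha ** 2))
nu, *_ = np.linalg.lstsq(X.T, target, rcond=None)
print("relative KKT residual:",
      np.linalg.norm(X.T @ nu - target) / np.linalg.norm(target))
```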
The following theorem, proven in Appendix D, quantifies the scale of α which guarantees that β^∞_{α,1} approximates the minimum ℓ1 or ℓ2 norm solution:

Theorem 2. For any 0 < ε < d, under the setting of Theorem 1 with w_0 = 1,

α ≤ min{ (2(1 + ε)||β*_{ℓ1}||_1)^{−(1 + 1/ε)}, exp(−d / (ε||β*_{ℓ1}||_1)) }   ⟹   ||β^∞_{α,1}||_1 ≤ (1 + ε)||β*_{ℓ1}||_1,
α ≥ √((1 + ε)(1 + 2/ε)||β*_{ℓ2}||_2)   ⟹   ||β^∞_{α,1}||_2^2 ≤ (1 + ε)||β*_{ℓ2}||_2^2.

Looking carefully at Theorem 2, we notice a certain asymmetry between reaching the kernel regime versus the rich limit: polynomially large α suffices to approximate β*_{ℓ2} to a very high degree of accuracy, but exponentially small α is needed to approximate β*_{ℓ1}.^3 This suggests an explanation for the difficulty of empirically demonstrating rich-limit behavior in matrix factorization problems [3, 12]: since the initialization may need to be exceedingly small, conducting experiments in the truly rich limit may be infeasible for computational reasons.

3 Theorem 2 only shows that exponentially small α is sufficient for approximating β*_{ℓ1}, and is not a proof that it is necessary. However, Lemma 2 in Appendix D proves that α ≤ d^{−Ω(1/ε)} is indeed necessary for Q_α to be proportional to the ℓ1 norm for every unit vector simultaneously. This indicates that α must be exponentially small to approximate β*_{ℓ1} for certain problems.

Generalization In order to understand the effect of the initialization on generalization, consider a simple sparse regression problem, where x_1, . . . , x_N ~ N(0, I) and y_n ~ N(⟨β*, x_n⟩, 0.01), where β* is r*-sparse with non-zero entries equal to 1/√r*. When N ≪ d, gradient flow will generally reach a zero training error solution; however, not all of these solutions will generalize equally well. In the rich limit, N = Ω(r* log d) samples suffice for β*_{ℓ1} to generalize well. On the other hand, even though we can fit the training data perfectly well, the kernel regime solution β*_{ℓ2} would not generalize at all with this sample size (N = Ω(d) samples would be needed); see Figure 1(c). Thus, in this case good generalization requires using very small initialization, and generalization will tend to improve as α decreases. From an optimization perspective this is unfortunate, because w = 0 is a saddle point, so taking α → 0 will likely increase the time needed to escape the vicinity of zero.

Thus, there seems to be a tension between generalization and optimization: a smaller α might improve generalization, but it makes optimization trickier. This suggests that one should operate just on the edge of the rich limit, using the largest α that still allows for generalization. This is borne out by our experiments with deep, non-linear neural networks (see Section 7), where standard initializations correspond to being right on the edge of entering the kernel regime, where we expect models to both generalize well and avoid serious optimization difficulties. Given the extensive efforts put into designing good initialization schemes, this gives further credence to the idea that models will perform best when trained in the intermediate regime between rich and kernel behavior.

This tension can also be seen through a tradeoff between the sample size and the largest α we can use and still generalize. In Figure 1(c), for each sample size N, we plot the largest α for which the gradient flow solution β^∞_{α,1} achieves population risk below some threshold. As N approaches the minimum number of samples for which β*_{ℓ1} generalizes (the vertical dashed line), α must become extremely small. However, generalization is much easier if the number of samples is only slightly larger, and a much larger α suffices.
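The gap between the two limiting interpolants on this sparse problem can be checked directly, without simulating gradient flow at all. The sketch below (ours; d, r*, and N are arbitrary illustrative choices) computes the minimum-ℓ2 interpolant via the pseudoinverse and the minimum-ℓ1 interpolant via a standard linear program, and compares their population errors (which, for Gaussian inputs and noiseless labels, equal ||β − β*||_2^2).

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
d, r_star, N = 200, 5, 60            # N of order r* log d: enough for l1, far too few for l2
beta_star = np.zeros(d)
beta_star[:r_star] = 1.0 / np.sqrt(r_star)
X = rng.standard_normal((N, d))
y = X @ beta_star                    # noiseless labels, for a clean comparison

# minimum l2-norm interpolant (the kernel-regime limit beta*_{l2})
beta_l2 = np.linalg.pinv(X) @ y

# minimum l1-norm interpolant (the rich limit beta*_{l1}):
# min ||beta||_1 s.t. X beta = y, written as an LP over beta = p - q with p, q >= 0
res = linprog(c=np.ones(2 * d), A_eq=np.hstack([X, -X]), b_eq=y,
              bounds=(0, None), method="highs")
beta_l1 = res.x[:d] - res.x[d:]

for name, b in [("min l2", beta_l2), ("min l1", beta_l1)]:
    # population error of a linear predictor with x ~ N(0, I): E<b - beta*, x>^2 = ||b - beta*||^2
    print(f"{name}: population error = {np.linalg.norm(b - beta_star) ** 2:.4f}")
```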
The "Shape" of w_0 and the Implicit Bias So far, we have discussed the implicit bias in the special case w_0 = 1, but we can also characterize it for non-uniform initialization w_0:

Theorem 1 (General case). For any 0 < α < ∞ and w_0 with no zero entries, if the gradient flow solution β^∞_{α,w_0} satisfies Xβ^∞_{α,w_0} = y, then

β^∞_{α,w_0} = arg min_β Q_{α,w_0}(β)   s.t.   Xβ = y,   (5)

where Q_{α,w_0}(β) = Σ_{i=1}^d α^2 w_{0,i}^2 q(β_i / (α^2 w_{0,i}^2)) and q(z) = 2 − √(4 + z^2) + z arcsinh(z/2).

Consider the asymptotic behavior of Q_{α,w_0}. For small z, q(z) = z^2/4 + O(z^4), so for α → ∞,

Q_{α,w_0}(β) = Σ_{i=1}^d α^2 w_{0,i}^2 q(β_i / (α^2 w_{0,i}^2)) = Σ_{i=1}^d β_i^2 / (4 α^2 w_{0,i}^2) + O(α^{−6}).   (6)

In other words, in the α → ∞ limit, Q_{α,w_0}(β) is proportional to a quadratic norm weighted by diag(1/w_0^2). On the other hand, for large |z|, q(z) = |z| log|z| + O(1/|z|), so as α → 0,

(1 / log(1/α^2)) Q_{α,w_0}(β) = (1 / log(1/α^2)) Σ_{i=1}^d α^2 w_{0,i}^2 q(β_i / (α^2 w_{0,i}^2)) = ||β||_1 + O(1 / log(1/α^2)).   (7)

So, in the α → 0 limit, Q_{α,w_0}(β) is proportional to ||β||_1, regardless of the shape of the initialization w_0! The specifics of the initialization, w_0, therefore affect the implicit bias in the kernel regime (and in the intermediate regime) but not in the rich limit. For wide neural networks with i.i.d. initialized units, the analogue of the "shape" is the distribution used to initialize each unit, including the relative scale of the input weights, output weights, and biases. Indeed, as was explored by Williams et al. [23] and as we elaborate in Section 7, changing the unit initialization distribution changes the tangent kernel at initialization and hence the kernel regime behavior. However, we also demonstrate empirically that changing the initialization distribution ("shape") does not change the rich regime behavior. These observations match the behavior of Q_{α,w_0} analyzed above.

Explicit Regularization From the geometry of gradient descent, it is tempting to imagine that its implicit bias would be given by minimizing the Euclidean distance from initialization:

β^R_{α,w_0} := F( arg min_w ||w − α w_0||_2^2   s.t.   L(w) = 0 ) = arg min_β R_{α,w_0}(β)   s.t.   Xβ = y,   (8)

where R_{α,w_0}(β) = min_w ||w − α w_0||_2^2   s.t.   F(w) = β.   (9)

This is certainly the case for standard linear regression f(w, x) = ⟨w, x⟩, where standard analysis shows that β^∞_{α,w_0} = β^R_{α,w_0}, so the bias is captured by R_{α,w_0}. But does this characterization fully explain the implicit bias for our 2-homogeneous model? Perhaps the behavior in terms of Q_{α,w_0} can also be explained by R_{α,w_0}? Focusing on the special case w_0 = 1, it is easy to verify that the limiting behaviors as α → 0 and α → ∞ of the two approaches match. We can also calculate R_{α,1}(β), which decomposes over the coordinates as R_{α,1}(β) = α^2 Σ_i r(β_i / α^2), where r(z) is the unique real root of a quartic polynomial p_z(u) whose coefficients depend on z.

Figure 2: q(z) and r(z).

This function r(z) is shown next to q(z) in Figure 2. They are similar, but not the same, since r(z) is algebraic (even radical), while q(z) is transcendental. Thus Q_{α,1}(β) ≠ R_{α,1}(β), and they are not simple rescalings of each other either. Furthermore, while α needs to be exponentially small in order for Q_{α,1} to approximate the ℓ1 norm, the algebraic R_{α,1}(β) approaches ||β||_1 polynomially in terms of the scale of α. Therefore, the bias of gradient descent and the transition from the kernel regime to the rich limit are more complex and subtle than what is captured simply by distances in parameter space.
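For the w_0 = 1 case, the comparison between q and the per-coordinate distance function underlying R_{α,1} can be made numerically without solving the quartic. In the sketch below (our own illustration), we compute, for α = 1, the per-coordinate quantity min{(a − 1)^2 + (b − 1)^2 : a^2 − b^2 = z} by a grid search over b and print it next to q(z); the exact normalization of r(z) in the text may differ by rescaling, but this is the same coordinate-wise object.

```python
import numpy as np

def q(z):
    # coordinate function of the implicit regularizer Q_alpha from Theorem 1
    return 2.0 - np.sqrt(4.0 + z ** 2) + z * np.arcsinh(z / 2.0)

def r_numeric(z, b_max=5.0, num=50_001):
    # per-coordinate "distance from initialization" at alpha = 1:
    # min over a^2 - b^2 = z of (a - 1)^2 + (b - 1)^2, scanning b >= 0 on a grid
    # (the function is even in z, since the roles of a and b swap when z changes sign)
    b = np.linspace(0.0, b_max, num)
    a = np.sqrt(abs(z) + b ** 2)
    return float(np.min((a - 1.0) ** 2 + (b - 1.0) ** 2))

for z in [0.5, 2.0, 10.0, 50.0]:
    print(f"z = {z:5.1f}   q(z) = {q(z):9.4f}   r(z) ~ {r_numeric(z):9.4f}")
```

Both quantities grow quadratically near z = 0, but for large z the grid minimum grows roughly linearly while q(z) grows like z log z, reflecting the different rates at which R_{α,1} and Q_{α,1} approach the ℓ1 norm.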
# 5 Higher Order Models

So far, we considered a 2-homogeneous model, corresponding to a simple depth-2 "diagonal" network. Deeper models correspond to higher orders of homogeneity (e.g., a depth-D ReLU network is D-homogeneous), motivating us to understand the effect of the order of homogeneity on the transition between the regimes. We therefore generalize our model and consider:

F_D(w) = β_{w,D} = w_+^D − w_−^D   and   f_D(w, x) = ⟨w_+^D − w_−^D, x⟩.   (10)

As before, this is just a linear regression model with an unconventional parametrization, equivalent to a depth-D matrix factorization model with commutative measurement matrices, as studied by Arora et al. [3], or a depth-D diagonal linear network.

(a) Regularizer   (b) Approximation ratio   (c) Sparse regression simulation

Figure 3: (a) q_D(z) for several values of D. (b) The ratio Q^D_α(e_1) / Q^D_α(1_d/√d) as a function of α, where e_1 = [1, 0, 0, . . . , 0] is the first standard basis vector and 1_d = [1, 1, . . . , 1] is the all-ones vector in R^d. This captures the transition between approximating the ℓ2 norm (where the ratio is 1) and the ℓ1 norm (where the ratio is 1/√d). (c) A sparse regression simulation as in Figure 1, using models of different order. The y-axis is the largest α^D (the scale of β at initialization) that leads to recovery of the planted predictor to accuracy 0.025. The vertical dashed line indicates the number of samples needed in order for β*_{ℓ1} to approximate the plant.

We can again study the effect of the scale α on the implicit bias. Let β^∞_{α,D} denote the limit of gradient flow on w when w_+(0) = w_−(0) = α1. In Appendix E we prove:

Theorem 3. For any 0 < α < ∞ and D ≥ 3, if Xβ^∞_{α,D} = y, then

β^∞_{α,D} = arg min_β Q^D_α(β)   s.t.   Xβ = y,

where Q^D_α(β) = α^D Σ_{i=1}^d q_D(β_i / α^D) and q_D = ∫ h_D^{−1} is the antiderivative of the unique inverse of h_D(z) = (1 − z)^{−D/(D−2)} − (1 + z)^{−D/(D−2)} on [−1, 1]. Furthermore, lim_{α→0} β^∞_{α,D} = β*_{ℓ1} and lim_{α→∞} β^∞_{α,D} = β*_{ℓ2}.

In the two extremes, we again get β*_{ℓ2} in the kernel regime and, more interestingly, for any depth D ≥ 2, we get β*_{ℓ1} in the rich limit, as has also been observed by Arora et al. [3]. That the rich-limit solution does not change with D is surprising, and disagrees with what would be obtained with explicit regularization (regularizing ||w||_2 is equivalent to ||β||_{2/D} regularization), or with the implicit bias under the logistic loss (which again corresponds to ||β||_{2/D}; see, e.g., [12, 18]).

Although the two extremes do not change as we go beyond D = 2, what does change is the intermediate regime, and particularly the sharpness of the transition into the extreme regimes, as illustrated in Figures 3(a)–3(c). The most striking difference is that, even at order D = 3, the scale of α needed to approximate ℓ1 is polynomial rather than exponential, yielding a much quicker transition to the rich limit versus the D = 2 case above. This allows near-optimal sparse regression with reasonable initialization scales as soon as D > 2, and increasing D hastens the transition to the rich limit. This may explain the empirical observations regarding the benefit of depth in deep matrix factorization [3].

# 6 The Effect of Width

The kernel regime was first discussed in the context of the high (or infinite) width of a network, but our treatment so far, following [6], identified the scale of the initialization as the crucial parameter for entering the kernel regime. So is the width indeed a red herring? Actually, the width does play an important role and allows entering the kernel regime more naturally.
The fixed-width models so far only reach the kernel regime when the initial scale of parameters goes to infinity. To keep this from exploding both the outputs of the model and F (w(0)) itself, we used Chizat and Bach’s “unbiasing” trick. However, using unbiased models with F (αw0) = 0 conceals the unnatural nature of this regime: although the final output may not explode, outputs of internal units do explode in the scaling leading to the kernel regime. Realistic models are not trained like this. We will now use a “wide” generalization of our simple linear model to illustrate how increasing the width can induce kernel regime 7 behavior in a more natural setting where both the initial output and the outputs of all internal units, do not explode and can even vanish. Consider an (asymmetric) matrix factorization model, i.e., a linear model over matrix-valued observa- ions' X € R*%¢ described by f((U,V),X) = (UV", X) where U,V € R**, and we refer to k > das he “width.” We are interested in understanding the behaviour as k — 00 and the scaling of initialization a of each individual parameter changes with k. Let Mu.,v = F(U,V) = UV" denote the underlying linear redictor. We consider minimizing the squared loss L(U,V) = L(Muv) = ye ((Xn, Mu.v) - Yn)? on N samples using gradient flow on the parameters U and V. This formulation includes a number of special cases such as matrix completion, matrix sensing, and two layer linear neural networks. We want to understand how the scale and width jointly affect the implicit bias. Since the number of arameters grows with k, it now makes less sense to capture the scale via the magnitude of individual param- eters. Instead, we will capture scale via 0 = +|Mu vile, i.e., the scale of the model itself at initialization. The initial predictions are also of order o, e.g., when X is Gaussian and has unit Frobenius norm. We will now show that the model remains in the kernel regime depending on the relative scaling of k and o. Unlike he D-homogeneous models of Sections 4 and 5, My,v can be in the kernel regime when o remains bounded, or even when it goes to zero. "Lifted" symmetric factorization Does the scale of My,v indeed capture the relevant notion of pa- rameter scale? In case of a symmetric matrix factorization model Mw = WW, Mw captures the en- tire behaviour of the model since the dynamics on My) induced by gradient flow on W(t) given by Mwi) = VE(Mww))Mwi) + Mwy VL(Mws)) depends only on Mw) and not on W(t) itself [12]. For the asymmetric model My,y, this is no longer the case, and the dynamics of Myvz),v(z) do depend on the specific factorization U(t), V(t) and not only on the product Mu,v. Instead, we can consider an UU" Mu,v Z 1p 0 Xn Mov vv" | and X,, = alxt 0 with F((U, V),X)= (Mu,v. X). The dynamics over My,y—which on the off diagonal blocks are equivalent to those of My,v—are now fully determined by My,v itself; that is, by the combination of the “observed” part My,yv as well as the “unobserved” diagonal blocks UU' and VV". To see how this plays out in terms of the width, consider initializing U(0) and V(0) with iid. N’(0,a7) entries. The off-diagonal entries of Muv. and thus @, will scale with a?Vk while the diagonal entries of My.y will scale with a2k = oVk. 
equivalent “lifted” symmetric problem defined by Mu,v = [Â¥][U]" = [ k while the diagonal entries of ¯MU,V will scale with α2k = σ √ By analogy to the models studied in Sections 4 and 5, we can infer that the relevant scale for the problem is that of the entire lifted matrix ¯MU,V, which determines the dynamics, and which is a factor of k larger than the scale of the actual predictor MU,V. We now show that in the special case where the measurements k—when this X1, . . . , XN commute with each other, the implicit bias is indeed precisely captured by σ quantity goes to zero, we enter the rich limit; when this quantity goes to infinity, we enter the kernel regime; and in the transition we have behavior similar to the 2-homogeneous model from Section 4. Matrix Sensing with Diagonal/Commutative Measurements Consider the special case where Xj, ..., Xy are all diagonal, or more generally commutative, matrices. The diagonal elements of My,v (the only relevant part when X is diagonal) are [Mu,v]ii = via Ui; Vij, and so the diagonal case can be thought of as an (asymmetric) “wide” analogue to the 2-homogeneous model we considered in Section 4, i.e., a “wide parallel linear network” where each input unit Xj; has its own set of k hidden (Uj, Vi1),..., (Uix, Vix) units. This is depicted in Figure 4. We consider initializing U(0) and V(0) with iid. M’(0,a7) entries, so My o),v(o) Will be of magnitude ¢ = a?V/k, and take k — 00, scaling a as a function of k. √ Theorem 4, proven in Appendix F, completely characterizes the implicit bias of the model, which corresponds to minimizing Qµ applied to its spectrum (the “Schatten-Qµ-norm”). This corresponds to an implicit bias which approximates the trace norm for small µ and the Frobenius norm for large µ. In the diagonal case, this is just the minimum Qµ solution, but unlike the “width-1” model of Section 4, this is obtained without an “unbiasing” trick. 4X need not be square; the results and empirical observations extend for non-square matrices. 8 √ Theorem 4. Let k → ∞, σ(k) → 0, and µ2 := 1 MU,V(t) converges to a zero error solution M∗ 2 limk→∞ σ(k) U,V, then k, and suppose X1, . . . , XN commute. If M∗ U,V = arg min Qµ(spectrum(M)) s.t. L(M) = 0 M Non-Commutative Measurements We might expect that in the general case, there is also a transition around o x 1/Vk: (a) if ¢ = w(1/Vk), then Mu.v > co- J and the model should remain in the kernel regime, even in cases where o = ||Mu,v||” > 0; (b) on the other hand, if o = 0(1/Vk) then ||Mu,v||7 > 0 and the model should approach some rich limit; (c) at the transition, when ¢ = O(1/Vk), Mu,v will remain bounded and we should be in an intermediate regime. In light of Theorem 4, if 0 < p? = 3 lim aVk < exists, we expect an implicit bias resembling Q,,. Geiger et al. [10] also study such a transition using different arguments, but they focus on the extremes o = o(1/Vk) and o = w(1/Vk) and not on the transition. Here, we understand the scaling directly in terms of how the width affects the magnitude of the symmetrized model Mu. For the symmetric matrix factorization model with non-commutative measurements, we can analyze the k) = σ = o(1) and prove it, unsurprisingly, leads to the kernel regime (see Theorem 5 and case ω(1/ It would be more Corollary G in Appendix G, which closely follow the approach of Chizat et al. [6]). 
interesting to characterize the implicit bias across the full range of the intermediate regime, however, even just the rich limit in this setting has defied generic analysis so far (q.v., the still unresolved conjecture of [12]), and analyzing the intermediate regime is even harder (in particular, the limit of the intermediate regime describes the rich limit). Nevertheless, we now describe empirical evidence that the behavior of Theorem 4 may also hold for non-commutative measurements. Low-Rank Matrix Completion Matrix completion is a natural and commonly-studied instance of the general matrix factorization model where the measurements X, = Cie}, are indicators of single entries of the matrix (note: these measurements do not commute), and so yp, corresponds to observed entries of an unknown matrix Y*. When N < d?, there are many minimizers of the squared loss which correspond to matching Y* on all of the observed entries, and imputing arbitrary values for the unobserved entries. Generally, there is no hope of “generalizing” to unseen entries of Y*, which need not have any relation to the observed entries. However, when Y* is rank-r for r < d, the minimum nuclear norm solution will recover Y* when N = Q(d!#r) [5]. While Theorem 4 does not apply for these non-commutative measurements, our experiments described in Figure 5 indicate the same behavior appears to hold: when 0 = o(1/Vk), the nuclear norm is nearly minimized and My,yv converges to Y*. On the other hand, the kernel regime corresponds to implicit Frobenius norm regularization, which does not recover Y* until N = Q(d?). Therefore, in order to recover Y*, it is necessary to choose an initialization with oVk <i. Conclusion In this section, we provide evidence that both the scale, σ, and width, k, of asymmetric matrix factorization models have a role to play in the implicit bias. In particular, we show that the scale of the equivalent “lifted” or “symmetrized” model ¯MU,V is the relevant parameter. Under many natural initialization schemes for U and V, e.g., with i.i.d. Gaussian entries, the scale of ¯MU,V is k times larger than the scale of MU,V. Consequently, wide factorizations can reach the kernel regime even while MU,V remains bounded, even without resorting to “unbiasing.” On the other hand, reaching the rich limit requires an even smaller initialization for large k. # 7 Neural Network Experiments In Sections 4 and 5, we intentionally focused on the simplest possible models in which a kernel-to-rich transition can be observed, in order to isolate this phenomena and understand it in detail. In those simple models, we were able to obtain a complete analytic description of the transition. Obtaining such a precise description in more complex models is too optimistic at this point, but we demonstrate the same phenomena empirically for realistic non-linear neural networks. Figures 6(a) and 6(b) use a synthetic dataset to show that non-linear ReLU networks remain in the kernel regime when the initialization is large; that they exit from the kernel regime as the initialization becomes 9 Excess nuclear norm Movement of unobserved entries il N Nn co} co} ® o 3.0 3 ° = 8 = Oo o 8 6 N 6 N “I i il) . ® ® 8 3 = 4 = o oOo ° 2° o o 3 S = 2 = TLL : eo ive} oOo fo} ® o I ° = 0 = LP OPS PEEKS SE LDP GP Â¥ HS FS k k Figure 5: Matrix Completion We generate rank-1 ground truth Y* = u*(v*)' where u*,v* ~ N(0, Iiox10) and observe N = 60 random entries. 
We minimize the squared loss on the observed entries of the model F(U, V) = uv" with U,V € R®** using gradient descent with small stepsize 107°. We initialize U(0)ij, V(0)ij ~ N(0, 07). For the solution, Ma,., reached by gradient descent, the left heatmap depicts the excess nuclear norm ||Mq,x||« — || Y*||« (this is conjectured to be zero in the rich limit); and the right heatmap depicts the root mean squared difference between the entries Ma, and U(oOyV(0) corresponding to unobserved entries of Y* (in the kernel regime, the unobserved entries do not move). Both exhibit a phase transition around a’k =oVk <1. For oVk < 1 the excess nuclear norm is approximately zero, corresponding to the rich limit. For oVk > 1, the unobserved entries do not change, which corresponds to the kernel regime. This phase transition appears to sharpen somewhat as k increases. smaller; and that exiting from the kernel regime can allow for smaller test error. For MNIST data, Figure 6(c) shows that previously published successes with training very wide depth-2 ReLU networks without explicit regularization [e.g., 20] relies on the initialization being small, i.e., being outside of the kernel regime. In fact, the 2.4% test error reached for large initialization is no better than what can be achieved with a linear model over a random feature map. Turning to a more realistic network, Figure 6(d) shows similar behavior when training a VGG11-like network on CIFAR10. Interestingly, in all experiments, when α ≈ 1, the models both achieve good test error and are just about to enter the kernel regime, which may be desirable due to the learning vs. optimization tradeoffs discussed in Section 4. Not coincidentally, α = 1 corresponds to using the standard out-of-the-box Uniform He initialization. Given the extensive efforts put into designing good initialization schemes, this gives further credence to the idea that model will perform best when trained just outside of the kernel regime. Univariate 2-layer ReLU Networks Consider a two layer width-k ReLU network with univariate input x € R given by f((w,b), 2) = woo(wiax + bi) + be where wi € R**!, wo € R!** and by € R**!,b) ER are the weights and bias parameters, respectively, for the two layers. This setting is the simplest non-linear model which has been explored in detail both theoretically and empirically [21, 23]. Savarese et al. [21] show that for an infinite width, univariate ReLU network, the minimal ¢2 parameter norm solution for a 1D regression problem, i.e., arg miny ||w||3 s.t. Vr, f((w,b),2n) = Yn is given by a linear spline interpolation. We hypothesize that this bias to corresponds to the rich limit in training univariate 2-layer networks. In contrast, [Theorem 5 and Corollary 6, 23] shows that the kernel limit corresponds to different cubic spline interpolations, where the exact form of interpolation depends on the relative scaling of weights across the 10 4 2 % F (a) Test RMSE vs scale (b) Grad distance vs scale (c) MNIST test error vs scale (d) CIFAR10 test error vs scale Figure 6: Synthetic Data: We generated a small regression training set in R2 by sampling 10 points uniformly from the unit circle, and labelling them with a 1 hidden layer teacher network with 3 hidden units. We trained depth-D, ReLU networks with 30 units per layer with squared loss using full GD and a small stepsize 0.01. The weights of the network are set using the Uniform He initialization, and then multiplied by α. The model is trained until ≈ 0 training loss. 
Shown in (a) and (b) are the test error and the “grad distance” vs. the depth-adjusted scale of the initialization, αD. The grad distance is the cosine distance between the tangent kernel feature map at initialization versus at convergence. MNIST: We trained a depth-2, 5000 hidden unit ReLU network with cross-entropy loss using SGD until it reached 100% training accuracy. The stepsizes were optimally tuned w.r.t. validation error for each α individually. In (c), the dashed line shows the test error of the resulting network vs. α and the solid line shows the test error of the explicitly trained kernel predictor. CIFAR10: We trained a VGG11-like deep convolutional network with cross-entropy loss using SGD and a small stepsize 10−4 for 2000 epochs; all models reached 100% training accuracy. In (d), the dashed line shows the final test error vs. α. The solid line shows the test error of the explicitly trained kernel predictor. See Appendix A for further details about all of the experiments. layers. We explored the transition between the two regimes as the scale of initialization changes. We again consider a unbiased model as suggested by Chizat et al. [6] to avoid large outputs for large α. In Figure 7, we fix the width of the network to k = 10000 and empirically plot the functions learned with different initialization w(0) = αw0 for fixed w0. Additionally, we also demonstrate the effect of changing w0, by relatively scaling of layers without changing the output as shown in Figure 7-(b,c). First, as we suspected, we see that the rich limit of α → 0 indeed corresponds to linear spline interpolation and is indeed independent of the specific choice w0 as long as the outputs are unchanged. In contrast, as was also observed by [23], the kernel limit (large α), does indeed change as the relative scaling of the two layers changes, leading to what resembles different cubic splines. Acknowledgements This work was supported by NSF Grant 1764032. BW is supported by a Google PhD Research Fellowship. DS was supported by the Israel Science Foundation (grant No. 31/1031). This work was partially done while the authors were visiting the Simons Institute for the Theory of Computing. 11 (a) w0 = (w0 1 , b0 1, w0 2 ) (b) w0 = (k−0.5w0 1 , k−0.5b0 1, k0.5w0 2 ) (c) w0 = (k−0.25w0 1 , k−0.25b0 1 , k0.25w0 2 ) Figure 7: Each subplot has functions learned by univariate ReLU network of width k = 10000 with initialization w/(0) = awo, for some fixed wo. In Figure (a), wo are fixed by a standard initialization scheme as w?,b? ~ N(0, 1) and w3 ~ N(0,\/2/k) for second layer. In () and (c), the relative scaling of the layers in wo is changed without changing the scale of the output. References [1] Zeyuan Allen-Zhu, Yuanzhi Li, and Zhao Song. A convergence theory for deep learning via over- parameterization. arXiv preprint arXiv:1811.03962, 2018. [2] Zeyuan Allen-Zhu, Yuanzhi Li, and Yingyu Liang. Learning and generalization in overparameterized neural networks, going beyond two layers. In Advances in neural information processing systems, pages 6155–6166, 2019. [3] Sanjeev Arora, Nadav Cohen, Wei Hu, and Yuping Luo. Implicit regularization in deep matrix factor- ization. In Advances in Neural Information Processing Systems, pages 7411–7422, 2019. [4] Sanjeev Arora, Simon S Du, Wei Hu, Zhiyuan Li, and Ruosong Wang. Fine-grained analysis of arXiv preprint optimization and generalization for overparameterized two-layer neural networks. arXiv:1901.08584, 2019. [5] Emmanuel J Candès and Benjamin Recht. 
Exact matrix completion via convex optimization. Founda- tions of Computational mathematics, 9(6):717–772, 2009. [6] Lenaic Chizat, Edouard Oyallon, and Francis Bach. On lazy training in differentiable programming. In Advances in Neural Information Processing Systems, pages 2933–2943, 2019. [7] Amit Daniely. SGD learns the conjugate kernel class of the network. In Advances in Neural Information Processing Systems, pages 2422–2430, 2017. [8] Simon S Du, Jason D Lee, Haochuan Li, Liwei Wang, and Xiyu Zhai. Gradient descent finds global minima of deep neural networks. arXiv preprint arXiv:1811.03804, 2018. [9] Simon S. Du, Xiyu Zhai, Barnabas Poczos, and Aarti Singh. Gradient descent provably optimizes over-parameterized neural networks. In International Conference on Learning Representations, 2019. [10] Mario Geiger, Stefano Spigler, Arthur Jacot, and Matthieu Wyart. Disentangling feature and lazy learning in deep neural networks: an empirical study. arXiv preprint arXiv:1906.08034, 2019. 12 [11] Udaya Ghai, Elad Hazan, and Yoram Singer. Exponentiated gradient meets gradient descent. arXiv preprint arXiv:1902.01903, 2019. [12] Suriya Gunasekar, Blake E Woodworth, Srinadh Bhojanapalli, Behnam Neyshabur, and Nati Srebro. Implicit regularization in matrix factorization. In Advances in Neural Information Processing Systems, pages 6151–6159, 2017. [13] Suriya Gunasekar, Jason D Lee, Daniel Soudry, and Nati Srebro. Implicit bias of gradient descent on linear convolutional networks. In Advances in Neural Information Processing Systems, pages 9461–9471, 2018. [14] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In Proceedings of the IEEE international conference on computer vision, pages 1026–1034, 2015. [15] Arthur Jacot, Franck Gabriel, and Clément Hongler. Neural tangent kernel: Convergence and gener- alization in neural networks. In Advances in neural information processing systems, pages 8571–8580, 2018. [16] Yuanzhi Li and Yingyu Liang. Learning overparameterized neural networks via stochastic gradient descent on structured data. In Advances in Neural Information Processing Systems, pages 8157–8166, 2018. [17] Yuanzhi Li, Tengyu Ma, and Hongyang Zhang. Algorithmic regularization in over-parameterized matrix sensing and neural networks with quadratic activations. In Conference On Learning Theory, pages 2–47, 2018. [18] Kaifeng Lyu and Jian Li. Gradient descent maximizes the margin of homogeneous neural networks. arXiv preprint arXiv:1906.05890, 2019. [19] Song Mei, Theodor Misiakiewicz, and Andrea Montanari. Mean-field theory of two-layers neural net- works: dimension-free bounds and kernel limit. In Conference on Learning Theory, pages 2388–2464, 2019. [20] Behnam Neyshabur, Ryota Tomioka, and Nathan Srebro. In search of the real inductive bias: On the role of implicit regularization in deep learning. arXiv preprint arXiv:1412.6614, 2014. [21] Pedro Savarese, Itay Evron, Daniel Soudry, and Nathan Srebro. How do infinite width bounded norm networks look in function space? In Conference on Learning Theory, pages 2667–2690, 2019. [22] Martin J Wainwright. High-dimensional statistics: A non-asymptotic viewpoint, volume 48. Cambridge University Press, 2019. [23] Francis Williams, Matthew Trager, Daniele Panozzo, Claudio Silva, Denis Zorin, and Joan Bruna. Gradient dynamics of shallow univariate relu networks. In Advances in Neural Information Processing Systems, pages 8376–8385, 2019. [24] Greg Yang. 
Scaling limits of wide neural networks with weight sharing: Gaussian process behavior, gradient independence, and neural tangent kernel derivation. arXiv preprint arXiv:1902.04760, 2019.

[25] Difan Zou, Yuan Cao, Dongruo Zhou, and Quanquan Gu. Stochastic gradient descent optimizes over-parameterized deep ReLU networks. arXiv preprint arXiv:1811.08888, 2018.

# A Neural Network Experiment Details

Here, we provide further details about the neural network experiments.

Figure 8: Training curves for the CIFAR10 experiments (training error vs. epoch, over 2000 epochs, for α ∈ {0.75, 1, 1.25, 1.5, 1.75, 2.0} and for training only the last layer).

Synthetic Experiments We construct a synthetic training set with N = 10 points drawn uniformly from the unit circle in R^2 and labelled by a teacher model with 1 hidden layer of 3 units. We train fully connected ReLU networks of depths 2, 3, and 5 with 30 units per layer to minimize the squared loss using full gradient descent with constant stepsize 0.01 until the training loss is below 10^{-9}. We use Uniform He initialization for the weights and then multiply them by α.

Here, we describe the details of the neural network implementations for the MNIST and CIFAR10 experiments.

MNIST Since our theoretical results hold for the squared loss and gradient flow dynamics, here we empirically assess whether different regimes can be observed when training neural networks following standard practices. We train a fully-connected neural network with a single hidden layer composed of 5000 units on the MNIST dataset, where weights are initialized as αw_0, w_0 ∼ N(0, 2/n_in), with n_in denoting the number of units in the previous layer, as suggested by He et al. [14]. SGD with a batch size of 256 is used to minimize the cross-entropy loss over the 60000 training points, and the error over the 10000 test samples is used as a measure of generalization. For each value of α, we search over learning rates (0.5, 0.01, 0.05, . . . ) and use the one which results in the best generalization.

There is a visible phase transition in Figure 6(c) in terms of generalization (≈ 1.4% error for α ≤ 2, and ≈ 2.4% error for α ≥ 50), even though every network reached 100% training accuracy and less than 10^{-5} cross-entropy loss. The black line indicates the test error (2.7%) when training only the output layer of the network, as a proxy for the performance of a linear predictor with features given by a fixed, randomly-initialized hidden layer.

CIFAR10 We trained a VGG11-like architecture: 64-M-128-M-256-256-M-512-512-M-512-512-M-FC (the numbers represent the number of channels in convolution layers with no bias, M is a max-pooling layer, and FC is a fully connected layer). Weights were initialized using Uniform He initialization multiplied by α. No data augmentation was used, and training was done using SGD with a batch size of 128 and a learning rate of 0.0001. All experiments ran for 2000 epochs and reached 100% train accuracy, except when training only the last layer, which reached 50.38% train accuracy with LR = 0.001 (chosen after hyperparameter tuning).

In addition, to approximate the test error in the kernel regime, we experimented with freezing the bottom layers and only training the output layer for both datasets (the solid lines in Figures 6(c) and 6(d)). Figure 8 illustrates some of the optimization difficulties that arise from using smaller α, as discussed in Section 4.
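For concreteness, the sketch below (ours, not the authors' code) shows one way to implement the "Uniform He initialization multiplied by α" used above, assuming the common convention that Uniform He (Kaiming-uniform with ReLU gain) draws each weight from U(−√(6/n_in), √(6/n_in)); biases are omitted for simplicity.

```python
import numpy as np

def uniform_he(n_in, n_out, alpha, rng):
    # Uniform He / Kaiming-uniform for ReLU: W_ij ~ U(-sqrt(6/n_in), sqrt(6/n_in)),
    # then multiplied by the scale alpha (alpha = 1 recovers the standard initialization)
    bound = np.sqrt(6.0 / n_in)
    return alpha * rng.uniform(-bound, bound, size=(n_out, n_in))

def init_mlp(layer_sizes, alpha, seed=0):
    rng = np.random.default_rng(seed)
    return [uniform_he(n_in, n_out, alpha, rng)
            for n_in, n_out in zip(layer_sizes[:-1], layer_sizes[1:])]

def forward(weights, x):
    h = x
    for W in weights[:-1]:
        h = np.maximum(W @ h, 0.0)     # ReLU hidden layers
    return weights[-1] @ h             # linear output layer

# e.g., the single-hidden-layer MNIST architecture described above, at scale alpha = 2
weights = init_mlp([784, 5000, 10], alpha=2.0)
print(forward(weights, np.zeros(784)).shape)   # (10,)
```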
14 x[1] x[2] x[3] x[4] u1 u2 u3 u4 v1 v2 v3 v4 ˆy x[1] x[2] x[3] x[4] w+,1 w−,1 w+,2 w−,2 w+,3 w−,3 w+,4 w−,4 + − + − + − + − ˆy (a) A biased diagonal network (b) An unbiased diagonal network Figure 9: Diagonal linear networks. # B Diagonal Linear Neural Networks Consider the model f((u, v),x) = >>; wivix; as described in Section 4, and suppose that |u;(0)| = |vi(0)|, i.e., the input and output weights for each hidden unit are initialized to have the same magnitude. Now, consider the gradient flow dynamics on the weights when minimizing the squared loss: d dt |u(t)| = −sign(u(t)) ˙u(t) (11) N /d 2 -2 Ss ( u;(t)v;(t)x¢”? - v) sign(u(t)) o v(t) ox (12) 1 n=1 \i= where a ◦ b denotes the element-wise multiplication of vectors a and b, and sign(a) is the vector whose ith entry is sign(ai). Similarly, # d dt |v(t)| = −sign(v(t)) ˙v(t) (13) N /d 2 = >> (x uu; (t)v;(t)x¢” - ) sign(v(t)) o u(t) o x”) (14) n=1 \i=1 Therefore, if |ui(0)| = |vi(0)|, then sign(ui(0))vi(0) = sign(vi(0))ui(0), so the dynamics on |ui| and |vi| are the same, and their magnitudes will remain equal throughout training. Furthermore, the signs of the weights cannot change, since |ui(t)| = |vi(t)| = 0 implies ˙ui(t) = ˙vi(t) = 0. # C Proof of Theorem 1 We prove Theorem 1 using the general approach outlined in Section 4. Theorem 1 (General case). For any 0 < α < ∞ and w0 with no zero entries, if the gradient flow solution β∞ # α,w0 # α,w0 β∞ α,w0 = arg min β Qα,w0 (β) s.t. Xβ = y, (5) √ where Qo,wo (8B) = ty oPw5id( saz) and q(z) =2— V4+ 2? + zarcsinh (3). Proof. We begin by calculating the gradient flow dynamics on w, since the linear predictor BX, by F applied to the limit of the gradient flow dynamics on w. Recalling that X= [x -X], Wa(t) = ~VE(wa(t)) = =V (||Xwa(t)? = yl3) = -2XT ra(l) © walt) (15) is given where the residual r_(t) 4 Xw,(t)? —y, and ao b denotes the element-wise product of a and b. It is easily confirmed that these dynamics have a solution: Wa(t) = wa(0) 0 exp (-2x" | ‘ ra(s)ds)| (16) 15 Since wα,+(0) = wα,−(0) = αw0 we can then express βα,w0(t) as Bawo(t) = Wa,+(t)” — Wa,-(t)? - wo (xn (1x7 [rotors —exp (ur ['retsu)) (17) t = 2a*wi o sinh (-1x" i ru(s)ds) 0 Supposing also that β∞ is a global minimum with zero error, i.e., Xβ∞ = y. Thus, # α,w0 # α,w0 # Xβ∞ β∞ X Bowe =Y awe + (18) jawy = Pa(X v) # α,w0 for bo (z) = 2a?w? o sinh(z) and v = —4 So. rq(s)ds. Following our general approach detailed in Section 4, we conclude . 1 ∇Qα,w0 (β) = b−1 α (β) = arcsinh ◦ β 2α2w2 0 (19) where we write 1/w0 to denote the vector whose ith element is 1/w0,i. Integrating this expression, we have that d Bi = 22 oi Qa,wo(8) = Ss a ce ) (20) i=l a where 2 t q(z) = i aresinh (5) at =2-V44224 zaresinh (=) (21) 0 2 # D Proof of Theorem 2 Lemma 1. For any β ∈ Rd, _e d @ < a4 (6|Bla.d) = min{ 1, VT lls) * exw (-3557-)} guarantees that (1-6) [[Bll, < in Qa(B) < (1+ €) [Bll waaay *) Proof. We consider only the special case w0 = 1 and will drop the subscript for brevity. First, we show that Qα(β) = Qα(|β|). Observe that g(x) = x arcsin(x/2) is even because x and arcsin(x/2) are odd. Therefore, na) =o Sayfa 8 + 8 + aresinh (75) _ 22 3B; Bi ~° yoo yr + (2) (22) d 2 =o Saf +a( ) i=1 = Qa(|BI) a2 16 Therefore, we can rewrite 1 ln(1/α2) 5Qal8) = ar paay Qe IA 20? do “ole n(1/a2)— In(1/a2)*«In(1/a2) 2a? 20? = fle il, (Il. [OR n(l/a2) 2 2a? * od 2 In (4 + =) 2a? 404 + BF ai {14 2 4 n(1/a?) In(1/a?) n(1/a2)— In(1/a2) In(1/a?) iaiyary Nas Mz Il a iM. 
iw Il a Using the fact that |a| ≤ a2 + b2 ≤ |a| + |b| (24) we can bound for α < 1 2 2 Ge) 2a 2a + [Bil (: 2 2 IA Ms iM am Onl (8) In(i/a?) ~ In(i/a?) * _ In (Bi +07) Bil ( In(1/a?) ) oe MAIBil +0”) <tss (1g In(1/a#) , (: _ mill so) In(1/a?) II = <6 So, for any a < min{1, J Blli. (2 (2/61) * }, then In a Qa(8) < Alls (: + te) < [Ills (1 , = (all) S Illa ( +e) 1 in(i a”) On the other hand, using (23) and (24) again, 1 “20? (Bi +20? | _ In(|Gil) n/a) @ 2d In(1/a?) ~ In( (A/a?) * 1B3| (1 mt) ; In (|Gi|) — 1 ya | (1 : In(1/a?) +) Using the inequality ln(x) ≥ 1 − 1 x , this can be further lower bounded by ne) nian 2e(8) 2 dia |- ma = ||Blla — rere 17 Therefore, for any a < exp (- ) then 4d __ 2€([B |] ad) 28) 2 MI6lla 1 € 29 ma) 2) B) 2 \|Bl, d—6) (29) “ Jill. (2||Bll1) 22 exp (—satz)} that 18 < ATR On(B) < 1+) Bh (30) We conclude that for α ≤ min Lemma 2. Fix any e>0 andd> max{e, 124}. Then for any a > a7 Be, Qa1(B) & ||Gl|1 in the sense that there exist vectors v,w such that Qa. (v) Ilelhs Qa,1 (w) >(1l+e 20+ Tol Proof. First, recall that # q 1 1 1 . 1 o() 2—4/44 Bal { cab aresinh (55) - (2° Varo 14 14 ! )) (31) ca! 2ca 4c?a4 1 1 1 2ca V4c2a4 +14+In : tIn{ = + 4/cat+ — ca? ca? 2 Thus, 1 2 1 2 1 - —)|< <: + : 1+In (4) cag (4) < 3ca 14+Iln (4) (32) Now, consider the ratio Qa.a (€1) a?q (zz) 1 aq (zz) ~ (33) Qua (pt) @?da(sahg) V4 02Vda (a4) Using (32), we conclude va Qaa (ou) > -1 +In (zz) - Qaa (4s) 3Vdo? —1+In (aa) -1+In(+) 3Vdo? — 1+ In (4) — }n(d) (34) In(d) — 6Vda? 6Vdo2 —2+2In ( 1+ a) ava 18 Fix any « > 0 and d> max{e, 12*¢}, and set a = d-1~%. Then, 1 di 2d% >6 and “zy ind26 1 1 qe 6 2d% —6+4+ Ind—->0 2€ € 1 1 } (t-5) ind — 2a-*® > Ga-*® —2 (35) € 2 € 1 ; 1 tind—Sa-® > 6d-® —24+ + Ina € € 2e In (d) — 602 Vd > € (s0?va- 24+2In (=7)) !'ouod This implies that the second term of (34) is at least e. We conclude that for any e > 0 and d > max{e, 12**}, a=d-t-% implies that Qa. (€1) Qua (pr) llerl|a 1 Wyte lla >(1+6) (36) Consequently, for at least one of these two vectors, Q is not proportional to the ¢; norm up to accuracy O(e) for this value of a. Since d Qa (1) 46 Qa (Giz) >0 (37) this conclusion applies also for larger α. Lemma 3. For any β ∈ Rd, ai a > a2(¢, ||4lla,d) = VBl2 (1+e*) guarantees that (1 — @)||B|l3 < 407Qa.1(8) < (1+ @)INBII3 Proof. The regularizer Qα,1 can be written d Bi /o? Qai(B) = “Di arcsinh (5) dt (38) Let $(z) = fle arcsinh ($) dt, then (0) =0 , 1 . z ¢'(0) a2 arcsinh (53) op 0 1 1 ¢" (0) 2 2a4 aie al, (39) 0" (0) = —_—_., =0 a8 44.4)" ] gl” Zz 32? 1 al? (44 4)? 19 Also, note that 22? — dat jo" = Pee ol? (44+ Sa) (40) 2 +2a4 ~ 16al? Therefore, by Taylor’s theorem, for some ξ with |ξ| ≤ |z| 2 el"(€) 4 2) ~ oa] a * 2 st" (€:) 420424 22 44 20422 (41) o(2) < sup —<o 5 dat lel<lz| 4! 38412 dat — 96a8 Therefore, for any β ∈ Rd, 2 : 6 |407Qq,1(8) ~ BI3| = 404) 1 (8) — i=1 1s 3 < 4a > $(Bi) — (42) B} + 204? < 2 ya 9608 4 42 2 B? + 2a*B? < ||6llz max a i + 2α4β2 β4 i 96α8 # 2 max i Therefore, a > \/||G}l2 (1 + <4) ensures (1 = IIBII3 < 407Qa1(8) < (1+ €)IIBIl3 (43) (43) Theorem 2. For any 0 << d, under the setting of Theorem 1 with wo = 1, < min{ (2(1 + 616), lh) -exp (=d/(€l18%,]1)) } > [19% lh < +9 118%, (Ll +€)(L + 2/e) (57, |l2 => Beall < (+ 6) Ie, ll2 # α ≤ min # Ih Proof. We prove the ¢; and 2 statements separately. 
¢, approximation First, we will prove that ||83°4||1 < (1 + 2e) ||G7, ||1- By Lemma 1, since a < a1 (s5: (1 + for all 8 with |||, < (1 + 2c) |[87, ||, we have (s5: (1 + 2c) ||67, (3 ) ih < ata nat) < (14+ 552) lll (14) 20 d), ll, , Let 6 be such that X@ = y and ||G|l1 = (1 + 2e)||97, # |i. Then € € _ maa nat) > (1-55) lath € a = (1- pS) + 206%, h 1-5 ) Qe 1 2 (1 + 2e) Qa1(87,) 1+ ae) maja (45) + 2e ~ THe In( aa inJaxy 202 (Bi,) > nd ia in(l/a2y 1(83, ) S Be > wie in(i/a2) 24 1(Be-1) Therefore, 8 ¢ BX. Furthermore, let B be any solution XB = y with |||]: > (1 + 2¢)||G7, |l1. It is easily confirmed that there exists c € (0,1) such that the point A’ = (1 — c)B + cGj, is satisfies both XG" = y and |B’, = (1 + 2€)||G7, |l1- By the convexity of Q, this implies Qa.1(B) > Qa,1(8") > Qa1(8%1). Thus a 8 with a large ¢; norm cannot be a solution, even if mos @a,1(9) # ||B\l1- Since ||B3°, ||, < (1 + 2e)||G7, |l1, we conclude 1 OO Weal < Tap in(i/oy Pe Baa) 1 S$ mom Pa (87, ) 1- tH In(1/a?) “8 (46) l+o% a zt We, ll Tre = (+ 9)6e lh Next, we prove ||A3° || < (1+ 2c) ||G7,||2. By Lemma 3, since a > ag (si: (1 + 2€) |[37, ll2 ). for all B with ||A|]2 < (1 + 2e) |[67, lz we have W9l2 (1- y5) < 40%Qa.(8) <[1918 (1+ 55) (a7) +e 2+e Let 6 be such that X@ = y and ||||2 = (1 + 2e)||97, lz. Then, € € 2 —) al _ (: _ rs) (1 +26), 2 4a? Qa,1(B) > (1 -5 1-7 (8), + 2€)407Qa.1(87,) “ (a + ) 14+ 2€ ae -T. 407Qa,1(Bi,) > 4a7Qa,1 (B7,) > 4a? Qa,1 (B21) IV 14+ 2€ ae -T. 407Qa,1(Bi,) > 4a7Qa,1 (B7,) > 4a? Qa,1 (B21) BX. Furthermore, let 8 be any solution XG = y with ||B|l2 > (1 + 2¢)||B7,|l2- It is that there exists c € (0,1) such that the point @’ = (1—c)B+ cBz, satisfies X' = y and Therefore, 8 # easily confirmed 21 |18'2 = (1 + 2€)||Bf, lz. By the convexity of Qa, this implies Qu.1(8) > Qa.1(8") > Qa. (Bi,). Thus a 8 with a large €) norm cannot be a solution, even if 4a7Q,.1(8) # ||A||3. Since ||B%;||2 < (1 + 2e)||87, |l2, we conclude WOxa\l2 < 7-2 40? Qa,1 (B21) DHE 1 2 1 < 7 € 4a Qa,1(B?,) ~ ae (49) < 63,18 Te = (1+ )Gi,13 # E Proof of Theorem 3 Lemma 4. For D > 2 and the D-homogeneous model (10), [ “r(r)dr a2-P Ve <a> y so D(D = 2) # “r(r)dr — w?, so Proof. For the order-D unbiased model β(t) = wD − , the gradient flow dynamics are w(t) = Fee =—-DX'r(t)ow?'(t), w4(0)=al (50) w(t) (Gae + D(D —2)X" [/rour) ™ (51) Where ◦ denotes elementwise multiplication, r(t) = Xβ(t) − y, and where all exponentiation is elementwise. Similarly, w_(t)=-—= =DX'r(t)ow?1(t), w_(0) =al (52) t Do = w_(t)= (Gaa —D(D- axt | r(r)dr) (53) 0 First, we observe that ∀t∀i w+(t)i ≥ 0 and ∀t∀i w−(t)i ≥ 0. This is because at time 0, w+(0)i = w−(0)i = α > 0; the gradient flow dynamics are continuous; and w+(t)i = 0 =⇒ ˙w+(t)i = 0 and w−(t)i = 0 =⇒ ˙w−(t)i = 0. Consequently, 0< w(t)? ? =a?” + D(D- 2) fxr | r or(o)tr] 0< w_(t)?-? =a? ? — D(D- 2) pr | r “r(eiar] (54) t = -a?-? < D(D - 2) cae] ne <a?-P 0 a which concludes the proof. Theorem 3. For any 0 < α < ∞ and D ≥ 3, if Xβ∞ # α,D = y, then α (β) s.t. Xβ = y Be p = arg ming Q?(B) st. ye qp(Bi/e”?) and gp = f hp: is the D2 on [-1,1]. Furthermore, lima +0 BX where QP(B) = a? ye qp(Bi/e”?) and gp = f hp: is the antiderivative of the unique inverse of hp(z) = (l- z)-Ds —(1+ 2) D2 on [-1,1]. Furthermore, lima +0 BX p = B}, and lima +o BXp = Bi, - 22 Proof. For the order-D unbiased model β(t) = wD + − wD − , the gradient flow dynamics are w(t) = & = =—-DX'r(t)ow?"!, w(0) =al (55) w(t) (Ga + D(D — 2)X" [rear (56) D. B(t) =a? (1 + aP-2D(D —2)xXT [ “r(oyar) ~~ -—aP (1 — a? 
D(D — 2)xT [ r(r)dr) 7 (57) where ˜X = [X − X] and r(t) = Xβ(t) − y. Supposing β(t) converges to a zero-error solution, XB ~ = and = B(00) = a? hp(X 'v(o0)) (58) where v(oo) = —a?~? D(D — 2) Voor 0 r(τ )dτ and the function hD is applied elementwise and is defined where v(oo) = —a?~? D(D — 2) Voor T)dr and the function hp is applied elementwise and is defined hD(z) = (1 − z)− D D−2 − (1 + z)− D D−2 (59) By Lemma 4, ||X'v||.o < 1, so the domain of Ap is the interval [—1,1], upon which it is monotonically increasing from hp(—1) = —oo to hp(1) = oo. Therefore, there exists an inverse mapping hp'(t) with domain [—oo, oo] and range [—1, 1]. This inverse mapping unfortunately does not have a simple closed form. Nevertheless, it is the root of a rational equation. Following the general approach outlined in Section 4, we conclude: Bi/a? Q?(p) =o Df hp (t)dt (60) Rich Limit Next, we show that if gradient flow reaches a solution XB%p = y, then lima+0 BYp = G7, for any D. This is implied by the work of Arora et al. [3], but we include it here for an alternative, simpler proof for our special case, and for completeness’s sake. The KKT conditions for 8 = 7, are XB = y and 3v sign(@) = X'v (where sign(0) = [—1,1]). The first condition is satisfied by assumption. Define v as above. We will demonstrate that the second condition holds too in the limit as a > 0. First, by Lemma 4, ||X 'v||o0 < 1 for all a and D. Thus, for any coordinates i such that lima.0[B%p]i = 0, the second KKT condition holds. Consider now i for which lima +0[B82 pi > 0. As shown above, 2 __D lim [8p]: = tim a? (1—[XTy],) 7? -a? (14+ [XTv]:) 77 >0 (61) D . D _ T D2 . = dim a (1 - [X'v],) >0 (62) This and [X'v]; < 1 implies limg_o[X 'v]; = 1, and thus the positive coordinates satisfy the second KKT condition. An identical argument can be made for the negative coordinates. Kernel Regime Finally, we show that if gradient flow reaches a solution XB p = y, then lima +« BRp = 8}, for any D. for any D. First, since X and y are finite, there exists a solution β∗ whose entries are all finite, and thus all the entries of β∞ α,D, which is the QD α -minimizing solution, will be finite. The KKT conditions for 8 = 87, are XB = y and Ip B= X'. The first condition is satisfied by assumption. Defining v as above, we have __D __D Jim [8 pli = Jim a? (1-[XTy],) 2? -—a? (14 [XTv];) PF <0 (63) = lim [X'r]; =0 (64) aoo 23 Consequently, defining µ = 2DαD D−2 ν, and observing that for small z, (1 − z)− D D−2 − (1 + z)− D D−2 = 2D D − 2 z + O(z3) (65) we conclude b b lm [BXpli lin a? (1— [X'y];) 7? —a? (1+ [XTy]i) ?> a-+00 [XT yi a—oo [XT yi a? (B2[X Tv]: + O(X"y) lim 2DaP py T (66) aoe BS |X Vi =1+ lim O([X'p]?) a>oo =1 Thus, the KKT conditions are satisfied for limα→∞ β∞ BX p = (},- # F Proof of Theorem 4 Here, we prove Theorem 4: √ Theorem 4. Let k → ∞, σ(k) → 0, and µ2 := 1 MU,V(t) converges to a zero error solution M∗ 2 limk→∞ σ(k) U,V, then k, and suppose X1, . . . , XN commute. If M∗ U,V = arg min M Qµ(spectrum(M)) s.t. L(M) = 0 Proof. As k + o, Myw),vo) > PT, so the four d x d submatrices of the lifted matrix Myo),vio) have diagonal structure. The dynamics on Myiz),v(t) are linear combination of terms of the form My) ,v(t) Xn + XnMui),viz); and each of these terms will share this same block-diagonal structure, which is therefore maintained throughout the course of optimization. We thus restrict our attention to just the main diagonal of Muw,vi) and the diagonal of My),v(z), all other entries will remain zero. 
In fact, we only need to track A(t) := }diag(U(d)U(t)' + V(4)V(t)') € R¢ and d(t) = diag(U(t)V(t)') € R%, with the goal of understanding lim;_,.0 5(t). Since the dynamics of Mywy,vit) depend only on the observations and My),vit) itself, and not on the underlying parameters, we can understand the implicit bias via analyzing any initialization U(0), V(0) that gives Myo),v(o) = 272. A convenient choice is U(0) = [V2pJ,0] and V(0) = [0, V2pJ] so that 6(0) = 0 and A(0) = 271. Let Â¥ € R“*¢ denote the matrix whose nth row is diag(X,,), and let r(t) be the vector of residuals with r,(t) = (Muw),v(t); Xn) — Yn. A simple calculation then shows that the dynamics are given by 6(t) = —44Tr(t) o A(t) and A(t) = —44’Tr(t) 0 6(t) which have as a solution t t 6(t) = 2y sinh ( - ax" [ r(s)ds) and A(t) = 2u? cosh ( — ax" [ r(s)ds) (67) 0 0 This solution for δ(t) is identical to the one derived in the proof of Theorem 1, so if indeed δ reaches a zero-error solution, then using the same argument as for Theorem 1 we conclude that diag(M∞ U,V) = limt→∞ δ(t) = arg minδ Qµ(δ) s.t. X δ = y. # G Kernel Regime in Matrix Factorization Here, we provide additional kernel regime results in the context of matrix factorization model in Section 6. Recall the notation for f((U,V),X), Mu,v and their “lifted” space representations f((U,V),X), Mu.v, respectively, from Section 6. Let W = [Â¥] be the concatenation of U and V, let Â¥ € Rx be the matrix whose nth row is vec(X,,), let y* € R™ be the vector of targets y1,..., yn, and let y(t) = Xvec(Myw),v(t)) e the vector of predictions at time t, where U(t), V(t) follow the gradient flow dynamics. 24 Consider the tangent kernel model for the factorized problem in the “lifted" space ¯f ((U, V), X) = ¯f (W, X) (68) n=1 ∈ RN denote the tangent kernel model’s vector of predictions and let VTK(t) ] denote gradient flow path wrt the linearized model in (68). The following theorem Let yrk = [frx(Wrr, Xn)]Qe Wr(t) = [UT] denote gradient flow path wrt the linearized model Vrx(t) establishes the conditions under which W(t) © Theorem 5. Letk > dandlet\I < XX! X AI. Firy > 0 and p> io 1 € R% denote the tangent kernel Wrx(t). Let yrk = [frx(Wrr, Xn)]Qe Wr(t) = [UT] denote gradient flow path wrt the linearized model in (68). Vrx(t) establishes the conditions under which W(t) © 1 € R% denote the tangent kernel model’s vector of predictions and let The following theorem Wrx(t). io , and suppose that || || w(0) wo)" - Hl, Theorem 5. Letk > dandlet\I < XX! X AI. Firy > 0 and p> io , and suppose that || w(0) wo)" 7 and ||y(0) — y*|| < ur (1 — (1+ 2)/(4 *)): Then - Hl, Iw(7) - Wo)| < VR Huo) - vl su] an Tek rs AVE Ay/1+ Ally(0) — yl? 2VAV/T + Zl) - y"l ||W(T) —Wre(T)|lp < A? 13/2 At # sup T ∈R+ The proof of Theorem 5 follows a similar approach as the proof of [6, Theorem 2.4], except we do not make the assumption that ¯F (W(0)) = 0 (see Section G.1). Additionally, using Theorem 5, we can show the following corollary on the kernel regime for matrix factorization based on the scale Let I x XX" X AT, and |ly* probability at least 1 — 2 exp(— of initialization a and the width of the factorization k (proof in Section G.2). | <Y. If U(O), V(0) have iid. (0,0?) entries for a? > O(k-+), then with d) over the randomness in the initialization. 
Tek [vin] - eat ro ( + 0) (69) eke vn - P| ~ (sun + a + 7) (70) eat P| From Corollary G, we can infer that the gradient flow over matrix factorization model remains in the kernel regime whenever the scale of the initialization of the prediction matrix MU(0),V(0) given by σ = α2 k k). In particular, unlike width 1 diagonal network model in Section 4 (where the kernel satisfies σ = ω(1/ regime is reached only as scale of initialization α → ∞), with a width k model, we see that kernel regime can happen even when σ → 0 as long as σ to zero slower than 1/ G.1 Proof of Theorem 5 In order to prove Theorem 5, we require the following lemmas. We use yrx(t) € R% denote the tangent kernel model’s vector of predictio Lemma 5. Suppose that the weights are initialized such that || W(0) ments satisfy 0 <~ \I x XX! = Al. If supo<t<rl| W(t) — lly) — y* |] ns at corresponding to Wrx(t). W(0)T — ul|\,,, <Â¥ and the measure- (0)||» < R, then for allt <T y" || exp(—2uAt + 4A(y7 4 R? +2R/j4 7A), < ||y(0) — || W(0) W(0)T — ul|\,,, √ lly) — y* |] llyre(t) — y"|| y" || exp(—2uAt + 4A(y7 4 R? +2R/j4 7A), — y* || exp(—2puAt + 4A4t). < ||y(0) — S |lyrx(0) Proof. First, consider the dynamics of y(t): vi) = atmin =o oS —X(t)( (ty W(t)", x), N Xn Mea ; on W(t)", Xin) — ym) (W(t)W(t)", XnXm) 1 n=1 t)—y*) 25 < where the symmetric matrix Σ(t) ∈ RN ×N has entries X(t)min = 4[(W(t)W(t)", XnXm)]. (72) This matrix can also be written: U(t) = 44 (Laxa @ W(t)W(t)') Xe" (73) where ⊗ denotes the Kronecker product. Therefore, for t ≤ T \|Z(¢) — 4pXX" |, = AX (Taxa @ W(t) W(t)" = plaxa ® Taxa) xT IL, SAl|Laxa ® WH) W(t)" = plaxa ® Laxall, ll lop < 4A\/W(t)W(t)T -— HLaxcall op < 4A (WO) WO) = Heal, + | WOWC)T — WO)W(0)"|,,) (74) <4n(7+ [owe —weoncw() WO) "||, +2I|H —WO)WO"],.) <4A(7 + R? + 2R||W(0)|lop) <4A(y+ R?42RVu +4) Therefore, for all t < T, y/(t) = —X(t)(y(t) — y*) for X(t) = QnA — 4A(y + BR? 4+ 2RVe +4). (75) √ If pA > 2A(y + R? +2RVn +4), then applying [6, Lemma B.1] completes the first half of the proof. Otherwise, noting that ||y(t) — y*||’ is non-increasing in t implies ||y(t) — y*|| < lly Similarly, the dynamics of yrx are (0) —y" ||. rx (t) = [(W(O)W(0)", Xn) +2(Wer(#) — WOO), XnW(0))]™ , . N = [2(Were(t), X.W(0))] N 7 N 4 Eon (yr (t)m — Ym) (W(0)W(0)", XnXm) m=1 n=1 —X(0)(yrk(t) — y*) (76) From here, we can follow the same argument to show that (0) = 2urA — 4Ay. (77) Applying [6, Lemma B.1] again concludes the proof. Lemma 6. Suppose that the weights are initialized such that ||W(0)W(0)' — pJ|| measurements satisfy AI X XX" = AI. Suppose in addition that < Â¥ and that the op 4Ay ur 1+7 w>—>— and |ly0)-y"|| < +(1- x Xr VA 14% |: Then, "0 -y" ll sup|| W(t) -W(0)\lp < Varta vi 26 Proof. To begin, define R := µ + µλ 4Λ − √ µ + γ. (78) # Since µ > 4Λγ λ , R > 0. Note that with this choice # Qud — 4A(y + R? + 2R(Vu+7)) = Hr (79) Let T = inf{t| ||W(t) — W(0)||,, > R}, and suppose towards contradiction that T < oo. Then R<||W(T)- WO)llz | . W(t)dt F dt F = [ |(Zaxa ® W(t) (y(t) — y)|| at SS YOn = yn)Xn W(t) T (80) < [WE lAllaplv(t) ~ vlaet T < VR | (IWOl + 8)|iv( ~ what T < VA(ym=7+R) | Iu(t)— wll 0 r T =Viyn Bf but) - uae 4A Jy From here, we apply Lemma 5 and (79) to conclude that [A . [ RS yuA+ lly) —y"ll | exp(—pat)dt 0 pr «) [> wA+ lly) —y" || J exp(—pAt)dt 0 _ lly) — yl Jit (81) . VA+4 pr 1+ ~ AVR VK 144 _ HA + TW VEFY =R This is a contradiction, so we conclude T = ∞. We conclude the proof by pointing out that the same line of reasoning from the righthand side of (80) through to (81) applies even when T = ∞. Theorem 5. 
Let k > d and let I < XX! X Al. Fixy > 0 andy > 444, and suppose that || w(0) wo)! — ull| op 444, and suppose that || 27 # op = A(A— 7 and ly(0) ~ y"l) < 1 − A). (1 + γ µ )/(1 + λ # . Then wor) Wo) vA Flly(0) — yl su an res, pS WE wiry weeny) < AVES lv) =v? | VAY 1+ allyl) =v" Sup || (LT) -Wrx(T)|lp < 23/2 ' A/ Proof. Our proof follows the approach of Chizat et al. [6] closely, but it is specialized to our particular setting and formulation. We also do not require that F (W(0)) = 0. Consider for some T ||W(T) — Wr (7) |p dt T <| ° F -[ ||(Zaxa @ W(t) 4" (y(t) = y") = Taxa @ W(0))X" (yr (t) — y*)|| pat — Wrx(t)dt F Sluts = Yn) Xn W(t) = (yrK ()n — Yn) Xn W(0) n=1 = [ || Zaxa @ (W(t) — W(0))) 4 (y(t) — y*) — Taxa @ W(0)) 4" (yr (t) — y(t))]] pat T < va [ W(t) — W(0)Iloplly®) — "lle + WO) |lopllyrK (t) — y(®)|lat < va [ W(t) — W(0)loplly®) — 9" lle + WO) |lopllyrK (t) — y(®)|lat 0 By Lemma 6, A+ 3ly(0) — 9" sup||W(t) — W < sup|| W(t) — W < ¢ sup||W(2) O)llop S$ sup|lW(E) Olle < Wal (83) TX # By Lemma 5, for R = “ # A √ , we have √ lly) — 9" Il S Ily(0) — y* ll exp(—2pAt + 4A (y +R? + 2RV EFF) E) (84) llyr«(t) — Â¥*l] S Ily(0) — y" |] exp(—2pAt + 4Ayt) Since ps > io and ||y(0) — y*|| < & (1 - yt): this further implies \ly(t) — "| S Ily(0) — y" || exp(—pAt) (85) IIyri(t) — y" || < lly) — y* || exp(— pA) Finally, lyr (4) — y(E)I < ly) — 9" || + lyr) — 9" |] < 2lly(0) — y*|| exp(—pAt) (86) 28 (82) Combining the above inequalities, we have [W(T) - W)|, < vi [ WE) — WO)loplly@® — Â¥"lle + WO) pl9® — yOllaat «2 < vx f a <UT avi) aI) } exe(—natya (s7) Vi( OEE 5 VFO) — ul) < UX Ayt+ gellv@) —y"P | VAL + Flv) — 9" V2 3/2 AVE # G.2 Proof of Corollary G Finally, we prove Corollary G using the following: Lemma 7 (cf. Theorem 6.1 [22]). Let W ∈ Rd×k with d ≤ k and with Wi,j ∼ N (0, σ2), then [ Aas 2 2 d P|||WW —o2kI|lop > 80 Vka| < 2exp (—5 Let I x XX" < AI, and |ly*|| < Y. If U(0), V(O) have iid. M(0,a7) entries for a? > Q(k71), then with probability at least 1 − 2 exp(−d) over the randomness in the initialization. # Tek vin) eal ~ . eat erat (5 − ≤ O + α (69) sup T ∈R+ − ≤ O α3k3/2 + α 1 √ k + α (70) Proof. All that is needed is to show the relationship between k and the quantities involved in the statement of Theorem 5. Let W := H € R2¢x*, By Lemma 7, B||| WW" — oki, < 80 V2kd] > 1— 2exp(—d) (88) √ For the remainder of the proof, we condition on the event || || ww — o?kI|\,,, < 2kd. Next, we bound 29 Ily(0) — y* |: 3 3 Ily(0) — y* |" < 2Â¥? + 2\ly(0)| N =2y? + 2 (W(0)W(0)", Xn)” oy? 49 sw W(0)" —a?kI, Xn)” n=1 x 2s 42 (89) <2Â¥? +25°||W(0)W(0)! = a?kI||;,||Xnl|- n=1 21 = 2Y? + dal] W(0)W(0)" = oT]... 32 5 1X n=1 2 p <2? + 2d(8a? 2kd) \|x|2, < 2Y? + 256kd? a4 A, ≤ 2Y 2 + 256kd3α4Λ2, where for (a), we used that ¯Xn is zero on the diagonal. In order to apply Theorem 5 using √ γ = 8α2 2kd and µ = α2k, (90) we require that √ α2k = µ > 4Λγ λ = 32α2Λ λ 2kd ⇐⇒ k > 2048Λ2d λ2 (91) and nyc A 1+a y(0) —y* || < 1-— 92 lv(0) — "lI Al (ee (92) By (89), this is implied by dk V2? + 256kB at A? < © 93 VN ) 8192A2d_ 512d°A3(4 + 184)? Y <= k>max ; AT. 94 = { 2 2 * 202V/A(\ + 4A) (94) This is because k ≥ 8192Λ2d ensures # λ2 142 1 1 Ss =4/1- ax S1- TOA (95) 14a 2458 4+ 168 Consider two cases: either 2Y 2 ≤ 256kd3α4Λ2 or it is not. In the first case, a? dk s a? dk YA * Via SS) AVR V512dAR(4 + $8) (96) = Tx(a+ BS) d = V512kd ath? > SY? + 256k at A? 30 λ )2 512d3Λ3(4+ 16Λ λ2 For the first inequality, we used (95), for the second inequality we used k ≥ 2Y 2 > 256kd3α4Λ2 and . 
Otherwise, oP ak | 1+ sund 5 0M VA rea | > Var) (97) >2Y > V2Y?2 + 256kd3at A? # For # Y Λ(λ+4Λ) Therefore, for k sufficiently large (94), by Theorem 5 2α2 . I(T) — WO)|n< A+ 4lly(0) —y" | sup - < + TER, Pe AVE A+ AV2Y? + 256kd5 a7 A? = \Woek (98) 2VA(2Â¥ + 16VkdFaAK?) < ~ dAavk < 4y VA n 3209/2 A3/2q ~ Navk » and ||w(r) — W(T)|| - # sup T ∈R+ TER, . Al + Ally) — yl? 2VA\/1 + Fly) 9" 23/2 ' AVE 99 2A(2Â¥2 4 512kd%atA2) 2VAy/1+ 244 (2Â¥ + V5TRRdOA?) (99) < t d2(a2k)3/? AVark 4AY? 1024d®A8a_ 8YVA_ 64d3/? A970 ~ 208k? Vk Nak | r It is clear from (98) and (99) that there is some scalar c which depends only on Λ, λ, d, and Y such that 1 sup ||W(T) — W(0)|| p< o( + a), and TER, avk 1 1 (100) s W(T)-— W(T < cf — + — ae Il (T) ( le <¢( ape + ag +4) 31
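To make the limits above concrete, the following is a small numerical sketch (NumPy is assumed; the dimensions, step sizes, iteration counts and helper names are illustrative choices, not anything prescribed by the paper). It runs gradient descent on a depth-2 diagonal linear network β = u ∘ u − v ∘ v, the simplest member of the family analyzed in this appendix, on an underdetermined least-squares problem, and contrasts a small initialization scale α (rich regime, a solution with markedly smaller ℓ1 norm) with a large one (kernel regime, a solution near the minimum-ℓ2 interpolant).

```python
import numpy as np

rng = np.random.default_rng(0)
N, d = 4, 12
X = rng.normal(size=(N, d))
beta_sparse = np.zeros(d)
beta_sparse[:2] = 1.0                          # sparse teacher
y = X @ beta_sparse
beta_l2 = np.linalg.pinv(X) @ y                # minimum-l2-norm interpolant

def train_diag_net(alpha, max_steps=400_000, tol=1e-6):
    """Gradient descent on (u, v) with beta = u*u - v*v and loss 0.5*||X beta - y||^2."""
    lam = np.linalg.eigvalsh(X.T @ X).max()
    lr = 0.1 / (4.0 * lam * max(2.0 * alpha**2, 1.0))   # crude stability heuristic
    u = np.full(d, alpha)
    v = np.full(d, alpha)
    for _ in range(max_steps):
        r = X @ (u * u - v * v) - y
        if np.linalg.norm(r) < tol:
            break
        g = X.T @ r                            # gradient with respect to beta
        u, v = u - lr * 2 * u * g, v + lr * 2 * v * g
    return u * u - v * v

for alpha in (0.1, 3.0):
    beta = train_diag_net(alpha)
    print(f"alpha={alpha}: ||beta||_1 = {np.abs(beta).sum():.3f}, "
          f"||beta - beta_l2|| = {np.linalg.norm(beta - beta_l2):.3f}")
# Typical outcome: the small-alpha run ends near the sparse teacher (small l1 norm),
# while the large-alpha run ends near the minimum-l2 interpolant.
```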
{ "id": "1901.08584" }
2002.09018
Scalable Second Order Optimization for Deep Learning
Optimization in machine learning, both theoretical and applied, is presently dominated by first-order gradient methods such as stochastic gradient descent. Second-order optimization methods, that involve second derivatives and/or second order statistics of the data, are far less prevalent despite strong theoretical properties, due to their prohibitive computation, memory and communication costs. In an attempt to bridge this gap between theoretical and practical optimization, we present a scalable implementation of a second-order preconditioned method (concretely, a variant of full-matrix Adagrad), that along with several critical algorithmic and numerical improvements, provides significant convergence and wall-clock time improvements compared to conventional first-order methods on state-of-the-art deep models. Our novel design effectively utilizes the prevalent heterogeneous hardware architecture for training deep models, consisting of a multicore CPU coupled with multiple accelerator units. We demonstrate superior performance compared to state-of-the-art on very large learning tasks such as machine translation with Transformers, language modeling with BERT, click-through rate prediction on Criteo, and image classification on ImageNet with ResNet-50.
http://arxiv.org/pdf/2002.09018
Rohan Anil, Vineet Gupta, Tomer Koren, Kevin Regan, Yoram Singer
cs.LG, math.OC, stat.ML
24 pages, Code available here: https://bit.ly/3uXXtKy
null
cs.LG
20200220
20210305
1 2 0 2 r a M 5 ] G L . s c [ 2 v 8 1 0 9 0 . 2 0 0 2 : v i X r a # Scalable Second Order Optimization for Deep Learning Rohan Anil Vineet Gupta Google Research Google Inc [email protected] [email protected] Tomer Koren Tel Aviv University and Google Research [email protected] # Kevin Regan Google Inc [email protected] Yoram Singer Princeton University [email protected] # October 14, 2021 # Abstract Optimization in machine learning, both theoretical and applied, is presently dominated by first- order gradient methods such as stochastic gradient descent. Second-order optimization methods, that involve second derivatives and/or second order statistics of the data, are far less prevalent despite strong theoretical properties, due to their prohibitive computation, memory and communication costs. In an attempt to bridge this gap between theoretical and practical optimization, we present a scalable implementation of a second-order preconditioned method (concretely, a variant of full-matrix Adagrad), that along with several critical algorithmic and numerical improvements, provides significant convergence and wall-clock time improvements compared to conventional first-order methods on state- of-the-art deep models. Our novel design effectively utilizes the prevalent heterogeneous hardware architecture for training deep models, consisting of a multicore CPU coupled with multiple accelerator units. We demonstrate superior performance compared to state-of-the-art on very large learning tasks such as machine translation with Transformers, language modeling with BERT, click-through rate prediction on Criteo, and image classification on ImageNet with ResNet-50. # 1 Introduction Second order methods are among the most powerful algorithms in mathematical optimization. Algorithms in this family often use a preconditioning matrix to transform the gradient before applying each step. Classically, the preconditioner is the matrix of second-order derivatives (the Hessian) in the context of exact deterministic optimization (e.g., [Fletcher, 2013, Lewis and Overton, 2013, Nocedal, 1980]). While second-order methods often have significantly better convergence properties than first-order methods, the size of typical problems prohibits their use in practice, as they require quadratic storage and cubic computation time for each gradient update. Approximate algorithms such as quasi-Newton methods are aimed at significantly reducing these requirements; nonetheless, they still impose non-trivial memory costs equivalent to storing several copies of the model (and often quadratic computation, as in the popular two-loop recursion [Nocedal, 1980]), which severely limits their use at the immense scale of present-day deep learning. Arguably, one of the greatest challenges of modern optimization is to bridge this gap between theoretical and practical optimization by making second-order methods feasible to implement and deploy at immense scale. Besides the compelling scientific and mathematical developments it may stimulate, this challenge has also a clear real-world significance: recent practice of training large models suggests that the utility of common first-order methods is quickly reaching a plateau, in large part because their time-per-step is already negligible (compared to other parts of the computation) and cannot be optimized further; thus, the only way to train faster is by drastically reducing the number of steps. To this end, second-order methods seem a very natural and promising approach. 
1 In this paper we focus on second-order adaptive methods for stochastic optimization. These methods can be thought of as full-matrix analogues of common adaptive algorithms such as AdaGrad [Duchi et al., 2011, McMahan and Streeter, 2010] and Adam [Kingma and Ba, 2014]: they precondition each gradient with a second moment matrix, akin to a covariance matrix, that accumulates the outer products of the stochastic gradients. Full-matrix versions are potentially more powerful than first-order methods as they can exploit statistical correlations between (gradients of) different parameters; geometrically, they can scale and rotate gradients whereas first order methods only scale gradients. However they suffer from similar prohibitive runtime and memory costs as Hessian-based methods. Recent second-order methods such as the K-FAC [Heskes, 2000, Martens and Grosse, 2015], K- BFGS [Goldfarb et al., 2020] and Shampoo [Gupta et al., 2018] exploit the structure of deep networks (and more generally, models described by a collection of tensors) to mitigate the space and runtime costs of full-matrix second-order algorithms. These methods approximate each preconditioning matrix using a factored representation that stems from the network structure. However, in very large applications, these algorithms are still impractical due to a number of numerical and infrastructural pitfalls, and are difficult to parallelize. # 1.1 Contributions We provide solutions to practical concerns and challenges that arise in implementing and using second- order methods at large scale. Our focus will be on the Shampoo algorithm, but most of the challenges we address are relevant to the implementation of many other second-order methods. We design and implement a pipelined version of the optimization algorithm, critically exploiting the heterogeneity and computing power of CPU-Accelerator coupled architectures; • We extend Shampoo in a number of ways so as to make it applicable to a larger range of deep architectures; in particular, the extensions allow Shampoo to be used for training very large layers such as embedding layers ubiquitous in language and translation models; • We replace expensive spectral decompositions (e.g. SVD) used for manipulating preconditioners with an efficient and numerically-stable iterative method for computing roots of PSD matrices; We describe practical challenges and limitations we faced in our design, which we argue could be useful for the design considerations of next-generation accelerator hardware architectures. Our distributed implementation demonstrates significant improvements in performance, both in terms of number of steps, and often in actual wall-clock time, on some extremely large deep learning tasks: • Machine translation: We trained Transformer models [Vaswani et al., 2017] on the WMT’14 English to French translation task [Bojar et al., 2014] in half as many steps compared to the state-of-the-art (well tuned Adam). Our overall training wall-time reductions were: Transformer: 45% reduction (∼12hrs to 6.7hrs), Transformer-Big: 37% reduction (∼47hrs to 29.5hrs). • Language modeling: We trained BERT [Devlin et al., 2018] in 16% fewer steps and achieved higher masked-LM accuracy compared to the state-of-the-art optimizer [You et al., 2019] at 32K batch size; the overall wall-time decreased by 4% from 3.8 to 3.65 hours. For this task, our system has not yet been tuned for performance; we discuss several possible optimizations below. 
• Click-Through Rate (CTR) prediction: We trained the DLRM model [Naumov et al., 2019] on the terabyte Criteo dataset [Criteo Labs, 2015] at 64K batch size in half as many steps as the current state-of-the-art optimizer, with a wall-time reduction of 37.5% (≈13mins to 8.2mins). We achieved a new state-of-the-art performance of 80.56% AUC (≈ 0.3% improvement) on this task. (An improvement of 0.1% is considered significant; see [Rong et al., 2020, Wang et al., 2017].) 2 Finally, we showcase an implementation which already performs better in both steps to convergence and as well as wall-clock time by emulating higher precision [Henry et al., 2019] for ResNet-50 at 32K batch size. We achieved the MLPerf [2020] target accuracy of 75.9% [Mattson et al., 2019] at 32K batch size on the standard ResNet-50 ImageNet benchmark in 1729 steps which is 31.7% fewer steps than the previous state-of-the-art [Nado et al., 2021] of 2512 steps, and saw an overall 13% reduction in wall-clock time, which can be further accelerated with better hardware/software support. An implementation in JAX [Bradbury et al., 2018] to reproduce is available here https://bit.ly/3uXXtKy. One of our main points in this work was to demonstrate wall-time speedups with second-order methods implemented on a real-world distributed setup being used to train state-of-the-art deep models. In our view, this is important for influencing future hardware accelerator design and runtime software. Indeed, first-order methods have received huge investments in tuning, implementation, platform support and tailored accelerator hardware over the last decade; we believe there are numerous opportunities to improve the per-step time performance of preconditioned methods as well. For example, our results give a concrete justification for incorporating 64-bit accumulation units in hardware for distributed training, and further support adding larger on-chip memory, and more (see Section 6). # 1.2 Related work Classic techniques for addressing the high storage and computation costs of second-order methods mostly belong to the quasi-Newton or the trust-region families of algorithms [Conn et al., 2000, Nocedal and Wright, 2006]. Traditionally, these methods need nearly-accurate gradients in order to construct useful quadratic approximations and implement reliable line searches, rendering them as suitable for training with very large batch sizes, and resulting in expensive iterations that make the overall algorithm slow compared with stochastic first-order methods (see, e.g., [Bollapragada et al., 2018] for a recent account). Our focus in this paper is on adaptive second-order methods which are directly applicable in a stochastic setting. That said, our effort could be relevant to quasi-Newton and trust-region methods as well: e.g., each iteration of typical trust-region methods amounts to solving a certain generalized eigenvalue problem, which presents numerical difficulties of similar nature to those encountered in matrix root/inverse computations, being addressed here. Various approximations to the preconditioning matrix have been proposed in the recent literature (e.g., [Gonen and Shalev-Shwartz, 2015, Erdogdu and Montanari, 2015, Agarwal et al., 2016, Xu et al., 2016, Pilanci and Wainwright, 2017]). However, so far the only prevalent and pragmatic approximation is the diagonal approximation. 
Some recent approaches for approximating a full-matrix preconditioner are K-FAC [Martens and Grosse, 2015], K-BFGS [Goldfarb et al., 2020], Shampoo [Gupta et al., 2018] and GGT [Agarwal et al., 2018]. K-FAC uses a factored approximation of the Fisher-information matrix as a preconditioner, and K-BFGS uses a similar approximation of the Hessian for a layer. While our focus in this paper is on Shampoo, we believe that many of the techniques presented here could also be applied to make K-FAC practical in large scale (see Appendix B). GGT uses a clever trick to compute a low-rank approximation to the AdaGrad preconditioner. However, GGT maintains several hundred copies of the gradient in memory, which is too expensive even for mid-sized models. Ba et al. [2017] took a first important step at experimenting with distributed K-FAC for training deep models, using a single machine with 8 GPUs to simulate a distributed environment for training. In contrast, a main thrust of our work is to demonstrate wall-time speedups with second-order methods on a real-world distributed setup used for training state-of-the-art deep models, that call for design considerations crucially different than in Ba et al. [2017]. More recently, Osawa et al. [2019] scaled up K-FAC for training convolutional networks, but fell short of reaching the accuracy of first order methods, despite making changes to data augmentation and model architecture. In Section 2 we provide some background on preconditioning methods and describe Paper organization. the Shampoo algorithm. We next discuss the various challenges one faces in a practical implementation of a second-order methods in Section 3, and describe the improvements we made to Shampoo to make it work 3 in our system. In Section 4 we describe the design of our distributed implementation with accelerators for deep learning. Finally, in Section 5 we describe experiments on several datasets, showing that our implementation significantly outperforms common first-order methods such as SGD, Adam and AdaGrad, and is comparable to second order methods such as K-FAC and K-BFGS. # 2 Preliminaries Notation. We use lowercase letters to denote scalars and vectors, and uppercase letters to denote matrices. ||A||~ denotes the Frobenius norm of A, i.e., |All7- = bij Aj. A e B denotes the Hadamard or element-wise product of A and B which have the same shape, so C = AcB => Ci; = Aij Bij. D°* is the element-wise power, (D%)i; = Dij. We use < to denote the Loewner order: given square symmetric matrices A, B, we write A < B iff 𝐵 − 𝐴 is positive semidefinite (PSD). Given a symmetric PSD matrix 𝐴, and α ∈ ℝ, 𝐴α is defined as follows: let 𝐴 = 𝑈𝐷𝑈T be the singular value decomposition of 𝐴, where 𝑈 is a unitary matrix and 𝐷 is a diagonal matrix (with 𝐷𝑖𝑖 ≥ 0 as 𝐴 is PSD), then 𝐴α = 𝑈𝐷α𝑈T, where (𝐷α)𝑖𝑖 = 𝐷α 𝑖𝑖. If α < 0, this is defined for positive definite matrices only, where 𝐷𝑖𝑖 > 0. We use vec( 𝐴) to denote the flattening of the 𝑚 × 𝑛 matrix 𝐴: if 𝐴 has rows 𝑎1, . . . , 𝑎𝑚, then vec( 𝐴) is the 𝑚𝑛 × 1 column vector vec( 𝐴) = (𝑎1, . . . , 𝑎𝑚)T. 𝐴 ⊗ 𝐵 denotes the Kronecker product of two matrices 𝐴 and 𝐵, and we will use the identities ( 𝐴 ⊗ 𝐵)α = 𝐴α ⊗ 𝐵α for α ∈ ℝ, and ( 𝐴 ⊗ 𝐵) vec(𝐶) = vec( 𝐴𝐶 𝐵T). Adaptive preconditioning methods. First order methods iteratively update the parameters solely based on gradient information: 𝑤𝑡+1 = 𝑤𝑡 − η𝑡 ¯𝑔𝑡 where 𝑤𝑡 and ¯𝑔𝑡 are (column) vectors in ℝ𝑑. Here ¯𝑔𝑡 denotes a linear combination of the current and past gradients 𝑔1, . . . 
, g_t, where different algorithms use different combinations. Preconditioned methods take the form w_{t+1} = w_t − P_t ḡ_t where P_t is a d × d matrix. Whereas in Newton-type methods this matrix is related to the Hessian matrix of second-order derivatives, adaptive preconditioning is based on gradient-gradient correlations.

The parameters of a deep network are structured as a set of tensors of order two (i.e., a matrix), three, or four. For simplicity of presentation we focus on the matrix case—however our design, analysis, and implementation hold for tensors of arbitrary order. We denote the space of parameters by the matrix W ∈ ℝ^{m×n} and an estimate of its gradient by G. Full matrix Adagrad flattens W, G to vectors w, g of dimension mn; it thus requires m²n² space to store the preconditioner and m³n³ time to perform the update. m and n can be as large as 10^4 in large models, thus rendering full-matrix preconditioning impractical. For this reason, both AdaGrad and Adam constrain the preconditioning matrices to be diagonal. Shampoo bridges the gap between full matrix preconditioning and the diagonal version by approximating the full matrices by a Kronecker product.

The Shampoo algorithm. We describe Shampoo in the context of the Online Convex Optimization (OCO) framework, which generalizes stochastic optimization (see, e.g., [Shalev-Shwartz, 2012, Hazan, 2016]). In OCO, learning progresses in rounds where on round t the learner receives an input X_t and then uses the parameters W_t to form a prediction denoted ŷ_t. After making the prediction, the true outcome y_t is revealed. The discrepancy between the true and predicted outcomes is assessed by a loss function ℓ which takes values in ℝ_+. The learner then uses the discrepancy to update the matrix to W_{t+1} and prepare for the next round. For instance, the input on round t can be an example x_t for which the learner predicts ŷ = f(W_t, x_t), and the loss is a function ℓ : ℝ × ℝ → ℝ_+, such as ℓ(ŷ, y) = (y − ŷ)² or ℓ(ŷ, y) = log(1 + exp(−y ŷ)). Stochastic gradient methods use the loss gradient G_t = ∇_W ℓ(f(W, x_t), y_t), thus G_t ∈ ℝ^{m×n} if the parameters are shaped as a matrix W ∈ ℝ^{m×n}. For matrix-shaped parameters, Shampoo tracks two statistics over the course of its run, L_t and R_t, which are defined as follows:

L_t = εI_m + Σ_{s=1}^{t} G_s G_s^T ;   R_t = εI_n + Σ_{s=1}^{t} G_s^T G_s .
The extra computation in Shampoo compared to standard first-order methods is in the following steps: • Preconditioner statistics computation: 𝐿𝑡 = 𝐿𝑡−1 + 𝐺𝑡 𝐺T 𝑡 and 𝑅𝑡 = 𝑅𝑡−1 + 𝐺T 𝑡 𝐺𝑡 ; • Inverse 𝑝’th root computation: 𝐿−1/4 𝑡 and 𝑅−1/4 𝑡 ; 𝑡 𝐺𝑡 𝑅−1/4 Preconditioner statistics and gradient computations are expensive for large fully connected as well as embedding layers, we address these below. Computing the inverse 𝑝’th roots is slow —as much as 100 times the step time in some cases—and calculating these without slowing down training was a key challenge in our system. Preconditioned gradient computation: 𝐿−1/4 . # 3.1 Preconditioning of large layers Modern ML architectures often use very large embedding layers, where the longer dimension can be in the millions. For example, DLRM [Naumov et al., 2019] on Criteo-1Tb uses a vocabulary with ∼186 million hash buckets, while in Transformer models [Shazeer et al., 2018] the largest layer can have up to 65536 units per dimension. This makes preconditioning impossible due to 𝑂 (𝑑2) memory and 𝑂 (𝑑3) computational complexity. We show how to extend Shampoo to overcome these problems; we provide proofs and convergence results in Appendix A. Large layers. For embedding layers specifically, we extend the Shampoo algorithm to allow us to use only one of the preconditioners, in case both preconditioners are too expensive to compute. Our choice is empirically supported by the experiments shown in Figs. 3b, 4a and 6a which suggest that there is a benefit from preconditioning one dimension of the large softmax and embedding layers with minimal increase in time. The following result allows us to choose a subset of preconditioners: Lemma 1. Let Gi,...,G; € R”" be matrices of rank at most r. Let gs = vec(Gs) and define Ay = linn + Diy 858) - Let L;, R; be defined as above: L; = €lm + )i_, GsG), Rp = €ln + Li, GIG, . Then for any p,q > 0 such that 1/p + 1/q = 1, we have A, < rL}/P ® RM 4, A consequence is that for any p,q > 0 such that 1/p + 1/q = 1, the full AdaGrad preconditioned gradient A '/29, is approximated by (L1/? @ R!/4)-!/2g,, giving us G, = L7'/?PG,R7 14, Now, by choosing (p, g) = (1, 00) and (p, g) = (0, 1) we obtain the simple preconditioned gradients: G,R; 1? and L;'/?G,. Theorem 3 shows that Lemma | can be used to prove a regret bound for this extended Shampoo in the online convex optimization setting—this provides intuitive justification for the usefulness of this approximation. We further optimize the computation of these preconditioned gradients for embedding layers by taking advantage of the sparse inputs (details in Appendix C). 5 In addition to embedding layers, large models occasionally Preconditioning blocks from large tensors. have large fully connected layers. To reduce the computational cost of computing statistics and preconditioned gradient: we divide the tensor into blocks and treat each individual block as a separate tensor. Concretely this entails dividing a tensor 𝑊 ∈ ℝ𝑘𝑚×𝑘𝑛, into 𝑊1,1 . . . 𝑊𝑚,𝑛 such that 𝑊𝑖, 𝑗 ∈ ℝ𝑘×𝑘 ∀𝑖, 𝑗. Shampoo still converges in this case in the convex setting (Theorem 4), showing that the extension is justified. Lemma 2. Assume that g},...,g; € R’* are vectors, and let g; = [g ->8; 4] where g; ; € R™. iiete Define A, = €Iinn + an ge and let B, € R™**"* be the block diagonal matrix with k m x m blocks, where the j-th block is BY =€Ilm+ vie 85 j8sj . Then A, < kB;. 
We performed experiments to study the effect of partitioning intermediate layers into blocks, in which we observed that the latter had minimal impact on quality of the solution while providing faster step time as well as reduced memory overheads; see Fig. 4b. Delayed preconditioners. As remarked above, computing the preconditioners is the most expensive computation in every Shampoo step. In Fig. 4c we show that we can compute the preconditioners once every few hundred steps without a significant effect on the accuracy which indicates that the loss function landscape does not change significantly with each step. We observe that there is a performance/quality tradeoff here — in our experiments we set the frequency of computing preconditioners to the smallest value that does not degrade performance, i.e. the number of training steps that can be completed in the amount of time needed to compute the largest preconditioner. The only way to increase the frequency of computing preconditioners is with better hardware/software support. # 3.2 Roots of ill-conditioned matrices Inverse 𝑝’th roots (where typically 𝑝 = 2, 4, 8) can be computed using SVD, but there are efficient iterative algorithms such as the coupled Newton iteration algorithm [Guo and Higham, 2006, Iannazzo, 2006] that can compute the inverse 𝑝’th root via a sequence of matrix-vector and matrix-matrix products, which are highly optimized on modern accelerators. However, our experiments suggest that on real workloads the condition numbers of the 𝐿𝑡 , 𝑅𝑡 matrices are very large (see Fig. 7 in Appendix D) so both SVD and the coupled iteration must be run in double-precision, but this is very expensive on accelerators. We applied several further optimizations to speedup the coupled Newton iteration in our implementation; these are described in Appendix D. iteration must be run in double-precision, but this is very expensive on accelerators. further optimizations to speedup the coupled Newton iteration in our implementation; in Appendix D. Deploying on current ML infrastructure Devices Preconditioner computation - Layer 1 /Preconditioners~ * Statistics rs, | Preconditioner computation - Layer N ; ; Preconditioner computation - Layer N iit Accelerator C > Time Preconditioner computation - Layer 1 Step N+1 a oo Step 2N | Transfers 3.3. Figure 1: Timeline illustrating the design of the optimization algorithm. Preconditioner statistics (𝐿𝑡 and 𝑅𝑡 ) are computed at each step by the accelerators. Preconditioners (𝐿1/4 ) are computed every 𝑁 steps and this computation is distributed to all available CPU cores. 6 Heterogeneous training hardware. Neural network accelerators are custom designed to run machine learning workloads faster and at lower cost. Accelerator design is trending towards preferring lower- precision arithmetic that satisfy both of these goals on existing benchmarks. We find that we need double-precision arithmetic for many layers in these models as described above, which makes running computation on accelerators relatively expensive, and therefore we had to design the system to leverage the existing underutilized CPUs attached to the accelerators (Section 4). Note that for the ResNet-50 experiments, we used single-precision arithmetic via emulation [Henry et al., 2019] and with sublayer blocked preconditioning with dimension 128 to significantly cut down the cost of the inverse. API inflexibility. 
Deep learning libraries such as TensorFlow [Abadi et al., 2016] offer APIs for optimizer implementation that are well suited for first-order optimizers and for mini-batch training. However our design requires that we interact with the training loop in non-standard ways as we need to pipeline the preconditioner computations — this requires framework level changes. Our Transformer experiments were carried out in the Lingvo [Shen et al., 2019] TensorFlow framework, while BERT-Large, DRLM, and ResNet-50 used the MLPerf v0.7 Tensorflow open source competitive baselines [Mattson et al., 2019]. Experimentation required changes to the training loop such as gathering statistics at regular intervals, distributing computation across all the CPUs available in the cluster without blocking the TPU training, as well as updating the preconditioners. We anticipate our work will encourage the development of more flexible API’s in machine learning libraries to fully utilize heterogeneous hardware. # 4 Distributed System Design We present our distributed system design of the modified Shampoo algorithm. Our method is designed to run effectively on modern neural network accelerators such as TPUs [Jouppi et al., 2017] or GPUs. We first describe the standard data parallelism paradigm used in training models on these accelerators [Dean et al., 2012]. Parameters are replicated on each core of the accelerator, and each core computes forward propagation and back propagation on a sub-batch (a subset of a mini-batch, which itself is a small randomly selected subset of the training set) of input examples. These gradients are averaged across all cores via all-reduction to get the average gradient for the mini-batch. Each core uses the average mini-batch gradient to update its copy of the parameters. All-reduction adds a barrier as all the cores need to synchronize to compute the mini-batch gradient. Fig. 3b shows the overhead of each of the steps on a Transformer [Vaswani et al., 2017] described in the experiment section. We observe that the overheads from all-reduction and weight updates are a minor part (< 5%) of the overall step time. The overall design of our implementation is illustrated by the timeline in Fig. 1. As discussed in the previous section the preconditioner computation (inverse 𝑝th root) is expensive and requires double precision, also we need to do this computation once every few hundred steps. These observations naturally suggested using the often underutilized CPUs on the machines to which the accelerators such as GPUs or Cloud TPUs are attached. CPUs offer double precision arithmetic while being cheaper which makes them a perfect choice to run the preconditioner computation without adding any extra overhead to the training, as the computation is pipelined and runs asynchronously without blocking the training loop. Preconditioners need to be computed for every layer of the network so we distribute the computation across all the CPUs that are part of the training system. As a result, the most expensive step in Shampoo adds almost nothing to the overall training time. Moreover, the computational overhead of preconditioned gradient is independent of the batch size. Thus, increasing the batch size allows us to linearly decrease the overhead making Shampoo practical for very large scale training setups. On smaller problems (e.g., CIFAR-10, see Appendix G.3), we find that our design still results in training time improvements as preconditioner computations take very little time. 
7 # 5 Experiments We compare our method against various widespread optimization algorithms for training large state-of- the-art deep models for machine translation, language modeling, recommendation systems as well as image classification. Full details of the experiments and hyperparameter tuning are given in Appendix G. # 5.1 Comparison of second order methods We compared Shampoo with KFAC [Martens and Grosse, 2015] and K-BFGS [Goldfarb et al., 2020] for standard autoencoder tasks on MNIST, FACES and CURVES, and found that all second order algorithms performed approximately the same, and far better than first order optimizers. Fig. 2 shows the training losses and test errors on these autoencoder tasks; see Appendix F for complete details on the experiments. Scaling up each of these second order methods to work on state-of-the-art deep networks at scale is both a research and engineering challenge; we leave that for future work, and instead focus on comparison of Shampoo with existing baselines based on well-tuned first order methods in a variety of tasks. We used PyTorch [Paszke et al., 2019] code available from [Goldfarb et al., 2020] for the benchmarking. MNIST Autoencoder FACES Autoencoder CURVES Autoencoder “— RMSprop —— RMSprop —— RMSprop == Adam == Adam —-— Adam --. KFAC -. KFAC = KFAC K-BFGS K-BFGS KBFGS — Shampoo — Shampoo — Shampoo & & & 102} 102 10 Prettiest 625 50-75 -100:«135«150-«175 «200 0255075 -100:«135«150 «175-200 0 50 100180200250 300 Epochs Epochs Epochs MNIST Autoencoder CURVES Autoencoder “— RMSprop FACES Autoencoder ——"RMSprop <= Adam —— RMSprop -— Adam oo. KFAC 108 —— Adam ~~. KFAC K-BFGS = 10 KBFGS { 107 — Shampoo — Shampoo ‘Test error O25 sb 7s tbo ids 180-175-260 os sth as ss 20 6 50 1d0 180-260-0360 Epochs Epochs Epochs Figure 2: Training losses and test errors for various optimizers for the MNIST, FACES and CURVES autoencoder tasks. # 5.2 Machine Translation with a Transformer We demonstrate the effectiveness of our implementation on the standard machine translation dataset from WMT’14 English to French (en→fr) with 36.3M sentence pairs. We used the state-of-the-art Transformer architecture [Vaswani et al., 2017]. This architecture contains 93.3M parameters and consists of 6 layers for its encoder and decoder. Each layer is composed of 512 model dimensions, 2048 hidden dimensions, and 8 attention heads. The model makes use of a sub-word vocabulary that contains 32K word pieces [Schuster and Nakajima, 2012]. The experiment was run on 32 cores of a Cloud TPU v3 Pod, and the implementation of the optimizer was carried out in the Lingvo [Shen et al., 2019] framework. Our 8 results are shown in Fig. 3a: our algorithm achieves the same accuracy as AdaGrad or Adam in about half as many steps. (b) (a) Figure 3: Results for a Transformer model on WMT’14 en→fr, trained with batch size of 1536. (Top) Test log-perplexity vs. number of steps; the algorithm converges 1.95x faster in steps, while being only ≈ 16% slower per step. This allows the method to attain a particular log-perplexity in 40% less wall-time. (Bottom) Detailed breakdown of latency of a single step (Appendix G.5). Diagonal AdaGrad optimizer: 134ms, Shampoo: 145ms (all layers except embedding and softmax layers) and 155ms (all layers). Preconditioner computation is pipelined and distributed over CPUs, thus does not add any overhead, and the transfer latency (≈100ms) is amortized over hundreds of steps. Preconditioning of embedding and softmax layers. 
Following the first extension in Section 3.1 the algorithm preconditions the large layers with only one of the preconditioners (𝐺𝑡 𝑅−1/2 𝑡 𝐺𝑡 ) to make it tractable. Fig. 3b shows the increase in step time is only 6% while Fig. 4a shows that we can reduce the number of steps to convergence by ≈ 20%. Reducing overhead in fully-connected layers. Following the second extension in Section 3.1 we ran two experiments where we partitioned fully connected layer of size [512, 2048] into two blocks of size [512, 1024] and four blocks of size [512, 512]. Our experiments show no drop in quality with a small reduction in runtime (< 3%). (a) (c) (b) Figure 4: Impact of Shampoo extensions on WMT’14 en→fr training: (a) preconditioning applied to all layers except embedding and softmax layers, vs. applied to all layers; (b) preconditioning with fully-connected layers partitioned into sub-blocks; (c) varying interval between preconditioner updates. # 5.3 Transformer-Big model We also ran experiments with a larger Transformer model with 375.4M parameters, consisting of 6 layers for its encoder and decoder. Each layer is composed of 1024 model dimensions, 8192 hidden dimensions, and 16 attention heads. Our results are presented in Fig. 5a where again we see an improvement in the 9 end-to-end wall-clock time. For the softmax, embedding and the projection fully-connected layer (with 8192 hidden dimensions) we only make use of the left preconditioner. The step time is dominated by the preconditioned gradient computation which can be reduced by sub-blocking the layers. On the overhead of the optimizer. We show the computational and memory complexity of the Shampoo extensions (described in Section 3.1) in Appendix F. The overhead from computing the statistics, as well as from computing the preconditioned update for single step of training, can be further reduced by increasing the batch size (indeed, these overheads are independent of the batch size) as shown in Fig. 5b where the overhead dramatically reduces from 40% to 19%. (a) Batch size: 384 (b) Batch size: 1536 Figure 5: Test log-perplexity of a Transformer-Big model on WMT’14 en→fr. (a) Shampoo converges faster than AdaGrad (≈ 2x faster in steps), and allows larger learning rates; due to the large overhead in step time, this results in only 30% improvement in wall-time. (b) Larger batch sizes reduce the optimizer overhead from 40% to 19%, resulting in an end-to-end improvement of 41% in wall-time for convergence. # 5.4 Ads Click-Through Rate (CTR) prediction We trained the Deep Learning Recommendations Model (DLRM) of Naumov et al. [2019] on the terabyte Criteo click logs dataset for online advertisement click-through-rate prediction task [Criteo Labs, 2015]. We compared Shampoo against the highly tuned SOTA baseline from MLPerf v0.7 training benchmarks [Wu et al., 2020]. We trained the model with a batch size of 65536 for 64000 steps (1 epoch). Here we apply Shampoo to a) hidden layers b) both embedding and hidden layers. We found that Shampoo achieves the target accuracy of 80.25% in only 30.97K steps compared to 64K steps for the baseline. Moreover, Shampoo achieves new state-of-the-art performance of 80.56% AUC (an ≈ 0.3% improvement) on this dataset, note that an improvement of 0.1% is considered significant in this task; see [Rong et al., 2020, Wang et al., 2017]. Preconditioning of embedding layers further reduced the number of steps needed to reach the target accuracy from 39.96K to 30.97K. 
# 5.5 Language modeling We trained BERT-Large [Devlin et al., 2018] for the language modeling task on the concatenation of Wikipedia and BooksCorpus, with 2.5B and 800M words respectively. BERT-Large is a bidirectional transformer model containing 24 transformer blocks with 1024 hidden dimensions and 16 self attention heads, a total of 340M parameters. BERT is set up to jointly optimize two objectives: (a) masked language model (Masked-LM) loss where the task is to predict masked tokens based on surrounding context, and (b) next sentence prediction (NSP) loss where the task is to predict whether two given sentences are consecutive in the text. In Fig. 6b we compare our results against the current state of the art in training BERT [You et al., 2019]. Models were trained with batch size of 16K, tuning details are in Appendix G.2. 10 10 # AUC 72.00 80.50 aa 70.00 | Shampoo 80.00 68.00 79.50 66.00 64,00 79.00 Accuracy 62.00 78.50 60.00 78.00 — SGD (MLPerf v0.7) SOTA Baseline 38.00 — Shampoo (only fully connected layers) — Shampoo (all including embedding layers) 56.00 10K 20K 30K 40K 50K 60K ok SK 10K 15K 20K 25K 30K steps steps # (a) Test AUC on the Criteo-1Tb dataset. (b) Masked Language accuracy on BERT-Large. (a) Test AUC on the Criteo-1Tb dataset. (b) Masked Language accuracy on BERT-Large. Figure 6: (a) Shampoo reaches a target AUC of 80.25% in half as many steps with preconditioning embedding layers improving the results, and achieves a new state-of-the-art AUC of 80.56%; (b) Shampoo converges in ≈ 16% fewer steps, and achieves ≈ 1% higher MLM accuracy than the baseline on BERT-Large. # Image classification We trained a ResNet-50 model [He et al., 2016] on the ImageNet-2012 [Russakovsky et al., 2015] dataset and compared it against the state-of-the-art baseline using Nesterov momentum as well as LARS optimizers. We base our experiments JAX baseline available from Mattson et al. [2019] where the target criteria is reaching 75.9% accuracy. See results in Table 1; in particular, we find that Shampoo reaches the target accuracy in 1729 steps compared to 2512 steps by first order methods. Code as well as tuning details are available in https://bit.ly/3uXXtKy. Table 1: Epochs and steps to MLPerf target accuracy of 75.9% with a ResNet-50. Optimizer Batch Size Epochs Steps Nesterov 32768 64 2512 LARS 32768 64 2512 Shampoo 32768 44 1729 # 6 Concluding Remarks We have presented an implementation of a second order optimizer, and demonstrated step time as well as wall time improvements on multiple large tasks in different domains—in each case our implementation performed as well or better than state-of-the-art optimizers specialized for each domain. We hope that this work will influence future hardware accelerator design and runtime software: Most second order methods use symmetric matrices, but we haven’t found support for typing operands as symmetric, which can reduce flops and storage by up to ≈ 50%. • Several optimizations that are currently tuned towards first order methods could be extended to second order methods. For example, weight update sharding pattern matches first order methods [Xu et al., 2020] and dramatically reduces the time spent in the update step as well as memory used. This change can also be applied to Shampoo with blocked preconditioners—but we do not have support for it yet as it requires compiler level support, and is not expressible at the program layer. Currently every core must update all layers which is quite inefficient. 
11 Mixed precision algorithms may work for inverse pth roots and can allow more frequent precondi- tioner computation. • Increased memory per chip can allow larger preconditioners. • Hardware support for high-precision arithmetic in accelerators can allow more frequent precon- ditioner computation. The benefits of high precision arithmetic for optimization run counter to the prevailing wisdom in ML1which has led to the focus on low-precision formats such as bfloat16 [Wang and Kanwar, 2019]. Hardware support for storing/packing and using upper/lower triangular matrices efficiently, as available in libraries like LAPACK. Our hope is that these suggestions could result in innovations that would make second-order methods practical across more domains and models, especially in data limited regimes where we may not able to amortize the latency added in the data transfer between the accelerator and the CPU. # References M. Abadi, P. Barham, J. Chen, Z. Chen, A. Davis, J. Dean, M. Devin, S. Ghemawat, G. Irving, M. Isard, M. Kudlur, J. Levenberg, R. Monga, S. Moore, D. G. Murray, B. Steiner, P. Tucker, V. Vasudevan, P. Warden, M. Wicke, Y. Yu, and X. Zheng. Tensorflow: A system for large-scale machine learning. In 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI 16), pages 265–283, 2016. N. Agarwal, B. Bullins, and E. Hazan. Second order stochastic optimization in linear time. arXiv preprint arXiv:1602.03943, 2016. N. Agarwal, B. Bullins, X. Chen, E. Hazan, K. Singh, C. Zhang, and Y. Zhang. The case for full-matrix adaptive regularization. CoRR, abs/1806.02958, 2018. N. Agarwal, R. Anil, E. Hazan, T. Koren, and C. Zhang. Disentangling adaptive gradient methods from learning rates. arXiv preprint arXiv:2002.11803, 2020. T. Ando, C.-K. Li, and R. Mathias. Geometric means. Linear algebra and its applications, 385:305–334, 2004. J. Ba, J. Martens, and R. Grosse. Distributed second-order optimization using kronecker-factored approximations. In International conference on machine learning, pages 2408–2417, 2017. O. Bojar, C. Buck, C. Federmann, B. Haddow, P. Koehn, J. Leveling, C. Monz, P. Pecina, M. Post, H. Saint-Amand, R. Soricut, L. Specia, and A. s. Tamchyna. Findings of the 2014 workshop on statistical machine translation. In Proceedings of the Ninth Workshop on Statistical Machine Translation, pages 12–58, Baltimore, Maryland, USA, June 2014. Association for Computational Linguistics. URL http://www.aclweb.org/anthology/W/W14/W14-3302. R. Bollapragada, J. Nocedal, D. Mudigere, H.-J. Shi, and P. T. P. Tang. A progressive batching l-bfgs method for machine learning. In International Conference on Machine Learning, pages 620–629, 2018. J. Bradbury, R. Frostig, P. Hawkins, M. J. Johnson, C. Leary, D. Maclaurin, G. Necula, A. Paszke, J. Van- derPlas, S. Wanderman-Milne, and Q. Zhang. JAX: composable transformations of Python+NumPy programs, 2018. URL http://github.com/google/jax. 1For example, Gupta et al. [2015] say “it is well appreciated that in the presence of statistical approximation and estimation errors, high-precision computation in the context of learning is rather unnecessary...” and Higham and Pranesh [2019] say “machine learning provides much of the impetus for the development of half precision arithmetic in hardware...”. 12 A. R. Conn, N. I. Gould, and P. L. Toint. Trust region methods. SIAM, 2000. Criteo Labs. Criteo releases industry’s largest-ever dataset for machine learning to academic community, July 2015. 
URL https://www.criteo.com/news/press-releases/2015/07/ criteo-releases-industrys-largest-ever-dataset/. J. Dean, G. Corrado, R. Monga, K. Chen, M. Devin, M. Mao, M. A. Ranzato, A. Senior, P. Tucker, K. Yang, Q. V. Le, and A. Y. Ng. Large scale distributed deep networks. Advances in Neural Information Processing Systems 25, 2012. J. Devlin, M.-W. Chang, K. Lee, and K. Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805, 2018. J. Duchi, E. Hazan, and Y. Singer. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12(Jul):2121–2159, 2011. M. A. Erdogdu and A. Montanari. Convergence rates of sub-sampled newton methods. In Proceedings of the 28th International Conference on Neural Information Processing Systems-Volume 2, pages 3052–3060. MIT Press, 2015. R. Fletcher. Practical methods of optimization. John Wiley & Sons, 2013. T. George, C. Laurent, X. Bouthillier, N. Ballas, and P. Vincent. Fast approximate natural gradient descent in a Kronecker factored eigenbasis. In Advances in Neural Information Processing Systems, pages 9550–9560, 2018. D. Goldfarb, Y. Ren, and A. Bahamou. Practical quasi-newton methods for training deep neural networks. arXiv preprint arXiv:2006.08877, 2020. A. Gonen and S. Shalev-Shwartz. Faster sgd using sketched conditioning. arXiv preprint arXiv:1506.02649, 2015. C.-H. Guo and N. J. Higham. A Schur-Newton method for the matrix p’th root and its inverse. SIAM Journal On Matrix Analysis and Applications, 28(3):788–804, 2006. S. Gupta, A. Agrawal, K. Gopalakrishnan, and P. Narayanan. Deep learning with limited numerical precision. In International Conference on Machine Learning, pages 1737–1746, 2015. In Proceedings of the 35th International Conference on Machine Learning, volume 80, pages 1842–1850, 2018. E. Hazan. Introduction to online convex optimization. Foundations and Trends in Optimization, 2(3-4): 157–325, 2016. E. Hazan, A. Agarwal, and S. Kale. Logarithmic regret algorithms for online convex optimization. Machine Learning, 69(2):169–192, 2007. K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770–778, 2016. G. Henry, P. T. P. Tang, and A. Heinecke. Leveraging the bfloat16 artificial intelligence datatype for higher-precision computations. In 2019 IEEE 26th Symposium on Computer Arithmetic (ARITH), pages 69–76. IEEE, 2019. T. Heskes. On “natural” learning and pruning in multilayered perceptrons. Neural Computation, 12(4): 881–901, 2000. 13 N. J. Higham and S. Pranesh. Simulating low precision floating-point arithmetic. SIAM Journal on Scientific Computing, 41(5):C585–C602, 2019. B. Iannazzo. On the Newton method for the matrix p-th root. SIAM journal on matrix analysis and applications, 28(2):503–523, 2006. N. P. Jouppi, C. Young, N. Patil, D. Patterson, G. Agrawal, R. Bajwa, S. Bates, S. Bhatia, N. Boden, A. Borchers, et al. In-datacenter performance analysis of a tensor processing unit. In Computer Architecture (ISCA), 2017 ACM/IEEE 44th Annual International Symposium on, pages 1–12. IEEE, 2017. D. P. Kingma and J. Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014. A. Krizhevsky et al. Learning multiple layers of features from tiny images. 2009. F. Kunstner, P. Hennig, and L. Balles. Limitations of the empirical fisher approximation for natural gradient descent. 
In Advances in Neural Information Processing Systems, pages 4156–4167, 2019. A. S. Lewis and M. L. Overton. Nonsmooth optimization via quasi-newton methods. Mathematical Programming, 141(1-2):135–163, 2013. J. Martens and R. Grosse. Optimizing neural networks with Kronecker-factored approximate curvature. In International conference on machine learning, pages 2408–2417, 2015. P. Mattson, C. Cheng, C. Coleman, G. Diamos, P. Micikevicius, D. Patterson, H. Tang, G.-Y. Wei, P. Bailis, V. Bittorf, et al. Mlperf training benchmark. arXiv preprint arXiv:1910.01500, 2019. H. B. McMahan and M. Streeter. Adaptive bound optimization for online convex optimization. COLT 2010, page 244, 2010. MLPerf. Training v0.7 results. https://github.com/mlperf/training_results_v0.7, 2020. Z. Nado, J. M. Gilmer, C. J. Shallue, R. Anil, and G. E. Dahl. A large batch optimizer reality check: Traditional, generic optimizers suffice across batch sizes. arXiv preprint arXiv:2102.06356, 2021. M. Naumov, D. Mudigere, H.-J. M. Shi, J. Huang, N. Sundaraman, J. Park, X. Wang, U. Gupta, C.-J. Wu, A. G. Azzolini, et al. Deep learning recommendation model for personalization and recommendation systems. arXiv preprint arXiv:1906.00091, 2019. J. Nocedal. Updating quasi-newton matrices with limited storage. Mathematics of computation, 35(151): 773–782, 1980. J. Nocedal and S. Wright. Numerical optimization. Springer Science & Business Media, 2006. K. Osawa, Y. Tsuji, Y. Ueno, A. Naruse, R. Yokota, and S. Matsuoka. Large-scale distributed second-order optimization using kronecker-factored approximate curvature for deep convolutional neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 12359–12367, 2019. A. Paszke, S. Gross, F. Massa, A. Lerer, J. Bradbury, G. Chanan, T. Killeen, Z. Lin, N. Gimelshein, L. Antiga, A. Desmaison, A. Kopf, E. Yang, Z. DeVito, M. Raison, A. Tejani, S. Chilamkurthy, B. Steiner, L. Fang, J. Bai, and S. Chintala. Pytorch: An imperative style, high-performance deep learning library. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett, editors, Advances in Neural Information Processing Systems 32, pages 8024–8035. Curran Associates, Inc., 2019. 14 M. Pilanci and M. J. Wainwright. Newton sketch: A near linear-time optimization algorithm with linear-quadratic convergence. SIAM Journal on Optimization, 27(1):205–245, 2017. H. Rong, Y. Wang, F. Zhou, J. Zhai, H. Wu, R. Lan, F. Li, H. Zhang, Y. Yang, Z. Guo, et al. Distributed equivalent substitution training for large-scale recommender systems. In Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 911–920, 2020. O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, et al. Imagenet large scale visual recognition challenge. International Journal of Computer Vision, 115(3):211–252, 2015. M. Schuster and K. Nakajima. Japanese and Korean voice search. In ICASSP, pages 5149–5152. IEEE, 2012. S. Shalev-Shwartz. Online learning and online convex optimization. Foundations and Trends in Machine Learning, 4(2):107–194, 2012. N. Shazeer, Y. Cheng, N. Parmar, D. Tran, A. Vaswani, P. Koanantakool, P. Hawkins, H. Lee, M. Hong, C. Young, et al. Mesh-tensorflow: Deep learning for supercomputers. In Advances in Neural Information Processing Systems, pages 10414–10423, 2018. J. Shen, P. Nguyen, Y. Wu, Z. Chen, et al. 
Lingvo: a modular and scalable framework for sequence-to- sequence modeling, 2019. A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, and I. Polosukhin. Attention is all you need. In Advances in Neural Information Processing Systems, pages 5998–6008, 2017. R. Wang, B. Fu, G. Fu, and M. Wang. Deep & cross network for ad click predictions. In Proceedings of the ADKDD’17, pages 1–7. 2017. S. Wang and P. Kanwar. to high performance on cloud https://cloud.google.com/blog/products/ai-machine-learning/ Bfloat16: The secret tpus. bfloat16-the-secret-to-high-performance-on-cloud-tpus, 2019. C.-J. Wu, R. Burke, E. Chi, J. Konstan, J. McAuley, Y. Raimond, and H. Zhang. Developing a recommendation benchmark for mlperf training and inference. arXiv preprint arXiv:2003.07336, 2020. P. Xu, J. Yang, F. Roosta-Khorasani, C. Ré, and M. W. Mahoney. Sub-sampled newton methods with non-uniform sampling. In Advances in Neural Information Processing Systems, pages 3000–3008, 2016. Y. Xu, H. Lee, D. Chen, H. Choi, B. Hechtman, and S. Wang. Automatic cross-replica sharding of weight update in data-parallel training. arXiv preprint arXiv:2004.13336, 2020. Y. You, J. Li, S. Reddi, J. Hseu, S. Kumar, S. Bhojanapalli, X. Song, J. Demmel, K. Keutzer, and C.-J. Hsieh. Large batch optimization for deep learning: Training bert in 76 minutes. arXiv preprint arXiv:1904.00962, 2019. 15 # A Deferred proofs Proor (of Lemma |). Lemma 8 in Gupta et al. [2018] shows that H, <rL, ®I, and H, <rlm® R,. By using Ando’s inequality [Ando et al., 2004], we get Ay < r(L1 ® In)'!? Im ® Rr)'/4 =r(L}/? @ In)(Im ® Rf!) - rL}/P @ RI , which concludes the proof. This lemma immediately allows us to prove a regret bound for Shampoo with extended exponents: Theorem 3. Assume that the gradients 𝐺1, . . . , 𝐺𝑇 are matrices of rank at most 𝑟. Then the regret of Shampoo with extended exponents compared to any 𝑊★ ∈ ℝ𝑚×𝑛 is bounded as follows, 𝑇 ∑︁ 𝑓𝑡 (𝑊𝑡 ) − 𝑇 ∑︁ 𝑓𝑡 (𝑊★) ≤ √ 2𝑟 𝐷 Tr(𝐿 1 2 𝑝 𝑇 ) Tr(𝑅 1 2𝑞 𝑇 ) , 𝑡=1 𝑡=1 # where where T T Lr =eln + )GrGr Rr =eln+ ) GIG » D= max ||W, — Wh 𝑡=1 and 1/𝑝 + 1/𝑞 = 1, 𝑝, 𝑞 ≥ 1. 1 2 𝑝 Proor. The proof follows the proof of Theorem 7 in Gupta et al. [2018]. Let H; = L?? @ R}*. Then the update rule of the extended Shampoo algorithm is equivalent to wi41 = w; — nH;!g;. Since O<L, <...< Lr andO < R; <... < Rr, standard properties of the Kronecker product and the operator monotonicity of the function x +> x® for @ < | (an immediate consequence of Ando’s inequality) ensure that 0 < Hi <...< Ar. Following the aforementioned proof, we have the regret bound T T D2 7 T 6 C * 5 || 2 fiw - L fw )< ay BUT) + 3D lsilin where D = max, ||W, — W*||2. Define g, = vec(G,) and A, = (€lm + vie shows that H; < VrH1, using operator monotonicity. Using this equation from the proof of Theorem 7, we have 𝑠)1/2, then Lemma 1 𝑟 𝐻𝑡 , using operator monotonicity. Using this equation twice, along with Equation (6) 𝑠=1 𝑔𝑠𝑔T √ T T ) liseli < VF) ligelliy. < 2Vr Tr) < 2r Tr). t=1 t=1 This gives us 𝑇 ∑︁ 𝑓𝑡 (𝑊𝑡 ) − 𝑇 ∑︁ 𝑓𝑡 (𝑊★) ≤ 𝐷2 2η Tr(𝐻𝑇 ) + η𝑟 Tr(𝐻𝑇 ). 𝑡=1 𝑡=1 √ Setting n = D/V2r and observing that Tr(H;) = Tr(L}/??) Tr(R} 4) gives us the required bound. 16 16 # 1 2𝑞 𝑡 Proof (of Lemma 2). Let 𝑥 ∈ ℝ𝑚𝑘, and 𝑥 = [𝑥1, 𝑥2, . . . , 𝑥𝑘], where 𝑥 𝑗 ∈ ℝ𝑚. Then # 𝑥T t t t k Ty 2 T T 2 TL 2 T Ax =ellxll +) x"gsgyx =ellalla +) (gex)? = elle} + Y (Ye s=l s=l ‘j=l s=l t k k t < kellx|[3 +k DL Die =k (cst + De.) k =k a (Elin La 78s,)* saeya xB) x, = kx" B,x. Je j=! 
2 Here we used the inequality (rk jae i) <k am a, which follows from the convexity of x +» x? (or from the fact that variance of a random variable is non-negative). This lemma once again allows us to prove a regret bound, exactly following the proof of the regret bound above: Theorem 4. Assume that the gradients are 𝑔1, . . . , 𝑔𝑇 ∈ ℝ𝑚𝑘, and let 𝑔𝑖 = [𝑔𝑖,1, . . . , 𝑔𝑖,𝑘] where 𝑔𝑖, 𝑗 ∈ ℝ𝑚. Then the regret of Shampoo with blocking compared to any 𝑤★ ∈ ℝ𝑚𝑘 is bounded as follows: Y fon) - y scw") < VIED Yet ‘ y «i &) . t=1 t=I j=l t=1 The two regret bounds can be combined to show that Shampoo with both extensions also converges. # B Comparison with K-FAC K-FAC is a natural gradient algorithm, and approximates the curvature of the loss using the Fisher Information Matrix: F= E [Vlogp(x|0) Vlog p(x|0)"] = E [s oT |. an g p(2| ) g p(x| "| pale) Sp(x19) Sp (xa) For a fully connected layer with W € R’“", where Wx = s, the gradient for the layer G, € R””" can be written via the chain rule as G; = Vs@(s;, y,)xt and in vectorized form as: Vs¢(s;, yr) ®x. We can then write the Fisher information matrix as: F= cE) [(Vsl (50, yr) @ x) (Vsl(s1. Yr) @x)"] P(x = Ey [Vs 0057. Â¥)Vsb(51.92)") ® (rx7)] - P(x Assuming independence between V,¢(s;, y;) and x, K-FAC rewrites the Fisher in tractable form as: # EXE [(Vsl(s:, y)Vsl(sr.90))] ® E [arr] If we let D=E [(Vs l(87, 1) Vs L(s1,92)")| and X =E [xrx7 |. the update rule then becomes: # 𝑊𝑡+1 ≈ 𝑊𝑡 − η 𝐷−1𝐺𝑡 𝑋 −1. We note some of the differences and similarities between the two updates here. KFAC preconditioners use exponent of −1 (as original Fisher is inverted) whereas Shampoo uses −1/2𝑝 where 𝑝 is the rank of the tensor. KFAC computes statistics based on gradients with labels sampled from the model’s predictive distribution (hence requiring strictly more computation) where as Shampoo relies on the gradient of the mini-batch. 17 Now we can compute each term in the Shampoo preconditioners as: Now we can compute each term in the Shampoo preconditioners as: GiG] = Vs 051, yx He Vsl(Se V0)" = Ull5Vs (51, Â¥1)Vsl (St V0)" GiGr = x1Vsl(s1, ys) VS. Yoda = [IVs (Se yII7- Dividing by the scale, and taking expectations on both sides: G,G! Weel GIG, IVs2(sr. ye)I5 =E [Vse(s;, YA) Vs(Sr, yn)" =D; =E [x-x7 | =X. # 𝔼 This shows that K-FAC preconditioners are closely related to Shampoo preconditioners, especially when one uses the empirical Fisher [Kunstner et al., 2019]. The main difficulty in implementing K-FAC on a model is that current optimizer APIs make it difficult to send additional information such as Ilxell3. Vs2(se, yo lS to the optimizer, so K-FAC implementations have to register the structure of each layer. Moreover, due to the dependence of K-FAC on the structure of the network, it is difficult to implement standard operators like batch norm, weight norm, layer norm, etc., which are prevalent in the tasks and models we considered. For example, if we write a fully connected layer with weight norm as s = Wx/||W]|, then the gradient 1 Gy = Vsl(St, yr)xt a _ Vs€(S1 Yr) Wx WI wil > so rewriting 𝔼[vec(𝐺𝑡 ) vec(𝐺𝑡 )T] as a Kronecker product is not an easy task. The similarity between K-FAC and Shampoo preconditioners also allows us to use techniques explored by the K-FAC community for Shampoo. 
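To make the correspondence concrete, the Shampoo side of the comparison for a single fully connected layer can be sketched in a few lines of NumPy. This is an illustration only: the eigendecomposition-based inverse root and the function names are our assumptions for clarity, whereas the implementation described in Appendix D amortizes the root computation with a coupled Newton iteration.

```python
import numpy as np

def shampoo_fc_step(W, G, L, R, lr, eps=1e-4):
    # Accumulate the Kronecker-factored statistics that this appendix relates
    # to the (empirical) Fisher factors used by K-FAC.
    L = L + G @ G.T          # left statistics,  shape (m, m)
    R = R + G.T @ G          # right statistics, shape (n, n)

    def inv_root(S, p):
        # S^{-1/p} of a damped PSD matrix via eigendecomposition (clarity over speed).
        w, V = np.linalg.eigh(S + eps * np.eye(S.shape[0]))
        return (V * np.maximum(w, eps) ** (-1.0 / p)) @ V.T

    # Preconditioned gradient L^{-1/4} G R^{-1/4} and the resulting update.
    W_new = W - lr * inv_root(L, 4) @ G @ inv_root(R, 4)
    return W_new, L, R
```

Read against the K-FAC update above, the main differences are the exponent applied to each Kronecker factor (−1 rather than −1/4) and how the statistics are gathered.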
One of the extensions for KFAC is the E-KFAC algorithm [George et al., 2018] which constructs a better approximation of the Fisher matrix by using the eigenbasis computed from the Kronecker approximation, but rescaling the eigenvalues to match the diagonal of the Fisher matrix in this eigenbasis. This method produces a provably better approximation, and can immediately be applied to Shampoo too with a simple modification: Let H, ~ L!/? @ R}/?. Let the singular value decompositions of the factors be L!/? = UDU" and Ri? =VD’V'. Then Lie ® R} . =(U@V)(D ®@D’)(U @V)". Now the EKFAC correction replaces D ® D’ by the optimal diagonal A = diag((U @ V)'A,(U @ V)) =el+ diag((U @ V)" vec(G,) vec(G,)"(U ® V)) s=l t =el+ ) diag(vee(U"G,V) vec(U'GsV)') s=l t =el+ )_vee(UTG.V), s=l Thus we can approximately compute A;+) ~ A; + (U'G,V)™, and the new update becomes: W;41 = W, — 7: U(A;'/? @ (U'G;V))V". This technique does have the disadvantage that it requires computing the singular value decompositions (which we already observed are much slower than coupled Newton iterations), and doubles the number of matrix multiplications in the preconditioned gradient computation. At this time our experiments did not show significant improvements over the standard Shampoo implementation, but we plan to explore this further. 18 # C Shampoo for embedding layers In modern networks, embedding layers are usually very large, and even computing the left preconditioner as described in Section 3.1 can be prohibitively expensive. However we can take advantage of the fact that the inputs to the network are very sparse, and use this to reduce the computation significantly. Let our input example to such a network consist of a set of categorical features: each feature such as user language, user country etc consists of one out of a set of options. Then the output of the embedding layer is the concatenation of the embeddings for each such feature. If the embeddings are of width 𝑑 and there are 𝑁 such embeddings, then the embedding layer is 𝑊 ∈ ℝ𝑑×𝑁 . The input can be represented as 𝑥 ∈ ℝ𝑁 ×𝑚, where 𝑚 is the number of categorical features, and each column is one-hot: if the 𝑘-th feature is 𝑥(𝑘), then 𝑥 𝑗 𝑘 = δ 𝑗,𝑥 (𝑘) . The output of the layer is 𝑦 = 𝑊𝑥. Now G = Vw@ = Vy ex", so GG! = Vy lxtx Ve. But x'x =In, so GG! = Vye Vel Thus we can compute the preconditioner for W by computing it on the output of the embedding layer, and this is a much smaller computation since y is of dimension b x m, this computation is O(d*m) rather than O(d?N). Note that sparse multiplication would also be O(d?m), but accelerators usually implement sparse operations by densifying the tensors. If each column of x is multi-hot, as is the case when the features are words and their embeddings are averaged, x'x is a diagonal matrix, where each diagonal entry is a function of the number of ones in each column of x. Computing GGT = Vy l(xtx) Vy et is still O(d?m) « O(d?N). # D A coupled Newton iteration for computation of inverse p-th roots The Newton method for solving the matrix equation 𝑋 − 𝑝 − 𝐴 = 0 produces the iteration 𝑋𝑘+1 = 𝑝 [( 𝑝 + 1) 𝑋𝑘 − 𝑋 𝑝+1 1 𝑐 𝐼. This iteration satisfies 𝑋𝑘 → 𝐴−1/ 𝑝 as 𝑘 → ∞, but it is not numerically stable. Introducing the matrix 𝑀𝑘 = 𝑋 𝑝 +1)I1-M, 1 Xx - x,(C2 0M), Xo = —I, P c # and p Pp +1)1-M, +1)1-M, 1 Mus = XR A= (er) XPA= (er) Mk, Mo = — A, P cP since 𝑋𝑘, 𝑀𝑘 and 𝐴 commute with each other. 
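Before turning to the numerically stabilized form given below (Algorithm I, with warm start and scaled damping), a minimal NumPy sketch of this recursion may be helpful. The normalization choice c^p = λ_max(A), the tolerance, and the function name are assumptions of the sketch rather than the paper's implementation.

```python
import numpy as np

def coupled_newton_inv_pth_root(A, p, tol=1e-6, ridge=1e-6, max_iter=100):
    # Computes A^{-1/p} for a symmetric PSD matrix A via the coupled recursion
    #   M' = ((p + 1) I - M) / p,   X <- X M',   M <- (M')^p M,
    # initialized with X = (1/c) I and M = (1/c^p) A so that the invariant
    # M = X^p A holds throughout; iteration stops once M is close to I.
    n = A.shape[0]
    I = np.eye(n)
    A = A + ridge * I                     # damping so rank-deficient A still has an inverse root
    c_p = np.linalg.eigvalsh(A)[-1]       # c^p = largest eigenvalue of A (eigenvalues of M_0 lie in (0, 1])
    X = c_p ** (-1.0 / p) * I             # X_0 = (1/c) I
    M = A / c_p                           # M_0 = X_0^p A
    for _ in range(max_iter):
        if np.max(np.abs(M - I)) < tol:
            break
        M1 = ((p + 1) * I - M) / p
        X = X @ M1
        M = np.linalg.matrix_power(M1, p) @ M
    return X                              # approximately A^{-1/p}
```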
This is the coupled Newton iteration for computing inverse 𝑝-th roots, and was shown to be numerically stable in [Guo and Higham, 2006, Iannazzo, 2006]. We implemented the following optimizations to the coupled Newton iteration method: • Warm Start: The coupled Newton iteration to compute 𝐺−1/ 𝑝 starts with 𝑋 = 𝐼, 𝑀 = 𝐺 and maintains the invariant 𝑀 = 𝑋 𝑝𝐺 while driving 𝑀 → 𝐼, resulting in 𝑋 → 𝐺−1/ 𝑝. We need to find the 𝑝-th root of a sequence 𝐺𝑡 , so we instead set 𝑋 = 𝐺−1/ 𝑝 , 𝑀 = 𝑋 𝑝𝐺𝑡+1; since the difference between 𝐺𝑡 and 𝐺𝑡+1 is small, this ensures that 𝑀 is already close to 𝐼. In our experiments warmstart improves convergence (by upto 4x fewer steps), in some cases. Note that for the coupled iteration to work, it is necessary that 𝑋 𝑀 = 𝑀 𝑋 — this is only approximately true if we initialize 𝑋 = 𝐺−1/ 𝑝 , 𝑀 = 𝑋 𝑝𝐺𝑡+1, so we monitor the commutator [𝑋, 𝑀] = 𝑋 𝑀 − 𝑀 𝑋, and if it diverges, 𝑡 we abort the warm start and re-initialize 𝑋 = 𝐼, 𝑀 = 𝐺𝑡+1. Scaled damping. In order to avoid numerical problems, as well as addressing rank deficient matrices, we add a damping factor before taking the inverse root: G + €4/. In our experiments we discovered that scaling the eg by the spectral norm of G (the largest eigenvalue, since are matrices are positive semi-definite) improves the performance of the optimizer — intuitively, we scale the damping to match the scale of the matrix. 19 Algorithm I A coupled Newton iteration procedure for computing inverse 𝑝-th roots of a PSD matrix, with warm start and singular value projection 1: procedure MaxSV(G) 2: Parameters: € > 0, step v € R”, where G € R”*” i=0, error= «0, A=0 while i < nsep and error > € do 3: 4: 5: v= v/llvll v=GV Aoia = ASA = vy error = |A — Agiglh;si =i +1 6: 7: 8: 9: 10: return λ # 11: 12: procedure CoupledIteration(G, 𝑝 ∈ ℕ, X (optional)) 13: Parameters: € > 0, €g > 0 Outputs: G-!/? Amax = MaxSV(G) G=G+t+e*Amax «1 a= -} if X is provided then 14: 15: 16: 17: 18: 19: M = X 𝑝G 20: else # z= wes X=1 M=7G 21: 22: 23: while ||M — I||. > ¢ do M, =(1-a)I+aM X = XM; M=M/M 24: 25: 26: 27: 28: return X # E Implementation details of Shampoo Our implementation of the Shampoo algorithm for fully-connected layers is described in Algorithm II. The algorithm can use heavy-ball momentum for its updates, as well an exponential moving average over the preconditioners, like Adam. The configuration parameter τ1 denotes the number of steps between subsequent fetches of the latest available preconditioner by the accelerator. τ1 must be set sufficiently high so that there is enough time for the CPU to complete the computation of the preconditioner asynchronously and pipeline it efficiently, but otherwise its setting does not have a significant effect on convergence. The configuration parameter τ2 (default value = 1) determines the frequency of gathering gradient statistics - we update 𝐿𝑡 , 𝑅𝑡 every τ2 steps only for efficiency. # E.1 Computation cost of Shampoo We capture the computational and memory complexity under various schemes described in Section 3.1 of handling large layers in Table 2. 20 20 # seconds —e- SVD — Evolution of condition numbers >< Coupled Newton Iterations 500.0 400.0 300.0 108 Condition number 200.0 7 100.0 10 0.0 0 1000 2000 3000 4000 5000 6000 7000 OK 20K 40K 60K 80K Dimension of matrix (n x n) steps Figure 7: Benchmarks on computing inverse-pth root for statistics of varying dimensions (left), and the condition numbers for 𝐿𝑡 of a layer in the transformer model over time (right). 
We find that the coupled Newton iteration method can effectively utilize the CPUs and give large walltime improvements compared to SVD (that relies on bidiagonal divide-and-conquer). These were measured without warmstart which provides additional speedup of upto 4x by reducing the number of iterations to the solution.These were measured on Intel Skylake CPUs. Note that since ≈ log2( 1 Type All preconditioner 𝑊𝑡 : [𝑛, 𝑚] Left only preconditioner for 𝑊𝑡 : [𝑛, 𝑚] Preconditioner: block size 𝑏 Computation Memory 𝑂 (𝑛2𝑚 + 𝑚2𝑛) 𝑂 (𝑛2 + 𝑚2) 𝑂 (𝑛2𝑚) 𝑂 (𝑚𝑛𝑏) 𝑂 (𝑛2) 𝑂 (𝑚𝑛) # Table 2: Computational and memory complexity of variants of Shampoo. Table 2: Computational and memory complexity of variants of Shampoo. # F Experimental comparison with second order optimizers In Fig. 2, we showed the results of Shampoo against K-BFGS and KFAC on standard autoencoder problems: MNIST2, FACES3 and CURVES4. We used code from the git repository released with [Goldfarb et al., 2020] at github.com/renyiryry/kbfgs_neurips2020_public, and used the hyperparameters they found to be optimal for each of these algorithms for each dataset. We tuned Shampoo by hand, and found that the parameter settings described below gave reasonable results — the main observation we make from this experiment is that with appropriate tuning all these algorithms can perform well. Task Learning Rate Ridge Epsilon Momentum Warmup MNIST FACES CURVES 0.032 0.033 0.1 10−3 5 × 10−6 3.5 × 10−6 0.9 0.9 0.99 0 0.99 0.999 Table 3: Hyperparameters used by Shampoo for autoencoders. The standard update for Shampoo described in Section 2 is 𝑊𝑡+1 = 𝑊𝑡 − η𝐿−1/4 # 𝑡 𝐺𝑡 𝑅−1/4 . However during our experiments we realized that sometimes we get better results by treating the exponent as a hyperparameter, thus using the update 𝑊𝑡+1 = 𝑊𝑡 − η𝐿− α for α ∈ [0, 1]. In the above experiments, we used α = 1, which corresponds to a Kronecker approximation of the Online Newton Step algorithm [Hazan et al., 2007]. Furthermore, as described in Appendix G, the learning rates for Shampoo were derived from SGD. # 2Downloadable at yann.lecun.com/exdb/mnist/ 3Downloadable at www.cs.toronto.edu/~jmartens/newfaces_rot_single.mat 4Downloadable at www.cs.toronto.edu/~jmartens/digs3pts_1.mat 21 Algorithm II Sketch of the Shampoo algorithm 1: parameters: learning rate η𝑡 , momentum: β1, β2 2: for 𝑡 = 1, . . . , 𝑇 do 3: 3: Receive stochastic gradients G, for each layer if ¢ % T2 = 0 then if By < 1 then Ly — Bo Ly-r, + (1 - 2) GG} Ri — Bo Ri-1, + (1 - B2) GIG; else Ly © Ly-1) + G:G) R; — Ri-1, + GIG, D,; — D,-\ + Gr eG; M; — Bi My-1 + (1— Bi) DO? 6 G, if ¢ % 7, =0 then Gather preconditioners Lis /Ro/ 4 from CPUs (t-11)? (t-11) Send L;, R; to CPU host to compute Lt RM eer = 5 6: if ¢ > tT, then T Py BrP (1~ pi) L;""G,R, 1 8 Nt — nollMe|l-/lPrll- 9 W, = Wi-1 — 91 Pr 20: else 21: Nt — No 22: W, = Wi-1- Mz # G Further details on experiments Layer wise learning rates. As seen in Fig. 8 the step size scale for each layer is dependent on the operator norm of the preconditioners (inverse-pth root of the smallest singular value of the statistics matrix) has large spread in its range which results in optimization instabilities in practice. Moreover, as statistics as well as preconditioner computation are amortized across many steps the norm does not grow at every step. 
Hence, we rely on a learning rate schedule based on the update directions of a well tuned first order optimizer (in our experiments we use diagonal AdaGrad for Transformers in machine translation, as well as Criteo, layer-wise scaling heuristic proposed in LARS/LAMB optimizer, where each layer’s learning rate is set to be \|w,|| F / \|G,|| p tor BERT and ResNet training. For example, when used with diagonal AdaGrad: Shampoo is used to determine the direction of the update, and AdaGrad to determine its magnitude. This procedure termed Grafting in [Agarwal et al., 2020] allows us to bootstrap a reasonable learning rate schedule for a specific problem that is well tuned, and study the effect of preconditioned gradient directions in isolation. The weight matrix 𝑊𝑡 is updated as 𝑊𝑡 = 𝑊𝑡−1 − 𝐴𝑡 ˆSt, where: t D,= ) Gs eGs; Ar = no|[De"? e Gill, (Adagrad magnitude) s=l L4G,Ro14 §, = —2 4 (Shampoo direction). -1/4A p-i/4 EG Re, # G.1 Transformer model on WMT’14 en→fr For all optimizers, we make use of a warmup schedule where the learning rate is increased from 0.0 to η over 40k steps. For the smaller transformer experiments, we use a quadratic warmup, and for the 22 softmax embedding query proiection [aver — 10° encoder embedding —=—$—$— __wsoftmax embedding =e ee ee ease ean 10 oo: mirvmax singular values i g Figure 8: Minimum (dashed) and maximum (solid) singular values for statistics matrices of the embedding, softmax and intermediate attention query projection layers. larger transformer experiments we use a linear warmup. We found that quadratic warmup improves all optimizers equally and provides a better log-perplexity. For the Adam optimizer experiments, we use a learning rate decay schedule of the form η𝑡 = η√︁𝑑/𝑡, following the suggestion of Vaswani et al. [2017]. For the smaller Transformer experiments, we tuned the hyperparameters for each algorithm over 100 trials. We took the best settings for the momentum and second-moment parameters, and tuned the learning rates until either the model became unstable, or did not increase performance. For Shampoo, we used a per layer learning rate derived from AdaGrad (see Appendix G for details), and found that for the exact same hyperparameter settings as AdaGrad, Shampoo provides a modest improvement in performance. Moreover, Shampoo allows for larger learning rates than AdaGrad does, as shown in Fig. 5a. # G.2 BERT-Large Our current implementation showed a 14% increase in step time for BERT-Large, nearly wiping out all the gains from reduced number of steps (16%). We note that due amount of resources it would require to tune BERT, we used Shampoo with exact same hyper-parameters as LAMB with grafting to understand the effect of preconditioner. Moreover, step time can be optimized considerably as the current implementation is not heavily optimized. For example, larger batch sizes help amortize the preconditioning overhead, and reduce overall wall time to reach the same accuracy. Furthermore, in our current implementation, all TPU cores compute all the preconditioning statistics and the preconditioned gradients, which involves over a hundred 1024 × 1024 matrix multiplications. This repeated work can be avoided by cross-replica sharding of weight update [Xu et al., 2020], which distributes this computation across cores, and should save at least half the step time overhead. Baseline results with LAMB optimizer used highly tuned learning rates. No tuning was carried out for Shampoo other than grafting the layer wise learning rates from LAMB. 
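As a concrete illustration of the grafting used throughout these experiments (first-order magnitude, Shampoo direction), a single grafted update with diagonal AdaGrad can be sketched as follows; the variable names and the small numerical constants are our assumptions rather than the released implementation.

```python
import numpy as np

def grafted_shampoo_update(W, G, D, L_inv_root, R_inv_root, lr, eps=1e-12):
    # D accumulates elementwise squared gradients (diagonal AdaGrad statistics);
    # L_inv_root and R_inv_root are the precomputed L^{-1/4} and R^{-1/4} factors.
    D = D + G * G
    adagrad_step = G / (np.sqrt(D) + eps)            # D_t^{-1/2} (elementwise) applied to G_t
    magnitude = lr * np.linalg.norm(adagrad_step)    # A_t: step size taken from the grafted optimizer

    shampoo_dir = L_inv_root @ G @ R_inv_root        # Shampoo direction ...
    shampoo_dir = shampoo_dir / (np.linalg.norm(shampoo_dir) + eps)  # ... at unit Frobenius norm

    W_new = W - magnitude * shampoo_dir
    return W_new, D
```

For the BERT and ResNet runs the structure is the same, but the magnitude comes from the layer-wise LARS/LAMB scaling ||W||_F / ||G||_F instead of diagonal AdaGrad.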
# G.3 CIFAR-10 We train a ResNet-50 model on CIFAR-10 [Krizhevsky et al., 2009] with 2 cores of CloudTPU-v2 at batch size 2048. Our baseline achieves 93.45% accuracy at 300 epochs, where as Shampoo reaches the same accuracy in 143 epochs. We see an overall training time reduction of 42% (1428 seconds to 827 seconds). As it is a smaller problem, the time taken for preconditioner inverse computation for the largest preconditioning matrix is less than 1ms on the CPU. We use a total of 8 CPU cores to run these inverses. # G.4 Detailed results for experiments Approximate wall clock times for the various tasks are as follows: 23 Experiment (TPU cores) Optimizer Batch Optimizer Parameters Warmup Transformer (32) Transformer-Big (32) Transformer-Big (32) Bert-Large (256) Adam Adagrad Shampoo Adam Adagrad Shampoo Adagrad Shampoo LAMB Shampoo 1536 1536 1536 384 384 384 1536 1536 16384 16384 η = 0.000225, β1 = 0.9, β2 = 0.98 η = 0.125, β1 = 0.95 η = 0.225, β1 = 0.95, κ = 500 τ1 = 1000, τ2 = 1 η = 0.000154, β1 = 0.9, β2 = 0.999 η = 0.03, β1 = 0.9 η = 0.06, β1 = 0.9, κ = 500 τ1 = 1000, τ2 = 1 η = 0.06, β1 = 0.9 η = 0.08, β1 = 0.9, κ = 500 τ1 = 1000, τ2 = 1 η = 0.0060 β1 = 0.9, β2 = 0.999 η = 0.0060 β1 = 0.9, β2 = 0.999, λ2 = 10−2, τ1 = 400, τ2 = 10 Block size: 1024 40k steps 40k steps 40k steps 40k steps 40k steps 40k steps 40k steps 40k steps 6.4k steps 6.4k steps DLRM (32) SGD Shampoo (w/ embd) 65536 65536 65536 η = 0.1, poly decay(p=2) at 38k steps η = 0.1 poly decay(p=2) at 38k steps β1 = 0.9, τ1 = 999, τ2 = 10 ηembd = 0.31 2k steps 2k steps Table 4: Experimentation setup, including number of TPU cores, as well hyper-parameters used in our experiments. Task Recommendations: Criteo-1Tb Translation: WMT-14 En-Fr Translation: WMT-14 En-Fr Language Modeling: Wikipedia+Books BERT-Large Model DLRM Transformer Transfomer-Big Baseline 13 min ≈ 12 hrs ≈ 47 hrs 228 mins Shampoo 8.2 min 6.5 hrs 29.5 hrs 219 mins # G.5 Breakdown of step-time in Fig. 3b Each step of training consists of the following phases, whose times are shown in Fig. 3b. Forward Pass: Each core independently computes the predictions for each training example in its sub-batch. Gradient: The gradient is for the sub-batch is computed using the back-propagation algorithm. All reduction: The gradients for the sub-batches from all cores are averaged to compute the gradient for the minibatch. This is then sent back to each core. • Preconditioner statistics: The preconditioner statistics for adaptive algorithms are updated, e.g. for 𝑖 for all parameters, while for Shampoo, we set 𝐿𝑖 := 𝐿𝑖 + 𝐺𝐺T etc. Preconditioned gradient: The preconditioned gradient is computed - e.g. for AdaGrad, we compute √ 𝐻𝑖, while for Shampoo, we compute 𝐿−1/4𝐺 𝑅−1/4. 𝑔𝑖/ Parameter updates: The parameters are updated using the preconditioned gradients. This step is the same for all algorithms: 𝑊 := 𝑊 − η ˜𝐺, where ˜𝐺 is the preconditioned gradient. Note that the Shampoo computation of the preconditioners 𝐿−1/4, 𝑅−1/4 is pipelined on the host CPU, so does not show up in the step times. 24
{ "id": "1904.00962" }

2002.08307
Compressing BERT: Studying the Effects of Weight Pruning on Transfer Learning
Pre-trained universal feature extractors, such as BERT for natural language processing and VGG for computer vision, have become effective methods for improving deep learning models without requiring more labeled data. While effective, feature extractors like BERT may be prohibitively large for some deployment scenarios. We explore weight pruning for BERT and ask: how does compression during pre-training affect transfer learning? We find that pruning affects transfer learning in three broad regimes. Low levels of pruning (30-40%) do not affect pre-training loss or transfer to downstream tasks at all. Medium levels of pruning increase the pre-training loss and prevent useful pre-training information from being transferred to downstream tasks. High levels of pruning additionally prevent models from fitting downstream datasets, leading to further degradation. Finally, we observe that fine-tuning BERT on a specific task does not improve its prunability. We conclude that BERT can be pruned once during pre-training rather than separately for each task without affecting performance.
http://arxiv.org/pdf/2002.08307
Mitchell A. Gordon, Kevin Duh, Nicholas Andrews
cs.CL
Accepted to Rep4NLP 2020 Workshop at ACL 2020 Conference
null
cs.CL
20200219
20200514
0 2 0 2 y a M 4 1 ] L C . s c [ 2 v 7 0 3 8 0 . 2 0 0 2 : v i X r a # Compressing BERT: Studying the Effects of Weight Pruning on Transfer Learning Mitchell A. Gordon & Kevin Duh & Nicholas Andrews Johns Hopkins University [email protected], [email protected], [email protected] # Abstract Pre-trained feature extractors, such as BERT for natural language processing and VGG for computer vision, have become effective meth- ods for improving deep learning models with- out requiring more labeled data. While ef- fective, these feature extractors may be pro- hibitively large for some deployment scenar- ios. We explore weight pruning for BERT and ask: how does compression during pre- training affect transfer learning? We find that pruning affects transfer learning in three broad regimes. Low levels of pruning (30-40%) do not affect pre-training loss or transfer to down- stream tasks at all. Medium levels of pruning increase the pre-training loss and prevent use- ful pre-training information from being trans- ferred to downstream tasks. High levels of pruning additionally prevent models from fit- ting downstream datasets, leading to further degradation. Finally, we observe that fine- tuning BERT on a specific task does not im- prove its prunability. We conclude that BERT can be pruned once during pre-training rather than separately for each task without affecting performance. # Introduction Pre-trained feature extractors, such as BERT (De- vlin et al., 2018) for natural language processing and VGG (Simonyan and Zisserman, 2014) for computer vision, have become effective methods for improving the performance of deep learning models. In the last year, models similar to BERT have become state-of-the-art in many NLP tasks, including natural language inference (NLI), named entity recognition (NER), sentiment analysis, etc. These models follow a pre-training paradigm: they are trained on a large amount of unlabeled text via a task that resembles language modeling (Yang et al., 2019; Chan et al., 2019) and are then fine-tuned on a smaller amount of downstream data, which is labeled for a specific task. Pre-trained models usually achieve higher accuracy than any model trained on downstream data alone. The pre-training paradigm, while effective, still has some problems. While some claim that lan- guage model pre-training is a “universal language learning task” (Radford et al., 2019), there is no theoretical justification for this, only empirical evi- dence. Second, due to the size of the pre-training dataset, BERT models tend to be slow and re- quire impractically large amounts of GPU memory. BERT-Large can only be used with access to a Google TPU, and BERT-Base requires some opti- mization tricks such as gradient checkpointing or gradient accumulation to be trained effectively on consumer hardware (Sohoni et al., 2019). Train- ing BERT-Base from scratch costs ∼$7k and emits ∼1438 pounds of CO2 (Strubell et al., 2019). Model compression (Bucila et al., 2006), which attempts to shrink a model without losing accuracy, is a viable approach to decreasing GPU usage. It might also be used to trade accuracy for memory in some low-resource cases, such as deploying to smartphones for real-time prediction. The main questions this paper attempts to answer are: Does compressing BERT impede it’s ability to trans- fer to new tasks? And does fine-tuning make BERT more or less compressible? 
To explore these questions, we compressed En- glish BERT using magnitude weight pruning (Han et al., 2015) and observed the results on transfer learning to the General Language Understanding Evaluation (GLUE) benchmark (Wang et al., 2019), a diverse set of natural language understanding tasks including sentiment analysis, NLI, and tex- tual similarity evaluation. We chose magnitude weight pruning, which compresses models by re- moving weights close to 0, because it is one of the most fine-grained and effective compression meth- ods and because there are many interesting ways to view pruning, which we explore in the next section. Our findings are as follows: Low levels of prun- ing (30-40%) do not increase pre-training loss or affect transfer to downstream tasks at all. Medium levels of pruning increase the pre-training loss and prevent useful pre-training information from be- ing transferred to downstream tasks. This infor- mation is not equally useful to each task; tasks degrade linearly with pre-train loss, but at different rates. High levels of pruning, depending on the size of the downstream dataset, may additionally degrade performance by preventing models from fitting downstream datasets. Finally, we observe that fine-tuning BERT on a specific task does not improve its prunability or change the order of prun- ing by a meaningful amount. To our knowledge, prior work had not shown whether BERT could be compressed in a task- generic way, keeping the benefits of pre-training while avoiding costly experimentation associated with compressing and re-training BERT multiple times. Nor had it shown whether BERT could be over-pruned for a memory / accuracy trade-off for deployment to low-resource devices. In this work, we conclude that BERT can be pruned prior to dis- tribution without affecting it’s universality, and that BERT may be over-pruned during pre-training for a reasonable accuracy trade-off for certain tasks. # 2 Pruning: Compression, Regularization, Architecture Search Neural network pruning involves examining a trained network and removing parts deemed to be unnecessary by some heuristic saliency crite- rion. One might remove weights, neurons, layers, channels, attention heads, etc. depending on which heuristic is used. Below, we describe three different lenses through which we might interpret pruning. Compression Pruning a neural network de- creases the number of parameters required to spec- ify the model, which decreases the disk space re- quired to store it. This allows large models to be deployed on edge computing devices like smart- phones. Pruning can also increase inference speed if whole neurons or convolutional channels are pruned, which reduces GPU usage.1 Regularization Pruning a neural network also regularizes it. We might consider pruning to be 1If weights are pruned, however, the weight matrices be- come sparse. Sparse matrix multiplication is difficult to opti- mize on current GPU architectures (Han et al., 2016), although progress is being made. a form of permanent dropout (Molchanov et al., 2017) or a heuristic-based L0 regularizer (Louizos et al., 2018). Through this lens, pruning decreases the complexity of the network and therefore nar- rows the range of possible functions it can express.2 The main difference between L0 or L1 regulariza- tion and weight pruning is that the former induce sparsity via a penalty on the loss function, which is learned during gradient descent via stochastic relaxation. It’s not clear which approach is more principled or preferred. 
(Gale et al., 2019) Sparse Architecture Search Finally, we can view neural network pruning as a type of sparse architecture search. Liu et al. (2019b) and Frankle and Carbin (2019) show that they can train care- fully re-initialized pruned architectures to similar performance levels as dense networks. Under this lens, stochastic gradient descent (SGD) induces network sparsity, and pruning simply makes that sparsity explicit. These sparse architectures, along with the appropriate initializations, are sometimes referred to as lottery tickets.3 # 2.1 Magnitude Weight Pruning In this work, we focus on weight magnitude prun- ing because it is one of the most fine-grained and effective pruning methods. It also has a compelling saliency criterion (Han et al., 2015): if a weight is close to zero, then its input is effectively ignored, which means the weight can be pruned. Magnitude weight pruning itself is a simple pro- cedure: 1. Pick a target percentage of weights to be pruned, say 50%. 2. Calculate a threshold such that 50% of weight magnitudes are under that threshold. 3. Remove those weights. 4. Continue training the network to recover any lost accuracy. 5. Option- ally, return to step 1 and increase the percentage of weights pruned. This procedure is conveniently implemented in a Tensorflow (Abadi et al., 2016) package4, which we use (Zhu and Gupta, 2017). Calculating a threshold and pruning can be done for all network parameters holistically (global prun- ing) or for each weight matrix individually (matrix- 2Interestingly, recent work used compression not to induce simplicity but to measure it (Arora et al., 2018). 3Sparse networks are difficult to train from scratch (Evci et al., 2019). However, Dettmers and Zettlemoyer (2019) and Mostafa and Wang (2019) present methods to do this by al- lowing SGD to search over the space of possible subnetworks. Our findings suggest that these methods might be used to train sparse BERT from scratch. 4https://www.tensorflow.org/versions/ r1.15/api_docs/python/tf/contrib/model_ pruning local pruning). Both methods will prune to the same sparsity, but in global pruning the sparsity might be unevenly distributed across weight ma- trices. We use matrix-local pruning because it is more popular in the community.5 For information on other pruning techniques, we recommend Gale et al. (2019) and Liu et al. (2019b). # 3 Experimental Setup BERT is a large Transformer encoder; for back- ground, we refer readers to Vaswani et al. (2017) or one of these excellent tutorials (Alammar, 2018; Klein et al., 2017). # Implementing BERT Pruning BERT-Base consists of 12 encoder layers, each of which contains 6 prunable matrices: 4 for the multi- headed self-attention and 2 for the layer’s output feed-forward network. Recall that self-attention first projects layer in- puts into key, query, and value embeddings via linear projections. While there is a separate key, query, and value projection matrix for each atten- tion head, implementations typically stack matrices from each attention head, resulting in only 3 pa- rameter matrices: one for key projections, one for value projections, and one for query projections. We prune each of these matrices separately, calcu- lating a threshold for each. 
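To make the matrix-local procedure concrete, a per-matrix pruning step can be sketched as follows. This is illustrative only: the experiments use the tf.contrib.model_pruning package mentioned above rather than this code, and the function name is ours.

```python
import numpy as np

def magnitude_prune(weight, sparsity):
    # Compute a per-matrix threshold such that `sparsity` fraction of the
    # weight magnitudes fall below it, then zero those weights out.
    threshold = np.quantile(np.abs(weight), sparsity)
    mask = (np.abs(weight) > threshold).astype(weight.dtype)
    return weight * mask, mask

# During the recovery-training phase, the mask is reapplied (or gradients are
# masked) so that pruned weights stay at zero while the remaining weights train.
```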
We also prune the linear output projection, which combines outputs from each attention head into a single embedding.6 We prune word embeddings in the same way we prune feed-foward networks and self-attention pa- rameters.7 The justification is similar: if a word embedding value is close to zero, we can assume it’s zero and store the rest in a sparse matrix. This is useful because token / subword embeddings tend to account for a large portion of a natural lan- guage model’s memory. In BERT-Base specifically, 5The weights in almost every matrix in BERT-Base are approximately normally distributed with mean 0 and variance between 0.03 and 0.05 (Table A). This similarity may imply that global pruning would perform similarly to matrix-local pruning. 6We could have calculated a single threshold for the entire self-attention layer or for each attention head separately. Sim- ilar to global pruning vs. matrix-local pruning, it’s not clear which one should be preferred. 7Interestingly, pruning word embeddings is slightly more interpretable that pruning other matrices. See Figure 8 for a heatmap of embedding magnitudes, which shows that shorter subwords tend to be pruned more than longer subwords and that certain dimensions are almost never pruned in any sub- word. the embeddings account for ∼21% of the model’s memory. Our experimental code for pruning BERT, based on the public BERT repository, is available here.8 # 3.2 Pruning During Pre-Training We perform weight magnitude pruning on a pre- trained BERT-Base model.9 We select sparsities from 0% to 90% in increments of 10% and gradu- ally prune BERT to this sparsity over the first 10k steps of training. We continue pre-training on En- glish Wikipedia and BookCorpus for another 90k steps to regain any lost accuracy.10 The resulting pre-training losses are shown in Table 1. We then fine-tune these pruned models on tasks from the General Language Understanding Evalu- ation (GLUE) benchmark, which is a standard set of 9 tasks that include sentiment analysis, natural language inference, etc. We avoid WNLI, which is known to be problematic.11 We also avoid tasks with less than 5k training examples because the results tend to be noisy (RTE, MRPC, STS-B). We fine-tune a separate model on each of the remaining 5 GLUE tasks for 3 epochs and try 4 learning rates: [2, 3, 4, 5] × 10−5. The best evaluation accuracies are averaged and plotted in Figure 1. Individual task results are in Table 1. BERT can be used as a static feature-extractor or as a pre-trained model which is fine-tuned end- to-end. In all experiments, we fine-tune weights in all layers of BERT on downstream tasks. # 3.3 Disentangling Complexity Restriction and Information Deletion Pruning involves two steps: it deletes the informa- tion stored in a weight by setting it to 0 and then regularizes the model by preventing that weight from changing during further training. To disentangle these two effects (model complex- ity restriction and information deletion), we repeat the experiments from Section 3.2 with an identical pre-training setup, but instead of pruning we simply set the weights to 0 and allow them to vary during downstream training. This deletes the pre-training information associated with the weight but does not prevent the model from fitting downstream datasets by keeping the weight at zero during downstream training. We also fine-tune on downstream tasks 8https://github.com/mitchellgordon95/bert-prune 9https://github.com/google-research/bert 10 Evaluation curves leveled out at 20k steps. 
11https://gluebenchmark.com/faq until training loss becomes comparable to models with no pruning. We trained most models for 13 epochs rather than 3. Models with 70-90% informa- tion deletion required 15 epochs to fit the training data. The results are also included in Figure 1 and Table 1. # 3.4 Pruning After Downstream Fine-tuning We might expect that BERT would be more com- pressible after downstream fine-tuning. Intuitively, the information needed for downstream tasks is a subset of the information learned during pre- training; some tasks require more semantic infor- mation than syntactic, and vice-versa. We should be able to discard the “extra” information and only keep what we need for, say, parsing (Li and Eisner, 2019). For magnitude weight pruning specifically, we might expect downstream training to change the distribution of weights in the parameter matrices. This, in turn, changes the sort-order of the abso- lute values of those weights, which changes the order that we prune them in. This new pruning order, hypothetically, would be less degrading to our specific downstream task. To test this, we fine-tuned pre-trained BERT- Base on downstream data for 3 epochs. We then pruned at various sparsity levels and continued training for 5 more epochs (7 for 80/90% spar- sity), at which point the training losses became comparable to those of models pruned during pre- training. We repeat this for learning rates in [2, 3, 4, 5]×10−5 and show the results with the best development accuracy in Figure 1 / Table 1. We also measure the difference in which weights are selected for pruning during pre-training vs. down- stream fine-tuning and plot the results in Figure 3. # 4 Pruning Regimes # 30-40% of Weights Are Discardable Figure 1 shows that the first 30-40% of weights pruned by magnitude weight pruning do not impact pre-training loss or inference on any downstream task. These weights can be pruned either before or after fine-tuning. This makes sense from the perspective of pruning as sparse architecture search: when we initialize BERT-Base, we initialize many possible subnetworks. SGD selects the best one for pre-training and pushes the rest of the weights to 0. We can then prune those weights without affecting the output of the network.12 # 4.2 Medium Pruning Levels Prevent Information Transfer Past 40% pruning, performance starts to degrade. Pre-training loss increases as we prune weights necessary for fitting the pre-training data (Table 1). Feature activations of the hidden layers start to diverge from models with low levels of pruning (Figure 2).13 Downstream accuracy also begins to degrade at this point. Why does pruning at these levels hurt down- stream performance? On one hand, pruning deletes pre-training information by setting weights to 0, preventing the transfer of the useful inductive bi- ases learned during pre-training. On the other hand, pruning regularizes the model by keeping certain weights at zero, which might prevent fitting down- stream datasets. Figure 1 and Table 1 show information deletion is the main cause of performance degradation be- tween 40 - 60% sparsity, since pruning and informa- tion deletion degrade models by the same amount. Information deletion would not be a problem if pre- training and downstream datasets contained simi- lar information. However, pre-training is effective precisely because the pre-training dataset is much larger than the labeled downstream dataset, which allows learning of more robust representations. 
We see that the main obstacle to compressing pre-trained models is maintaining the inductive bias of the model learned during pre-training. Encoding this bias requires many more weights than fitting downstream datasets, and it cannot be recovered due to a fundamental information gap between pre- training and downstream datasets.14 This leads us to believe that the amount a model can be pruned 12We know, however, that increasing the size of BERT to BERT-Large improves performance. This view does not fully explain why even an obviously under-parameterized model should become sparse. This may be caused by dropout, or it may be a general property of our training regime (SGD). Per- haps an extension of Tian et al. (2019) to under-parameterized models would provide some insight. 13We believe this observation may point towards a more principled stopping criterion for pruning. Currently, the only way to know how much to prune is by trial and (dev-set) error. Predictors of performance degradation while pruning might help us decide which level of sparsity is appropriate for a given trained network without trying many at once. 14We might consider finding a lottery ticket for BERT, which we would expect to fit the GLUE training data just as well as pre-trained BERT (Morcos et al., 2019; Yu et al., 2019). However, we predict that the lottery-ticket will not reach similar generalization levels unless the lottery ticket encodes enough information to close the information gap. Average GLUE Dev Acc ae 0.85 0.80 u < g 0.75 —— prune pretrain —— info deletion 0.707 —— prunedownstream Nf — random pruning == BERT 0% Prune 0.65 0.0 0.2 0.4 0.6 0.8 Prune Percentage Average GLUE Training Loss 0.6 | —— prune pretrain — info deletion 0.5 | —— Prune downstream — random pruning =-= BERT 0% Prune B04 g é 0.3 £0. 0.2 ee 1 0.0 0.2 0.4 0.6 0.8 Prune Percentage Average GLUE Dev Acc Average GLUE Training Loss ae 0.6 | —— prune pretrain 0.85 — info deletion 0.5 | —— Prune downstream — random pruning 0.80 =-= BERT 0% Prune u B04 < g g 0.75 é 0.3 £0. —— prune pretrain —— info deletion 0.2 0.707 —— prunedownstream Nf ee — random pruning == BERT 0% Prune 1 0.65 0.0 0.2 0.4 0.6 0.8 0.0 0.2 0.4 0.6 0.8 Prune Percentage Prune Percentage Figure 1: (Blue) The best GLUE dev accuracy and training losses for models pruned during pre-training, averaged over 5 tasks. Also shown are models with information deletion during pre-training (orange), models pruned after downstream fine-tuning (green), and models pruned randomly during pre-training instead of by lowest magnitude (red). 30-40% of weights can be pruned using magnitude weight pruning without decreasing dowsntream accuracy. Notice that information deletion fits the training data better than un-pruned models at all sparsity levels but does not fully recover evaluation accuracy. Also, models pruned after downstream fine-tuning have the same or worse development accuracy, despite achieving lower training losses. Note: none of the pruned models are overfitting because un-pruned models have the lowest training loss and the highest development accuracy. While the results for individual tasks are in Table 1, each task does not vary much from the average trend, with an exception discussed in Section 4.3. Pre-training Loss vs. Information Deletion Glue Accuracy 0.90 0.85 y Pi $ 0.80 6 0.75 0.70 ° 20 25 30 35 40 45 50 55 60 Pre-Training Loss Average Feature Cosine Sim with Prune 0 10 os ge 0.6 a 2 a ° 04 0.2 0.0 0.0 02 0.4 0.6 08 Prune Percentage Average Pre-training Loss vs. 
Information Deletion Glue Accuracy 10 0.90 os 0.85 ge 0.6 y a Pi 2 $ 0.80 a 6 ° 04 0.75 0.2 0.70 ° 0.0 20 25 30 35 40 45 50 55 60 0.0 02 0.4 0.6 08 Pre-Training Loss Prune Percentage Figure 2: (Left) Pre-training loss predicts information deletion GLUE accuracy linearly as sparsity increases. We believe the slope of each line tells us how much a bit of BERT is worth to each task. (CoLA at 90% is excluded (Right) The cosine similarities of features extracted for a subset of the pre-training from the line of best fit.) development data before and after pruning. Features are extracted from activations of all 12 layers of BERT and compared layer-wise to a model that has not been pruned. As performance degrades, cosine similarities of features decreases. is limited by the largest dataset the model has been trained on: in this case, the pre-training dataset. 15 # 4.3 High Pruning Levels Also Prevent Fitting Downstream Datasets At 70% sparsity and above, models with informa- tion deletion recover some accuracy w.r.t. pruned models, so complexity restriction is a secondary cause of performance degradation. However, these models do not recover all evaluation accuracy, de- spite matching un-pruned model’s training loss. Table 1 shows that on the MNLI and QQP tasks, which have the largest amount of training data, in- formation deletion performs much better than prun- ing. In contrast, models do not recover as well on SST-2 and CoLA, which have less data. We believe this is because the larger datasets require larger models to fit, so complexity restriction becomes an issue earlier. We might be concerned that poorly performing models are over-fitting, since they have lower train- ing losses than unpruned models. But the best performing information-deleted models have the lowest training error of all, so overfitting seems unlikely.16 # 4.4 How Much Is A Bit Of BERT Worth? We’ve seen that over-pruning BERT deletes infor- mation useful for downstream tasks. Is this in- formation equally useful to all tasks? We might consider the pre-training loss as a proxy for how much pre-training information we’ve deleted in total. Similarly, the performance of information- deletion models is a proxy for how much of that information was useful for each task. Figure 2 shows that the pre-training loss linearly predicts the effects of information deletion on downstream accuracy. For every bit of information we delete from BERT, it appears only a fraction is useful for CoLA, and an even smaller fraction useful for QQP.17 This relationship should be taken into account when con- sidering the memory / accuracy trade-off of over- pruning. Pruning an extra 30% of BERT’s weights 15We would have more confidence in this supposition if we had experiments where the pre-training data is much smaller than the downstream data. It would also be useful to have a more information-theoretic analysis of how data complexity influences model compressibility. This is may be an interesting direction for future work. 16We are reminded of the double-descent risk curve pro- posed by Belkin et al. (2018). 17We can’t quantify this now, but perhaps compression will help quantify the “universality” of the LM task. is worth only one accuracy point on QQP but 10 points on CoLA. 
It’s unclear, however, whether this is because the pre-training task is less relevant to QQP or whether QQP simply has a bigger dataset with more information content.18 # 5 Downstream Fine-tuning Does Not Improve Prunability Since pre-training information deletion plays a cen- tral role in performance degradation while over- pruning, we might expect that downstream fine- tuning would improve prunability by making im- portant weights more salient (increasing their mag- nitude). However, Figure 1 shows that models pruned after downstream fine-tuning do not sur- pass the development accuracies of models pruned during pre-training, despite achieving similar train- ing losses. Figure 3 shows fine-tuning changes which weights are pruned by less than 6%. Why doesn’t fine-tuning change which weights are pruned much? Table 2 shows that the magni- tude sorting order of weights is mostly preserved; weights move on average 0-4% away from their starting positions in the sort order. We also see that high magnitude weights are more stable than lower ones (Figure 6). Our experiments suggest that training on down- stream data before pruning is too blunt an instru- ment to improve prunability. Even so, we might consider simply training on the downstream tasks for much longer, which would increase the differ- ence in weights pruned. However, Figure 4 shows that even after an epoch of downstream fine-tuning, weights quickly re-stabilize in a new sorting order, meaning longer downstream training will have only a marginal effect on which weights are pruned. In- deed, Figure 3 shows that the weights selected for 60% pruning quickly stabilize and evaluation accu- racy does not improve with more training before pruning. # 6 Related Work Compressing BERT for Specific Tasks Section 5 showed that downstream fine-tuning does not in- crease prunability. However, several alternative compression approaches have been proposed to dis- card non-task-specific information. Li and Eisner (2019) used an information bottleneck to discard 18Hendrycks et al. (2019) suggest that pruning these weights might have a hidden cost: decreasing model robust- ness. Pre-train vs. Downstream Pruning o MNLI oP QNLI SST-2 COLA ra eeeoe 5 Pruning Mask Difference (%) we aumene « oe N ep 0 e 0.0 0.2 0.4 0.6 0.8 Sparsity 100 Effect of Fine-tuning Before Pruning 60% 6 0.95 52 vo © g 0.90 Â¥ < 4g 3 @ & 0.85 € 5 30 x a 0.80 4 a 2 = 20.75 2 5 0.70 1g 0.65 -—+ 0 i?) 2 4 6 8 10 12 Downstream Epochs Before Pruning Figure 3: (Top) The measured difference in pruning masks between models pruned during pre-training and models pruned during downstream fine-tuning. As pre- dicted, the differences are less than 6%, since fine- tuning only changes the magnitude sorting order of weights locally, not globally. (Bottom) The average GLUE development accuracy and pruning mask differ- ence for models trained on downstream datasets before pruning 60% at learning rate 5e-5. After pruning, mod- els are trained for an additional 2 epochs to regain accu- racy. We see that training between 3 and 12 epochs be- fore pruning does not change which weights are pruned or improve performance. non-syntactic information. Tang et al. (2019) used BERT as a knowledge distillation teacher to com- press relevant information into smaller Bi-LSTMs, while Kuncoro et al. (2019) took a similar distilla- tion approach. While fine-tuning does not increase prunability, task-specific knowledge might be ex- tracted from BERT with other methods. 
Attention Head Pruning previously showed redundancy in transformer models by pruning en- tire attention heads. Michel et al. (2019) showed that after fine-tuning on MNLI, up to 40% of at- tention heads can be pruned from BERT without affecting test accuracy. They show redundancy in BERT after fine-tuning on a single downstream task; in contrast, our work emphasizes the inter- play between compression and transfer learning to many tasks, pruning both before and after fine- tuning. Also, magnitude weight pruning allows us to additionally prune the feed-foward networks and sub-word embeddings in BERT (not just self- attention), which account for ∼72% of BERT’s total memory usage. We suspect that attention head pruning and weight pruning remove different redundancies from BERT. Figure 4 shows that weight pruning does not prune any specific attention head much more than the pruning rate for the whole model. It is not clear, however, whether weight pruning and recov- ery training makes attention heads less prunable by distributing functionality to unused heads. # 7 Conclusion And Future Work We’ve shown that encoding BERT’s inductive bias requires many more weights than are required to fit downstream data. Future work on compressing pre-trained models should focus on maintaining that inductive bias and quantifying its relevance to various tasks during accuracy/memory trade-offs. For magnitude weight pruning, we’ve shown that 30-40% of the weights do not encode any useful in- ductive bias and can be discarded without affecting BERT’s universality. The relevance of the rest of the weights vary from task to task, and fine-tuning on downstream tasks does not change the nature of this trade-off by changing which weights are pruned. In future work, we will investigate the fac- tors that influence language modeling’s relevance to downstream tasks and how to improve compres- sion in a task-general way. It’s reasonable to believe that these conclusions will generalize to other pre-trained language mod- % of Individual Attention Head Pruned 1.0 0.8 0.6 0.4 % of Attn Head Pruned 0.2 0.0 0.0 0.2 0.4 0.6 08 Sparsity Weight Sort Order Movement During Downstream Training ‘Avg % Movement of Weights in Sort Order 10 0 2 4 6 8 Downstream Epochs Trained 2 % of Individual Attention Head Pruned 1.0 0.8 0.6 0.4 % of Attn Head Pruned 0.2 0.0 Weight Sort Order Movement During Downstream Training ‘Avg % Movement of Weights in Sort Order 0.0 0.2 0.4 0.6 08 Sparsity 10 0 2 4 6 8 Downstream Epochs Trained 2 Figure 4: (Left) The average, min, and max percentage of individual attention heads pruned at each sparsity level. We see at 60% sparsity, each attention head individually is pruned strictly between 55% and 65%. (Right) We compute the magnitude sorting order of each weight before and after downstream fine-tuning. If a weight’s original position is 59 / 100 before fine-tuning and 63 / 100 after fine-tuning, then that weight moved 4% in the sorting order. After even an epoch of downstream fine-tuning, weights quickly stabilize in a new sorting order which is not far from the original sorting order. Variances level out similarly. els such as Kermit (Chan et al., 2019), XLNet (Yang et al., 2019), GPT-2 (Radford et al., 2019), RoBERTa (Liu et al., 2019a) or ELMO (Peters et al., 2018). All of these learn some variant of language modeling, and most use Transformer ar- chitectures. 
While it remains to be shown in fu- ture work, viewing pruning as architecture search implies these models will be prunable due to the training dynamics inherent to neural networks. ing practice and the bias-variance trade-off. arXiv e-prints, page arXiv:1812.11118. and Alexandru Niculescu-Mizil. 2006. Model compression. In Pro- ceedings of the Twelfth ACM SIGKDD International Conference on Knowledge Discovery and Data Min- ing, Philadelphia, PA, USA, August 20-23, 2006, pages 535–541. # References William Chan, Nikita Kitaev, Kelvin Guu, Mitchell Stern, and Jakob Uszkoreit. 2019. KERMIT: genera- tive insertion-based modeling for sequences. CoRR, abs/1906.01604. Mart´ın Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Gregory S. Cor- rado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian J. Goodfellow, Andrew Harp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal J´ozefowicz, Lukasz Kaiser, Manjunath Kudlur, Josh Levenberg, Dan Man´e, Rajat Monga, Sherry Moore, Derek Gordon Murray, Chris Olah, Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Ku- nal Talwar, Paul A. Tucker, Vincent Vanhoucke, Vi- jay Vasudevan, Fernanda B. Vi´egas, Oriol Vinyals, Pete Warden, Martin Wattenberg, Martin Wicke, Yuan Yu, and Xiaoqiang Zheng. 2016. Tensorflow: Large-scale machine learning on heterogeneous dis- tributed systems. CoRR, abs/1603.04467. Jay Alammar. 2018. The illustrated transformer. Tim Dettmers and Luke S. Zettlemoyer. 2019. Sparse networks from scratch: Faster training without los- ing performance. ArXiv, abs/1907.04840. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understand- ing. CoRR, abs/1810.04805. Utku Evci, Fabian Pedregosa, Aidan N. Gomez, and Erich Elsen. 2019. The difficulty of training sparse neural networks. CoRR, abs/1906.10732. Jonathan Frankle and Michael Carbin. 2019. The lot- tery ticket hypothesis: Finding sparse, trainable neu- ral networks. In International Conference on Learn- ing Representations. Sanjeev Arora, Rong Ge, Behnam Neyshabur, and Yi Zhang. 2018. Stronger generalization bounds for deep nets via a compression approach. CoRR, abs/1802.05296. Trevor Gale, Erich Elsen, and Sara Hooker. 2019. The state of sparsity in deep neural networks. CoRR, abs/1902.09574. Mikhail Belkin, Daniel Hsu, Siyuan Ma, and Soumik Mand al. 2018. Reconciling modern machine learn- Song Han, Xingyu Liu, Huizi Mao, Jing Pu, Ardavan Pedram, Mark A. Horowitz, and William J. Dally. 2016. Eie: Efficient inference engine on compressed In Proceedings of the 43rd deep neural network. International Symposium on Computer Architecture, ISCA ’16, pages 243–254, Piscataway, NJ, USA. IEEE Press. Song Han, Jeff Pool, John Tran, and William Dally. Learning both weights and connections 2015. for efficient neural network. In C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, and R. Gar- nett, editors, Advances in Neural Information Pro- cessing Systems 28, pages 1135–1143. Curran Asso- ciates, Inc. Dan Hendrycks, Kimin Lee, and Mantas Mazeika. 2019. Using pre-training can improve model robust- ness and uncertainty. In ICML, pages 2712–2721. Guillaume Klein, Yoon Kim, Yuntian Deng, Jean Senel- lart, and Alexander M. Rush. 2017. Opennmt: Open-source toolkit for neural machine translation. In Proc. ACL. Adhiguna Kuncoro, Chris Dyer, Laura Rimell, Stephen Clark, and Phil Blunsom. 2019. Scalable syntax- aware language models using knowledge distillation. 
CoRR, abs/1906.06438. Xiang Lisa Li and Jason Eisner. 2019. Specializing word embeddings (for parsing) by information bot- tleneck. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and 9th International Joint Conference on Natural Language Processing, Hong Kong. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Man- dar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019a. Roberta: A robustly optimized BERT pretraining ap- proach. CoRR, abs/1907.11692. Zhuang Liu, Mingjie Sun, Tinghui Zhou, Gao Huang, and Trevor Darrell. 2019b. Rethinking the value of In International Conference on network pruning. Learning Representations. Christos Louizos, Max Welling, and Diederik P. Kingma. 2018. Learning sparse neural networks through l-0 regularization. In International Confer- ence on Learning Representations. Paul Michel, Omer Levy, and Graham Neubig. 2019. Are sixteen heads really better than one? ArXiv, abs/1905.10650. Dmitry Molchanov, Arsenii Ashukha, and Dmitry Vetrov. 2017. Variational dropout sparsifies deep In Proceedings of the 34th Inter- neural networks. national Conference on Machine Learning - Volume 70, ICML’17, pages 2498–2507. JMLR.org. Ari S. Morcos, Haonan Yu, Michela Paganini, and Yuand ong Tian. 2019. One ticket to win them all: generalizing lottery ticket initializations across arXiv e-prints, page datasets and optimizers. arXiv:1906.02773. Hesham Mostafa and Xin Wang. 2019. Parameter effi- cient training of deep convolutional neural networks by dynamic sparse reparameterization. Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word repre- sentations. CoRR, abs/1802.05365. Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. Karen Simonyan and Andrew Zisserman. 2014. Very Deep Convolutional Networks for Large- arXiv e-prints, page Scale Image Recognition. arXiv:1409.1556. Nimit Sharad Sohoni, Christopher Richard Aberger, Megan Leszczynski, Jian Zhang, and Christopher R´e. 2019. Low-memory neural network training: A technical report. CoRR, abs/1904.10631. Emma Strubell, Ananya Ganesh, and Andrew McCal- lum. 2019. Energy and policy considerations for deep learning in NLP. CoRR, abs/1906.02243. Raphael Tang, Yao Lu, Linqing Liu, Lili Mou, Olga Vechtomova, and Jimmy Lin. 2019. Distilling task- specific knowledge from BERT into simple neural networks. CoRR, abs/1903.12136. Yuandong Tian, Tina Jiang, Qucheng Gong, and Ari S. Morcos. 2019. Luck matters: Understand- ing training dynamics of deep relu networks. CoRR, abs/1905.13405. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. CoRR, abs/1706.03762. Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R. Bowman. 2019. GLUE: A multi-task benchmark and analysis plat- In Inter- form for natural language understanding. national Conference on Learning Representations. Zhilin Yang, Zihang Dai, Yiming Yang, Jaime G. Car- bonell, Ruslan Salakhutdinov, and Quoc V. Le. 2019. Xlnet: Generalized autoregressive pretraining for language understanding. CoRR, abs/1906.08237. Haonan Yu, Sergey Edunov, Yuandong Tian, and Ari S. Morcos. 2019. Playing the lottery with rewards and multiple languages: lottery tickets in RL and NLP. arXiv e-prints, page arXiv:1906.02768. 
To prune, or not to prune: exploring the efficacy of prun- ing for model compression. arXiv e-prints, page arXiv:1710.01878. # A Appendix Pruned 0 10 20 30 40 50 60 70 80 90 0 10 20 30 40 50 60 70 80 90 0 10 20 30 40 50 60 70 80 90 Pre-train Loss 1.82 1.82 1.83 1.86 1.93 2.03 2.25 2.62 3.44 5.83 1.82 1.82 1.83 1.86 1.93 2.03 2.25 2.62 3.44 5.83 - - - - - - - - - - MNLI 392k 83.1|0.25 83.3|0.21 83.3|0.24 83.3|0.23 83.0|0.25 82.6|0.27 81.8|0.32 79.5|0.40 75.9|0.49 64.8|0.76 83.0|0.20 82.8|0.01 82.9|0.01 82.3|0.01 82.2|0.19 82.5|0.19 81.9|0.20 80.8|0.01 78.6|0.01 72.9|0.01 82.6|0.15 82.9|0.19 82.7|0.15 82.7|0.23 82.7|0.25 82.6|0.19 81.8|0.22 80.5|0.30 73.7|0.53 58.7|0.86 QQP 363k 90.5|0.10 90.4|0.10 90.5|0.11 90.2|0.12 90.1|0.12 89.8|0.13 89.4|0.16 88.6|0.18 86.9|0.24 81.1|0.36 SST-2 67k 92.1|0.06 91.6|0.07 91.6|0.05 91.9|0.06 91.5|0.06 90.9|0.07 91.4|0.07 90.1|0.10 88.1|0.12 80.3|0.25 90.6|0.06 90.5|0.05 90.5|0.05 90.6|0.04 90.5|0.05 90.3|0.05 90.1|0.05 90.2|0.01 89.3|0.02 87.5|0.02 92.1|0.03 92.2|0.05 91.5|0.05 90.8|0.05 92.0|0.05 91.2|0.05 90.8|0.05 90.3|0.06 88.8|0.07 83.0|0.09 90.6|0.06 90.6|0.06 90.6|0.07 90.4|0.07 90.5|0.11 90.3|0.08 90.2|0.10 89.4|0.14 87.8|0.12 82.5|0.26 92.1|0.04 91.6|0.05 92.0|0.04 91.6|0.04 91.7|0.05 90.8|0.06 90.6|0.06 88.2|0.07 86.4|0.07 81.5|0.16 CoLA 8.5k 79.1|0.26 79.4|0.30 79.1|0.30 79.5|0.31 78.4|0.23 77.4|0.30 75.9|0.44 72.7|0.47 69.1|0.61 69.1|0.61 80.6|0.18 80.8|0.16 80.3|0.16 80.0|0.18 79.0|0.17 77.9|0.19 76.4|0.23 74.4|0.28 70.0|0.45 69.1|0.61 78.7|0.25 79.0|0.11 79.0|0.22 78.5|0.23 78.8|0.17 78.0|0.22 76.1|0.31 69.5|0.58 69.1|0.59 69.1|0.61 AVG 87.2|15.7 87.2|16.0 87.1|16.0 87.1|16.9 86.7|15.6 86.2|18.0 85.6|23.0 83.9|27.1 81.1|34.8 73.4|49.8 87.3|11.6 87.4|07.2 87.2|07.3 86.9|07.7 86.7|11.1 86.4|11.6 85.7|12.6 84.9|09.3 82.5|11.5 77.9|15.7 86.8|12.0 86.9|10.3 86.9|10.7 86.6|12.8 86.7|13.9 86.3|13.0 85.6|16.4 82.7|25.8 79.5|30.5 71.4|47.9 QNLI 108k 91.1|0.12 91.0|0.12 91.1|0.11 90.7|0.12 90.4|0.12 90.2|0.13 89.3|0.16 88.4|0.21 85.3|0.29 71.7|0.52 Information Deletion 90.0|0.10 90.5|0.09 90.5|0.09 90.5|0.10 90.1|0.10 90.2|0.10 89.5|0.10 88.7|0.10 86.0|0.02 76.8|0.06 Pruned after Downstream Fine-tuning 90.1|0.10 90.3|0.10 90.2|0.07 89.7|0.07 89.9|0.12 89.7|0.11 89.3|0.12 86.2|0.19 80.4|0.21 65.2|0.52 Random Pruning 90.6|0.15 90.3|0.13 88.5|0.14 86.9|0.23 84.5|0.23 81.5|0.28 71.7|0.45 63.0|0.62 61.1|0.64 60.2|0.65 83.3|0.26 82.0|0.27 80.6|0.32 79.1|0.36 75.4|0.45 71.6|0.60 70.4|0.60 64.1|0.76 58.8|0.84 49.8|0.98 90.5|0.10 90.1|0.12 89.8|0.12 89.2|0.14 88.2|0.16 86.6|0.20 85.2|0.24 81.4|0.34 76.6|0.46 74.3|0.51 92.4|0.07 92.3|0.05 91.1|0.07 89.3|0.10 88.6|0.09 85.0|0.10 81.5|0.21 80.6|0.20 80.6|0.23 75.1|0.33 78.7|0.18 77.0|0.32 73.5|0.39 71.8|0.47 69.3|0.57 69.1|0.61 69.1|0.61 69.1|0.61 69.1|0.61 69.1|0.61 87.1|15.3 86.3|18.0 84.7|20.8 83.3|25.9 81.2|30.3 78.8|35.8 75.6|42.3 71.6|50.3 69.3|55.6 65.7|61.4 0 10 20 30 40 50 60 70 80 90 1.82 2.09 2.46 2.98 3.76 4.73 5.63 6.22 6.87 7.37 | | | | Table 1: Pre-training development losses and GLUE task development accuracies for various levels of pruning. Each development accuracy is accompanied on its right by the achieved training loss, evaluated on the entire train- ing set. Averages are summarized in Figure 1. Pre-training losses are omitted for models pruned after downstream fine-tuning because it is not clear how to measure their performance on the pre-training task in a fair way. 
[Figure 5 plot: sum of absolute and sum of signed weights pruned versus sparsity.]
Figure 5: The sum of weights pruned at each sparsity level for one shot pruning of BERT. Given the motivation for our saliency criterion, it seems strange that such a large magnitude of weights can be pruned without decreasing accuracy.

LR     MNLI          QQP           QNLI          SST-2         CoLA
2e-5   1.91 ± 1.81   1.82 ± 1.72   1.27 ± 1.22   1.06 ± 1.03   0.79 ± 0.77
3e-5   2.68 ± 2.51   2.56 ± 2.40   1.79 ± 1.69   1.54 ± 1.47   1.06 ± 1.03
4e-5   3.41 ± 3.18   3.30 ± 3.10   2.31 ± 2.19   1.99 ± 1.89   1.11 ± 1.09
5e-5   4.12 ± 3.83   4.02 ± 3.74   2.77 ± 2.62   2.38 ± 2.29   1.47 ± 1.43

Table 2: We compute the magnitude sorting order of each weight before and after downstream fine-tuning. If a weight's original position is 59 / 100 before fine-tuning and 63 / 100 after fine-tuning, then that weight moved 4% in the sorting order. We then list the average movement of weights in each model, along with the standard deviation. Sorting order changes mostly locally across tasks: a weight moves, on average, 0-4% away from its starting position. As expected, larger datasets and larger learning rates have more movement (per epoch). We also see that higher magnitude weights are more stable than lower weights, see Figure 6.

[Figure 6 plot: distribution of sort order movements (% of movement) versus starting magnitude sort order position.]
Figure 6: We show how weight sort order movements are distributed during fine-tuning, given a weight's starting magnitude. We see that higher magnitude weights are more stable than lower magnitude weights and do not move as much in the sort order. This plot is nearly identical for every model and learning rate, so we only show it once.

[Figure 7: parameter matrix magnitude heatmap, rows vs. columns.]
Figure 7: A heatmap of the weight magnitudes of the 12 horizontally stacked self-attention key projection matrices for layer 1. A banding pattern can be seen: the highest values of the matrix tend to cluster in certain attention heads. This pattern appears in most of the self-attention parameter matrices, but it does not cause pruning to prune one head more than another. However, it may prove to be a useful heuristic for attention head pruning, which would not require making many passes over the training data.

[Figure 8: parameter matrix magnitude heatmap of the subword embeddings.]
Figure 8: A heatmap of the weight magnitudes of BERT's subword embeddings. Interestingly, pruning BERT embeddings are more interpretable; we can see shorter subwords (top rows) have smaller magnitude values and thus will be pruned earlier than other subword embeddings.
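The sort-order movement statistic reported in Table 2 and Figure 6 can be computed in a few lines of NumPy. The sketch below is our own minimal illustration of that computation; the array names and toy shapes are ours and do not come from the original experiments.

```python
import numpy as np

def sort_order_positions(weights):
    """Return, for each weight, its position (0 = smallest magnitude) in the
    magnitude sort order, normalized to [0, 1]."""
    flat = np.abs(weights).ravel()
    order = np.argsort(flat)               # indices from smallest to largest magnitude
    positions = np.empty_like(order)
    positions[order] = np.arange(flat.size)
    return positions / (flat.size - 1)

def sort_order_movement(weights_before, weights_after):
    """Average absolute movement (in % of the sort order) of each weight
    between the pre-trained and the fine-tuned checkpoint."""
    before = sort_order_positions(weights_before)
    after = sort_order_positions(weights_after)
    return 100.0 * np.abs(after - before).mean()

# Toy example: a weight at position 59/100 before and 63/100 after moves 4%.
rng = np.random.default_rng(0)
w_pre = rng.normal(size=(768, 768))                    # stand-in for one BERT weight matrix
w_fine = w_pre + 0.01 * rng.normal(size=w_pre.shape)   # small fine-tuning update
print(f"average sort-order movement: {sort_order_movement(w_pre, w_fine):.2f}%")
```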
Weight Matrix Weight Mean Weight STD embeddings word embeddings -0.0282 0.042 layer 0 attention output FC -0.0000 0.029 layer O self attn key 0.0000 0.043 layer O self attn query 0.0000 0.043 layer O self attn value -0.0000 0.029 layer 0 intermediate FC -0.0000 0.037 layer 0 output FC -0.0012 0.036 layer | attention output FC 0.0001 0.028 layer | self attn key 0.0000 0.043 layer 1 self attn query -0.0003 0.043 layer 1 self attn value -0.0000 0.029 layer 1 intermediate FC 0.0001 0.039 layer 1 output FC -0.0014 0.038 layer 10 attention output FC -0.0000 0.033 layer 10 self attn key -0.0000 0.046 layer 10 self attn query 0.0002 0.046 layer 10 self attn value -0.0000 0.036 layer 10 intermediate FC 0.0000 0.039 layer 10 output FC -0.0011 0.038 layer 11 attention output FC -0.0000 0.037 layer 11 self attn key 0.0002 0.044 layer 11 self attn query -0.0001 0.045 layer 11 self attn value -0.0000 0.039 layer 11 intermediate FC 0.0004 0.039 layer 11 output FC -0.0008 0.036 layer 2 attention output FC 0.0000 0.027 layer 2 self attn key 0.0000 0.047 layer 2 self attn query 0.0000 0.048 layer 2 self attn value -0.0000 0.028 layer 2 intermediate FC 0.0001 0.040 layer 2 output FC -0.0015 0.038 layer 3 attention output FC 0.0001 0.029 layer 3 self attn key 0.0000 0.043 layer 3 self attn query 0.0003 0.043 layer 3 self attn value -0.0001 0.031 layer 3 intermediate FC -0.0001 0.040 layer 3 output FC -0.0014 0.039 layer 4 attention output FC 0.0000 0.033 layer 4 self attn key 0.0000 0.042 layer 4 self attn query -0.0001 0.042 layer 4 self attn value 0.0001 0.035 layer 4 intermediate FC 0.0001 0.041 Weight Matrix embeddings word embeddings layer 0 attention output FC layer 0 self attn key layer 0 self attn query layer 0 self attn value layer 0 intermediate FC layer 0 output FC layer 1 attention output FC layer 1 self attn key layer 1 self attn query layer 1 self attn value layer 1 intermediate FC layer 1 output FC layer 10 attention output FC layer 10 self attn key layer 10 self attn query layer 10 self attn value layer 10 intermediate FC layer 10 output FC layer 11 attention output FC layer 11 self attn key layer 11 self attn query layer 11 self attn value layer 11 intermediate FC layer 11 output FC layer 2 attention output FC layer 2 self attn key layer 2 self attn query layer 2 self attn value layer 2 intermediate FC layer 2 output FC layer 3 attention output FC layer 3 self attn key layer 3 self attn query layer 3 self attn value layer 3 intermediate FC layer 3 output FC layer 4 attention output FC layer 4 self attn key layer 4 self attn query layer 4 self attn value layer 4 intermediate FC layer 4 output FC layer 5 attention output FC layer 5 self attn key layer 5 self attn query layer 5 self attn value layer 5 intermediate FC layer 5 output FC 0.0282 -0.0000 0.0000 0.0000 -0.0000 -0.0000 -0.0012 0.0001 0.0000 -0.0003 -0.0000 0.0001 -0.0014 -0.0000 -0.0000 0.0002 -0.0000 0.0000 -0.0011 -0.0000 0.0002 -0.0001 -0.0000 0.0004 -0.0008 0.0000 0.0000 0.0000 -0.0000 0.0001 -0.0015 0.0001 0.0000 0.0003 -0.0001 -0.0001 -0.0014 0.0000 0.0000 -0.0001 0.0001 0.0001 -0.0014 -0.0000 -0.0001 -0.0000 -0.0000 0.0000 -0.0014 0.042 0.029 0.043 0.043 0.029 0.037 0.036 0.028 0.043 0.043 0.029 0.039 0.038 0.033 0.046 0.046 0.036 0.039 0.038 0.037 0.044 0.045 0.039 0.039 0.036 0.027 0.047 0.048 0.028 0.040 0.038 0.029 0.043 0.043 0.031 0.040 0.039 0.033 0.042 0.042 0.035 0.041 0.040 0.033 0.043 0.043 0.035 0.041 0.039 layer 6 attention output FC 0.0001 -0.0000 0.0001 0.0000 -0.0000 -0.0014 layer 7 attention output FC 0.0000 
-0.0000 -0.0000 0.0001 0.0003 -0.0013 layer 8 attention output FC 0.0000 -0.0000 0.0001 0.0000 0.0004 -0.0013 layer 9 attention output FC 0.0001 0.0000 -0.0001 0.0000 0.0005 -0.0012 0.0000 Table 3: The values of BERT’s weights are normally distributed in each weight matrix. The means and variances are listed for each.
{ "id": "1710.01878" }
2002.08155
CodeBERT: A Pre-Trained Model for Programming and Natural Languages
We present CodeBERT, a bimodal pre-trained model for programming language (PL) and nat-ural language (NL). CodeBERT learns general-purpose representations that support downstream NL-PL applications such as natural language codesearch, code documentation generation, etc. We develop CodeBERT with Transformer-based neural architecture, and train it with a hybrid objective function that incorporates the pre-training task of replaced token detection, which is to detect plausible alternatives sampled from generators. This enables us to utilize both bimodal data of NL-PL pairs and unimodal data, where the former provides input tokens for model training while the latter helps to learn better generators. We evaluate CodeBERT on two NL-PL applications by fine-tuning model parameters. Results show that CodeBERT achieves state-of-the-art performance on both natural language code search and code documentation generation tasks. Furthermore, to investigate what type of knowledge is learned in CodeBERT, we construct a dataset for NL-PL probing, and evaluate in a zero-shot setting where parameters of pre-trained models are fixed. Results show that CodeBERT performs better than previous pre-trained models on NL-PL probing.
http://arxiv.org/pdf/2002.08155
Zhangyin Feng, Daya Guo, Duyu Tang, Nan Duan, Xiaocheng Feng, Ming Gong, Linjun Shou, Bing Qin, Ting Liu, Daxin Jiang, Ming Zhou
cs.CL, cs.PL
Accepted to Findings of EMNLP 2020. 12 pages
null
cs.CL
20200219
20200918
0 2 0 2 p e S 8 1 ] L C . s c [ 4 v 5 5 1 8 0 . 2 0 0 2 : v i X r a # CodeBERT: A Pre-Trained Model for Programming and Natural Languages Zhangyin Feng1∗, Daya Guo2∗, Duyu Tang3, Nan Duan3, Xiaocheng Feng1 Ming Gong4, Linjun Shou4, Bing Qin1, Ting Liu1, Daxin Jiang4, Ming Zhou3 1 Research Center for Social Computing and Information Retrieval, Harbin Institute of Technology, China 2 The School of Data and Computer Science, Sun Yat-sen University, China 3 Microsoft Research Asia, Beijing, China 4 Microsoft Search Technology Center Asia, Beijing, China {zyfeng,xcfeng,qinb,tliu}@ir.hit.edu.cn [email protected] {dutang,nanduan,migon,lisho,djiang,mingzhou}@microsoft.com # Abstract We present CodeBERT, a bimodal pre-trained model for programming language (PL) and natural language (NL). CodeBERT learns general-purpose representations that support downstream NL-PL applications such as nat- ural language code search, code documen- tation generation, etc. We develop Code- BERT with Transformer-based neural architec- ture, and train it with a hybrid objective func- tion that incorporates the pre-training task of replaced token detection, which is to detect plausible alternatives sampled from generators. This enables us to utilize both “bimodal” data of NL-PL pairs and “unimodal” data, where the former provides input tokens for model training while the latter helps to learn bet- ter generators. We evaluate CodeBERT on two NL-PL applications by fine-tuning model parameters. Results show that CodeBERT achieves state-of-the-art performance on both natural language code search and code docu- mentation generation. Furthermore, to inves- tigate what type of knowledge is learned in CodeBERT, we construct a dataset for NL-PL probing, and evaluate in a zero-shot setting where parameters of pre-trained models are fixed. Results show that CodeBERT performs better than previous pre-trained models on NL- PL probing.1 # Introduction Large pre-trained models such as ELMo (Peters et al., 2018), GPT (Radford et al., 2018), BERT (Devlin et al., 2018), XLNet (Yang et al., 2019) and RoBERTa (Liu et al., 2019) have dramati- cally improved the state-of-the-art on a variety of natural language processing (NLP) tasks. These pre-trained models learn effective contextual repre- sentations from massive unlabeled text optimized by self-supervised objectives, such as masked language modeling, which predicts the original masked word from an artificially masked input sequence. The success of pre-trained models in NLP also drives a surge of multi-modal pre-trained models, such as ViLBERT (Lu et al., 2019) for language-image and VideoBERT (Sun et al., 2019) for language-video, which are learned from bi- modal data such as language-image pairs with bi- modal self-supervised objectives. In this work, we present CodeBERT, a bimodal pre-trained model for natural language (NL) and programming language (PL) like Python, Java, JavaScript, etc. CodeBERT captures the seman- tic connection between natural language and pro- gramming language, and produces general-purpose representations that can broadly support NL-PL understanding tasks (e.g. natural language code search) and generation tasks (e.g. code documen- tation generation). It is developed with the multi- layer Transformer (Vaswani et al., 2017), which is adopted in a majority of large pre-trained models. 
In order to make use of both bimodal instances of NL-PL pairs and large amount of available uni- modal codes, we train CodeBERT with a hybrid objective function, including standard masked lan- guage modeling (Devlin et al., 2018) and replaced token detection (Clark et al., 2020), where uni- modal codes help to learn better generators for producing better alternative tokens for the latter objective. ∗Work done while this author was an intern at Microsoft Research Asia. 1 All the codes and data are available at https:// github.com/microsoft/CodeBERT We train CodeBERT from Github code reposito- ries in 6 programming languages, where bimodal datapoints are codes that pair with function-level natural language documentations (Husain et al., 2019). Training is conducted in a setting similar to that of multilingual BERT (Pires et al., 2019), in which case one pre-trained model is learned for 6 programming languages with no explicit mark- ers used to denote the input programming lan- guage. We evaluate CodeBERT on two down- stream NL-PL tasks, including natural language code search and code documentation generation. Results show that fine-tuning the parameters of CodeBERT achieves state-of-the-art performance on both tasks. To further investigate what type of knowledge is learned in CodeBERT, we construct a dataset for NL-PL probing, and test CodeBERT in a zero-shot scenario, i.e. without fine-tuning the parameters of CodeBERT. We find that CodeBERT consistently outperforms RoBERTa, a purely natu- ral language-based pre-trained model. The contri- butions of this work are as follows: large NL-PL pre- trained model for multiple programming lan- guages. • Empirical results show that CodeBERT is ef- fective in both code search and code-to-text generation tasks. • We further created a dataset which is the first one to investigate the probing ability of the code-based pre-trained models. # 2 Background # 2.1 Pre-Trained Models in NLP Large pre-trained models (Peters et al., 2018; Rad- ford et al., 2018; Devlin et al., 2018; Yang et al., 2019; Liu et al., 2019; Raffel et al., 2019) have brought dramatic empirical improvements on al- most every NLP task in the past few years. Suc- cessful approaches train deep neural networks on large-scale plain texts with self-supervised learning objectives. One of the most representative neural architectures is the Transformer (Vaswani et al., 2017), which is also the one used in this work. It contains multiple self-attention layers, and can be conventionally learned with gradient decent in an end-to-end manner as every component is differen- tiable. The terminology “self-supervised” means that supervisions used for pre-training are auto- matically collected from raw data without manual annotation. Dominant learning objectives are lan- guage modeling and its variations. For example, in GPT (Radford et al., 2018), the learning objec- tive is language modeling, namely predicting the next word wk given the preceding context words {w1, w2, ..., wk−1}. As the ultimate goal of pre- training is not to train a good language model, it is desirable to consider both preceding and following contexts to learn better general-purpose contextual representations. This leads us to the masked lan- guage modeling objective used in BERT (Devlin et al., 2018), which learns to predict the masked words of a randomly masked word sequence given surrounding contexts. Masked language modeling is also used as one of the two learning objectives for training CodeBERT. 
# 2.2 Multi-Modal Pre-Trained Models The remarkable success of the pre-trained model in NLP has driven the development of multi-modal pre-trained model that learns implicit alignment between inputs of different modalities. These mod- els are typically learned from bimodal data, such as pairs of language-image or pairs of language- video. For example, ViLBERT (Lu et al., 2019) learns from image caption data, where the model learns by reconstructing categories of masked im- age region or masked words given the observed inputs, and meanwhile predicting whether the cap- tion describes the image content or not. Simi- larly, VideoBERT (Sun et al., 2019) learns from language-video data and is trained by video and text masked token prediction. Our work belongs to this line of research as we regard NL and PL as different modalities. Our method differs from previous works in that the fuels for model train- ing include not only bimodal data of NL-PL pairs, but larger amounts of unimodal data such as codes without paired documentations. A concurrent work (Kanade et al., 2019) uses masked language modeling and next sentence pre- diction as the objective to train a BERT model on Python source codes, where a sentence is a log- ical code line as defined by the Python standard. In terms of the pre-training process, CodeBERT differs from their work in that (1) CodeBERT is trained in a cross-modal style and leverages both bimodal NL-PL data and unimodal PL/NL data, (2) CodeBERT is pre-trained over six programming languages, and (3) CodeBERT is trained with a new learning objective based on replaced token detection. # 3 CodeBERT We describe the details about CodeBERT in this section, including the model architecture, the input and output representations, the objectives and data used for training CodeBERT, and how to fine-tune CodeBERT when it is applied to downstream tasks. # 3.1 Model Architecture We follow BERT (Devlin et al., 2018) and RoBERTa (Liu et al., 2019), and use multi-layer bidirectional Transformer (Vaswani et al., 2017) as the model architecture of CodeBERT. We will not review the ubiquitous Transformer architecture in detail. We develop CodeBERT by using exactly the same model architecture as RoBERTa-base. The total number of model parameters is 125M. # Input/Output Representations In the pre-training phase, we set the input as the concatenation of two segments with a special sepa- rator token, namely [CLS], w1, w2, ..wn, [SEP ], c1, c2, ..., cm, [EOS]. One segment is natural lan- guage text, and another is code from a certain pro- gramming language. [CLS] is a special token in front of the two segments, whose final hidden repre- sentation is considered as the aggregated sequence representation for classification or ranking. Follow- ing the standard way of processing text in Trans- former, we regard a natural language text as a se- quence of words, and split it as WordPiece (Wu et al., 2016). We regard a piece of code as a se- quence of tokens. The output of CodeBERT includes (1) contextual vector representation of each token, for both natural language and code, and (2) the representation of [CLS], which works as the aggregated sequence representation. # 3.3 Pre-Training Data We train CodeBERT with both bimodal data, which refers to parallel data of natural language-code pairs, and unimodal data, which stands for codes without paired natural language texts and natural language without paired codes. 
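To make the two data types concrete, and to connect them to the input layout of Section 3.2, the sketch below shows a toy bimodal datapoint, a unimodal one, and how a bimodal pair would be serialized into the [CLS] ... [SEP] ... [EOS] sequence. The field names are ours, and whitespace tokenization stands in for the real subword tokenizer.

```python
# A bimodal datapoint pairs a function with its documentation; a unimodal
# datapoint is a function without documentation. Field names are illustrative.
bimodal_example = {
    "doc": "return the maximum of two numbers",
    "code": "def max2(a, b): return a if a > b else b",
}
unimodal_example = {"doc": None, "code": "def add(a, b): return a + b"}

def serialize(example):
    """[CLS] w1..wn [SEP] c1..cm [EOS]; whitespace split is a stand-in for
    WordPiece on the NL side and code tokenization on the PL side."""
    nl = (example["doc"] or "").split()
    pl = example["code"].split()
    return ["[CLS]"] + nl + ["[SEP]"] + pl + ["[EOS]"]

print(serialize(bimodal_example)[:9])
```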
We use datapoints from Github repositories, where each bimodal datapoint is an individual function with paired documentation, and each unimodal code is a function without paired documentation. Specifically, we use a recent large dataset provided by Husain et al. (2019), which includes 2.1M bimodal datapoints and 6.4M unimodal codes across six programming languages (Python, Java, JavaScript, PHP, Ruby, and Go). Data statistics are shown in Table 1.²

TRAINING DATA    bimodal DATA    unimodal CODES
GO               319,256         726,768
JAVA             500,754         1,569,889
JAVASCRIPT       143,252         1,857,835
PHP              662,907         977,821
PYTHON           458,219         1,156,085
RUBY             52,905          164,048
ALL              2,137,293       6,452,446

Table 1: Statistics of the dataset used for training CodeBERT.

The data comes from publicly available open-source non-fork GitHub repositories and is filtered with a set of constraints and rules. For example, (1) each project should be used by at least one other project, (2) each documentation is truncated to the first paragraph, (3) documentations shorter than three tokens are removed, (4) functions shorter than three lines are removed, and (5) function names with substring "test" are removed (a small code sketch of these filters is given below). An example of the data is given in Figure 1.³

[Figure 1 shows the PySpark function _parse_memory; its documentation begins with the example ">>> _parse_memory("2g") 2048".]
Figure 1: An example of the NL-PL pair, where NL is the first paragraph (filled in red) from the documentation (dashed line in black) of a function.

# 3.4 Pre-Training CodeBERT

We describe the two objectives used for training CodeBERT here. The first objective is masked language modeling (MLM), which has proven effective in the literature (Devlin et al., 2018; Liu et al., 2019; Sun et al., 2019). We apply masked language modeling on bimodal data of NL-PL pairs. The second objective is replaced token detection (RTD), which further uses a large amount of unimodal data, such as codes without paired natural language texts. Detailed hyper-parameters for model pre-training are given in Appendix B.1.

Figure 2: An illustration of the replaced token detection objective. Both NL and code generators are language models, which generate plausible tokens for masked positions based on surrounding contexts. The NL-Code discriminator is the targeted pre-trained model, which is trained via detecting plausible alternative tokens sampled from the NL and PL generators. The NL-Code discriminator is used for producing general-purpose representations in the fine-tuning step. Both NL and code generators are thrown out in the fine-tuning step.

²Since we will evaluate on the natural language code search task, we only use the training data of Husain et al. (2019) to train CodeBERT, with no access to the dev and testing data.
³The source of the illustrating example comes from https://github.com/apache/spark/blob/618d6bff71073c8c93501ab7392c3cc579730f0b/python/pyspark/rdd.py#L125-L138
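As referenced in Section 3.3 above, the corpus filtering rules (2)-(5) can be expressed directly in code. The following is a small sketch of how such a filter might look; the record fields and thresholds are our own illustration, not the format of the released corpus, and rule (1) is a repository-level filter that cannot be checked on a single example.

```python
def keep_example(doc, code, func_name, min_doc_tokens=3, min_code_lines=3):
    """Apply the per-example filtering rules: truncate the documentation to
    its first paragraph, then drop short docs, short functions, and tests."""
    first_paragraph = doc.split("\n\n")[0].strip()            # rule (2)
    if len(first_paragraph.split()) < min_doc_tokens:         # rule (3)
        return None
    if len(code.splitlines()) < min_code_lines:               # rule (4)
        return None
    if "test" in func_name.lower():                           # rule (5)
        return None
    return {"doc": first_paragraph, "code": code}

# A one-token docstring is rejected; a regular documented function is kept.
assert keep_example("Add.", "def add(a, b):\n    return a + b\n", "add") is None
assert keep_example("Parse a memory string.\n\nDetails follow.",
                    "def parse(s):\n    x = 1\n    return x\n",
                    "parse_memory") is not None
```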
Objective #1: Masked Language Modeling (MLM) Given a datapoint of an NL-PL pair (x = {w, c}) as input, where w is a sequence of NL words and c is a sequence of PL tokens, we first select a random set of positions for both NL and PL to mask out (i.e. m^w and m^c, respectively), and then replace the selected positions with a special [MASK] token. Following Devlin et al. (2018), 15% of the tokens from x are masked out.

$m^w_i \sim \mathrm{unif}\{1, |w|\} \quad \text{for } i = 1 \text{ to } |w|$   (1)
$m^c_i \sim \mathrm{unif}\{1, |c|\} \quad \text{for } i = 1 \text{ to } |c|$   (2)
$w^{\mathrm{masked}} = \mathrm{REPLACE}(w, m^w, [\mathrm{MASK}])$   (3)
$c^{\mathrm{masked}} = \mathrm{REPLACE}(c, m^c, [\mathrm{MASK}])$   (4)
$x = w + c$   (5)

The MLM objective is to predict the original tokens which are masked out, formulated as follows, where $p^{D_1}$ is the discriminator which predicts a token from a large vocabulary.

$\mathcal{L}_{\mathrm{MLM}}(\theta) = \sum_{i \in m^w \cup m^c} -\log p^{D_1}\!\left(x_i \mid w^{\mathrm{masked}}, c^{\mathrm{masked}}\right)$   (6)

Objective #2: Replaced Token Detection (RTD) In the MLM objective, only bimodal data (i.e. datapoints of NL-PL pairs) is used for training. Here we present the objective of replaced token detection. The RTD objective (Clark et al., 2020) was originally developed for efficiently learning a pre-trained model for natural language. We adapt it in our scenario, with the advantage of using both bimodal and unimodal data for training. Specifically, there are two data generators here, an NL generator $p^{G_w}$ and a PL generator $p^{G_c}$, both for generating plausible alternatives for the set of randomly masked positions.

$\hat{w}_i \sim p^{G_w}\!\left(w_i \mid w^{\mathrm{masked}}\right) \quad \text{for } i \in m^w$   (7)
$\hat{c}_i \sim p^{G_c}\!\left(c_i \mid c^{\mathrm{masked}}\right) \quad \text{for } i \in m^c$   (8)
$w^{\mathrm{corrupt}} = \mathrm{REPLACE}(w, m^w, \hat{w})$   (9)
$c^{\mathrm{corrupt}} = \mathrm{REPLACE}(c, m^c, \hat{c})$   (10)
$x^{\mathrm{corrupt}} = w^{\mathrm{corrupt}} + c^{\mathrm{corrupt}}$   (11)

The discriminator is trained to determine whether a word is the original one or not, which is a binary classification problem. It is worth noting that the RTD objective is applied to every position in the input, and it differs from GAN (generative adversarial network) in that if a generator happens to produce the correct token, the label of that token is "real" instead of "fake" (Clark et al., 2020). The loss function of RTD with regard to the discriminator parameterized by θ is given below, where δ(i) is an indicator function and $p^{D_2}$ is the discriminator that predicts the probability of the i-th word being original.

$\mathcal{L}_{\mathrm{RTD}}(\theta) = \sum_{i=1}^{|w|+|c|} \left( \delta(i)\,\log p^{D_2}\!\left(x^{\mathrm{corrupt}}, i\right) + \left(1 - \delta(i)\right)\left(1 - \log p^{D_2}\!\left(x^{\mathrm{corrupt}}, i\right)\right) \right)$   (12)

$\delta(i) = \begin{cases} 1, & \text{if } x^{\mathrm{corrupt}}_i = x_i \\ 0, & \text{otherwise} \end{cases}$   (13)

There are many different ways to implement the generators. In this work, we implement two efficient n-gram language models (Jurafsky, 2000) with bidirectional contexts, one for NL and one for PL, and learn them from the corresponding unimodal datapoints, respectively. The approach is easily generalized to learn bimodal generators or use more complicated generators like Transformer-based neural architectures learned in a joint manner. We leave these to future work. The PL training data is the unimodal code shown in Table 1, and the NL training data comes from the documentations of the bimodal data. One could easily extend these two training datasets to a larger amount. The final loss function is given below. (A short code sketch of this corruption and labeling procedure is given at the end of this section.)

$\min_{\theta} \; \mathcal{L}_{\mathrm{MLM}}(\theta) + \mathcal{L}_{\mathrm{RTD}}(\theta)$   (14)

# 3.5 Fine-Tuning CodeBERT

We have different settings to use CodeBERT in downstream NL-PL tasks.
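As noted above, the corruption and labeling procedure of Equations (1)-(13) can be sketched in a few lines. The generator functions below are placeholders for the bidirectional n-gram language models, Bernoulli sampling approximates the 15% masking rate, and all names are our own illustration rather than the reference implementation.

```python
import random

MASK = "[MASK]"

def corrupt_example(nl_tokens, code_tokens, nl_generator, code_generator,
                    mask_rate=0.15, rng=None):
    """Build (x_corrupt, labels) for replaced token detection.

    `nl_generator(masked_tokens, position)` and `code_generator(...)` are
    placeholders; each returns a plausible token for one masked position.
    """
    rng = rng or random.Random(0)
    x = nl_tokens + code_tokens                                            # Eq. (5)
    m_w = [i for i in range(len(nl_tokens)) if rng.random() < mask_rate]   # ~Eq. (1)
    m_c = [i for i in range(len(code_tokens)) if rng.random() < mask_rate] # ~Eq. (2)

    nl_masked = [MASK if i in m_w else t for i, t in enumerate(nl_tokens)]      # Eq. (3)
    code_masked = [MASK if i in m_c else t for i, t in enumerate(code_tokens)]  # Eq. (4)

    # Sample alternatives from the generators for the masked positions, Eqs. (7)-(10).
    nl_corrupt = list(nl_tokens)
    for i in m_w:
        nl_corrupt[i] = nl_generator(nl_masked, i)
    code_corrupt = list(code_tokens)
    for i in m_c:
        code_corrupt[i] = code_generator(code_masked, i)

    x_corrupt = nl_corrupt + code_corrupt                                  # Eq. (11)
    # delta(i) = 1 if the token is unchanged ("original"), else 0, Eq. (13).
    labels = [1 if a == b else 0 for a, b in zip(x_corrupt, x)]
    return x_corrupt, labels
```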
For example, in natural language code search, we feed the input as the same way as the pre-training phase and use the representation of [CLS] to measure the semantic relevance between code and natural language query, while in code-to-text generation, we use an encoder- decoder framework and initialize the encoder of a generative model with CodeBERT. Details are given in the experiment section. # 4 Experiment We present empirical results in this section to verify the effectiveness of CodeBERT. We first describe the use of CodeBERT in natural language code search (§4.1), in a way that model parameters of CodeBERT are fine-tuned. After that, we present the NL-PL probing task (§4.2), and evaluate Code- BERT in a zero-shot setting where the parameters of CodeBERT are fixed. Finally, we evaluate Code- BERT on a generation problem, i.e. code documen- tation generation (§4.3), and further evaluate on a programming language which is never seen in the training phase (§4.4). # 4.1 Natural Language Code Search Given a natural language as the input, the objec- tive of code search is to find the most semantically related code from a collection of codes. We con- duct experiments on the CodeSearchNet corpus (Husain et al., 2019) 4. We follow the official evalu- ation metric to calculate the Mean Reciprocal Rank (MRR) for each pair of test data (c, w) over a fixed set of 999 distractor codes. We further calculate the macro-average MRR for all languages as an overall evaluation metric. It is helpful to note that this met- ric differs from the AVG metric in the original pa- per, where the answer is retrieved from candidates from all six languages. We fine-tune a language- specific model for each programming language5. We train each model with a binary classification loss function, where a sof tmax layer is connected to the representation of [CLS]. Both training and validation datasets are created in a way that posi- tive and negative samples are balanced. Negative samples consist of balanced number of instances with randomly replaced NL (i.e. (c, ˆw)) and PL (i.e. (ˆc, w)). Detailed hyper-parameters for model fine-tuning are given in Appendix B.2. Model Comparisons Table 2 shows the results of different approaches on the CodeSearchNet cor- pus. The first four rows are reported by Husain et al. (2019), which are joint embeddings of NL and PL (Gu et al., 2018; Mitra et al., 2018). NBOW represents neural bag-of-words. CNN, BIRNN and SELFATT stand for 1D convolultional neu- ral network (Kim, 2014), bidirectional GRU-based recurrent neural network (Cho et al., 2014), and multi-head attention (Vaswani et al., 2017), respec- tively. We report the remaining numbers in Table 2. We train all these pre-trained models by regarding codes as a sequence of tokens. We also continu- ously train RoBERTa only on codes from Code- SearchNet with masked language modeling. Re- sults show that CodeBERT consistently performs 4More details about the dataset are given in Appendix A. 5We have fine-tuned a multi-lingual model for six program- ming languages, but find that it performs worse that fine-tuning a language-specific model for each programming language. 
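The balanced construction of positives and negatives described above for code search fine-tuning can be sketched as follows. The function and field names are ours; the scoring model (a softmax classifier over the [CLS] representation) is only indicated in the comment, not implemented.

```python
import random

def make_code_search_examples(pairs, rng=None):
    """Turn ground-truth (code, query) pairs into a balanced binary
    classification set: one positive per pair, plus one negative obtained by
    replacing either the NL query (c, w_hat) or the code (c_hat, w)."""
    rng = rng or random.Random(0)
    examples = []
    for idx, (code, query) in enumerate(pairs):
        examples.append((code, query, 1))                       # positive
        other = rng.choice([j for j in range(len(pairs)) if j != idx])
        if rng.random() < 0.5:
            examples.append((code, pairs[other][1], 0))          # (c, w_hat)
        else:
            examples.append((pairs[other][0], query, 0))         # (c_hat, w)
    rng.shuffle(examples)
    return examples

# Each (code, query, label) triple is then fed to the encoder as
# "[CLS] query [SEP] code [EOS]", and a softmax layer on the [CLS] vector is
# trained with the binary label.
```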
MODEL RUBY JAVASCRIPT GO PYTHON JAVA PHP MA-AVG NBOW CNN BIRNN SELFATT ROBERTA PT W/ CODE ONLY (INIT=S) PT W/ CODE ONLY (INIT=R) CODEBERT (MLM, INIT=S) CODEBERT (MLM, INIT=R) CODEBERT (RTD, INIT=R) CODEBERT (MLM+RTD, INIT=R) 0.4285 0.2450 0.0835 0.3651 0.6245 0.5712 0.6612 0.5695 0.6898 0.6414 0.6926 0.4607 0.3523 0.1530 0.4506 0.6060 0.5557 0.6402 0.6029 0.6997 0.6512 0.7059 0.6409 0.6274 0.4524 0.6809 0.8204 0.7929 0.8191 0.8304 0.8383 0.8285 0.8400 0.5809 0.5708 0.3213 0.6922 0.8087 0.7855 0.8438 0.8261 0.8647 0.8263 0.8685 0.5140 0.5270 0.2865 0.5866 0.6659 0.6567 0.7213 0.7142 0.7476 0.7150 0.7484 0.4835 0.5294 0.2512 0.6011 0.6576 0.6172 0.6706 0.6556 0.6893 0.6774 0.7062 0.5181 0.4753 0.2580 0.5628 0.6972 0.6632 0.7260 0.6998 0.7549 0.7233 0.7603 Table 2: Results on natural language code retrieval. Baselines include four joint embeddings (first group) of NL and PL, RoBERTa, and RoBERTa which is continuously trained with masked language modeling on codes only (second group). PT stands for pre-training. We train CodeBERT (third group) with different settings, including using different initialization (from scratch (INIT=S) or initialized with the parameters of RoBERTa (INIT=R)) and using different learning objectives (MLM, RTD, or the combination of both). better than RoBERTa and the model pre-trained with code only. CodeBERT (MLM) learned from scratch performs better than RoBERTa. Unsur- prisingly, initializing CodeBERT with RoBERTa improves the performance 6. # 4.2 NL-PL Probing In the previous subsection, we show the empirical effectiveness of CodeBERT in a setting that the parameters of CodeBERT are fine-tuned in down- stream tasks. In this subsection, we further inves- tigate what type of knowledge is learned in Code- BERT without modifying the parameters. Task Formulation and Data Construction Fol- lowing the probing experiments in NLP (Petroni et al., 2019; Talmor et al., 2019), we study NL- PL probing here. Since there is no existing work towards this goal, we formulate the problem of NL-PL probing and create the dataset by ourselves. Given an NL-PL pair (c, w), the goal of NL-PL probing is to test model’s ability to correctly pre- dict/recover the masked token of interest (either a code token ci or word token wj) among distractors. There are two major types of distractors: one is the whole target vocabulary used for the masked lan- guage modeling objective (Petroni et al., 2019), and another one has fewer candidates which are filter or curated based on experts’ understanding about the ability to be tested (Talmor et al., 2019). We follow the second direction and formulate NL-PL probing as a multi-choice question answering task, where the question is cloze-style in which a certain token is replaced by [M ASK] and distractor candidate answers are curated based on our expertise. Specifically, we evaluate on the NL side and PL side, respectively. To ease the effort of data col- lection, we collect data automatically from NL-PL pairs in both validation and testing sets of Code- SearchNet, both of which are unseen in the pre- training phase. To evaluate on the NL side, we select NL-PL pairs whose NL documentations in- clude one of the six keywords (max, maximize, min, minimize, less, greater), and group them to four candidates by merging first two keywords and the middle two keywords. The task is to ask pre-trained models to select the correct one instead of three other distractors. That is to say, the input in this setting includes the complete code and a masked NL documentation. 
The goal is to select the correct answer from four candidates. For the PL side, we select codes containing keywords max and min, and formulate the task as a two-choice answer selection problem. Here, the input includes complete NL documentation and a masked PL code, and the goal is to select the correct answer from two candidates. Since code completion is an important scenario, we would like to test model’s ability in predicting the correct token merely based on preceding PL contexts. Therefore, we add an additional setting for PL side, where the input includes the complete NL documentation and preceding PL codes. Data statistics is given in the top two rows in Table 3. 6We further give a learning curve of different pre-trained models in the fine-tuning process in Appendix C. Model Comparisons Results are given in Table 3. We report accuracy, namely the number of cor- rectly predicted instances over the number of all instances, for each programming language. Since RUBY JAVASCRIPT GO PYTHON JAVA PHP ALL NUMBER OF DATAPOINTS FOR PROBING 38 PL (2 CHOICES) NL (4 CHOICES) 20 PL PROBING ROBERTA PRE-TRAIN W/ CODE ONLY CODEBERT (MLM) PL PROBING WITH PRECEDING CONTEXT ONLY ROBERTA PRE-TRAIN W/ CODE ONLY CODEBERT (MLM) NL PROBING ROBERTA PRE-TRAIN W/ CODE ONLY CODEBERT (MLM) 73.68 71.05 86.84 73.68 63.16 65.79 50.00 55.00 65.00 272 65 63.97 77.94 86.40 53.31 48.53 50.74 72.31 67.69 89.23 152 159 72.37 89.47 90.79 51.32 61.84 59.21 54.72 60.38 66.67 1,264 216 59.18 70.41 82.20 55.14 56.25 62.03 61.57 68.06 76.85 482 323 59.96 70.12 90.46 42.32 58.51 54.98 61.61 65.02 73.37 407 73 69.78 82.31 88.21 52.58 58.97 59.95 65.75 68.49 79.45 2,615 856 62.45 74.11 85.66 52.24 56.71 59.12 61.21 65.19 74.53 Table 3: Statistics of the data for NL-PL probing and the performance of different pre-trained models. Accuracies (%) are reported. Best results in each group are in bold. datasets in different programming languages are extremely unbalanced, we report the accumulated metric with the same way. We use CodeBERT (MLM) here because its output layer naturally fits for probing. Results show that CodeBERT per- forms better than baselines on almost all languages on both NL and PL probing. The numbers with only preceding contexts are lower than that with bidirectional contexts, which suggests that code completion is challenging. We leave it as a future work. masked NL token "Transforms a vector np.arange(-N, M, dx) to np.arange((min\(|vec]), max(N,M),dx)]" def vec_to_halfvec(vec): d= vec[1:] - vec{:-1] if ((d/d.mean()).std() > 1e-14) or (d.mean() < 0): raise ValueError('vec must be np.arange() in increasing order') dx = d.mean() lowest = np.abs(vec). highest = np.abs(vec).max() masked PL token return np.arange(lowest, highest + 0.1*dx, dx).astype(vec.dtype) We further give a case study on PL-NL probing. We mask NL token and PL token separately, and report the predicted probabilities of RoBERTa and CodeBERT. Figure 3 illustrates the example of a python code7. We can see that RoBERTa fails in both cases, whereas CodeBERT makes the correct prediction in both NL and PL settings. max min less greater NL Roberta 96.24% | 3.73% 0.02% 0.01% CodeBERT (MLM) | 39.38% | 60.60% | 0.02% | 0.0003% OL Roberta 95.85% | 4.15% CodeBERT (MLM) | 0.001% | 99.999% Figure 3: Case study on python language. Masked to- kens in NL (in blue) and PL (in yellow) are separately applied. Predicted probabilities of RoBERTa and Code- BERT are given. 
# 4.3 Code Documentation Generation Although the pre-training objective of Code- BERT does not include generation-based objectives (Lewis et al., 2019), we would like to investigate to what extent does CodeBERT perform on gen- eration tasks. Specifically, we study code-to-NL generation, and report results for the documenta- tion generation task on CodeSearchNet Corpus in six programming languages. Since the generated documentations are short and higher order n-grams may not overlap, we remedy this problem by using smoothed BLEU score (Lin and Och, 2004). 7The example comes from https:// github.com/peri-source/peri/blob/ 61beed5deaaf978ab31ed716e8470d86ba639867/ peri/comp/psfcalc.py#L994-L1002 Model Comparisons We compare our model with several baselines, including a RNN-based model with attention mechanism (Sutskever et al., 2014), the Transformer (Vaswani et al., 2017), RoBERTa and the model pre-trained on code only. To demonstrate the effectiveness of CodeBERT on code-to-NL generation tasks, we adopt various pre-trained models as encoders and keep the hyper- parameters consistent. Detailed hyper-parameters are given in Appendix B.3. Table 4 shows the results with different mod- els for the code-to-documentation generation task. As we can see, models pre-trained on program- ming language outperform RoBERTa, which illus- trates that pre-trainning models on programming MODEL RUBY JAVASCRIPT GO PYTHON JAVA PHP OVERALL SEQ2SEQ TRANSFORMER ROBERTA PRE-TRAIN W/ CODE ONLY CODEBERT (RTD) CODEBERT (MLM) CODEBERT (RTD+MLM) 9.64 11.18 11.17 11.91 11.42 11.57 12.16 10.21 11.59 11.90 13.99 13.27 14.41 14.90 13.98 16.38 17.72 17.78 17.53 17.78 18.07 15.93 15.81 18.14 18.58 18.29 18.77 19.06 15.09 16.26 16.47 17.50 17.35 17.38 17.65 21.08 22.12 24.02 24.34 24.10 24.85 25.16 14.32 15.56 16.57 17.35 17.00 17.46 17.83 Table 4: Results on Code-to-Documentation generation, evaluated with smoothed BLEU-4 score. language could improve code-to-NL generation. Besides, results in the Table 4 show that CodeBERT pre-trained with RTD and MLM objectives brings a gain of 1.3 BLEU score over RoBERTa overall and achieve the state-of-the-art performance8. # 4.4 Generalization to Programming Languages NOT in Pre-training We would like to evaluate CodeBERT on the pro- gramming language which is never seen in the pre- training step. To this end, we study the task of gen- erating a natural language summary of a C# code snippet. We conduct experiments on the dataset of CodeNN (Iyer et al., 2016)9, which consists of 66,015 pairs of questions and answers automati- cally collected from StackOverflow. This dataset is challenging since the scale of dataset is orders of magnitude smaller than CodeSearchNet Corpus. We evaluate models using smoothed BLEU-4 score and use the same evaluation scripts as Iyer et al. (2016). MODEL BLEU MOSES (KOEHN ET AL., 2007) IR SUM-NN (RUSH ET AL., 2015) 2-LAYER BILSTM TRANSFORMER (VASWANI ET AL., 2017) TREELSTM (TAI ET AL., 2015) CODENN (IYER ET AL., 2016) CODE2SEQ (ALON ET AL., 2019) ROBERTA PRE-TRAIN W/ CODE ONLY CODEBERT (RTD) CODEBERT (MLM) CODEBERT (MLM+RTD) 11.57 13.66 19.31 19.78 19.68 20.11 20.53 23.04 19.81 20.65 22.14 22.32 22.36 could generalize better to other programming lan- guage which is never seen in the pre-training step. However, our model achieve slightly lower results than code2seq (Alon et al., 2019). The main reason could be that code2seq makes use of compositional paths in its abstract syntax tree (AST) while Code- BERT only takes original code as the input. 
We have trained a version of CodeBERT by traversing the tree structure of AST following a certain order, but applying that model does not bring improve- ments on generation tasks. This shows a potential direction to improve CodeBERT by incorporating AST. # 5 Conclusion In this paper, we present CodeBERT, which to the best of our knowledge is the first large bimodal pre-trained model for natural language and pro- gramming language. We train CodeBERT on both bimodal and unimodal data, and show that fine- tuning CodeBERT achieves state-of-the-art perfor- mance on downstream tasks including natural lan- guage code search and code-to-documentation gen- eration. To further investigate the knowledge em- bodied in pre-trained models, we formulate the task of NL-PL probing and create a dataset for probing. We regard the probing task as a cloze-style answer selection problem, and curate distractors for both NL and PL parts. Results show that, with model parameters fixed, CodeBERT performs better than RoBERTa and a continuously trained model using codes only. Table 5: Code-to-NL generation on C# language. Model Comparisons Table 5 shows that our model with MLM and RTD pre-training objectives achieves 22.36 BLEU score and improves by 2.55 points over RoBERTa, which illustrates CodeBERT 8We further give some output examples in Appendix E. 9https://github.com/sriniiyer/codenn There are many potential directions for further research on this field. First, one could learn better generators with bimodal evidence or more compli- cated neural architecture to improve the replaced to- ken detection objective. Second, the loss functions of CodeBERT mainly target on NL-PL understand- ing tasks. Although CodeBERT achieves strong BLEU scores on code-to-documentation genera- tion, the CodeBERT itself could be further im- proved by generation-related learning objectives. How to successfully incorporate AST into the pre- training step is also an attractive direction. Third, we plan to apply CodeBERT to more NL-PL re- lated tasks, and extend it to more programming languages. Flexible and powerful domain/language adaptation methods are necessary to generalize well. # Acknowledgments Xiaocheng Feng is the corresponding author of this work. We thank the anonymous reviewers for their insightful comments. Zhangyin Feng, Xiaocheng Feng, Bing Qin and Ting Liu are supported by the National Key R&D Program of China via grant 2018YFB1005103 and National Natural Science Foundation of China (NSFC) via grant 61632011 and 61772156. # References Uri Alon, Shaked Brody, Omer Levy, and Eran Yahav. 2019. code2seq: Generating sequences from struc- tured representations of code. International Confer- enceon Learning Representations. Kyunghyun Cho, Bart Van Merri¨enboer, Caglar Gul- cehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using rnn encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078. Kevin Clark, Minh-Thang Luong, Quoc V. Le, and Christopher D. Manning. 2020. {ELECTRA}: Pre- training text encoders as discriminators rather than In International Conference on Learn- generators. ing Representations. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understand- ing. arXiv preprint arXiv:1810.04805. Xiaodong Gu, Hongyu Zhang, and Sunghun Kim. 2018. Deep code search. 
In 2018 IEEE/ACM 40th Interna- tional Conference on Software Engineering (ICSE), pages 933–944. IEEE. Hamel Husain, Ho-Hsiang Wu, Tiferet Gazit, Miltiadis Allamanis, and Marc Brockschmidt. 2019. Code- searchnet challenge: Evaluating the state of seman- tic code search. arXiv preprint arXiv:1909.09436. Srinivasan Iyer, Ioannis Konstas, Alvin Cheung, and Luke Zettlemoyer. 2016. Summarizing source code In Proceedings using a neural attention model. of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2073–2083. Dan Jurafsky. 2000. Speech & language processing. Pearson Education India. Aditya Kanade, Petros Maniatis, Gogul Balakrish- Pre-trained contex- arXiv preprint nan, and Kensen Shi. 2019. tual embedding of source code. arXiv:2001.00059. Yoon Kim. 2014. Convolutional neural net- arXiv preprint works for sentence classification. arXiv:1408.5882. Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, et al. 2007. Moses: Open source In Pro- toolkit for statistical machine translation. ceedings of the 45th annual meeting of the associ- ation for computational linguistics companion vol- ume proceedings of the demo and poster sessions, pages 177–180. Mike Lewis, Yinhan Liu, Naman Goyal, Mar- jan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov, and Luke Zettlemoyer. 2019. Bart: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. arXiv preprint arXiv:1910.13461. Chin-Yew Lin and Franz Josef Och. 2004. Orange: a method for evaluating automatic evaluation metrics for machine translation. In Proceedings of the 20th international conference on Computational Linguis- tics, page 501. Association for Computational Lin- guistics. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Man- dar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining ap- proach. arXiv preprint arXiv:1907.11692. Jiasen Lu, Dhruv Batra, Devi Parikh, and Stefan Lee. 2019. Vilbert: Pretraining task-agnostic visi- olinguistic representations for vision-and-language tasks. In Advances in Neural Information Process- ing Systems, pages 13–23. Bhaskar Mitra, Nick Craswell, et al. 2018. An intro- duction to neural information retrieval. Foundations and Trends® in Information Retrieval, 13(1):1-126. Matthew E Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word repre- sentations. arXiv preprint arXiv:1802.05365. Fabio Petroni, Tim Rockt¨aschel, Patrick Lewis, Anton Bakhtin, Yuxiang Wu, Alexander H Miller, and Se- bastian Riedel. 2019. Language models as knowl- edge bases? arXiv preprint arXiv:1909.01066. Telmo Pires, Eva Schlinger, and Dan Garrette. 2019. arXiv How multilingual is multilingual bert? preprint arXiv:1906.01502. Alec Radford, Karthik Narasimhan, Tim Salimans, Improving language and Ilya Sutskever. 2018. understanding by generative pre-training. URL https://s3-us-west-2. com/openai- assets/researchcovers/languageunsupervised/language understanding paper. pdf. Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. 2019. Exploring the limits of transfer learning with a unified text-to-text trans- former. arXiv preprint arXiv:1910.10683. 
Alexander M Rush, Sumit Chopra, and Jason We- A neural attention model for ab- arXiv preprint ston. 2015. stractive sentence summarization. arXiv:1509.00685. Chen Sun, Austin Myers, Carl Vondrick, Kevin Mur- phy, and Cordelia Schmid. 2019. Videobert: A joint model for video and language representation learn- ing. arXiv preprint arXiv:1904.01766. Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural networks. In Advances in neural information processing sys- tems, pages 3104–3112. Kai Sheng Tai, Richard Socher, and Christopher D Manning. 2015. Improved semantic representations from tree-structured long short-term memory net- works. arXiv preprint arXiv:1503.00075. Alon Talmor, Yanai Elazar, Yoav Goldberg, and Jonathan Berant. 2019. lan- guage model pre-training captures. arXiv preprint arXiv:1912.13283. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all In Advances in neural information pro- you need. cessing systems, pages 5998–6008. Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. 2016. Google’s neural machine translation system: Bridging the gap between hu- arXiv preprint man and machine translation. arXiv:1609.08144. Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Car- bonell, Ruslan Salakhutdinov, and Quoc V Le. 2019. Xlnet: Generalized autoregressive pretrain- arXiv preprint ing for language understanding. arXiv:1906.08237. # A Data Statistic Data statistics of the training/validation/testing data splits for six programming languages are given in Table 6. CODE SEARCH TRAINING DEV TESTING GO JAVA JAVASCRIPT PHP PYTHON RUBY 635,635 908,886 247,773 1,047,406 824,342 97,580 28,483 30,655 16,505 52,029 46,213 4,417 14,291 26,909 6,483 28,391 22,176 2,279 Table 6: Data statistics about the CodeSearchNet Cor- pus for natural language code search. # B Train Details # B.1 Pre-training We train CodeBERT on one NVIDIA DGX-2 ma- chine using FP16. It combines 16 interconnected NVIDIA Tesla V100 with 32GB memory. We use the following set of hyper-parameters to train mod- els: batchsize is 2,048 and learning rate is 5e-4. We use Adam to update the parameters and set the num- ber of warmup steps as 10K. We set the max length as 512 and the max training step is 100K. Training 1,000 batches of data costs 600 minutes with MLM objective, 120 minutes with RTD objective. # B.2 CodeSearch In the fine-turning step, we set the learning rate as 1e-5, the batch size as 64, the max sequence length as 200 and the max fine-tuning epoch as 8. As the same with pre-training, We use Adam to update the parameters. We choose the model performed best on the development set, and use that to evaluate on the test set. # B.3 Code Summarization on Six Programming Languages We use Transformer with 6 layers, 768 dimensional hidden states and 12 attention heads as our decoder in all settings. We set the max length of input and inference as 256 and 64, respectively. We use the Adam optimizer to update model parameters. The learning rate and the batch size are 5e-5 and 64, respectively. We tune hyperparameters and perform early stopping on the development set. # B.4 Code Summarization on C# Since state-of-the-art methods use RNN as their de- coder, we choose a 2-layer GRU with an attention mechanism as our decoder for a comparison. 
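For reference, the hyper-parameters listed in Appendices B.1-B.3 above can be collected into a single configuration sketch; the dictionary layout and key names are ours, for illustration only.

```python
# Values taken from Appendices B.1-B.3; key names are illustrative.
PRETRAIN_CONFIG = {
    "objectives": ["mlm", "rtd"],
    "batch_size": 2048,
    "learning_rate": 5e-4,
    "optimizer": "adam",
    "warmup_steps": 10_000,
    "max_seq_length": 512,
    "max_steps": 100_000,
    "precision": "fp16",
}

CODESEARCH_FINETUNE_CONFIG = {
    "learning_rate": 1e-5,
    "batch_size": 64,
    "max_seq_length": 200,
    "max_epochs": 8,
    "model_selection": "best dev accuracy",
}

SUMMARIZATION_CONFIG = {
    "decoder": {"layers": 6, "hidden_size": 768, "attention_heads": 12},
    "max_input_length": 256,
    "max_output_length": 64,
    "learning_rate": 5e-5,
    "batch_size": 64,
    "early_stopping": "dev set",
}
```

The grid search for the C# summarization decoder (Appendix B.4) continues below.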
We fine-tune models using a grid search with the following set of hyper-parameters: batch size is in {32, 64} and learning rate is in {2e-5, 5e-5}. We report the number when models achieve best performance on the development set.

# C Learning Curve of CodeSearch

From Figure 4, we can see that CodeBERT performs better at the early stage, which reflects that CodeBERT provides a good initialization for learning downstream tasks.

[Figure 4 plots: dev accuracy on Python (left) and Java (right) versus the number of fine-tuning epochs, for RoBERTa, the model pre-trained with code only, and CodeBERT.]
Figure 4: Learning curve of different pre-trained models in the fine-tuning step. We show results on Python and Java.

# D Late Fusion

In Section 4.1, we show that CodeBERT performs well in the setting where natural languages and codes have early interactions. Here, we investigate whether CodeBERT is good at working as a unified encoder. We apply CodeBERT to natural language code search in a late fusion setting, where CodeBERT first encodes NL and PL separately, and then calculates the similarity by dot-product. In this way, code search is equivalent to finding the nearest codes in the shared vector space. This scenario also facilitates the use of CodeBERT in an online system, where the representations of codes are calculated in advance. At runtime, a system only needs to compute the representation of the NL query and vector-based dot-products.

We fine-tune CodeBERT with the following objective, which maximizes the dot-product of the ground truth while minimizing the dot-product of distractors.

$\sum_i \log \frac{\exp\!\left(\mathrm{Enc}(c_i)^{\top}\mathrm{Enc}(w_i)\right)}{\sum_j \exp\!\left(\mathrm{Enc}(c_j)^{\top}\mathrm{Enc}(w_i)\right)}$   (15)

Results are given in Table 7. We only run this setting on two languages with a relatively small amount of data. We can see that CodeBERT performs better than RoBERTa and the model pre-trained with codes only. Moreover, late fusion performs comparably to the standard way. What's more, late fusion is more efficient, and this setting could be used in an online system.

MODEL                     RUBY     GO
ROBERTA                   0.0043   0.0030
PRE-TRAIN W/ CODE ONLY    0.1648   0.4179
CODEBERT                  0.6870   0.8372

Table 7: Results on natural language code search by late fusion.

# E Case Study

To qualitatively analyze the effectiveness of CodeBERT, we give some cases for the code search and code documentation generation tasks. Considering the limited space, we only give the top-2 results of the query for the Python programming language. As shown in Figure 5, the search results are very relevant to the query. Figure 6 and Figure 7 show the outputs of different models for the code documentation generation task. As we can see, CodeBERT performs better than all baselines.

Query: create file and write something

Search Results (top2)

https://github.com/darknessomi/musicbox/blob/master/NEMbox/utils.py#L37-L40
def create_file(path, default=" "):
    if not os.path.exists(path):
        with open(path, "w") as f:
            f.write(default)

https://github.com/datakortet/yamldirs/blob/master/yamldirs/filemaker.py#L114-L118
def make_file(self, filename, content):
    """Create a new file with name `filename` and content ``content``.
    with open(filename, 'w') as fp:
        fp.write(content)

Figure 5: Python CodeSearch example. The results are searched from 1,156,085 python code data. We only give the top-2 results because space is limited.
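A retrieval like the one shown in Figure 5 could be served with the late-fusion recipe from Appendix D: pre-compute embeddings for all code snippets offline, then rank them by dot product against the query embedding at run time. The NumPy sketch below is our own illustration; `encode_nl` and `encode_code` stand in for the two separate CodeBERT encoder passes.

```python
import numpy as np

def build_code_index(code_snippets, encode_code):
    """Encode every code snippet once, offline, into a matrix of embeddings."""
    return np.stack([encode_code(c) for c in code_snippets])   # (num_codes, dim)

def search(query, code_index, encode_nl, top_k=2):
    """Score a natural-language query against all pre-computed code vectors
    by dot product (the same similarity used inside the softmax of Eq. 15)."""
    q = encode_nl(query)                 # (dim,)
    scores = code_index @ q              # (num_codes,)
    return np.argsort(-scores)[:top_k]   # indices of the best-matching snippets

# At training time, the softmax over these dot products is maximized for the
# ground-truth (code, query) pair and minimized for the distractor codes.
```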
public void addWriteErrorResult(final Bulk WriteError writeError, final IndexMap indexMap) { notNull("writeError", writeError); mergeWriteErrors(asList(writeError), indexMap); } Gold: Add a write error result CodeBERT: Add a write error result . PRE-TRAIN W/ CODEONLY : Merges the given write error . Roberta: Add a write operation to the map . Transformer: Adds an error to the write map . RNN: Add an error map . Figure 6: Java code documentation generation output example. def create_or_update(self, list_id, subscriber_hash, data): subscriber_hash = check_subscriber_hash(subscriber_hash) self.list_id = list_id self.subscriber_hash = subscriber_hash if 'email_address' not in data: raise KeyError('The list member must have an email_address') check_email(data['email_address']) if 'status_if_new' not in data: raise KeyError('The list member must have a status_if_new’) if data['status_if_new’] not in ['subscribed’, 'unsubscribed', 'cleaned', ‘pending’, 'transactional']: raise ValueError('The list member status_if_new must be one of "subscribed", "unsubscribed", "cleaned", "pending", or "transactional"') return self._mc_client._put(url=self._build_path(list_id, 'members', subscriber_hash), data=data) Gold: Add or update a list member . CodeBERT: Create or update a list member . PRE-TRAIN W/ CODEONLY: Create or update a subscriber . Roberta: Create or update an existing record . Transformer: Create or update a subscription . RNN: Creates or updates an email address . # Figure 7: Python code documentation generation output example.
{ "id": "1810.04805" }
2002.07520
Gradient $\ell_1$ Regularization for Quantization Robustness
We analyze the effect of quantizing weights and activations of neural networks on their loss and derive a simple regularization scheme that improves robustness against post-training quantization. By training quantization-ready networks, our approach enables storing a single set of weights that can be quantized on-demand to different bit-widths as energy and memory requirements of the application change. Unlike quantization-aware training using the straight-through estimator that only targets a specific bit-width and requires access to training data and pipeline, our regularization-based method paves the way for "on the fly" post-training quantization to various bit-widths. We show that by modeling quantization as an $\ell_\infty$-bounded perturbation, the first-order term in the loss expansion can be regularized using the $\ell_1$-norm of gradients. We experimentally validate the effectiveness of our regularization scheme on different architectures on CIFAR-10 and ImageNet datasets.
http://arxiv.org/pdf/2002.07520
Milad Alizadeh, Arash Behboodi, Mart van Baalen, Christos Louizos, Tijmen Blankevoort, Max Welling
cs.LG, stat.ML
ICLR 2020
null
cs.LG
20200218
20200218
0 2 0 2 b e F 8 1 ] G L . s c [ 1 v 0 2 5 7 0 . 2 0 0 2 : v i X r a Published as a conference paper at ICLR 2020 # GRADIENT ¢; REGULARIZATION FOR QUANTIZATION ROBUSTNESS # Milad Alizadeh∗ 2,1, Arash Behboodi1, Mart van Baalen1, Christos Louizos1, Tijmen Blankevoort1, and Max Welling1 1Qualcomm AI Research† Qualcomm Technologies Netherlands B.V. {behboodi,mart,clouizos,tijmen,mwelling}@qti.qualcomm.com 2University of Oxford [email protected] # ABSTRACT We analyze the effect of quantizing weights and activations of neural networks on their loss and derive a simple regularization scheme that improves robust- ness against post-training quantization. By training quantization-ready networks, our approach enables storing a single set of weights that can be quantized on- demand to different bit-widths as energy and memory requirements of the ap- plication change. Unlike quantization-aware training using the straight-through estimator that only targets a specific bit-width and requires access to training data and pipeline, our regularization-based method paves the way for “on the fly” post- training quantization to various bit-widths. We show that by modeling quantiza- tion as a £,,-bounded perturbation, the first-order term in the loss expansion can be regularized using the /;-norm of gradients. We experimentally validate the ef- fectiveness of our regularization scheme on different architectures on CIFAR-10 and ImageNet datasets. # INTRODUCTION Deep neural networks excel across a variety of tasks, but their size and computational requirements often hinder their real-world deployment. The problem is more challenging for mobile phones, embedded systems, and IoT devices, where there are stringent requirements in terms of memory, compute, latency, and energy consumption. Quantization of parameters and activations is often used to reduce the energy and computational requirements of neural networks. Quantized neural networks allow for more speed and energy efficiency compared to floating-point models by using fixed-point arithmetic. However, naive quantization of pre-trained models often results in severe accuracy degradation, especially when targeting bit-widths below eight (Krishnamoorthi, 2018). Performant quantized models can be obtained via quantization-aware training or fine-tuning, i.e., learning full-precision shadow weights for each weight matrix with backpropagation using the straight-through estimator (STE) (Bengio et al., 2013), or using other approximations (Louizos et al., 2018). Alternatively, there have been successful attempts to recover the lost model accuracy without requiring a training pipeline (Banner et al., 2018; Meller et al., 2019; Choukroun et al., 2019; Zhao et al., 2019) or representative data (Nagel et al., 2019). But these methods are not without drawbacks. The shadow weights learned through quantization- aware fine-tuning often do not show robustness when quantized to bit-widths other than the one they were trained for (see Table 1). In practice, the training procedure has to be repeated for each quantization target. Furthermore, post-training recovery methods require intimate knowledge of the relevant architectures. While this may not be an issue for the developers training the model in the first ∗Work done during internship at Qualcomm AI Research †Qualcom AI Research is an initiative of Qualcomm Technologies, Inc. 
1 Published as a conference paper at ICLR 2020 place, it is a difficult step for middle parties that are interested in picking up models and deploying them to users down the line, e.g., as part of a mobile app. In such cases, one might be interested in automatically constraining the computational complexity of the network such that it conforms to specific battery consumption requirements, e.g. employ a 4-bit variant of the model when the battery is less than 20% but the full precision one when the battery is over 80%. Therefore, a model that can be quantized to a specific bit-width “on the fly” without worrying about quantization aware fine-tuning is highly desirable. In this paper, we explore a novel route, substantially different from the methods described above. We start by investigating the theoretical properties of noise introduced by quantization and analyze it as a €,.-bounded perturbation. Using this analysis, we derive a straightforward regularization scheme to control the maximum first-order induced loss and learn networks that are inherently more robust against post-training quantization. We show that applying this regularization at the final stages of training, or as a fine-tuning step after training, improves post-training quantization across different bit-widths at the same time for commonly used neural network architectures. # 2 FIRST-ORDER QUANTIZATION-ROBUST MODELS In this section, we propose a regularization technique for robustness to quantization noise. We first propose an appropriate model for quantization noise. Then, we show how we can effectively control the first-order, i.e., the linear part of the output perturbation caused by quantization. When the linear approximation is adequate, our approach guarantees the robustness towards various quantization bit-widths simultaneously. We use the following notation throughout the paper. The ¢,-norm of a vector x in R" is denoted by ||x||, and defined as |||, := (S77, |a:|?)/” for p € [1,00). Atits limit we obtain the 0..-norm defined by ||a||.. := max; |x;|. The inner product of two vectors x and y is denoted by (a, y). ∞ 2.1 ROBUSTNESS ANALYSIS UNDER £-BOUNDED ADDITIVE NOISE The error introduced by rounding in the quantization operation can be modeled as a generic additive perturbation. Regardless of which bit-width is used, the quantization perturbation that is added to each value has bounded support, which is determined by the width of the quantization bins. In other words, the quantization noise vector of weights and activations in neural networks has entries that are bounded. Denote the quantization noise vector by A. If 6 is the width of the quantization bin, the vector A satisfies ||A||,, < 5/2. Therefore we model the quantization noise as a perturbation bounded in the ¢,,.-norm. A model robust to ¢,,-type perturbations would also be robust to quantization noise. To characterize the effect of perturbations on the output of a function, we look at its tractable ap- proximations. To start, consider the first-order Taylor-expansion of a real valued-function f (w +∆) around w: f(w + A) = f(w) + (A, VF (w)) + Ra, (1) where Rg refers to the higher-order residual error of the expansion. We set Rz aside for the moment and consider the output perturbation appearing in the first-order term (A, V f(w)). The maximum of the first-order term among all ¢,,-bounded perturbations A is given by: dy, dy, (A VU (w)) = 6IIV Fw) 2) To prove this, consider the inner product of A and an arbitrary vector x given by )>;_, nix. 
Since |n;| is assumed to be bounded by 46, each n;x; is bounded by 6|:;|, which yields the result. The maximum in Equation[2]is obtained indeed by choosing A = 6 sign(V f(w)). Equation[2|comes with a clear hint. We can guarantee that the first-order perturbation term is small if the £;-norm of the gradient is small. In this way, the first-order perturbation can be controlled efficiently for various values of 6, ie. for various quantization bit-widths. In other words, an ef- fective way for controlling the quantization robustness, up to first-order perturbations, is to control the ¢;-norm of the gradient. As we will shortly argue, this approach yields models with the best robustness. 2 Published as a conference paper at ICLR 2020 2500 2 c § 2000 g a 2 6 2 1500 E 6 = ™ 1000 = Baseline Network 500 9 © Regularized Network 0 5 10 15 20 25 fo-norms of gradients © Quantization config (6, 6) + Quantization config (5, 5) 10? Quantization config (4, 4) 10° 5] wo N 1 5 lo7 , 2 a 4 &% 10-2 — 10-3 1o-* lo-* 10° 10? 10 10° 10! KL Baseline Figure 1: ¢1- and ¢2-norms of the gradients for CIFAR-10 test-set mini-batches. Note the differ- ence between the scales on the horizontal and ver- tical axis. We observe that our regularization term decreases the ¢;-norm significantly, compared to its unregularized counterpart. Figure 2: KL-divergence of the floating point predictive distribution to the predictive distribu- tion of the quantized model for CIFAR-10 test- set mini-batches. We observe that the regulariza- tion leads to a smaller gap, especially for smaller bit- widths. This conclusion is based on worst-case analysis since it minimizes the upper bound of the first-order term, which is realized by the worst-case perturbation. Its advantage, however, lies in simultaneous control of the output perturbation for all δs and all input perturbations. In the context of quantization, this implies that the first-order robustness obtained in this way would hold regardless of the adopted quantization bit-width or quantization scheme. The robustness obtained in this way would persist even if the perturbation is bounded in other ¢,- norms. This is because the set of ¢,.-bounded perturbations includes all other bounded perturba- tions, as for all p € [1,00), |la|» < 6 implies ||a||,. < 6 (see Figure|8) . The robustness to £o-norm perturbations is, therefore, the most stringent one among other /,-norms, because a model should be robust to a broader set of perturbations. Controlling the ¢;-norm of the gradient guarantees robustness to ¢,.-perturbations and thereby to all other ¢,-bounded perturbations. ∞ In what follows, we propose regularizing the ¢;-norm of the gradient to promote robustness to bounded norm perturbations and in particular bounded ¢,,-norm perturbations. These perturbations arise from quantization of weights and activations of neural networks. 2.2 ROBUSTNESS THROUGH REGULARIZATION OF THE ¢;-NORM OF THE GRADIENT We focused on weight quantization in our discussions so far, but we can equally apply the same arguments for activation quantization. Although the activations are not directly learnable, their quantization acts as an additive ¢,.-bounded perturbation on their outputs. The gradient of these outputs is available. It therefore suffices to accumulate all gradients along the way to form a large vector for regularization. 
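As a quick numeric sanity check of Equation 2 (illustrative, not from the paper), the snippet below compares a brute-force maximization of the linear term over the corners of the ℓ∞ ball with the closed form δ‖∇f(w)‖₁ and its maximizer Δ = δ · sign(∇f(w)):

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
grad = rng.normal(size=6)        # a toy gradient vector, n = 6
delta = 0.25                     # l_inf bound on the perturbation

# Closed form from Equation 2 and its maximizer.
closed_form = delta * np.abs(grad).sum()
maximizer = delta * np.sign(grad)

# Brute force over the corners of the l_inf ball: the maximum of a linear
# form over the ball is attained at one of its corners.
corners = itertools.product([-delta, delta], repeat=grad.size)
brute_force = max(np.dot(np.array(c), grad) for c in corners)

assert np.isclose(brute_force, closed_form)
assert np.isclose(np.dot(maximizer, grad), closed_form)
```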
Suppose that the loss function for a deep neural network is given by Lc z(W, Y; x) where W denotes the set of all weights, Y denotes the set of outputs of each activation and x the input. We control the £,-norm of the gradient by adding the regularization term > Vw. Lce(W, Y; x)||, + > IVyLoce(W, Y; x)||, wicw wey Wl to the loss, yielding an optimization target L(W; x) = LCE(W, Y; x) + λw ∈ ∈ LW; 2) = Log (W,Y;@)+dw SD |Vwilou(W,Y;@)|] + Ay SO Vy. Lou(W,Y;2)Il, , W,,EeWw mey ∈ ∈ where λw and λy are weighing hyper-parameters. 3 (3) Published as a conference paper at ICLR 2020 0.25 mm Introduced loss mmm First order prediction 0.20 0.15 0.10 1 | WAM detent h ‘es Hakhh hk L. Ii. “aos L- mi | U | i -0.10 1 5 1s 20 Introduced Loss 10 Batch Number Figure 3: Predicting induced loss using first-order terms. We added ¢,,-bounded noise with 6 correspond- ing to 4-bit quantization to all weights of ResNet-18 and compared the induced loss on the CIFAR-10 test-set with the predictions using gradients. While not perfect, the first-order term is not insignificant. # 2.3. ALTERNATIVES TO THE £;-REGULARIZATION The equivalence of norms in finite-dimensional normed spaces implies that all norms are within a constant factor of one another. Therefore, one might suggest regularizing any norm to control other norms. Indeed some works attempted to promote robustness to quantization noise by controlling the @-norm of the gradient 2019). However, an argument related to the curse of dimensionality can show why this approach will not work. The equivalence of norms for ¢; and ¢2 in n-dimensional space is stated by the inequality: √ lle lly < llelly < Ve [ella - Although the ¢)-norm bounds the ¢;-norm from above, it is vacuous if it does not scale with 1/\/n. Imposing such a scaling is demanding when n, which is the number of trainable parameters, is large. Figure |1]shows that there is a large discrepancy between these norms in a conventionally trained network, and therefore small ¢2-norm does not adequately control the ¢;-norm. A very similar argument can be provided from a theoretical perspective (see the supplementary materials). O(1//n). We experimentally show in Section|4|that this is a difficult task. We therefore directly control the £;-norm in this paper. Note that small ¢;-norm is guaranteed to control the first order- perturbation for all types of quantization noise with bounded support. This includes symmetric and asymmetric quantization schemes. To guarantee robustness, the /2-norm of the ijn therefore, should be pushed as small as Another concern is related to the consistency of the first-order analysis. We neglected the residual term R2 in the expansion. Figure 3 compares the induced loss after perturbation with its first-order approximation. The approximation shows a strong correlation with the induced loss. We will see in the experiments that the quantization robustness can be boosted by merely controlling the first-order term. Nonetheless, a higher-order perturbation analysis can probably provide better approximations. Consider the second-order perturbation analysis: f(w+A) = f(w) + (A, Vf(w)) + SATV?F(w)A + R3. Computing the worst-case second-order term for ¢,,-bounded perturbations is hard. Even for convex functions where V? f(w) is positive semi-definite, the problem of computing worst-case second- order perturbation is related to the mixed matrix-norm computation, which is known to be NP- hard. 
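Returning for a moment to the first-order objective of Equation 3 before continuing with the second-order analysis: it is straightforward to implement with double backpropagation (the paper's experiments use PyTorch). The sketch below regularizes the ℓ1-norm of the gradients with respect to the weights; the activation term of Equation 3 would be handled analogously by also differentiating with respect to the retained activations. The function name and the single shared λ are illustrative assumptions:

```python
import torch
import torch.nn.functional as F

def regularized_loss(model, x, y, lam=0.01):
    """Cross-entropy plus lambda times the sum of l1-norms of the weight gradients (cf. Eq. 3)."""
    logits = model(x)
    ce = F.cross_entropy(logits, y)
    weights = [p for p in model.parameters() if p.requires_grad]
    # create_graph=True keeps the graph of the gradient computation so that
    # the l1 penalty itself can be backpropagated (double backprop).
    grads = torch.autograd.grad(ce, weights, create_graph=True)
    l1_penalty = sum(g.abs().sum() for g in grads)
    return ce + lam * l1_penalty

# Typical use inside the training loop (e.g. only during the final epochs, see Sec. 4.1):
#   loss = regularized_loss(model, x, y, lam)
#   optimizer.zero_grad(); loss.backward(); optimizer.step()
```

As for the second-order term, the mixed matrix norm appearing in its worst-case analysis is intractable: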
There is no polynomial-time algorithm that approximates this norm to some fixed relative precision (Hendrickx & Olshevsky} |2 . For more discussions, see the supplementary materials. It is unclear how this norm should be controlled via regularization. # 3 RELATED WORK A closely related line of work to ours is the analysis of the robustness of the predictions made by neural networks subject to an adversarial perturbation in their input. Quantization can be seen as a similar scenario where non-adversarial perturbations are applied to weights and activations instead. Cisse et al. (2017) proposed a method for reducing the network’s sensitivity to small perturbations 4 √ Published as a conference paper at ICLR 2020 by carefully controlling its global Lipschitz. The Lipschitz constant of a linear layer is equal to the spectral norm of its weight matrix, i.e., its largest singular value. The authors proposed regularizing weight matrices in each layer to be close to orthogonal: Vwrew ||wPw, - |’. All singular values of orthogonal matrices are one; therefore, the operator does not amplify perturbation (and input) in any direction. studied the effect of this regularization in the context of quantized networks. The authors demonstrate the extra vulnerability of quantized models to adversarial attacks and show how this regularization, dubbed “Defensive Quantization”, improves the robustness of quantized networks. While the focus of is on improving the adversarial robustness, the authors report limited results showing accuracy improvements of post- training quantization. The idea of regularizing the norm of the gradients has been proposed before (Gulrajani et al.|{2017) in the context of GANs, as another way to enforce Lipschitz continuity. A differentiable function is 1-Lipschitz if and only if it has gradients with ¢-norm of at most | everywhere, hence the authors penalize the @-norm of the gradient of the critic with respect to its input. This approach has a major advantage over the methods mentioned above. Using weight regularization is only well-defined for 2D weight matrices such as in fully-connected layers. The penalty term is often approximated for convolutional layers by reshaping the weight kernels into 2D matrices. showed that the singular values found in this weight could be very different from the actual operator norm of the convolution. Some operators, such as nonlinearities, are also ignored. Regularizing Lipschitz constant through gradients does not suffer from these shortcomings, and the operator-norm is reg- ularized directly. |Guo et al.|(2018) demonstrated that there exists an intrinsic relationship between sparsity in DNNs and their robustness against ¢,, and f2 attacks. For a binary linear classifier, the authors showed that they could control the ¢,, robustness, and its relationship with sparsity, by reg- ularizing the ¢; norm of the weight tensors. In the case of a linear classifier, this objective is, in fact, equivalent to our proposed regularization penalty. Finally, another line of work related to ours revolves around quantization-aware training. This can, in general, be realized in two ways: 1) regularization and 2) mimicking the quantization procedure during the forward pass of the model. 
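For reference, the orthogonality regularizer of Parseval networks and Defensive Quantization discussed above, the sum over layers of ‖WᵀW − I‖², can be sketched as follows. Reshaping convolutional kernels into 2D matrices is exactly the approximation criticized above; this is an illustrative sketch, not the original implementations:

```python
import torch

def orthogonality_penalty(model):
    """Sum over layers of ||W^T W - I||_F^2 (Parseval / Defensive-Quantization style sketch)."""
    penalty = 0.0
    for p in model.parameters():
        if p.dim() == 2:
            w = p                                  # fully-connected weight matrix
        elif p.dim() == 4:
            w = p.reshape(p.size(0), -1)           # conv kernels reshaped to 2D (the approximation noted above)
        else:
            continue
        gram = w.t() @ w
        eye = torch.eye(gram.size(0), device=w.device, dtype=w.dtype)
        penalty = penalty + ((gram - eye) ** 2).sum()
    return penalty

# e.g.  loss = task_loss + beta * orthogonality_penalty(model)
```

Returning to quantization-aware training, which as noted above can be realized either through regularization terms of this kind or by simulating quantization in the forward pass: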
In the first case, we have methods (Yin et al., 2018; Achter- hold et al., 2018) where there are auxiliary terms introduced in the objective function such that the optimized weights are encouraged to be near, under some metric, to the quantization grid points, thus alleviating quantization noise. In the second case, we have methods that rely on either the STE (Courbariaux et al., 2015; Rastegari et al., 2016; Jacob et al., 2018), stochastic rounding (Gupta et al., 2015; Gysel, 2016), or surrogate objectives and gradients (Louizos et al., 2018; Shayer et al., 2017). While all of the methods above have been effective, they still suffer from a major limitation; they target one-specific bit-width. In this way, they are not appropriate for use-cases where we want to be able to choose the bit-width “on the fly”. # 4 EXPERIMENTS In this section we experimentally validate the effectiveness of our regularization method on im- proving post-training quantization. We use the well-known classification tasks of CIFAR-10 with ResNet-18 2016) and VGG-like (Simonyan & Zisserman} 2014) and of ImageNet with ResNet-18. We compare our results for various bit-widths against (1) unregularized baseline networks (2) Lipschitz regularization methods (Lin et al.| {2019} |Gulrajani et al.| (2017) and (3) quantization-aware fine-tuned models. Note that (2017) control the Lipschitz con- stant under an fy metric by explicitly regularizing the 0-norm of the gradient, while[Lin et al]2019) essentially control an upper bound on the @2-norm of the gradient. Comparing against these base- lines thus gives insight into how our method of regularizing the ¢;-norm of the gradient compares against regularization of the 2-norm of the gradient. 4.1 EXPERIMENTAL SETUP Implementation and complexity Adding the regularization penalty from Equation|3|to the train- ing objective requires higher-order gradients. This feature is available in the latest versions of frame- works such as Tensorflow and PyTorch (of which we have used the latter for all our experiments). Computing Vy||Vw||1 using automatic differentiation requires O(2 x C x E) extra computations, where E is the number of elementary operations in the original forward computation graph, and C’ 5 Published as a conference paper at ICLR 2020 (a) (b) # g Figure 4: Accuracy of regularized VGG-like after post-training quantization. We trained 5 models with different initializations and show the mean accuracy for each quantization configuration. The error bars indicate min/max observed accuracies. (a) Weight-only quantization (b) Activation quantization fixed to 4-bits is a fixed constant (Baydin et al.||2018). This can be seen from the fact that || VwL||1 is a function R!! — R, where |w] denotes the number of weights and the computation of the gradient w.r.t. the loss contains E elementary operations, as many as the forward pass. In practice, enabling regular- ization increased time-per-epoch time on CIFAR1O from 14 seconds to 1:19 minutes for VGG, and from 24 seconds to 3:29 minutes for ResNet-18. On ImageNet epoch-time increased from 33:20 minutes to 4:45 hours for ResNet-18. The training was performed on a single NVIDIA RTX 2080 Ti GPU. However, in our experiments we observed that it is not necessary to enable regularization from the beginning, as the ¢;-norm of the gradients decreases naturally up to a certain point as the training progresses (See Appendix [D]for more details). 
We therefore only enable regularization in the last 15 epochs of training or as an additional fine-tuning phase. We experimented with tuning ,, and A, in Equation[3]separately but found no benefit. We therefore set ,, = Ay = A for the remainder of this section. We use a grid-search to find the best setting for λ. Our search criteria is ensuring that the perfor- mance of the unquantized model is not degraded. In order to choose a sensible range of values we first track the regularization and cross-entropy loss terms and then choose a range of λ that ensures their ratios are in the same order of magnitude. We do not perform any quantization for validation purposes during the training. Quantization details We use uniform symmetric quantization (Jacob et al., 2018; Krishnamoorthi, 2018) in all our experiments unless explicitly specified otherwise. For the CIFAR 10 experiments we fix the activation bit-widths to 4 bits and then vary the weight bits from 8 to 4. For the Imagenet experiments we use the same bit-width for both weights and activations. For the quantization-aware fine-tuning experiments we employ the STE on a fixed (symmetric) quantization grid. In all these experiments we perform a hyperparameter search over learning rates for each of the quantization bit-widths and use a fixed weight decay of 1e − 4. For our experiments with defensive quantization (Lin et al., 2019) we perform a hyperparameter search over the scaling parameters of the regularizer and the learning rate. We limit the search over the scaling parameters to those mentioned in (Lin et al., 2019) and do not use weight decay. When applying post-training quantization we set the activation ranges using the batch normalization parameters as described in (Nagel et al., 2019). When a model is fine-tuned to a target bit-width and evaluated on a higher bit-width, we can trivially represent the original quantized weights and activations by ignoring the higher-order bits, or quantize using the higher bit-width. As using the higher bit-width to quantize shadow weights and activations introduces noise to the model and might yield lower results, we try both approaches and only report a result if quantization using the higher bit-width gives better results. 6 Published as a conference paper at ICLR 2020 egulared (ul Preston egulared (ul Preston Figure 5: Random cross sections of decision boundaries in the input space. To generate these cross- sections, we draw a random example from the CIFAR-10 test set (represented by the black dot in the center) and pass a random two-dimensional hyper-plane ⊂ R1024 through it. We then evaluate the network’s output for each point on the hyper-plane. Various colors indicate different classes. Softmax’s maximum values determine the contours. The top row illustrates the difference between the baseline and the regularized VGG-like networks (and their quantized variants) when they all classify an example correctly. The bottom row depicts a case where the quantized baseline misclassifies an example while the regularized network predicts the correct class. We can see that our regularization pushes the decision boundaries outwards and enlarges the decision cells. 4.2 EFFECTS OF REGULARIZATION In order to get a better understanding of our proposed regularizer, we first adopt the visualization method from Hoffman et al. (2019) and illustrate the effects that the quantization in general, and our method in particular, have on the trained classifier’s decision boundaries. 
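For reference, the post-training uniform symmetric quantization applied throughout these experiments can be sketched as below. This is a simplified per-tensor version; the handling of activation ranges via batch-normalization parameters (Nagel et al., 2019) and any per-channel details are omitted:

```python
import torch

def quantize_symmetric(w, num_bits):
    """Uniform symmetric quantization of a tensor to num_bits (simplified, per-tensor sketch)."""
    qmax = 2 ** (num_bits - 1) - 1                 # e.g. 127 for 8 bits, 7 for 4 bits
    scale = w.abs().max() / qmax                   # width of a quantization bin
    w_int = torch.clamp(torch.round(w / scale), -qmax, qmax)
    return w_int * scale                           # de-quantized ("fake quantized") weights

# "On the fly" post-training quantization of a trained model to a chosen bit-width:
#   with torch.no_grad():
#       for p in model.parameters():
#           p.copy_(quantize_symmetric(p, num_bits=4))
```

Returning to the decision-boundary visualization introduced above: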
The result can be seen in Figure 5, where we empirically observe that the regularized networks “expands” its decision cells. Secondly, we investigate in Figure [I] the £,- and ¢9-norms of the gradients for all CIFAR-10 test batches on the VGG-like model. We can observe that while the £2-norms of the gradient are small in the unregularized model, the /;-norms are orders of magnitude larger. Consequently, when fine- tuning the same model with our method, we see a strong decrease of the ¢;-norm. Finally, we investigate how the predictive distribution of the floating point model, p(y|x), changes when we quantize either an unregularized baseline or a model regularized with our method, thus obtaining q(y|x). We measure this discrepancy using the KL-divergence of the original predictive when using the predictive distribution of the quantized model, i.e. DKL(p(y|x)||q(y|x)), averaged over each test batch. Since our method improves robustness of the loss gradient against small per- turbations, we would expect the per-class probabilities to be more robust to perturbations as well, and thus more stable under quantization noise. The result can be seen in Figure 2, where we indeed observe that the gap is smaller when quantizing our regularized model. 4.3 CIFAR-10 & IMAGENET RESULTS The classification results from our CIFAR-10 experiments for the VGG-like and ResNet18 networks are presented in Table 1, whereas the result from our Imagenet experiments for the ResNet18 net- work can be found in Table 2. Both tables include all results relevant to the experiment, including results on our method, Defensive Quantization regularization, L2 gradient regularization and fine- tuning using the STE. Comparison to “Defensive Quantization” As explained in Section 3, Defensive Quantization (Lin et al., 2019) aims to regularize each layer’s Lipschitz constant to be close to 1. Since the 7 Published as a conference paper at ICLR 2020 FP VGG-like (8,4) (6,4) (4,4) FP ResNet-18 (8,4) (6,4) (4,4) No Regularization DQ Regularization L2 Regularization L1 Regularization (Ours) 92.49 91.51 91.88 92.63 79.10 86.30 86.64 89.74 78.84 84.29 86.14 89.78 11.47 30.86 63.93 85.99 93.54 92.46 93.31 93.36 85.51 83.31 84.50 88.70 85.35 83.34 84.99 88.45 83.98 82.47 83.82 87.62 STE @ (8,4) STE @ (6,4) STE @ (4,4) – – – 91.28 – – 89.99 90.25 – 32.83 39.56 89.79 – – – 89.10 – – 87.79 90.77 – 86.21 88.17 89.98 Table 1: Test accuracy (%) for the VGG-like and ResNet-18 models on CIFAR-10. STE @ (X,X) indicates the weight-activation quantization configuration used with STE for fine-tuning. DQ denotes Defensive Quantization (Lin et al., 2019). For the No Regularization row of results we only report the mean of 5 runs. The full range of the runs is shown in Figure 4. FP Configuration (6,6) (8,8) (4,4) No Regularization DQ Regularization L2 Regularization L1 Regularization (Ours) L1 Regularization (Ours) (λ = 0.05) 69.70 68.28 68.34 70.07 64.02 69.20 67.76 68.02 69.92 63.76 63.80 62.31 64.52 66.39 61.19 0.30 0.24 0.19 0.22 55.32 STE @ (8,8) STE @ (6,6) STE @ (4,4) – – – 70.06 – – 60.18 69.63 – 0.13 11.34 57.50 Table 2: Test accuracy for the ResNet-18 architecture on ImageNet. STE @ (X,X) indicates the weight-activation quantization configuration used with STE for fine-tuning. In addition to the λ we found through the grid-search which maintains FP accuracy, we also experimented with a stronger λ = 0.05 to show that (4,4) accuracy can be recovered at the price of overall lower performance. 
regularization approach taken by the authors is similar to our method, and the authors suggest that their method can be applied as a regularization for quantization robustness, we compare their method to ours. As the experiments from the original paper differ methodologically from ours in that we quantize both weights and activations, all results on defensive quantization reported in this paper are produced by us. We were able to show improved quantization results using defensive quantization for CIFAR-10 on VGG-like, but not on any of the experiments on ResNet18. We attribute this behavior to too stringent regularization in their approach: the authors regularize all singular values of their (reshaped) convolutional weight tensors to be close to one, using a regularization term that is essentially a fourth power regularization of the singular values of the weight tensors (see Appendix C). This regularization likely inhibits optimization. Comparison to explicit /.-norm gradient regularization We consider the ¢2 regularization of the gradient, as proposed by |Gulrajani et al.| (2017), as a generalization of the DQ regularization. Such regularization has two key benefits over DQ: 1) we can regularize the singular values without reshaping the convolutional kernels and 2) we impose a less stringent constraint as we avoid enforc- ing all singular values to be close to one. By observing the results at Table[I]and[2} we see that the £5 regularization indeed improves upon DQ. Nevertheless, it provides worse results compared to our 4, regularization, an effect we can explain by the analysis of Section[2] Comparison to quantization-aware fine-tuning While in general we cannot expect our method to outperform models to which quantization-aware fine-tuning is applied on their target bit-widths, as in this case the model can adapt to that specific quantization noise, we do see that our model performs on par or better when comparing to bit-widths lower than the target bit-width. This is in line with our expectations: the quantization-aware fine-tuned models are only trained to be robust to a specific noise distribution. However, our method ensures first-order robustness regardless of bit- 8 Published as a conference paper at ICLR 2020 width or quantization scheme, as explained in Section 2. The only exception is the 4 bit results on ImageNet. We hypothesize that this is caused by the fact that we tune the regularization strength λ to the highest value that does not hurt full-precision results. While stronger regularization would harm full-precision performance, it would also most likely boost 4 bit results, due to imposing robustness to a larger magnitude, i.e. δ, of quantization noise. Table 1 includes results for a higher value of δ that is in line with this analysis. # 5 CONCLUSION In this work, we analyzed the effects of the quantization noise on the loss function of neural net- works. By modelling quantization as an ¢,,-bounded perturbation, we showed how we can con- trol the first-order term of the Taylor expansion of the loss by a straightforward regularizer that encourages the ¢,-norm of the gradients to be small. We empirically confirmed its effectiveness, demonstrating that standard post-training quantization to such regularized networks can maintain good performance under a variety of settings for the bit-width of the weights and activations. 
As a result, our method paves the way towards quantizing floating-point models “on the fly” according to bit-widths that are appropriate for the resources currently available. # ACKNOWLEDGMENTS We would like to thank Markus Nagel, Rana Ali Amjad, Matthias Reisser, and Jakub Tomczak for their helpful discussions and valuable feedback. # REFERENCES Jan Achterhold, Jan Mathias Koehler, Anke Schmeink, and Tim Genewein. Variational network quantization. International Conference on Learning Representations, 2018. R Banner, Y Nahshan, E Hoffer, and D Soudry. Post training 4-bit quantization of convolution networks for rapid-deployment. CoRR, abs/1810.05723, 1:2, 2018. Atilim Gunes Baydin, Barak A Pearlmutter, Alexey Andreyevich Radul, and Jeffrey Mark Siskind. Automatic differentiation in machine learning: a survey. Journal of machine learning research, 18(153), 2018. Yoshua Bengio, Nicholas L´eonard, and Aaron Courville. Estimating or propagating gradients through stochastic neurons for conditional computation. arXiv preprint arXiv:1308.3432, 2013. Vijay Bhattiprolu, Mrinalkanti Ghosh, Venkatesan Guruswami, Euiwoong Lee, and Madhur Tul- siani. Inapproximability of matrix p → q norms. arXiv preprint arXiv:1802.07425, 2018. Yoni Choukroun, Eli Kravchik, and Pavel Kisilev. Low-bit quantization of neural networks for efficient inference. arXiv preprint arXiv:1902.06822, 2019. Moustapha Cisse, Piotr Bojanowski, Edouard Grave, Yann Dauphin, and Nicolas Usunier. Parseval networks: Improving robustness to adversarial examples. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pp. 854–863. JMLR. org, 2017. Matthieu Courbariaux, Yoshua Bengio, and Jean-Pierre David. Binaryconnect: Training deep neural networks with binary weights during propagations. In Advances in neural information processing systems, pp. 3123–3131, 2015. Ishaan Gulrajani, Faruk Ahmed, Martin Arjovsky, Vincent Dumoulin, and Aaron C Courville. Im- proved training of wasserstein gans. In Advances in neural information processing systems, pp. 5767–5777, 2017. Yiwen Guo, Chao Zhang, Changshui Zhang, and Yurong Chen. Sparse dnns with improved adver- sarial robustness. In Advances in neural information processing systems, pp. 242–251, 2018. 9 Published as a conference paper at ICLR 2020 Suyog Gupta, Ankur Agrawal, Kailash Gopalakrishnan, and Pritish Narayanan. Deep learning with limited numerical precision. In International Conference on Machine Learning, pp. 1737–1746, 2015. Philipp Gysel. Ristretto: Hardware-oriented approximation of convolutional neural networks. arXiv preprint arXiv:1605.06402, 2016. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recog- nition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770–778, 2016. Julien M. Hendrickx and Alex Olshevsky. Matrix p-Norms Are NP-Hard to Approximate If p 4 1, 2,00. SIAM Journal on Matrix Analysis and Applications, 31(5):2802-2812, 2010. Wassily Hoeffding. Probability Inequalities for Sums of Bounded Random Variables. Journal of the American Statistical Association, 58(301):13–30, March 1963. Judy Hoffman, Daniel A. Roberts, and Sho Yaida. Robust Learning with Jacobian Regularization. arXiv:1908.02729 [cs, stat], August 2019. arXiv: 1908.02729. Benoit Jacob, Skirmantas Kligys, Bo Chen, Menglong Zhu, Matthew Tang, Andrew Howard, Hartwig Adam, and Dmitry Kalenichenko. Quantization and training of neural networks for effi- cient integer-arithmetic-only inference. 
In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2018. Raghuraman Krishnamoorthi. Quantizing deep convolutional networks for efficient inference: A whitepaper. arXiv preprint arXiv:1806.08342, 2018. Ji Lin, Chuang Gan, and Song Han. Defensive quantization: When efficiency meets robustness. arXiv preprint arXiv:1904.08444, 2019. Christos Louizos, Matthias Reisser, Tijmen Blankevoort, Efstratios Gavves, and Max Welling. Re- laxed quantization for discretized neural networks. arXiv preprint arXiv:1810.01875, 2018. Eldad Meller, Alexander Finkelstein, Uri Almog, and Mark Grobman. Same, same but different In International - recovering neural network quantization error through weight factorization. Conference on Machine Learning, 2019. Markus Nagel, Mart van Baalen, Tijmen Blankevoort, and Max Welling. Data-free quantization through weight equalization and bias correction. arXiv preprint arXiv:1906.04721, 2019. Mohammad Rastegari, Vicente Ordonez, Joseph Redmon, and Ali Farhadi. Xnor-net: Imagenet classification using binary convolutional neural networks. In European Conference on Computer Vision, pp. 525–542. Springer, 2016. Hanie Sedghi, Vineet Gupta, and Philip M Long. The singular values of convolutional layers. arXiv preprint arXiv:1805.10408, 2018. Oran Shayer, Dan Levi, and Ethan Fetaya. Learning discrete weights using the local reparameteri- zation trick. arXiv preprint arXiv:1710.07739, 2017. Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014. Penghang Yin, Shuai Zhang, Jiancheng Lyu, Stanley Osher, Yingyong Qi, and Jack Xin. Bina- ryrelax: A relaxation approach for training deep neural networks with quantized weights. SIAM Journal on Imaging Sciences, 11(4):2205–2223, 2018. Ritchie Zhao, Yuwei Hu, Jordan Dotzel, Chris De Sa, and Zhiru Zhang. Improving neural network quantization without retraining using outlier channel splitting. In International Conference on Machine Learning, pp. 7543–7552, 2019. 10 Published as a conference paper at ICLR 2020 Empirical quantization perturbation distribution ool Figure 6: Quantization noise is uniformly distributed. In this plot we show the quantization noise on each individual weight in an ImageNet trained ResNet18 model. The noise is scaled by the width of the quantization bin for each weight quantizer. This plot shows that quantization noise is uniformly distributed between −δ/2 and δ/2. # A ROBUSTNESS ANALYSIS FOR QUANTIZATION PERTURBATIONS In this section, we address two questions in more details, first regarding regularization of the /2-norm of gradient and second regarding non-uniform quantization schemes. We argued above that regularizing the ¢:-norm of gradient cannot achieve the same level of ro- bustness as regularization of the ¢;-norm of gradient. We provide here another, more theoretical, argument. The following inequality shows how the /2-norm of gradient controls the first-order per- turbation: (A, Vf(w)) < |All IVS) |l2 - This is a simple Cauchy-Shwartz inequality. Therefore, if the 2-norm of the gradient is inversely proportional to the power of the perturbation, the first-order term is adequately controlled. However, using a theoretical argument, we show that the power of the £,,-bounded perturbation can blow up with the dimension as a vector A in R” with || A||,, = 6 can reach an ¢j-norm of approximately s/no. 
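A small simulation (illustrative, not from the paper) makes this concrete: typical uniform quantization noise over n weights already has an ℓ2-norm of about δ√(n/12), and the worst-case ℓ∞-bounded vector reaches a multiple of δ√n, so the ℓ2-norm of the gradient would have to shrink like 1/√n to give a comparable guarantee:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000              # number of weights (illustrative)
delta = 2 ** -7            # width of a quantization bin (illustrative)

# Quantization noise: one entry per weight, uniform on [-delta/2, delta/2].
noise = rng.uniform(-delta / 2, delta / 2, size=n)

print(np.linalg.norm(noise))          # measured l2-norm, concentrates sharply
print(delta * np.sqrt(n / 12))        # predicted typical l2-norm, about 0.29 * delta * sqrt(n)
print((delta / 2) * np.sqrt(n))       # maximal l2-norm of any vector with entries bounded by delta/2
```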
In other words, the length of the quantization noise behaves with high probability as O(,/7n), which implies that the the ¢2-norm of the gradient should be as small as O(1/,/7). We show that this can indeed occur with high probability for any random quantization noise with the bounded support. Note that for symmetric uniform quantization schemes, quantization noise can be approximated well by a uniform distribution over [—5/2, 5/2] where 6 is the width of the quantization bin. See Figures|6]for the empirical distribution of quantization noise on the weights of a trained network. Our argument, however, works for any distribution supported over [—6/2, 5/2], and, therefore, it includes asymmetric quantization schemes over a uniform quantization bin. Consider a vector © = (x1,...,%n)7 € R” with entries x; randomly and independently drawn from a distribution supported on [—5/2, 6/2]. We would like to show that ||2||} is well concentrated around its expected values. To do that we are going to write down the above norm as the sum of independent zero-mean random variables. See that: B (lel) = ® (S522) =n) - ee i=1 i ∈ [0, δ2/4]. Therefore x2 i − δ2/12 is a zero-mean random variable that lies Besides, note that x2 in the interval [−δ2/12, δ2/6]. We can now use Hoeffding’s inequality. To be self-contained, we include the theorem below. Theorem A.1 (Hoeffding’s inequality, (Hoeffding, 1963)). Let X1, . . . , Xn be a sequence of in- dependent zero-mean random variables such that Xi is almost surely supported on [ai, bi] for 11 Published as a conference paper at ICLR 2020 i ∈ {1, . . . , n}. Then, for all t > 0, it holds that P sox. >t)< 20 (4) . exp ( —-————"____ PU] ON SG a? i=1 i=1 03 =) < 2exp (- SG aa? = =) (5) ux i=l Applying Theorem A.1 to our setting, we obtain: P (lial - nd? /12| > t) <2exp (-savpe) . So with probability 1 — €, we have: nb 1/2 lel —né?/12| < (revo) Therefore, if the quantization noise A has entries randomly drawn from a distribution over [-6/2, 5/2], then with probability 1 — ¢, the squared £2-norm of A, ice., AIS. lies in the interval [s - NE log(2/e), ), 2 } /% log( 2/9], In other words, the length of the vector behaves with high probability as O(/n). This result holds for any quantization noise with bounded support. If the quantization bins are non-uniformly chosen, and if the weights can take arbitrarily large val- ues, the quantization noise is no-longer bounded in general. As long as the quantization noise has a Gaussian tail, i.e., it is a subgaussian random variable, one can use Hoeffding’s inequality for subgaussian random variables to show a similar concentration result as above. The power of the perturbation will, therefore, behave with O(,/7), and the £2-norm of the gradient cannot effectively control the gradient. Note that nonuniform quantization schemes are not commonly used for hard- ware implementations, hence, our focus on uniform cases. Besides, the validity of this assumption about nonuniform quantization noise requires further investigation, which is relegated to our future works. # B SECOND-ORDER PERTURBATION ANALYSIS We start by writing the approximation of f (·) up to the second-order term: 1 ; f(w+A) = f(w) + (A, Vf(w )) + sATV? F(w)A + Ro. The worst-case second-order term under ¢,,-bounded perturbations is given by δ ∆T ∇2f (w)∆. The above value is difficult to quantify for general case. We demonstrate this difficulty by consider- ing some special cases. Let’s start with convex functions, for which the Hessian ∇2f (w) is positive semi-definite. 
In this case, the Hessian matrix admits a square root, and the second-order term can be written as: . >. W2 ATV? f(w)A = ATV? /(w))2(V?f(w)) PA = |[(V? fw) 7A]. . Therefore the worst-case analysis of the second-term amounts to UBS, Aty? f(w)A= max, |w2rew))'2alf The last term is the mixed co + 2-norm of (V?f(w))!/?. As a reminder, the p + q-matrix norm is defined as JAlpoa = mx, Aly = ms, (yA) = AT oy uv vel 12 Published as a conference paper at ICLR 2020 where p∗, q∗ denote the dual norms of p and q, i.e. satisfying 1/p + 1/p∗ = 1/q + 1/q∗ = 1. The worst case second-order perturbation is given by: x ATV? f(w)A = 2 x ATV? f(w)A = 6*||(V2f(w))"|| Irl].<6 cod Unfortunately the ∞ → 2-norm is known to be NP-hard ((Hendrickx & Olshevsky, 2010); see Bhattiprolu et al. (2018) for a more recent study). As a matter of fact, if f (·) is positive semi- definite, and hence the function is convex, the problem above corresponds to maximization of convex functions, which is difficult as well. For a general Hessian, the problem is still difficult to solve. First note that: max ATV? f(w)A = max Tr(V?f(w)AA*). ro $8 ro $8 We can therefore replace ∆∆T with a positive semi-defintite matrix of rank 1 denoted by N . The worst case second-order perturbation can be obtained by solving the following problem: max Tr (V? f(w).N) (6) Nex” subject to N = 0 Nii <6 fori € {1,...,n} rank(N) = 1. The last constraint, namely the rank constraint, is a discrete constraint. The optimization problem above is therefore NP-hard to solve. To sum up, the worst case second-order perturbation cannot be efficiently computed, which poses difficulty for controlling the second-order robustness. There are, however, approximations available in the literature. A common approximation, which is widely known for the Max-Cut and community detection problems, consists of dropping the rank- constraint from the above optimization problem to get the following semi-definite program: ax Tr (V7 f(w)N 7 vinax,,, Tr (V"f(w)N) (7) subject to N = 0 Nu <6? fori e {1,...,n} Unfortunately this approximation, apart from being costly to solve for large n, does not provide a regularization parameter that can be included in the training of the model. It is not clear how we can control the second-order term through a tractable term. C DEFENSIVE QUANTIZATION IMPOSES A 4TH POWER CONSTRAINT ON SINGULAR VALUES From basic linear algebra we have that Ws = TH(WTW) = Soo? (W), i i.e., the Frobenius norm is equal to sum of the squared singular values of W.. From this we can conclude that the regularization term ||W7W — I||3 introduced by|Lin et al.] 2019) thus equals introduced by|Lin et al.] 2019) thus equals —1) => |o2(W) -1)", |W? w —18 = S03 (Wlw —1) => |o2(W) -1)", 7 a and therefore imposes a 4th power regularization term on the singular values of W . A softer regu- larization can be introduced by regularizing Tr(W T W − I) instead. 13 Published as a conference paper at ICLR 2020 # D GRADIENT-PENALTY PROGRESSION IN NON-REGULARIZED NETWORKS Optimizing our regularization penalty requires computing gradients of the gradients. While this is easily done by double-backpropagation in modern software frameworks it introduces overhead (as discussed in Section 4.1) and makes training slower. However, as the training progresses, the gradients in unregularized networks tend to become smaller as well, which is inline with our reg- ularization objective. It is therefore not necessary to apply the regularization from the beginning of training. 
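In a training loop this observation amounts to gating the penalty on the epoch index. Below is a self-contained toy sketch; the model, data, 15-epoch window and λ are illustrative, and the penalty is the same double-backpropagation term as in Section 2.2:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy setup, purely to make the schedule concrete.
model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
x, y = torch.randn(256, 32), torch.randint(0, 10, (256,))

num_epochs, reg_epochs, lam = 100, 15, 0.01       # regularize only the last 15 epochs

for epoch in range(num_epochs):
    logits = model(x)
    loss = F.cross_entropy(logits, y)
    if epoch >= num_epochs - reg_epochs:          # late-phase gradient-l1 regularization
        grads = torch.autograd.grad(loss, list(model.parameters()), create_graph=True)
        loss = loss + lam * sum(g.abs().sum() for g in grads)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```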
In Figure 7 we show examples of how the regularization objective naturally decreases during training. We also show how turning the regularization on in the final epochs where the regu- larization objective is oscillating can push the loss further down towards zero. (a) VGG-like = © = S é 6 # a 5 a B © # (b) ResNet-18 Figure 7: The gradients in unregularized networks tend to become smaller as training progresses. This means for large parts of the training there is no need to apply the regularization. The plots on the left show the regularization penalty in unregularized networks. The plots on the right show how turning on the regularization in the last 15 epochs of the training can push the regularization loss even further down. E ¢,,-BOUNDED PERTURBATIONS INCLUDE OTHER BOUNDED-NORM PERTURBATIONS Figure[8|show that the Figure[8|show that the £,,-bounded perturbations include all other bounded-norm perturbations. ∞ 14 Published as a conference paper at ICLR 2020 Figure 8: ¢..-bounded vectors include other bounded- norm vectors. In this plot we show that the pertur- bations with bounded £,-norm are a subset of £,.-bounded perturbations. For p = 1, 2, 00, we plot the vectors with |||], = 1. 15
{ "id": "1806.08342" }
2002.07526
A Survey of Deep Learning Techniques for Neural Machine Translation
In recent years, natural language processing (NLP) has advanced greatly with deep learning techniques. In the sub-field of machine translation, a new approach named Neural Machine Translation (NMT) has emerged and received massive attention from both academia and industry. However, although a significant number of studies have been proposed in the past several years, there is little work investigating the development process of this new technology trend. This literature survey traces back the origin and principal development timeline of NMT, investigates the important branches, categorizes different research orientations, and discusses some future research trends in this field.
http://arxiv.org/pdf/2002.07526
Shuoheng Yang, Yuxin Wang, Xiaowen Chu
cs.CL
null
null
cs.CL
20200218
20200218
0 2 0 2 b e F 8 1 ] L C . s c [ 1 v 6 2 5 7 0 . 2 0 0 2 : v i X r a # A Survey of Deep Learning Techniques for Neural Machine Translation Shuoheng Yang, Yuxin Wang, Xiaowen Chu Department of Computer science Hong Kong Baptist University Hong Kong, China [email protected], {yxwang, chxw}@comp.hkbu.edu.hk Abstract—In recent years, natural language processing (NLP) has got great development with deep learning techniques. In the sub-field of machine translation, a new approach named Neural Machine Translation (NMT) has emerged and got massive attention from both academia and industry. However, with a significant number of researches proposed in the past several years, there is little work in investigating the development process of this new technology trend. This literature survey traces back the origin and principal development timeline of NMT, investigates the important branches, categorizes different research orientations, and discusses some future research trends in this field. Index Terms—Neural Machine Translation, Deep Learning, Attention Mechanism. # I. INTRODUCTION A. Introduction of Machine Translation summarized the conventional methods in sequence learning [120], which provided essential information for the origin of NMT as well as the related base knowledge. Britz et al. and Tobias Domhan have done some model comparison work in NMT with experiment and evaluation in the practical performance of some wildly accepted technologies, but they have rare theoretical analysis, especially in presenting the relationship between different proposed models [22] [118]. On the other hand, some researchers limited their survey work on a special part related to NMT the Attention Mechanism, but both of them have a general scope that oriented to all kinds of AI tasks with Attention [116] [117]. Maybe the most related work was an earlier doctoral thesis written by Minh- Thang Luong in 2016 [156], which included a comprehensive description about the original structure of NMT as well as some wildly applied tips. Machine translation (MT) is a classic sub-field in NLP that investigates how to use computer software to translate the text or speech from one language to another without human involvement. Since MT task has a similar objective with the final target of NLP and AI, i.e., to fully understand the human text (speech) at semantic level, it has received great attention in recent years. Besides the scientific value, MT also has huge potential of saving labor cost in many practical applications, such as scholarly communication and international business negotiation. Machine translation task has a long research history with many efficient methods proposed in the past decades. Re- cently, with the development of Deep Learning, a new kind of method called Neural Machine Translation (NMT) has emerged. Compared with the conventional method like Phrase- Based Statistical Machine Translation (PBSMT), NMT takes advantages in its simple architecture and ability in capturing long dependency in the sentence, which indicates a huge potential in becoming a new trend of the mainstream. After a primitive model in origin, there are a variety of NMT models being proposed, some of which have achieved great progresses with the state-of-the-art result. This paper summarizes the major branches and recent progresses in NMT and discusses the future trend in this field. B. Related Work and Our Contribution Although there is little work in the literature survey of NMT, some other works are highly related. Lipton et al. 
have This paper, however, focuses on a direct and up-to-date literature survey about NMT. We have investigated a lot about the relevant literature in this new trend and provided compre- hensive interpretation for current mainstream technology in NMT. As for concrete components, this literature survey investi- gates the origin and recent progresses in NMT, categorizes these models by their different orientation in the model struc- ture. Then we demonstrate the insight of these NMT types, summarize the strengths and weaknesses by reviewing their design principal and corresponding performance analysis in translation quality and speed. We also give a comprehensive overview of two components in NMT development, namely attention mechanism and vocabulary coverage mechanism, both of which are indispensable for current achievement. At last, we give concentration on some literature which proposed advanced models with comparison work; we introduce these considerable models as well as the potential direction in future work. Regarding the survey scope, some subareas of NMT with less attention were deliberately left out of the scope except with brief description in future trend. These include but are not limited to the literature of robustness of NMT, domain adaptation in NMT and other applications that embed NMT method (such as speech translation, document translation). Although the research scope has been specifically designed, due to the numerous of researches and the inevitable expert selection bias, we believe that our work is merely a snapshot of part of current research rather than all of them. We are hoping that our work could provide convenience for further research. The remaining of the paper is organized as follows. Sec- tion II provides an introduction of machine translation and presents its history of development. Section III introduces the structure of NMT and the procedure of training and testing. Section IV discusses attention mechanism, an essential innovation in the development of NMT. Section V surveys a variety of methods in handling word coverage problem and some fluent divisions. Section VI describes three advanced models in NMT. Finally, Section VII discusses the future trend in this field. # II. HISTORY OF MACHINE TRANSLATION Machine translation (MT) has a long history; the origin of this field could be traced back to the 17th century. In 1629, Ren´e Descartes came up with a universal language that expressed the same meaning in different languages and shared one symbol. The specific research of MT began at about 1950s, when the first researcher in the field, Yehoshua Bar-Hillel, began his research at MIT (1951) and organized the first International Conference on Machine Translation in 1952. Since then, MT has experienced three primary waves in its development, the Rule-based Machine Translation [2], the Statistical Machine Translation [3] [4], and the Neural Machine Translation [7]. We briefly review the development of these three stages in the following. A. Development of Machine Translation 1) Rule-based Machine Translation: Rule-based Machine Translation is the first design in MT, which is based on the hypothesis that all different in representing the same meaning. Because in usual, a word in one language could find its corresponding word in another language with the same meaning. In this method, the translation process could be treated as the word replacement in the source sentence. 
In terms of ’rule-based’, since different languages could represent the same meaning of sentence in different word order, the word replacement method should base on the syntax rules of both two languages. Thus every word in the source sentence should take its corresponding position in the target language. The rule-based method has a beautiful theory but hardly achieves satisfactory performance in implementation. This is because of the computational inefficiency in determining the adaptive rule of one sentence. Besides, grammar rules are also hard to be organized, since linguists summarize the grammar rules, and there are too many syntax rules in one language (especially language with more relaxed grammar rules). It is even possible that two syntax rules conflict with each other. The most severe drawback of rule-based method is that it has ignored the need of context information in the translation process, which destroys the robustness of rule-based machine translation. One famous example was given by Marvin Minsky in 1966, where he used two sentences given below: “T he pen is in the box” “T he box is in the pen” Both sentences have the same syntax structure. The first sentence is easy to understand; but the second one is more confusing, since the word “pen” is a polysemant, which also means “fence” in English. But it is difficult for the computer to translate the “pen” to that meaning; the word replacement is thus an unsuccessful method. 2) Statistical Machine Translation: Statistical Machine Translation (SMT) has been the mainstream technology for the past 20 years. It has been successfully applied in the industry, including Google translation, Baidu translation, etc. Different from Rule-based machine translation, SMT tackles the translation task from a statistical perspective. Concretely, the SMT model finds the words (or phrases) which have the same meaning through bilingual corpus by statistics. Given one sentence, SMT divides it into several sub-sentences, then every part could be replaced by target word or phrase. The most prevalent version of SMT is Phrase-based SMT (PBSMT), which in general includes pre-processing, sentence alignment, word alignment, phrase extraction, phrase feature preparation, and language model training. The key component of a PBSMT model is a phrase-based lexicon, which pairs phrases in the source language with phrases in the target language. The lexicon is built from the training data set which is a bilingual corpus. By using phrases in this translation, the translation model could utilize the context information within phrases. Thus PBSMT could outperform the simple word-to- word translation methods. 3) Neural Machine Translation: It has been a long time since the first try on MT task by neural network [44] [43]. Because of the poor performance in the early period and the computing hardware limitation, related research in translation by neural network has been ignored for many years. Due to the proliferation of Deep Learning in 2010, more and more NLP tasks have achieved great improvement. Using deep neural networks for MT task has received great attention as well. A successful DNN based Machine Translation (NMT) model was first proposed by Kalchbrenner and Blunsom [8], which is a totally new concept for MT by that time. Comparing with other models, the NMT model needs less linguistic knowledge but can produce a competitive performance. 
Since then, many researchers have reported that NMT can perform much better than the traditional SMT model [1] [112] [113] [114] [115], and it has also been widely applied in industry [24].
B. Introduction of Neural Machine Translation
1) Motivation of NMT: The inspiration for neural machine translation comes from two aspects: the success of deep learning in other NLP tasks, as mentioned above, and the unresolved problems in the development of MT itself. For the first reason, in many NLP tasks, traditional machine learning methods are highly dependent on hand-crafted features that often come from linguistic intuition, which is an empirical trial-and-error process [133] [134] and is often far from complete in representing the nature of the original data. For example, the context size used to train a language model is assigned by researchers under strong assumptions about context relations [136], and in text representation, the classic bag-of-words (BOW) method ignores the influence of word order [135]. In contrast, when a deep neural network (DNN) is applied to the aforementioned tasks, it requires minimal domain knowledge and avoids some pre-processing steps of human feature engineering [22]. DNNs are powerful networks that have achieved excellent performance on many complex learning tasks traditionally considered difficult [137] [138]. In the NLP field, DNNs have been applied to some traditional tasks, for example, speech recognition [5] and Named Entity Recognition (NER) [133]. With the exceptional performance they obtained, DNN-based models have found many potential applications in other NLP tasks.
For the second reason, in the MT field, PBSMT has achieved fairly good performance over the past decades, but there are still some inherent weaknesses that require further improvement. First, since PBSMT generates the translation by segmenting the source sentence into several phrases and performing phrase replacement, it may ignore dependencies longer than the phrases and thus cause inconsistencies in the translation results, such as incorrect gender agreement. Second, there are generally many intricate sub-components in current systems [13] [14] [15], e.g., the language model, the reordering model, and the length/unknown-word penalties. With the increasing number of these sub-components, it is hard to fine-tune and combine them to obtain a stable result [23].
All the above discussions indicate a bottleneck in the development of the SMT paradigm. Specifically, this bottleneck mainly comes from the language model (LM). This is because, in the MT task, the language model provides the most important information: the probability of a particular word (or phrase) conditioned on the previous words. Building a better LM can therefore directly improve translation performance.
Fig. 1. The training process of RNN based NMT. The symbol <EOS> means end of sequence. The embedding layer is used for pre-processing, and the two RNN layers are used to represent the sequence.
The vast majority of conventional LMs are based on the Markov assumption:
p(x_1, x_2, ..., x_T) = ∏_{t=1}^{T} p(x_t | x_1, ..., x_{t-1}) ≈ ∏_{t=1}^{T} p(x_t | x_{t-n}, ..., x_{t-1})   (1)
where x_1, x_2, ..., x_T is a sequence of words in a sentence and T represents the length of the sentence.
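To make the factorization in Eq. (1) concrete, the following minimal sketch builds a bigram (n = 2) language model with add-alpha smoothing; the toy corpus, the smoothing constant and the function names are illustrative assumptions rather than part of any system discussed in this survey.

```python
from collections import Counter


def train_bigram_lm(corpus, alpha=1.0):
    """Count unigrams and bigrams from a tokenized corpus (add-alpha smoothing)."""
    unigrams, bigrams, vocab = Counter(), Counter(), set()
    for sentence in corpus:
        tokens = ["<s>"] + sentence + ["</s>"]
        vocab.update(tokens)
        unigrams.update(tokens[:-1])                       # context counts
        bigrams.update(zip(tokens[:-1], tokens[1:]))       # (previous, current) counts
    return unigrams, bigrams, len(vocab), alpha


def sentence_probability(sentence, model):
    """p(x_1..x_T) approximated as the product over t of p(x_t | x_{t-1}), i.e. Eq. (1) with n = 2."""
    unigrams, bigrams, vocab_size, alpha = model
    prob = 1.0
    tokens = ["<s>"] + sentence + ["</s>"]
    for prev, cur in zip(tokens[:-1], tokens[1:]):
        prob *= (bigrams[(prev, cur)] + alpha) / (unigrams[prev] + alpha * vocab_size)
    return prob


corpus = [["i", "am", "a", "student"], ["i", "am", "a", "teacher"]]
model = train_bigram_lm(corpus)
print(sentence_probability(["i", "am", "a", "student"], model))
```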
Under this assumption, the probability of the sentence is equal to the product of the probabilities of its words. Here n is the total number of context words chosen to simplify the model, which is also referred to as the context window. Obviously, dependencies between words that are more than n positions apart are ignored, which implies that the conventional LM performs poorly at modeling long dependencies. Moreover, since experimental results indicate that only a modest context size (generally 4-6 words) is affordable, the first problem of the traditional LM is its limited representation ability. Besides, data sparsity has always been the problem that hinders building an LM with a larger context window. This is because the number of n-tuples to be counted is exponential in n. In other words, when building an LM, as the order increases, the number of training samples needed also increases remarkably, which is referred to as the "curse of dimensionality". For example, if an LM has an order of 5 with a vocabulary size of 10,000, the number of possible word combinations to collect statistics for is about 10,000^5 = 10^20, which requires an enormous amount of training data. Since most of these combinations are never observed, subsequent research has used various trade-offs and smoothing methods to alleviate the sparsity problem [129] [128] [127] [130] [131].
While further research on the aforementioned statistical LMs has become almost stagnant, the Neural Language Model (NLM) [6], on the other hand, uses a neural network to build a language model that models the text data directly. In the initial stage, NLMs used a fixed-length feature vector to represent each word, and a fixed number of word vectors were then concatenated together to represent the context [6] [38] [39], which is very similar to the context window. This work was later enhanced by injecting additional context information from the source sentence [12] [132] [126]. Compared with the traditional LM, the original NLM alleviates sample sparsity thanks to the distributed representation of words, which enables words to share statistical weight rather than being treated as independent variables. And since words with similar meanings tend to occur in similar contexts, their feature vectors have similar values, which indicates that the semantic relations between words have been "embedded" into the feature vectors. Proposals in the next stage solve the long dependency problem by using the Recurrent Neural Network (RNN). An RNN based NLM (RNLM) models the whole sentence by reading one word per time-step, and thus models the true conditional probability without any limitation on context size [41]. Before the emergence of NMT, the RNLM mentioned above outperformed the conventional LM in terms of text perplexity and brought better performance in many practical tasks [41] [26].
The direct application of NLMs in SMT was naturally proposed [12] [36] [37] [40] [58], and the preliminary experiments indicated promising results. The potential of the NLM motivated further exploration towards a complete DNN based translation model. Subsequently, a "purer" model using only neural networks emerged, with a DNN architecture that learns to perform the translation task end-to-end. Section III demonstrates its basic structure (in Fig. 4), as well as its concrete details.
2) Formulation of the NMT Task: NMT is originally designed as an end-to-end learning task.
It directly processes a source sequence into a target sequence. The learning objective is to find the correct target sequence given the source sequence, which can be seen as a high-dimensional classification problem that tries to map the two sentences in a semantic space. In all mainstream modern NMT models, this process can be divided into two steps, encoding and decoding, so the whole model can be functionally separated into an Encoder and a Decoder, as illustrated in Fig. 2.
Fig. 2. End-to-end structure of a modern NMT model. The encoder represents the source sentence as a semantic vector, while the decoder makes predictions from this semantic vector to produce the target sentence. End-to-end means the model processes the source data into the target data directly, without an interpretable intermediate result.
From the perspective of probability, NMT generates the target sequence T = (t_1, t_2, ..., t_m) with the maximum conditional probability given the source sequence S = (s_1, s_2, ..., s_n), where n is the length of the source sequence S and m is the length of the target sequence T. The whole task can be formulated as [24]:
argmax_T P(T | S).   (2)
More concretely, when generating each word of the target sentence, the model uses information from both the words it has predicted previously and the source sentence. Each generation step, i.e., generating the i-th word, can thus be described as:
argmax ∏_{i=1}^{m} P(t_i | t_{j<i}, S)   (3)
Based on this formula and the discussion of the NLM above, the NMT task can be regarded as an NLM with additional constraints (i.e., conditioned on a given source sequence).
C. The Recent Development of NMT
We divide the recent development of NMT into five main stages: (a) the original NMT with shallow layers, (b) SMT assisted by NLM, (c) DNN based NMT, (d) NMT with attention mechanism, and (e) fully attention-based NMT.
NMT with Shallow Layers: Even before deep learning, Allen used binary encodings to train an NMT model in 1987 [44]. Later, in 1991, Chrisman used the Dual-ported RAAM architecture [42] to build an early NMT model [43]. Although, in retrospect, both had rather primitive designs with limited results, their work indicated the original idea of this field. Related work almost stagnated in the following decades, due to the huge progress that the SMT method achieved in that period, as well as the limited computing power and data samples.
SMT Assisted by NLM: Based on the above discussion, the NLM revolutionized the traditional LM even before the rise of deep learning. Later on, deep RNN based NLMs were applied in SMT systems. Cho et al. proposed an SMT model coupled with an NLM [18]. Although the main body was still SMT, this hybrid method provided a new direction for the emergence of purely deep learning based NMT.
NMT with Deep Neural Networks: Since the traditional SMT model with an NLM obtained the state-of-the-art performance at that time, a pure DNN based translation approach was proposed later with an end-to-end design to model the entire MT process [8] [16]. DNN based NMT can capture subtle irregularities in both languages more efficiently [24], which is similar to the observation that DNNs often perform better than "shallow" neural networks [21].
NMT with Attention Mechanism: Although the initial DNN based NMT models did not outperform SMT completely, they still exhibited huge potential for further research.
When tracing back the major weakness, one finds that although a theoretical advantage of RNNs is their ability to capture long dependencies between words, in practice the model performance deteriorates as the sentence length increases. This is due to the limited representation ability of a fixed-length vector. Under these circumstances, since the original NMT already performed quite well without any auxiliary components, the idea that some architectural variants could bring a breakthrough led to the rise of the Attention Mechanism. The Attention Mechanism was originally proposed by Bahdanau et al. as an intermediate component [21], and its objective is to provide additional word alignment information when translating long sentences. Surprisingly, NMT models obtained a considerable improvement with the help of this simple method. Later on, with its tremendous popularity in both academia and industry, many refinements of the Attention Mechanism emerged; more details are discussed in Section IV.
Fully Attention-based NMT: With the development of the Attention Mechanism, fully attention-based NMT emerged as a great innovation in NMT history. In this new tendency, the attention mechanism takes the dominant position in text feature extraction rather than serving as an auxiliary component. The representative model is the Transformer [25], a fully attention-based model proposed by Vaswani et al. Abandoning the previous frameworks of both RNN and CNN based NMT models, the Transformer is a model based solely on an intensified version of the attention mechanism called self-attention, combined with feed-forward connections; it achieved revolutionary progress in structure together with state-of-the-art performance. Specifically, the innovative attention structure is the secret sauce behind such significant improvement. Self-attention is a powerful feature extractor that allows the model to "read" the entire sentence and model it at once. From the perspective of model architecture, this characteristic can be seen as combining advantages of both CNNs and RNNs, which endows it with good feature representation ability and high inference speed. More details about self-attention are given in Section IV. The architecture of the Transformer is discussed in Section VI.
# III. DNN BASED NMT
The emergence of the DNN based NLM indicated the feasibility of building a pure DNN based translation model. Its further implementation became the de facto original form of NMT. This section reviews the basic concepts of DNN based NMT, provides a comprehensive introduction to the standard structure of the original DNN based NMT, and discusses the training and inference processes.
A. Model Design of DNN based NMT
There are many variations of network design for NMT, which can be categorized into recurrent and non-recurrent models. More specifically, this categorization can be traced back to the early development of NMT, when RNN and CNN based models were the most common designs. Many sophisticated models proposed afterwards also belong to either the CNN or the RNN family. This sub-section follows the development of NMT in the early years and demonstrates some representative models by classifying them as RNN or CNN based models.
1) RNN based NMT: Although, in theory, any network with enough feature extraction ability could be selected to build an NMT model, in de facto implementations RNN based NMT models have taken the dominant position in NMT development, and they have achieved state-of-the-art performance.
Based on the discussion in Section II, since many NLM literature used RNN to model the sequence data, this design has intuitively motivated the further work to build an RNN based NMT model. In the initial experiment, an RNN based NLM was applied as a feature extractor to compress the source sentence into a feature vector, which is also referred to as thought vector. Then a similar RNN was applied to do the ’inverse work’ to find the target sentence that can match the previous thought vector in semantic space. The first successful RNN based NMT was proposed by Sutskever et al., who used a pure deep RNN model and got a performance that approximates the best result achieved by SMT [16]. Further development proposed the Attention Mechanism, which improves the translation performance sig- nificantly and exceeds the best SMT model. GNMT model was an industry-level model applied in Google Translation, and it was regarded as a milestone in RNN based NMT. Besides the above mentioned work, other researchers have also proposed different architectures with excellent perfor- mance. Zhang et al. proposed Variational NMT method, which has an innovative perspective in modeling translation task, and the corresponding experiment has indicated a better performance than the baseline of original NMT in Chinese- English and English-German tasks [89]. Zhou et al. have designed Fast-Forward Connections for RNN (LSTM), which can allow a deeper network in implementation and thus gets a better performance [88]. Shazeer et al. incorporated Mixture- of-Expert (MoE) architecture into the GNMT model, which has outperformed the original GNMT model [90]. Concretely, MoE is one layer in the NMT model, which contains many sparsely combined experts (which are feed-forward neural networks in this experiment) and is connected with the RNN layer by a gate function. This method requires more parameters in total for the NMT model, but still maintains the efficiency in training speed. Since more parameters often imply a better representation ability, it demonstrates huge potential in the future. 2) CNN based NMT: Related work in trying other DNN models have also been proposed. Perhaps the most noted one is the Convolutional Neural Network (CNN) based NMT. In fact, CNN based models have also undergone many variations in its concrete architecture. But for a long while, most of these models can’t have competitive performance with RNN based model, especially when the Attention Mechanism has emerged. In the development of CNN based NMT models, Kalch- brenner & Blunsom once tried a CNN encoder with RNN Decoder [8], and it’s maybe the earliest NMT architecture applied with CNN. Cho et al. tried a gated recursive CNN encoder with RNN decoder, but it has shown worse perfor- mance than RNN encoder [18]. A fully CNN based NMT was proposed by Kaiser & Bengio later [86], which applied Extended Neural GPU [119]. The best performance in the early period of CNN based NMT was achieved by Gehring et al., which was a CNN encoder NMT and got the similar translation performance with RNN based model at that time [19]. Concurrently, Kalchbrenner et al. also proposed ByteNet (a kind of CNN) based NMT, which achieved the state-of- the-art performance on character-level translation but failed at word-level translation [84]. In addition, Meng et al. and Tu et al. proposed a CNN based model separately, which provides additional alignment information for SMT [20] [83]. 
Compared with RNN based NMT, CNN based models have its advantage in training speed; this is due to the intrinsic structure of CNN which allows parallel computations for its different filters when handling the input data. And also, the model structure has made CNN based models easier to resolve the gradient vanishing problem. However, there are two fatal drawbacks that affect their translation quality. First, since the original CNN based model can only capture the word depen- dencies within the width of its filters, the long dependency of words can only be found in high-level convolution layers; this unnatural character often causes a worse performance than the RNN based model. Second, since the original NMT model compresses a sentence into a fixed size of the vector, a large performance reduction would happen when the sentence becomes too long. This comes from the limited representation ability in fixed size of the vector. Similar phenomenon can also be found in early proposed RNN based models, which are later alleviated by Attention Mechanism. Some advanced CNN based NMT models have also been proposed with corresponding solutions in addressing the above drawbacks. Kaiser et al. proposed the Depthwise separable convolutions based NMT. The SliceNet they created can get similar performance with Kaiser et al. (2016) [85]. Gehring et al. (2017) followed their previous work by proposing a CNN based NMT that is cooperated with Attention Mechanism. It even got a better result than RNN based model [82], but this achievement was soon outperformed by Transformer [25]. # B. Encoder-Decoder Structure As is known, Encoder-Decoder is the most original and classic structure of NMT; it was directly inspired by NLM and proposed by Kalchbrenner & Blunsom [8] and Cho et al. [18]. Despite all kinds of refinements in details and small tips, it was wildly accepted by almost all modern NMT models. Based on the discussion above, since RNN based NMT has held the dominant position in NMT, and to avoid being overwhelmed in describing all kinds of small distinctions between models’ structures, we specifically focus our discussion just on the vanilla RNN based NMT, thus can help to trace back the development process of NMT. The original structure of Encoder-Decoder structure is con- ceptually simple. It contains two connected networks (the encoder and the decoder) in its architecture, each for a different part of the translation process. When the encoder network receives one source sentence, it reads the source sentence word by word and compresses the variable-length sequence into a fixed-length vector in each hidden state. This process is called encoding. Then given the final hidden state of the encoder (referred to as thought vector), the decoder does the reverse work by transforming the thought vector to the target sentence word by word. Because Encoder-Decoder structure addresses the translation task from source data directly to the target result, which means there’s no visible result in the middle process, this is also called end-to-end translation. The principle of Encoder-Decoder structure of NMT can be seen as mapping the source sentence with the target sentence via an intermediate vector in semantic space. This intermediate vector actually can represent the same semantic meaning in both two languages. 
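To illustrate the encoder-decoder principle described above, the following is a minimal sketch in PyTorch in which the encoder compresses the source sentence into its final hidden state (the thought vector) and the decoder unrolls the target sentence from it. The class names, dimensions and the choice of an LSTM are assumptions made for the example, not a description of any specific published model.

```python
import torch
import torch.nn as nn


class Encoder(nn.Module):
    def __init__(self, src_vocab, emb_dim=256, hid_dim=512):
        super().__init__()
        self.embed = nn.Embedding(src_vocab, emb_dim)
        self.rnn = nn.LSTM(emb_dim, hid_dim, batch_first=True)

    def forward(self, src_ids):
        # src_ids: (batch, src_len); the final (h, c) acts as the "thought vector"
        _, (h, c) = self.rnn(self.embed(src_ids))
        return h, c


class Decoder(nn.Module):
    def __init__(self, tgt_vocab, emb_dim=256, hid_dim=512):
        super().__init__()
        self.embed = nn.Embedding(tgt_vocab, emb_dim)
        self.rnn = nn.LSTM(emb_dim, hid_dim, batch_first=True)
        self.proj = nn.Linear(hid_dim, tgt_vocab)

    def forward(self, tgt_ids, state):
        # tgt_ids: (batch, tgt_len) shifted target words; state: the thought vector
        out, state = self.rnn(self.embed(tgt_ids), state)
        return self.proj(out), state  # logits over the target vocabulary


# Toy usage: batch of 2 sentences, teacher forcing with the shifted target.
enc, dec = Encoder(src_vocab=1000), Decoder(tgt_vocab=1200)
src = torch.randint(0, 1000, (2, 7))
tgt_in = torch.randint(0, 1200, (2, 5))
logits, _ = dec(tgt_in, enc(src))
print(logits.shape)  # torch.Size([2, 5, 1200])
```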
For specific details of this structure, besides the model selection for the network, RNN based NMT models also differ in three main respects: (a) the directionality, (b) the type of activation function, and (c) the depth of the RNN layers [156]. In the following, we give a detailed description.
Depth: Regarding the depth of the RNN, as discussed in Section II, a single-layer RNN usually performs poorly compared with a multi-layer RNN. In recent years, almost all models with competitive performance use a deep network, which indicates a trend of using deeper models to obtain state-of-the-art results. For example, Bahdanau et al. [21] used a four-layer RNN in their model. However, simply adding more RNN layers is not always useful. In the study by Britz et al. [22], it was found that using a 4-layer RNN in the encoder produced the best performance on a specific dataset when no other auxiliary method was used in the model. Besides, stacking RNN layers may make the network too slow and difficult to train. One major challenge is the gradient exploding and vanishing problem [28], which causes the gradient to be amplified or diminished during back-propagation through deep layers. Besides the additional gate structures in refined RNNs (such as LSTM and GRU), other methods have also been applied to alleviate this phenomenon. For example, in the work of Wu et al., residual connections are provided between layers, which improve the gradient flow in the backward pass and thus speed up convergence [24]. Another possible problem is that a deeper model often implies a larger model capacity, which may perform worse on comparatively small training data due to over-fitting.
Directionality: With respect to directionality, a simple unidirectional RNN has been chosen by some researchers. For example, Luong et al. directly used a unidirectional RNN to process the input sentence [23]. In comparison, a bidirectional RNN is another common choice that can improve translation quality. This is because the model performance is affected by how well it "knows" the context when predicting the current word, and a bidirectional RNN obviously strengthens this ability. In practice, both Bahdanau et al. and Wu et al. used a bidirectional RNN in the bottom layer as a way to capture context information [21] [24]. In this structure, the first layer reads the sentence from left to right, and the second layer reads the sentence in the reverse direction. The two are then concatenated and fed to the next layer. This method generally gives better performance in experiments, and the explanation is intuitive: based on the discussion of LMs in Section II, the probability of a specific word is determined by all the other words in both the preceding and the following positions. With a unidirectional RNN, the dependency between the first word and the last word is hard to capture in the thought vector, since the model has passed through too many states across all time-steps. On the contrary, a bidirectional RNN provides an additional layer of information by reading the words in the reverse direction, which naturally reduces this relative distance in steps. The most visible drawback of this method is that it is hard to parallelize. Considering the time consumed in its realization, both Bahdanau et al. and Wu et al. chose to apply just one bidirectional RNN layer at the bottom of the encoder, while all other layers are unidirectional [24] [21].
This choice makes a trade-off between feature representation ability and model efficiency, as it still enables the model to be distributed over multiple GPUs [24]. The basic concept of the bidirectional RNN can be found in Fig. 3.
Activation Function Selection: With respect to the activation function (recurrent unit) selection, there are three common choices: the vanilla RNN, Long Short-Term Memory (LSTM) [17], and the Gated Recurrent Unit (GRU) [18]. Compared with the vanilla RNN, the latter two have some robustness in addressing the gradient exploding and vanishing problems [27] [28]. Other sequence processing tasks have also indicated better performance for GRU and LSTM [26]. Besides, some innovative neural units have been proposed. Wang et al. proposed linear associative units, which can alleviate the gradient diffusion phenomenon caused by non-linear recurrent activations [92]. More recently, Zhang et al. created the addition-subtraction twin-gated recurrent network (ATR). This type of unit reduces the inefficiency in NMT training and inference by simplifying the weight matrices among units [91]. All in all, in the NMT task, LSTM is the most common choice.
# C. Training method
Before feeding the training data to the model, a pre-processing step is to convert the words into vectors, a form the neural network can receive. Usually, the most frequent V words in each language are chosen, and each language generally has a different word set. Although the embedding weights are learned during training, pre-trained word embedding vectors such as word2vec [9] [10] or GloVe vectors [11] can also be applied directly. In the training period, the model is fed with a bilingual corpus for the Encoder and the Decoder. The learning objective is to map each input sequence to the corresponding sequence in the target language correctly. Like other DNN models, the input sentence pair is embedded as a list of word vectors, and the model parameters are initialized randomly.
Fig. 3. The concept of the bidirectional RNN.
Fig. 4. The process of greedy decoding: at each step the model predicts the word with the highest probability and uses the current result as the input of the next time-step to obtain the next prediction.
The training process can be formulated as updating the parameters periodically until the loss of the neural network is minimized. In the implementation, the RNN refines its parameters after processing a subset of the data that contains a batch of training samples; this subset is called a mini-batch. To simplify the discussion of the training process, we take one sentence pair (one training sample) as an example. For the Encoder, the encoding RNN receives one word of the source sentence per time-step. After several steps, all words have been compressed into the hidden state of the Encoder. The final vector is then transferred to the Decoder. For the Decoder, the input comes from two sources: the thought vector that is sent directly to the Decoder, and the correct word of the previous time-step (the first input word is <EOS>).
The output process in Decoder can be seen as a reverse work of Encoder; Decoder predicts one word in each time-step until the last symbol is < EOS >. # D. Inference method the model could be used for translation, which is called inference. The inference procedure is quite similar to the training process. Nevertheless, there is still a clear distinction between training and inference: at decoding time, we only have access to the source sentence, i.e., encoder hidden state. There is more than one way to perform decoding. Proposed decoding strategies include Sampling and Greedy search, while the latter one is generally accepted and be evolved as Beam-search. 1) General decoding work flow (greedy): The idea of greedy strategy is simple, as we illustrate in Fig. 4. The Greedy strategy is only considering the predicted word with the high- est probability. In the implementation of our illustration, the previously generated word would also be fed to the network together with the thought vector in the next time-step. The detailed steps are as follows: 1. The model still encodes the source sentence in the same way as during the training period to obtain the thought vector, and this thought vector is used to initialize the decoder. 2. The decoding (translation) process will start as soon as the decoder receives the end-of-sentence marker < EOS > of source sentence. 3. For each time-step on the decoder side, we treat the RNN’s output as a set of logits. We choose the word with the highest translation probability as the emitted word, whose ID is associated with the maximum logit value. For example, in Fig. 4, the word “moi” has the highest probability in the first decoding step. We then feed this word as an input in the next time-step. The probability is thus conditioned on the previous prediction (this is why we call it “greedy” behavior). 4. The process will continue until the ending symbol < EOS > is generated as an output symbol. 2) Beam-search: While the Greedy search method has produced a pretty good result, Beam-search is a more elab- orated one with better results. Although it is not a necessary component for NMT, Beam-search has been chosen by most of NMT models to get the best performance [22]. The beam-search method was proposed by other sequence learning task with successful application [29] [30]. It’s also the conventional technique of MT task that has been used for years in finding the most appropriate translation result [34] [32] [33]. Beam-search can be simply described as retaining the top-k possible translations as candidates at each time, where k is called the beam-width. In the next time-step, each candidate word would be combined with a new word to form new possible translation. The new candidate translation would then compete with each other in log probability to get the new top-k most reasonable results. The whole process continues until the end of translation. 
Concretely, the beam search can be formulated in the following steps:
# Algorithm 1 Beam Search
set beam size = K; α is a length factor; LS is the length of the source sentence S
h_0 ⇐ encoder(S); t ⇐ 1; y_{1,i} ⇐ <EOS> for i = 1, ..., K
while t ≤ α · LS do
    for i = 1 to K do
        set h_{t,i} ⇐ decoder(h_{t−1,i}, y_{t,i})
        set P_{t,i} ⇐ softmax(h_{t,i})
    end for
    set {y_{t+1,i}} ⇐ arg top-K of the candidate extensions scored by {P_{t,i}}
    if every retained hypothesis ends with <EOS> then
        break
    end if
    set t ⇐ t + 1
end while
select argmax p(Y) from the K candidates Y_i
return Y_i
Beyond the standard beam search, which ranks candidate translations only by their log probability, this evaluation function mathematically tends to favor shorter sentences. This is because a negative log-probability is added at each decoding step, which lowers the score as the sentence length increases [31]. An effective variant for alleviating this effect is to add a length normalization [7]; a refined length normalization was also proposed by Wu et al. [24]. Another kind of refinement of beam search is adding a coverage penalty, which encourages the decoder to cover the words of the source sentence as completely as possible when generating the output sentence [24] [35]. In addition, since this method maintains k candidate translations (rather than one) until the final result is obtained, it generally makes the decoding process more time-consuming. In practice, an intuitive solution is to limit the beam width to a small constant, which is a trade-off between decoding efficiency and translation accuracy. As reported by a comparison study, an experimentally good beam width for best performance is 5 to 10 [22].
# IV. NMT WITH ATTENTION MECHANISM
A. Motivation of Attention Mechanism
While the promising performance of NMT has indicated its great potential in capturing the dependencies inside a sequence, in practice NMT still suffers a large performance reduction when the source sentence becomes too long. Compared with other feature extractors, the major weakness of the original NMT Encoder is that it has to compress a whole sentence into a fixed-length vector. When the input sentence becomes longer, the performance deteriorates because the final output of the network is a fixed-length vector, which is limited in representing the whole sentence and causes some information loss. Because of the limited length of this vector, the information loss usually affects the long-range dependencies between words. Increasing the dimension of the encoding vector is an intuitive solution, but since RNN training is naturally slow, a larger vector size would make the situation even worse. The Attention Mechanism emerged under these circumstances. Bahdanau et al. [21] initially used this method as a supplement that provides additional word alignment information during the decoding process, thus alleviating the information reduction when the input sentence is too long. Concretely, the Attention Mechanism is an intermediate component between the Encoder and the Decoder that helps to determine the word correlations (word alignment information) dynamically. In the encoding period, it extends the single final-state vector of the original NMT model with a weighted average of the hidden states of all time-steps, and a score function is provided to obtain the above-mentioned weights by calculating the correlation of each word in the source sentence with the word currently being predicted.
Thus the decoder can adapt its focus at different translation steps by ranking the importance of each source word's correlation, and this method helps to capture the long-range dependencies for each word respectively. The inspiration for applying the Attention Mechanism to NMT comes from human behavior in reading and translating text. People generally read a text repeatedly to mine the dependencies within a sentence, which means each word has a different dependency weight with every other word. Compared with other ways of capturing word dependency information, such as the pooling layer in a CNN or the n-gram language model, the attention mechanism has a global scope. When finding dependencies in a sequence, an n-gram model fixes its search scope to a small range; usually n is equal to 2 or 3 in practice. The Attention Mechanism, on the other hand, calculates the dependency between the word currently being generated and every word in the source sentence. This more flexible method obviously brings better results. The practical application of the Attention Mechanism actually extends far beyond the NMT field, and it is not even an invention of NMT research. Similar methods that place weighted concentration on different positions of the input data have also been proposed for other tasks; for example, Xu et al. [109] proposed a similar mechanism for the image captioning task, which helps to dynamically locate different entries of the image feature vector when generating their descriptions. Due to the scope of this survey, the following discussion focuses only on the Attention Mechanism in NMT.
B. Structure of Attention Mechanism
There are many variants in the implementation of the Attention Mechanism. Here we give a detailed description only of the attention mechanisms that are widely accepted as having brought significant contributions to the development of NMT.
1) Basic structure: The structure of the attention mechanism was originally proposed by Bahdanau et al.; later, Luong et al. proposed a similar structure with small distinctions and extended this work [23] [21]. To simplify the discussion, here we take Luong et al.'s method as an example. Concretely, in the encoding period, this mechanism receives the input words like the basic NMT model, but instead of compressing all the information into one vector, every unit in the top layer of the encoder generates one vector that represents one time-step of the source sentence.
Fig. 5. The concept of the Attention Mechanism, which provides additional alignment information rather than relying only on the information in a fixed-length vector.
In the decoding period, the decoder does not predict the word using only its own information; rather, it collaborates with the attention layer to produce the translation. The inputs of the attention mechanism are the hidden states of the top layer of the encoder and the current decoder state. It obtains the relevance ordering by calculating the following steps: 1. The current decoding hidden state h_t is compared with all source states h_s to derive the attention scores s_t. 2. The attention weights a_t are obtained by a normalization operation over all attention scores.
3. Based on the attention weights, we then compute the weighted average of the source states as a context vector c_t. 4. The context vector is concatenated with the current decoding hidden state to yield the final attention vector (the exact combination method can differ). 5. The attention vector is fed as an input to the decoder in the next time-step (when input feeding is applied). The first three steps can be summarized by the equations below:
s_t = score(h_t, h_s)   [Attention function]   (4)
a_t(s) = exp(s_t(s)) / Σ_{s'} exp(s_t(s'))   [Attention weights]   (5)
c_t = Σ_s a_t(s) h_s   [Context vector]   (6)
Among the above functions, the score function can be defined in different ways. Here are two classic definitions:
score(h_t, h_s) = h_t^T W h_s   [Luong's version]
score(h_t, h_s) = v_a^T tanh(W_a [h_t; h_s])   [Bahdanau's version]   (7)
Back in the decoding period, the decoder receives information from both sides, the decoder hidden state and the attention vector; given these two vectors, it combines them into a new vector and then usually applies another layer to predict the current target word.
2) Global Attention & Local Attention:
Global Attention: Global attention is the form of the Attention Mechanism described above, and it is also the most frequently used among the various attention mechanisms. The idea of global attention is the original form of the attention mechanism; it received this name from Luong et al. [23], whose corresponding term is local attention. The term "global" derives from the fact that it calculates the context vector by considering the relevance of all words in the source sentence. This method has excellent performance because more alignment information generally produces better results. A straightforward illustration is given in Fig. 6. As introduced above, this method considers all the source words in the decoding period. The main drawback is that the calculation speed deteriorates when the sequence is very long: since one hidden state is generated per time-step in the Encoder, the cost of the score function is linear in the number of encoder time-steps. When the input is a long sequence such as a compound sentence or a paragraph, this may affect the decoding speed.
Local Attention: Local attention was first proposed by Luong et al. [23]. As illustrated in Fig. 7, this model, on the other hand, calculates the relevance with only a subset of the source sentence. Compared with global attention, it fixes the length of the attention computation by giving a scope number, thus avoiding the expensive computation of the context vectors. The experimental results indicate that local attention keeps a balance between model performance and computing speed. The inspiration for local attention comes from the soft attention and hard attention proposed by Xu et al. [109] for the image caption generation task. While global attention is very similar to soft attention, local attention can be seen as a mixture of soft attention and hard attention. In theory, covering more information should generally give better results, yet the remarkable result of this method indicates a performance comparable to global attention when it is fine-tuned. This seems due to a common phenomenon in human language: the current word naturally has a high dependency on some of its nearby words, which is quite similar to the assumption of the n-gram language model. In the details of the calculation process, given the current target word position p_t, the model fixes the context to a scope D. The context vector c_t is then derived as a weighted average over the set of source hidden states within the range [p_t − D, p_t + D]; the scope D is selected by experience, and the remaining steps for deriving the attention vector are the same as in global attention.
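Putting Eqs. (4)-(7) together, one decoding step of Luong-style global attention could be sketched as follows; the dot-product score, the tensor shapes and the variable names are illustrative assumptions rather than the exact implementation of [23].

```python
import torch
import torch.nn as nn


def global_attention_step(h_t, enc_states, W_c):
    """One decoding step of global (Luong-style) attention.

    h_t:        (batch, hid)           current decoder hidden state
    enc_states: (batch, src_len, hid)  top-layer encoder hidden states h_s
    W_c:        nn.Linear(2 * hid, hid) producing the attention vector
    """
    # Eq. (4): dot-product score between h_t and every source state.
    scores = torch.bmm(enc_states, h_t.unsqueeze(2)).squeeze(2)        # (batch, src_len)
    # Eq. (5): normalize the scores into attention weights a_t.
    weights = torch.softmax(scores, dim=1)                             # (batch, src_len)
    # Eq. (6): context vector c_t as the weighted average of source states.
    context = torch.bmm(weights.unsqueeze(1), enc_states).squeeze(1)   # (batch, hid)
    # Concatenate c_t with h_t and project to obtain the attention vector.
    attn_vector = torch.tanh(W_c(torch.cat([context, h_t], dim=1)))    # (batch, hid)
    return attn_vector, weights


# Toy usage with batch 2, source length 6, hidden size 8.
W_c = nn.Linear(16, 8)
h_t = torch.randn(2, 8)
enc_states = torch.randn(2, 6, 8)
attn_vec, attn_w = global_attention_step(h_t, enc_states, W_c)
print(attn_vec.shape, attn_w.shape)  # torch.Size([2, 8]) torch.Size([2, 6])
```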
3) Input feeding approach: Input feeding is a small trick in constructing the NMT structure, but from the perspective of providing alignment information, it can also be seen as a kind of attention. The concept of input feeding is simple. In the decoding period, besides using the previously predicted word as input, the decoder also uses the attention vector of the previous time-step as an additional input in the next time-step [1] [23]. This attention vector is concatenated with the input vector to obtain the final input vector, which is then fed as the input of the next step.
4) Attention Mechanism in GNMT: GNMT is short for Google Neural Machine Translation, a well-known version of NMT with an Attention Mechanism. GNMT was proposed by Wu et al. [24] and is famous for its successful application in an industrial-level NMT system. With the help of many advanced tricks in the model details, it obtained state-of-the-art performance at that time. Besides, the elaborate architecture of GNMT gives it better inference speed, which makes it more applicable to industry needs.
Fig. 6. The concept of global attention: the current decoder hidden state is compared with all the hidden states on the source side to obtain the alignment information.
Fig. 7. The concept of local attention: the current decoder hidden state is compared with a subset of the hidden states on the source side.
The design of GNMT benefited from the research on the attention mechanism at the time; it used global attention but restructured it into a more effective form for model parallelization. As illustrated in Fig. 8, there are two main points in this architecture. First, this structure cancels the fixed connection between the encoder and the decoder, so that there is more freedom in choosing the structures of the encoder and the decoder; for example, the encoder can choose different dimensions in each layer regardless of the dimensions of the decoder, and only the top layers of both the encoder and the decoder need to have the same dimension, to guarantee that the attention vector can be computed. Second, this structure makes it easier to parallelize the model. Only the bottom layer of the decoder is used to compute the context vector, and all of the remaining decoder layers use this context vector directly. This architecture retains as much parallelism as possible.
Fig. 8. Attention in GNMT: the attention weights are derived from the bottom layer of the Decoder and sent to all Decoder layers, which helps to improve computational parallelization.
For the details of the attention calculation, GNMT applies the Attention Mechanism in the same way as global attention, while the score() function is a feed-forward network with one hidden layer.
5) Self-attention: Self-attention, also called intra-attention, is widely known for its application to the NMT task due to the emergence of the Transformer. While other commonly noted attention mechanisms derive the context information by calculating word dependencies between the source sequence and the target sequence, self-attention calculates the word dependencies inside a single sequence and thus obtains an attention-based sequence representation.
Fig. 9. The concept of multi-head self-attention: the queries Q, keys K and values V pass through linear layers, scaled dot-product attention is computed for each of the h heads, and the results are concatenated and passed through a final linear layer.
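Before the formal definition given next (Eq. (8)), a minimal single-head sketch of the scaled dot-product self-attention summarized in Fig. 9 may be helpful; the toy dimensions and the function name are illustrative assumptions.

```python
import math
import torch
import torch.nn as nn


def scaled_dot_product_self_attention(x, W_q, W_k, W_v):
    """Single-head self-attention over one sequence x of shape (batch, seq_len, d_model)."""
    Q, K, V = W_q(x), W_k(x), W_v(x)                      # project x into queries, keys, values
    d_k = Q.size(-1)
    # softmax(Q K^T / sqrt(d_k)) V, computed as matrix multiplications.
    scores = torch.matmul(Q, K.transpose(-2, -1)) / math.sqrt(d_k)  # (batch, seq_len, seq_len)
    weights = torch.softmax(scores, dim=-1)               # word-to-word dependency matrix
    return torch.matmul(weights, V), weights


# Toy usage: batch of 2 sentences, 5 tokens each, model width 16.
d_model = 16
W_q, W_k, W_v = (nn.Linear(d_model, d_model) for _ in range(3))
x = torch.randn(2, 5, d_model)
out, dep = scaled_dot_product_self_attention(x, W_q, W_k, W_v)
print(out.shape, dep.shape)  # torch.Size([2, 5, 16]) torch.Size([2, 5, 5])
```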
As for the calculation steps, self-attention first derives three vectors from the original embedding, each for a different purpose: the Query vector, the Key vector, and the Value vector. The attention weights are then calculated as:
Attention(Q, K, V) = softmax(Q K^T / √d_k) V   (8)
where √d_k is a scaling factor used to obtain more stable gradients, counteracting the effect of the dot-product operation. In addition, the above calculation can be implemented as matrix multiplications, so the word dependencies are easily obtained in the form of relation matrices.
C. Other related work
Besides the significant progress described above, there are also other refinements from different perspectives on the attention mechanism. With regard to the attention structure, Yang et al. improved the traditional attention structure by providing a network that models the relationship of a word with its previous and subsequent attention [94]. Feng et al. proposed a recurrent attention mechanism to improve alignment accuracy, which has been shown to outperform vanilla models on a large-scale Chinese-English task [93]. Moreover, other research focuses on the training process. Cohn et al. extended the original attention structure by adding several structural biases, including positional bias, Markov conditioning, fertility, and bilingual symmetry [95]; models integrating these refinements obtained better translation performance than the basic attention-based model. More concretely, the above methods can be seen as a kind of inheritance from the alignment models of SMT research, with more empirical assumptions and linguistic intuition:
Position Bias: It assumes that words in the source and target sentences with the same meaning also have similar relative positions, especially when the two sentences have a similar word order. As an adjustment of the original attention mechanism, it helps to improve the alignment accuracy by encouraging words at similar relative positions to be aligned. Alignment visualizations demonstrate this phenomenon clearly: aligned words tend to lie near the diagonal.
Markov Condition: Empirically, within a sentence, a word has a higher correlation with its nearby words than with those far from it; this is also the basis for explaining the context captured by n-gram LMs. For the translation task, it is likely that words adjacent in the source sentence also map to nearby positions in the target sentence. Taking advantage of this property, this refinement improves alignment accuracy by discouraging large jumps when finding the alignment of nearby words. A method with a similar consideration but a different implementation is local attention.
Fertility: Fertility measures whether a word has been attended to at the right level; it aims to prevent both the scenario in which a word has not received enough attention and the scenario in which it has received too much. This design comes from the fact that poor translation results are commonly due to repeatedly translating some words or failing to cover others, which is referred to as over-translation and under-translation.
Bilingual Symmetry : In theory, word alignment should be a reversible result, which means the same word alignment should be got when translation processing form A to B with translation from B to A. This motivates in both directions and the parallel encouraging the similar alignment result. The refinement infertility was further extended by Tu et al. [35], who proposed fertility prediction as a normalizer before decoding, this method adjusts the context vector in original NMT model by adding coverage information when calculating attention weights, thus can provide complementary information about the probability of source words have been translated in prior steps. Besides the intuition that heuristics from SMT, Cheng et al. applied the agreement-based learning method on NMT task, which encourages joint training in the agreement of word alignment with both translation directions [96]. In later, Mi et al. proposed a supervised method for attention component, and it utilized annotated data with additional alignment constraints in its objective function, experiments in Chinese-to-English task has proven to benefit for both translation performance and alignment accuracy [97]. # V. VOCABULARY COVERAGE MECHANISM Besides the long dependency problem in general MT tasks, the existence of unknown words is another problem that can severely affect the translation quality. Different from traditional SMT methods which support enormous vocabulary, most of NMT models suffer from the vocabulary coverage problem due to the nature that it can only choose candidate words in predefined vocabulary with a modest size. In terms of vocabulary building, the chosen words are usually frequent words, while the remaining words are called unknown words or out-of-vocabulary (OOV) words. Empirically speaking, the vocabulary size in NMT varies between 30k-80k at most in each language, with one marked exception was proposed by Jean et al., who once used an efficient approximation for sof tmax to accommodate for the immense size of vocabulary (500k) [47]. However, the vocabulary coverage problem still persists widely because of the far more number of OOV words in de f acto translation task, such as proper nouns in different domains and a great number of rarely used verbs. Since the vocabulary coverage in NMT is extremely lim- ited, handling the OOV words is another research hot spot. This section demonstrates the intrinsic interpretation of the vocabulary coverage problem in NMT and the corresponding solutions proposed in the past several years. # A. Description of Vocabulary Coverage problem in NMT Based on the scenario as mentioned above, in the prac- tical implementation of NMT, the initial way is choosing a small set of vocabulary and converting a large number of OOV words to one uniform “UNK” symbol (or other tags) as illustrated in Fig. 10. This intuitive solution may hurt translation performance in the following two aspects. First, the existence of “UNK” symbol in translation may hurt the semantic completeness of sentence; ambiguity may emerge when “UNK” replace some crucial words [48]. Second, as the NMT model hard to learn information from OOV words, the prediction quality beyond the OOV words may also be affected [49]. 
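The vocabulary truncation just described can be sketched in a few lines; the special tokens, the cut-off value and the function names are arbitrary illustrative choices.

```python
from collections import Counter


def build_vocab(tokenized_corpus, max_size=30000):
    """Keep only the most frequent words; everything else becomes <unk> at lookup time."""
    counts = Counter(tok for sent in tokenized_corpus for tok in sent)
    itos = ["<unk>", "<s>", "</s>"] + [w for w, _ in counts.most_common(max_size)]
    return {w: i for i, w in enumerate(itos)}


def encode(sentence, vocab):
    # Out-of-vocabulary words are mapped to the uniform <unk> symbol.
    return [vocab.get(tok, vocab["<unk>"]) for tok in sentence]


vocab = build_vocab([["the", "ecotax", "portico"], ["the", "box"]], max_size=3)
print(encode(["the", "pen", "portico"], vocab))  # "pen" falls back to <unk>
```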
TABLE I
BLEU PERFORMANCE OF NMT MODELS

Model                        | BLEU EN-DE | BLEU EN-FR
-----------------------------|------------|-----------
ByteNet                      | 23.75      |   -
Deep-Att + PosUnk            |   -        | 39.2
GNMT + RL                    | 24.6       | 39.92
ConvS2S                      | 25.16      | 40.46
MoE                          | 26.03      | 40.56
Deep-Att + PosUnk Ensemble   |   -        | 40.4
GNMT + RL Ensemble           | 26.3       | 41.16
ConvS2S Ensemble             | 26.36      | 41.29
Transformer (base model)     | 27.3       | 38.1
Transformer (big)            | 28.4       | 41.8

Besides the unsurprising observation that NMT performs poorly on sentences with more OOV words than on sentences with more frequent words, some other phenomena in the MT task are also hard to handle, such as multi-word alignment, transliteration, and spelling [16] [21]. These are seen as similar phenomena that are also caused by the unknown-word problem or that suffer from rare training data [50].
Example of the OOV words problem:
en: The ecotax portico in Pont-de-Buis, ... [truncated] ..., was taken down on Thursday morning
fr: Le portique écotaxe de Pont-de-Buis, ... [truncated] ..., a été démonté jeudi matin
nn: Le unk de unk à unk, ... [truncated] ..., a été pris le jeudi matin
Fig. 10. An example of the OOV words problem presented in [23]. en and fr denote the source sentence in English and the corresponding target sentence in French; nn denotes the neural network's result.
For most NMT models, choosing a modest-size vocabulary list is effectively a trade-off between computational cost and translation quality; the same has been found when training NLMs [53] [52]. Concretely, the computational cost mainly comes from the normalization operation used to obtain the predicted word, which is applied repeatedly when training the DL model. Specifically, in the NMT task, since the DL model needs to adjust its parameters at each step, the probability of the current word is calculated repeatedly to obtain the gradient, and since the NMT model computes the probability of the current word when making a prediction, it needs to normalize over all the words in the vocabulary each time. Unfortunately, the normalization process is time-consuming because its time complexity is linear in the vocabulary size, and this property carries the same time complexity over to the training process.
# B. Different Solutions
Related research has proposed various methods covering both the training and the inference process. These methods can be roughly divided into three categories based on their different orientations. The first focuses on computational speed-up, which allows a more extensive vocabulary to be supported. The second focuses on using context information; this kind of method can address some of the unknown words (such as proper nouns) by copying them into the translation result, as well as low-frequency words which otherwise cause poor translation quality. The last one, which is more advanced, prefers to utilize information inside the word, such as characters; because of their flexibility in handling morphological variants of words, these methods can support translating OOV words in a more "intelligent" way.
1) Methods based on computation speed-up: For computation speed-up, there is a large body of literature that implements these ideas in NLM training. The first idea for computational speed-up is to scale the softmax operation. Since an efficient softmax calculation can obviously support a larger vocabulary, this line of work has received a lot of attention in the NLM literature. Morin & Bengio [53] proposed hierarchical models to obtain an exponential speed-up in the computation of the normalization factor, thus helping to accelerate the gradient calculation of word probabilities.
In concrete details, the original model has transformed vocabulary into a binary tree structure, which was built with pre-knowledge from WordNet [54]. The initial experiment result shows that this hierarchical method is comparable with traditional trigram LM but fails to exceed original NLM; this is partly because of utilizing handcrafted feature from WordNet in the tree building process. As a binary tree can provide a significant improvement in cost-effective between the speed with performance, further work still focuses on this trend to find better refinement. Later on, Mnih & Hinton followed this work by removing the requirement of expert knowledge in tree building process [52]. A more elegant method is to retain the original model but change the method in calculating the normalization factor. Bengio & Senecal proposed importance sampling method to approximate the normalization factor [55]. However, this method is not stable unless with a careful control [56]. Mnih & Teh used noise-contrastive estimation to learn the normalization factor directly, which can be more stable in the training process of NLM [57]. Later, Vaswani et al. proposed a similar method with application in MT [58]. The above methods are difficult to be implemented par- allelly by GPUs. Further consideration found solutions that are more GPU friendly. Jean et al. alleviated the computation time by utilizing a subset of vocabulary as candidate words list in the training process while used the whole vocabulary in the inference process. Based on the inspiration of using importance sampling in earlier work [56], they proposed a pure data segmentation method in the training process. Specifically, they pre-processed the training data sequentially, choosing a subset of vocabulary for each training example with the number of its distinct words reached threshold t(which is still far less than the size of original vocabulary). In the inference process, they still abandon using the whole vocabulary and proposing a hybrid candidate list alternatively. They composed candidate words list from two parts. The first part is some spe- cific candidate target words that translated from a pre-defined dictionary with the others are the K-most frequent words. In the practical performance analysis, this method remains the similar modest size of candidate words in the training process; thus, it can maintain the computational efficiency while supporting an extremely larger size of candidate words [47]. Similarly, Mi et al. proposed vocabulary manipulation method which provides a separate vocabulary for different sentences or batches, it contains candidate words from both word-to-word dictionary and phrase-to-phrase library [104]. Besides all kinds of corresponding drawbacks in the above method, the common weakness of all these methods is they still suffer from the OOV words despite a larger vocabulary size they can support. This is because the enlarged vocabulary is still size limited, and there’s no solution for complementary when encountering unknown words, whereas the following category of methods can partly handle it. In addition, simply increasing the vocabulary size can merely bring little improve- ment due to the Zipfs Law, which means there is always a large tail of OOV words need to be addressed [48]. 2) methods by using context information: Besides the above variants which focus on computation speed-up, a more ad- vanced category is using context information. Luong et al. 
2) Methods by using context information: Besides the above variants, which focus on computation speed-up, a more advanced category uses context information. Luong et al. proposed a word alignment algorithm that works together with a copy mechanism to post-process the translation result. This old but useful operation was inspired by the common word (phrase) replacement method in SMT and achieved a considerable improvement in BLEU [59]. Concretely, in Luong's method each OOV word carries a "pointer" that maps it to the corresponding word in the source sentence. In the post-processing stage, a predefined dictionary is consulted through the pointer to find the corresponding translation, while a direct copy is used for OOV words that are not in the dictionary. The popularity of Luong et al.'s method is partly due to the fact that the copy mechanism effectively provides an infinite vocabulary.

Further research has refined this alignment algorithm for better replacement accuracy and generalization. Choi et al. extended Luong et al.'s approach by dividing OOV words into one of three subdivisions based on their linguistic features [70]; this helps to remap OOV words effectively. Gulcehre et al. made several refinements in this category: they applied a copy mechanism similar to Luong et al.'s but used the attention mechanism to determine the location of word alignment, which is more flexible and can be applied directly to other tasks where the alignment location varies dramatically between the two sides (such as text summarization). Besides that, they combined the copy mechanism with the general translation operation by adding a so-called switching network that decides which operation should be applied at each time-step, which can be seen as improving the generalization of the whole model [48]. Gu et al. made a parallel effort in integrating the different mechanisms: they proposed an attention mechanism called CopyNet on top of the vanilla encoder-decoder model, which can naturally be extended to handle OOV words in NMT [74]. Additionally, they found that when the model performs ordinary word translation the attention is driven more by semantics and the language model, whereas it is driven by location when the copying operation is used.

Fig. 11. An example of the copy mechanism proposed by Luong et al. [23]. The model first learns alignments between source and target words (e.g., "Le portique écotaxe de Pont-de-Buis" aligned to "The ecotax portico in Pont-de-Buis") and then annotates each <unk> symbol with a relative position d, such that a target word at position j is aligned to the source word at position i = j + d.

Besides the copy mechanism, using extra knowledge is also useful for handling other linguistic scenarios that are closely related to the vocabulary coverage problem. Arthur et al. incorporated lexicon knowledge to assist the translation of low-frequency words [72]. Feng et al. proposed a similar method with a memory-augmented NMT (M-NMT) architecture, which uses a novel attention mechanism to draw extra knowledge from a dictionary constructed by SMT [73]. Additionally, context information can also be used to improve the translation quality of ambiguous words (homographs) [75]. In a nutshell, many context-based refinements have been proposed; most of them use a copy mechanism to handle OOV words, together with various alignment algorithms to locate the corresponding word on the target side.
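The pointer-based post-processing of Fig. 11 can be sketched as follows. The "unk_d" annotation format, the toy dictionary and the whitespace tokenization are illustrative assumptions rather than details of any specific system.

```python
import re

def replace_unks(nn_output, source_tokens, dictionary):
    """Post-process an NMT hypothesis in which each OOV token is emitted
    as 'unk_d', d being the relative position of its aligned source word
    (source position i = target position j + d).  Dictionary lookup is
    tried first; otherwise the source word is copied verbatim."""
    result = []
    for j, token in enumerate(nn_output.split()):
        match = re.fullmatch(r"unk_(-?\d+)", token)
        if match is None:
            result.append(token)
            continue
        i = j + int(match.group(1))                  # aligned source position
        if 0 <= i < len(source_tokens):
            src = source_tokens[i]
            result.append(dictionary.get(src, src))  # translate or copy
        else:
            result.append("<unk>")                   # alignment out of range
    return " ".join(result)

# Toy usage inspired by the Pont-de-Buis example of Figs. 10 and 11.
src = "The ecotax portico in Pont-de-Buis".split()
hyp = "Le unk_1 unk_-1 de unk_0"
print(replace_unks(hyp, src, {"ecotax": "écotaxe", "portico": "portique"}))
# -> Le portique écotaxe de Pont-de-Buis
```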
However, these methods leave limited room for further improvement, because the copy mechanism is too crude to handle the sophisticated scenarios found in different languages. In practice they perform poorly on morphologically rich languages such as Finnish and Turkish, which motivated methods with better generalization [64].

3) Methods at a finer granularity: This sub-section introduces more "intelligent" approaches that exploit additional information inside the word unit. Such additional information clearly strengthens the ability to cover diverse linguistic phenomena. In previous research, although the semantic information of the word unit provides the vast majority of the learning signal, features at the sub-word level are generally ignored. From a linguistic perspective, the "word" is the basic unit of language but not the minimal unit carrying semantic information, and there are abundant regularities that can be learned from the inside of word units, such as shape and suffixes. Compared with identity copying or dictionary translation, which treat a rare word as an indivisible entity, methods that use fine-grained information are more adaptable. Going further, a more "radical" approach treats words entirely at the character level, which is an interesting direction for future work.

In this category, one popular choice is the sub-word unit; the most remarkable achievement was proposed by Sennrich et al. and has been shown to give the best performance in several shared tasks [50]. Concretely, in this design unknown words are treated as sequences of sub-word units, which is reasonable given the composition of the vast majority of such words (e.g., named entities, loanwords, and morphologically complex words). To represent these OOV words completely, one intuitive solution is to build a predefined sub-word dictionary that contains enough variants of units. However, storing all sub-words causes massive growth in vocabulary size, which effectively cancels out the original goal of reducing the computational cost in both time and space. Under these circumstances, a Byte Pair Encoding (BPE) based sub-word extraction method was applied for word segmentation on both the source and target side, successfully adapting this old but effective data compression method to text pre-processing. In concrete terms, the adapted BPE method repeatedly merges frequent pairs of characters or character sequences, and the word segmentation process follows the steps below:
(1) Prepare a large training corpus (generally a bilingual corpus).
(2) Determine the size of the sub-word vocabulary.
(3) Split the words into sequences of characters (using a special symbol to mark the original word boundaries).
(4) Merge the most frequent adjacent pair of characters (e.g., in English this may be c and h into ch).
(5) Repeat step 4 until reaching a fixed number of merges or the desired vocabulary size.
Each merge increases the vocabulary by one symbol; a minimal sketch of the merge procedure is given below. As for the practical result of word segmentation, the most frequent words are merged into single tokens, while rare words (which correspond to the OOV words in the previous categories of work) may still contain un-merged sub-words; however, such cases have been found to be rare in the processed text [50].
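The following minimal sketch, adapted freely from the description above (and in spirit from the reference snippet accompanying [50]), learns BPE merge operations from a toy word-frequency dictionary; the corpus, the number of merges and the end-of-word marker `</w>` are illustrative assumptions.

```python
import collections
import re

def get_pair_counts(vocab):
    """Count adjacent symbol pairs, weighted by word frequency."""
    pairs = collections.Counter()
    for word, freq in vocab.items():
        symbols = word.split()
        for a, b in zip(symbols, symbols[1:]):
            pairs[(a, b)] += freq
    return pairs

def merge_pair(pair, vocab):
    """Replace every occurrence of the pair by its concatenation."""
    pattern = re.compile(r"(?<!\S)" + re.escape(" ".join(pair)) + r"(?!\S)")
    return {pattern.sub("".join(pair), word): freq for word, freq in vocab.items()}

# Step (3): words are pre-split into characters plus an end-of-word marker.
vocab = {"l o w </w>": 5, "l o w e r </w>": 2,
         "n e w e s t </w>": 6, "w i d e s t </w>": 3}
num_merges = 10                      # step (2): target number of merge operations
for i in range(num_merges):          # steps (4)-(5): greedy merging
    pairs = get_pair_counts(vocab)
    if not pairs:
        break
    best = max(pairs, key=pairs.get)
    vocab = merge_pair(best, vocab)
    print(f"merge {i + 1}: {best}")
```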
Further work on the BPE method has been proposed to obtain better generalization. Kudo later proposed subword regularization [66] as an alternative that handles the spurious-ambiguity phenomenon of BPE, together with a new subword segmentation algorithm based on a unigram language model, which shares the same idea as BPE but is more flexible, producing multiple segmentations with associated probabilities. Similarly, Wu et al. used the "wordpiece" approach to handle OOV words, originally applied in the Google speech recognition system to solve the Japanese/Korean segmentation problem [60] [24]. This method breaks words into word pieces to strike a balance between the flexibility of single characters and the efficiency of full words.

Another popular choice is to work at the character level. Inspired by the use of purely character-level information for building NLMs [65], Costa-jussà & Fonollosa used a CNN with a highway network to model characters directly [62], deploying this architecture on the source side with ordinary word-level generation on the target side. Similarly, Ling et al. and Ballesteros et al. proposed models that use an RNN (LSTM) to build character-level embeddings and compose them into word embeddings [64] [98]; this idea was later applied to build an RNN-based character-level NMT model [63]. More recently, Luong and Manning (2016) proposed a hybrid model that combines a word-level RNN with an auxiliary character-level RNN [99]. Concretely, Luong's method translates mostly at the word level and consults the character-level RNN whenever an OOV word is encountered (the detailed architecture can be found in [99]).

On the other hand, the design of fully character-level translation models has also received attention. Chung et al. used the BPE method to extract a sequence of sub-words on the encoder side and varied only the decoder by using pure characters; the model was shown to provide performance comparable to models that use sub-words [61]. Motivated by the aforementioned work, Lee et al. proposed a fully character-level NMT model without any segmentation, based on CNN pooling with highway layers, which addresses the prohibitively slow training of Luong and Manning's approach [71].

VI. ADVANCED MODELS

This section describes some advanced models that have achieved state-of-the-art performance while belonging to different categories of model structure. Experimental results indicate that all of these networks achieve similar performance, with different advantages in their corresponding aspects.

A. ConvS2S

ConvS2S is short for Convolutional Sequence to Sequence, an end-to-end NMT model proposed by Gehring et al. [82]. Unlike most RNN-based NMT models, ConvS2S is an entirely CNN-based model in both the encoder and the decoder. In terms of network structure, ConvS2S stacks 15 layers of CNN in its encoder and decoder with a fixed kernel width of 3; this deep structure helps to mitigate the weakness of CNNs in capturing context information. As for the network details, ConvS2S uses Gated Linear Units (GLU) [100], which provide a gating function for the output of each convolution layer. Specifically, the output of a convolution layer is a vector $Y = [A\;B] \in \mathbb{R}^{2d}$ with twice the dimensionality of each input element's embedding ($d$ dimensions), and the gating function processes it as

$$v([A\;B]) = A \otimes \sigma(B),$$

where both $A$ and $B$ are $d$-dimensional vectors and $\sigma(B)$ acts as a gate controlling which parts of $A$ are relevant for the current context. This non-linearity has been shown to be more effective for training language models [100], surpassing variants that only apply a tanh function to $A$ [140]. In addition, ConvS2S uses residual connections [141] between the different convolution layers.
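A minimal sketch of the GLU gating just described, written with NumPy for clarity (frameworks such as PyTorch provide an equivalent built-in, e.g. torch.nn.functional.glu); the input sizes below are illustrative assumptions.

```python
import numpy as np

def glu(y):
    """Gated Linear Unit: split the 2d-dimensional convolution output
    Y = [A; B] in half and return A * sigmoid(B), so that sigmoid(B)
    gates which components of A are passed on."""
    d = y.shape[-1] // 2
    a, b = y[..., :d], y[..., d:]
    return a * (1.0 / (1.0 + np.exp(-b)))

# Toy usage: 4 positions whose convolution output has 2d = 16 channels.
rng = np.random.default_rng(0)
y = rng.normal(size=(4, 16))
print(glu(y).shape)   # -> (4, 8): the gated output is back to d = 8 channels
```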
Besides the innovation of a CNN-based encoder-decoder structure, ConvS2S also applies an attention mechanism similar to the one widely adopted in RNN models, called multi-step attention. Concretely, multi-step attention is a separate attention module applied in each decoder layer. To compute the attention, the current decoder hidden state $h_i^l$ (the output of the $l$-th decoder layer at position $i$) is combined with the embedding of the previous target element $g_i$ into a decoder state summary

$$d_i^l = W_d^l h_i^l + b_d^l + g_i.$$

The attention weight $a_{ij}^l$ (the attention of decoder state $i$ over source element $j$ in decoder layer $l$) is then obtained from the dot product of the summary vector with the output $z_j^u$ of the final encoder layer:

$$a_{ij}^l = \frac{\exp\left(d_i^l \cdot z_j^u\right)}{\sum_{t=1}^{m} \exp\left(d_i^l \cdot z_t^u\right)}.$$

Finally, the context vector $c_i^l$ is computed as the weighted average of the encoder outputs $z_j^u$ together with the encoder input embeddings $e_j$:

$$c_i^l = \sum_{j=1}^{m} a_{ij}^l \left(z_j^u + e_j\right).$$

Fig. 12. The structure of the ConvS2S model, a successful CNN-based NMT model with performance competitive with the state of the art (embeddings, convolutions, gated linear units and attention dot products).

B. RNMT+

RNMT+ was proposed by Chen et al. [103]. The model directly inherits the structure of the GNMT model proposed by Wu et al. [24]; specifically, RNMT+ can be seen as an enhanced GNMT model and has demonstrated the best performance among RNN-based NMT models. In terms of model structure, RNMT+ mainly differs from GNMT in the following respects. First, RNMT+ uses six bi-directional RNN (LSTM) layers in its encoder, whereas GNMT uses one bi-directional layer followed by seven unidirectional layers; this structure sacrifices computational efficiency in exchange for higher performance. Second, RNMT+ applies multi-head additive attention instead of the single-head attention of conventional NMT models, which can be seen as borrowing an advantage of the Transformer model. Third, a synchronous training strategy is used, which improves convergence speed and model performance according to empirical results [102]. In addition, inspired by the Transformer model, per-gate layer normalization [101] is applied, which has been shown to help stabilize model training.

Fig. 13. The structure of the RNMT+ model, which is similar to GNMT with adapted innovations in the attention mechanism.

C. Transformer and Transformer based models

Transformer is a new NMT structure proposed by Vaswani et al. [25]. Different from existing NMT models, it abandons the standard RNN/CNN structures and designs innovative multi-layer self-attention blocks combined with a positional encoding method.
This new trend of structure design takes advantages from both RNN- and CNN-based models, and it has further been used to initialize input representations for other NLP tasks. Notably, the Transformer is a purely attention-based NMT model.

1) The model: The Transformer has a distinctive structure, whose major differences from previous models lie in the input representation and the multi-head attention.

(1) Input representation. The Transformer handles its input data quite differently from recurrent or convolutional models. To compute the self-attention mentioned earlier, the Transformer projects each input into three kinds of vectors serving different purposes: the Key, Value and Query vectors. All of these vectors are obtained by multiplying the input embedding by three matrices that are learned during training. In addition, a positional encoding is applied to strengthen the modeling of sequence order: since the Transformer abandons recurrence, this method compensates by injecting word-order information into the feature vectors, which prevents the model from becoming invariant to sequence ordering [148]. Specifically, the Transformer adds the positional encoding to the input embeddings at the bottoms of the encoder and decoder stacks; the positional encoding is designed to have the same dimension as the model embeddings so that the two can be summed. Positional encodings can either be computed by fixed positional functions or be learned [82], and the two options have been shown to give similar performance in the final evaluation. The Transformer ultimately adopts sine and cosine functions, encoding each position as

$$PE_{(pos,\,2i)} = \sin\!\left(pos/10000^{2i/d_{model}}\right), \qquad PE_{(pos,\,2i+1)} = \cos\!\left(pos/10000^{2i/d_{model}}\right),$$

where $pos$ indicates the position and $i$ indicates the dimension. That is, each dimension of the positional encoding corresponds to a sinusoid, and the wavelengths form a geometric progression from $2\pi$ to $10000 \cdot 2\pi$. These functions were chosen because they are assumed to help the model learn to attend by relative positions easily, since for any fixed offset $k$, $PE_{pos+k}$ can be represented as a linear function of $PE_{pos}$.

(2) Multi-head Self-Attention. Self-attention is the major innovation of the Transformer. In the implementation, rather than computing the self-attention once, the multi-head mechanism runs the scaled dot-product attention multiple times in parallel; the outputs of these independent attention heads are then concatenated and linearly transformed into the expected dimension. This repeated computation is called multi-head self-attention, and it allows the model to jointly attend to information from different representation sub-spaces at different positions.

(3) Encoder & Decoder Blocks. The encoder is built from 6 identical components, each of which contains one multi-head attention layer with a fully connected network above it. These two sub-layers are equipped with residual connections as well as layer normalization, and all sub-layers produce outputs of the same dimension, 512. The decoder is more complicated: it also stacks 6 components, and each component connects three sub-layers, namely two multi-head attention sub-layers and one fully connected sub-layer. Specifically, the bottom attention layer is modified with a masking scheme that prevents positions from attending to subsequent positions, so that the model cannot look into the future of the target sequence when predicting the current word. The second (top) attention layer performs multi-head attention over the output of the encoder stack.
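The three ingredients just described can be sketched compactly in NumPy. This is an illustrative re-implementation of the published formulas, not code from [25]; the dimensions, the single projection matrix per stream and the toy inputs are assumptions made for brevity.

```python
import numpy as np

def positional_encoding(seq_len, d_model):
    """Sinusoidal encodings: PE[pos, 2i] = sin(pos / 10000^(2i/d_model)),
    PE[pos, 2i+1] = cos(...), to be summed with the input embeddings."""
    pos = np.arange(seq_len)[:, None]
    i = np.arange(0, d_model, 2)[None, :]
    angles = pos / np.power(10000.0, i / d_model)
    pe = np.zeros((seq_len, d_model))
    pe[:, 0::2] = np.sin(angles)
    pe[:, 1::2] = np.cos(angles)
    return pe

def scaled_dot_product_attention(q, k, v, causal=False):
    """softmax(Q K^T / sqrt(d_k)) V, optionally with the decoder's
    'masked' variant that blocks attention to future positions."""
    d_k = q.shape[-1]
    scores = q @ k.T / np.sqrt(d_k)
    if causal:
        scores = np.where(np.triu(np.ones_like(scores), 1) == 1, -1e9, scores)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

def multi_head_self_attention(x, proj, out_proj, num_heads):
    """Run several heads in parallel on different slices of the projected
    Q, K, V, concatenate the results and apply the output projection."""
    q, k, v = (x @ proj[s] for s in ("q", "k", "v"))
    head_dim = q.shape[-1] // num_heads
    heads = [scaled_dot_product_attention(q[:, h * head_dim:(h + 1) * head_dim],
                                          k[:, h * head_dim:(h + 1) * head_dim],
                                          v[:, h * head_dim:(h + 1) * head_dim])
             for h in range(num_heads)]
    return np.concatenate(heads, axis=-1) @ out_proj

# Toy usage: 5 tokens, d_model = 16, 4 heads.
rng = np.random.default_rng(0)
seq_len, d_model, num_heads = 5, 16, 4
x = rng.normal(size=(seq_len, d_model)) + positional_encoding(seq_len, d_model)
proj = {s: rng.normal(size=(d_model, d_model)) * 0.1 for s in ("q", "k", "v")}
out_proj = rng.normal(size=(d_model, d_model)) * 0.1
print(multi_head_self_attention(x, proj, out_proj, num_heads).shape)  # (5, 16)
```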
2) Transformer based NMT variants: Owing to the tremendous performance improvement brought by the Transformer, refinements of it have attracted a great deal of attention. The commonly acknowledged weaknesses of the vanilla Transformer include the lack of recurrence modeling, the fact that it is theoretically not Turing-complete, difficulty in capturing position information, and large model complexity. These drawbacks have hindered further improvement of translation performance, and several adjustments have been proposed in response.

With respect to model architecture, the proposed modifications concern both the depth of the attention layers and the network composition. Bapna et al. proposed a 2-3x deeper Transformer with a refined attention mechanism that eases the optimization of deeper models [152]. The refined attention mechanism extends its connections to every encoder layer, like weighted residual connections along the encoder depth, which allows the model to flexibly adjust the gradient flow to the different layers of the encoder. Similarly, Wang et al. [145] proposed an even deeper Transformer model (a 25-layer encoder), continuing the line of Bapna et al. (2018) by properly applying layer normalization and a novel output combination method. In contrast to NMT models with a fixed number of layers, Dehghani et al. proposed the Universal Transformer, which abandons stacking a constant number of layers by combining the recurrent inductive bias of RNNs with an Adaptive Computation Time halting mechanism, thereby enhancing the original self-attention based representation for learning iterative or recursive transformations; notably, this adjustment makes the model Turing-complete under certain assumptions [142]. As for refinements of the network composition, inspired by the idea of AutoML, So et al. applied neural architecture search (NAS) to find a comparable model with a simplified architecture [146]. The Evolved Transformer proposed in [146] uses an innovative combination of basic blocks and achieves the same quality as the original Transformer-Big model with 37.6% fewer parameters.

While most modifications change the model structure directly, some recent work chooses to utilize a different input representation to improve model performance. One direct method is an enhanced positional encoding for injecting sequence-order information, since the vanilla Transformer is weak at capturing position information. Shaw et al. proposed a modified self-attention mechanism that is aware of representations of relative positions, which demonstrated significant improvements on two MT tasks [147]. Concurrently, using pre-initialized input representations with fine-tuning is another direction; attempts have been made in different NLP tasks, such as applying ELMo [150] to the encoder of an NMT model [155]. As for the Transformer itself, one by-product of this innovative model is the use of self-attention for representing sequences, which can effectively fuse word information with contextual information. Later, two well-known Transformer-based input representation methods were proposed, named BERT (Bidirectional Encoder Representations from Transformers) [149] and GPT (Generative Pre-trained Transformer) [151], which have been shown to bring improvements on several downstream NLP tasks. For NMT, this idea has been realized by using BERT as an additional embedding layer or by applying BERT directly as a pre-trained model [154], which has been reported to give slightly better performance than the vanilla Transformer after fine-tuning; moreover, directly applying BERT as a pre-trained model has been shown to give similar performance and is thus more convenient for encoder initialization. The full structure of the Transformer is illustrated in Fig. 14.

Fig. 14. The full structure of the Transformer: stacked encoder and decoder blocks of multi-head attention (masked multi-head attention in the decoder), Add & Norm and feed-forward sub-layers, built on top of the source and target embeddings, followed by a softmax over the output probabilities.
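As a concrete illustration of "using BERT as an additional embedding layer", the sketch below extracts contextual features from a pre-trained BERT model that could then be fed alongside an NMT encoder's own embeddings. It assumes the HuggingFace transformers package and the bert-base-uncased checkpoint; the survey itself does not prescribe a specific toolkit or integration point.

```python
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
bert = BertModel.from_pretrained("bert-base-uncased")
bert.eval()  # BERT stays frozen here; only the NMT model would be trained

def bert_features(sentence):
    """Return one contextual vector per BERT sub-word token (shape
    [num_tokens, 768] for bert-base).  An NMT encoder can consume these
    as extra, context-aware input embeddings."""
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        outputs = bert(**inputs)
    return outputs.last_hidden_state.squeeze(0)

features = bert_features("The ecotax portico in Pont-de-Buis was taken down.")
print(features.shape)  # e.g. torch.Size([n, 768]), n depending on tokenization
```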
VII. FUTURE TREND

Although we have witnessed fast-growing research progress in NMT, many challenges remain. Based on extensive investigation [121] [125] [117] [116], we summarize the major challenges and list some potential directions in the following aspects.

(1) In terms of translation performance, NMT still does not perform well on long sentences. This is mainly due to two reasons: practical engineering limitations and the learning ability of the model itself. Regarding the first, some academic experiments simply discard the part of a long sentence that exceeds the RNN length, but this is not acceptable once NMT is deployed in industrial applications. Regarding the second, as research progresses the model architecture becomes more sophisticated; for example, the Transformer applies an innovative structure that brings significant improvements in translation quality and speed [25]. We believe more refinements of the model structure will be proposed. As is well known, RNN-based NMT has an advantage in modeling sequence order but suffers from computational inefficiency, so future work will have to consider the trade-off between these two aspects.

(2) The alignment mechanism is essential for both SMT and NMT models. For the vast majority of NMT models, the attention mechanism plays the functional role of alignment, although it arguably does broader work than conventional alignment models. We believe this advanced alignment method will continue to attract attention in future research, since a powerful attention method can directly improve model performance. Later research on attention mechanisms will also try to relieve weaknesses of NMT such as limited interpretability [116].

(3) The vocabulary coverage problem has always affected most NMT models. Research on reducing the computational load of the softmax operation will continue, and new training strategies that support large vocabulary sizes have appeared. Besides, research on NMT operating at the sub-word or character level has also arisen in recent years, providing additional solutions beyond the traditional scope. More importantly, handling sophisticated translation scenarios such as informal spelling is also a hot topic; current NMT models integrated with character-level networks have alleviated this phenomenon.
Future work should focus on handling all kinds of OOV words in a more flexible way.

(4) Low-resource neural machine translation [125] is another hot spot in current NMT research, which tries to address the severe performance drop when an NMT model is trained on a scarce bilingual corpus. Since this scenario is common in practice, where many less-used languages do not have enough data, we believe this field will be extended in further research. Multilingual translation [111] [110] is the commonly proposed method, incorporating data from multiple language pairs to improve NMT performance; it still needs more interpretation of why results differ when different language pairs are chosen. Besides, unsupervised methods utilize additional monolingual data and provide pre-trained models; further research could improve their effect and provide hybrid training strategies combined with traditional methods [122] [123] [124].

(5) Research on NMT applications will also become more abundant. Currently, many applications have been developed, such as speech translation [107] [106] and document-level translation [105]. We believe that various applications (especially end-to-end tasks) will emerge in the future, and we strongly hope that an AI-based simultaneous translation system can be deployed at large scale, which would bring huge benefits to our society [108].

REFERENCES

[1] Klein, G., Kim, Y., Deng, Y., Senellart, J., & Rush, A. M. (2017). OpenNMT: Open-source toolkit for neural machine translation. arXiv preprint arXiv:1701.02810.
[2] Forcada, M. L., Ginestí-Rosell, M., Nordfalk, J., O'Regan, J., Ortiz-Rojas, S., Pérez-Ortiz, J. A., ... & Tyers, F. M. (2011). Apertium: a free/open-source platform for rule-based machine translation. Machine Translation, 25(2), 127-144.
[3] Koehn, P., Och, F. J., & Marcu, D. (2003, May). Statistical phrase-based translation. In Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology-Volume 1 (pp. 48-54). Association for Computational Linguistics.
[4] Koehn, P., Hoang, H., Birch, A., Callison-Burch, C., Federico, M., Bertoldi, N., ... & Dyer, C. (2007, June). Moses: Open source toolkit for statistical machine translation. In Proceedings of the 45th annual meeting of the association for computational linguistics companion volume proceedings of the demo and poster sessions (pp. 177-180).
[5] Chorowski, J., Bahdanau, D., Cho, K., & Bengio, Y. (2014). End-to-end continuous speech recognition using attention-based recurrent nn: First results. arXiv preprint arXiv:1412.1602.
[6] Bengio, Y., Ducharme, R., Vincent, P., & Jauvin, C. (2003). A neural probabilistic language model. Journal of Machine Learning Research, 3(Feb), 1137-1155.
[7] Cho, K., Van Merriënboer, B., Bahdanau, D., & Bengio, Y. (2014). On the properties of neural machine translation: Encoder-decoder approaches. arXiv preprint arXiv:1409.1259.
[8] Kalchbrenner, N., & Blunsom, P. (2013). Recurrent continuous translation models. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing (pp. 1700-1709).
[9] Mikolov, T., Chen, K., Corrado, G., & Dean, J. (2013). Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781.
[10] Mikolov, T., Yih, W. T., & Zweig, G. (2013). Linguistic regularities in continuous space word representations.
In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (pp. 746- 751). [11] Pennington, J., Socher, R., & Manning, C. (2014). Glove: Global vectors for word representation. In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP) (pp. 1532- 1543). [12] Devlin, J., Zbib, R., Huang, Z., Lamar, T., Schwartz, R., & Makhoul, J. (2014). Fast and robust neural network joint models for statistical machine translation. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) (Vol. 1, pp. 1370-1380). [13] Galley, M., and Manning, C. D. (2008, October). A simple and effective hierarchical phrase reordering model. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (pp. 848-856). Association for Computational Linguistics. [14] Chiang, D., Knight, K., & Wang, W. (2009, May). 11,001 new features for statistical machine translation. In Proceedings of human language technologies: The 2009 annual conference of the north american chapter of the association for computational linguistics (pp. 218-226). Associa- tion for Computational Linguistics. [15] Green, S., Wang, S., Cer, D., & Manning, C. D. (2013). Fast and adaptive online training of feature-rich translation models. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) (Vol. 1, pp. 311-321). [16] Sutskever, I., Vinyals, O., & Le, Q. V. (2014). Sequence to sequence learning with neural networks. InAdvances in neural information pro- cessing systems(pp. 3104-3112). [17] Hochreiter, S., & Schmidhuber, J. (1997). Long short-term memory. Neural computation, 9(8), 1735-1780. [18] Cho, K., Van Merrinboer, B., Gulcehre, C., Bahdanau, D., Bougares, F., Schwenk, H., & Bengio, Y. (2014). Learning phrase representations using RNN encoder-decoder for statistical machine translation.arXiv preprint arXiv:1406.1078. [19] Gehring, J., Auli, M., Grangier, D., & Dauphin, Y. N. (2016). A con- volutional encoder model for neural machine translation.arXiv preprint arXiv:1611.02344. [20] Meng, F., Lu, Z., Wang, M., Li, H., Jiang, W., & Liu, Q. (2015). En- coding source language with convolutional neural network for machine translation.arXiv preprint arXiv:1503.01838. [21] Bahdanau, D., Cho, K., & Bengio, Y. (2014). Neural machine translation by jointly learning to align and translate.arXiv preprint arXiv:1409.0473. [22] Britz, D., Goldie, A., Luong, M. T., & Le, Q. (2017). Massive exploration of neural machine translation architectures.arXiv preprint arXiv:1703.03906. [23] Luong, M. T., Pham, H., & Manning, C. D. (2015). Effective ap- proaches to attention-based neural machine translation.arXiv preprint arXiv:1508.04025. [24] Wu, Y., Schuster, M., Chen, Z., Le, Q. V., Norouzi, M., Macherey, W., ... & Klingner, J. (2016). Google’s neural machine translation system: Bridging the gap between human and machine translation.arXiv preprint arXiv:1609.08144. [25] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., ... & Polosukhin, I. (2017). Attention is all you need. InAdvances in Neural Information Processing Systems(pp. 5998-6008). [26] Chung, J., Gulcehre, C., Cho, K., & Bengio, Y. (2014). Empirical evaluation of gated recurrent neural networks on sequence modeling. arXiv preprint arXiv:1412.3555. [27] Wu, Y., Zhang, S., Zhang, Y., Bengio, Y., & Salakhutdinov, R. R. 
(2016). On multiplicative integration with recurrent neural networks. In Advances in neural information processing systems (pp. 2856-2864). [28] Bengio, Y., Simard, P., & Frasconi, P. (1994). Learning long-term dependencies with gradient descent is difficult. IEEE transactions on neural networks, 5(2), 157-166. [29] Graves, A. (2012). Sequence transduction with recurrent neural net- works. arXiv preprint arXiv:1211.3711. [30] Boulanger-Lewandowski, N., Bengio, Y., & Vincent, P. (2013, Novem- ber). Audio Chord Recognition with Recurrent Neural Networks. In ISMIR (pp. 335-340). [31] Graves, A. (2013). Generating sequences with recurrent neural networks. arXiv preprint arXiv:1308.0850. [32] Koehn, P. (2004, September). Pharaoh: a beam search decoder for phrase-based statistical machine translation models. In Conference of the Association for Machine Translation in the Americas (pp. 115-124). Springer, Berlin, Heidelberg. [33] Chiang, D. (2005, June). A hierarchical phrase-based model for statisti- cal machine translation. In Proceedings of the 43rd Annual Meeting on Association for Computational Linguistics (pp. 263-270). Association for Computational Linguistics. [34] Och, F. J., Tillmann, C., & Ney, H. (1999). Improved alignment models for statistical machine translation. In 1999 Joint SIGDAT Conference on Empirical Methods in Natural Language Processing and Very Large Corpora. [35] Tu, Z., Lu, Z., Liu, Y., Liu, X., & Li, H. (2016). Modeling coverage for neural machine translation. arXiv preprint arXiv:1601.04811. [36] Auli, M., Galley, M., Quirk, C.,& Zweig, G. (2013). Joint language and translation modeling with recurrent neural networks. [37] Auli, M., & Gao, J. (2014). Decoder integration and expected bleu training for recurrent neural network language models. [38] Schwenk, H., & Gauvain, J. L. (2005, October). Training neural network language models on very large corpora. In Proceedings of the conference on Human Language Technology and Empirical Methods in Natural Language Processing (pp. 201-208). Association for Computational Linguistics. [39] Schwenk, H. (2007). Continuous space language models. Computer Speech & Language, 21(3), 492-518. [40] Schwenk, H., Dchelotte, D., & Gauvain, J. L. (2006, July). Continuous space language models for statistical machine translation. In Proceedings of the COLING/ACL on Main conference poster sessions (pp. 723-730). Association for Computational Linguistics. [41] Mikolov, T., Karafit, M., Burget, L., ernock, J., & Khudanpur, S. (2010). Recurrent neural network based language model. In Eleventh annual conference of the international speech communication association. [42] Pollack, J. B. (1990). Recursive distributed representations. Artificial Intelligence, 46(1-2), 77-105. [43] Chrisman, L. (1991). Learning recursive distributed representations for holistic computation. Connection Science, 3(4), 345-366. [44] Allen, R. B. (1987, June). Several studies on natural language and back- propagation. In Proceedings of the IEEE First International Conference on Neural Networks (Vol. 2, No. S 335, p. 341). IEEE Piscataway, NJ. [45] Elman, J. L. (1990). Finding structure in time. Cognitive science, 14(2), 179-211. [46] JORDAN, M. (1986). Serial Order; a parallel distributed processing approach. ICS Report 8604, UC San Diego. [47] Jean, S., Cho, K., Memisevic, R., & Bengio, Y. (2014). On using very large target vocabulary for neural machine translation. arXiv preprint arXiv:1412.2007. [48] Gulcehre, C., Ahn, S., Nallapati, R., Zhou, B.,& Bengio, Y. (2016). 
Pointing the unknown words. arXiv preprint arXiv:1603.08148. [49] Jiajun, Z., & Chengqing, Z. (2016). Towards zero unknown word in neural machine translation. [50] Sennrich, R., Haddow, B., & Birch, A. (2015). Neural machine transla- tion of rare words with subword units.arXiv preprint arXiv:1508.07909. [51] Gage, P. (1994). A new algorithm for data compression. The C Users Journal, 12(2), 23-38. [52] Mnih, A., & Hinton, G. E. (2009). A scalable hierarchical distributed lan- guage model. InAdvances in neural information processing systems(pp. 1081-1088). [53] Morin, F., & Bengio, Y. (2005, January). Hierarchical probabilistic neural network language model. InAistats(Vol. 5, pp. 246-252). [54] Miller, G. (1998).WordNet: An electronic lexical database. MIT press. [55] Bengio, Y., & Sencal, J. S. (2003, January). Quick Training of Proba- bilistic Neural Nets by Importance Sampling. InAISTATS(pp. 1-9). [56] Bengio, Y., & Sencal, J. S. (2008). Adaptive importance sampling to accelerate training of a neural probabilistic language model.IEEE Transactions on Neural Networks,19(4), 713-722. [57] Mnih, A., & Teh, Y. W. (2012). A fast and simple algorithm for training neural probabilistic language models.arXiv preprint arXiv:1206.6426. [58] Vaswani, A., Zhao, Y., Fossum, V., & Chiang, D. (2013). Decoding with large-scale neural language models improves translation. InProceedings of the 2013 Conference on Empirical Methods in Natural Language Processing(pp. 1387-1392). [59] Luong, M. T., Sutskever, I., Le, Q. V., Vinyals, O., & Zaremba, W. (2014). Addressing the rare word problem in neural machine transla- tion.arXiv preprint arXiv:1410.8206. [60] Schuster, M., & Nakajima, K. (2012, March). Japanese and korean voice search. In2012 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)(pp. 5149-5152). IEEE. [61] Chung, J., Cho, K., & Bengio, Y. (2016). A character-level decoder with- out explicit segmentation for neural machine translation.arXiv preprint arXiv:1603.06147. [62] Costa-Jussa, M. R., & Fonollosa, J. A. (2016). Character-based neural machine translation.arXiv preprint arXiv:1603.00810. [63] Ling, W., Trancoso, I., Dyer, C., & Black, A. W. (2015). Character-based neural machine translation.arXiv preprint arXiv:1511.04586. [64] Ling, W., Lus, T., Marujo, L., Astudillo, R. F., Amir, S., Dyer, C., ... & Trancoso, I. (2015). Finding function in form: Compositional character models for open vocabulary word representation.arXiv preprint arXiv:1508.02096. [65] Kim, Y., Jernite, Y., Sontag, D., & Rush, A. M. (2016, March). Character-aware neural language models. InThirtieth AAAI Conference on Artificial Intelligence. [66] Kudo, T. (2018). Subword regularization: Improving neural network translation models with multiple subword candidates.arXiv preprint arXiv:1804.10959. [67] Chung, J., Gulcehre, C., Cho, K., & Bengio, Y. (2015, June). Gated feed- back recurrent neural networks. InInternational Conference on Machine Learning(pp. 2067-2075). [68] Denkowski, M., & Neubig, G. (2017). Stronger baselines for trustable results in neural machine translation.arXiv preprint arXiv:1706.09733. [69] Nakazawa, T., Higashiyama, S., Ding, C., Mino, H., Goto, I., Kazawa, H., ... & Kurohashi, S. (2017, November). Overview of the 4th Workshop on Asian Translation. InProceedings of the 4th Workshop on Asian Translation (WAT2017)(pp. 1-54). [70] Choi, H., Cho, K., & Bengio, Y. (2017). Context-dependent word representation for neural machine translation.Computer Speech & Lan- guage,45, 149-160. 
[71] Lee, J., Cho, K., & Hofmann, T. (2017). Fully character-level neural machine translation without explicit segmentation.Transactions of the Association for Computational Linguistics,5, 365-378. [72] Arthur, P., Neubig, G., & Nakamura, S. (2016). Incorporating dis- crete translation lexicons into neural machine translation.arXiv preprint arXiv:1606.02006. [73] Feng, Y., Zhang, S., Zhang, A., Wang, D., & Abel, A. (2017). Memory- augmented neural machine translation.arXiv preprint arXiv:1708.02005. [74] Gu, J., Lu, Z., Li, H., & Li, V. O. (2016). Incorporating copying mecha- nism in sequence-to-sequence learning.arXiv preprint arXiv:1603.06393. [75] Liu, F., Lu, H., & Neubig, G. (2017). Handling homographs in neural machine translation.arXiv preprint arXiv:1708.06510. [76] Zhao, Y., Zhang, J., He, Z., Zong, C., & Wu, H. (2018). Addressing Troublesome Words in Neural Machine Translation. InProceedings of the 2018 Conference on Empirical Methods in Natural Language Pro- cessing(pp. 391-400). [77] Vaswani, A., Bengio, S., Brevdo, E., Chollet, F., Gomez, A. N., Gouws, S., ... & Sepassi, R. (2018). Tensor2tensor for neural machine translation. arXiv preprint arXiv:1803.07416. [78] Wang, Q., Li, B., Xiao, T., Zhu, J., Li, C., Wong, D. F., & Chao, L. S. (2019). Learning Deep Transformer Models for Machine Translation. arXiv preprint arXiv:1906.01787. [79] So, D. R., Liang, C., & Le, Q. V. (2019). The evolved transformer. arXiv preprint arXiv:1901.11117. [80] Dehghani, M., Gouws, S., Vinyals, O., Uszkoreit, J., & Kaiser, . (2018). Universal transformers. arXiv preprint arXiv:1807.03819. [81] Dai, Z., Yang, Z., Yang, Y., Cohen, W. W., Carbonell, J., Le, Q. V., & Salakhutdinov, R. (2019). Transformer-xl: Attentive language models beyond a fixed-length context. arXiv preprint arXiv:1901.02860. [82] Gehring, J., Auli, M., Grangier, D., Yarats, D., & Dauphin, Y. N. (2017, August). Convolutional sequence to sequence learning. In Proceedings of the 34th International Conference on Machine Learning-Volume 70 (pp. 1243-1252). JMLR. org. [83] Hu, B., Tu, Z., Lu, Z., Li, H., & Chen, Q. (2015, July). Context- dependent translation selection using convolutional neural network. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 2: Short Papers) (pp. 536-541). [84] Kalchbrenner, N., Espeholt, L., Simonyan, K., Oord, A. V. D., Graves, A., & Kavukcuoglu, K. (2016). Neural machine translation in linear time. arXiv preprint arXiv:1610.10099. [85] Kaiser, L., Gomez, A. N., & Chollet, F. (2017). Depthwise sep- arable convolutions for neural machine translation. arXiv preprint arXiv:1706.03059. [86] Kaiser, ., & Bengio, S. (2016). Can active memory replace attention?. In Advances in Neural Information Processing Systems (pp. 3781-3789). (2018). The importance of being recurrent for modeling hierarchical structure. arXiv preprint arXiv:1803.03585. [88] Zhou, J., Cao, Y., Wang, X., Li, P., & Xu, W. (2016). Deep recurrent models with fast-forward connections for neural machine translation. Transactions of the Association for Computational Linguistics, 4, 371- 383. [89] Zhang, B., Xiong, D., Su, J., Duan, H., & Zhang, M. (2016). Variational neural machine translation. arXiv preprint arXiv:1605.07869. [90] Shazeer, N., Mirhoseini, A., Maziarz, K., Davis, A., Le, Q., Hinton, G., & Dean, J. (2017). Outrageously large neural networks: The sparsely- gated mixture-of-experts layer. 
arXiv preprint arXiv:1701.06538. [91] Zhang, B., Xiong, D., Su, J., Lin, Q., & Zhang, H. (2018). Simplify- ing Neural Machine Translation with Addition-Subtraction Twin-Gated Recurrent Networks. arXiv preprint arXiv:1810.12546. [92] Wang, M., Lu, Z., Zhou, J., & Liu, Q. (2017). Deep neural machine translation with linear associative unit. arXiv preprint arXiv:1705.00861. [93] Feng, S., Liu, S., Yang, N., Li, M., Zhou, M., & Zhu, K. Q. (2016, December). Improving attention modeling with implicit distortion and fertility for machine translation. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers (pp. 3082-3092). [94] Yang, Z., Hu, Z., Deng, Y., Dyer, C., & Smola, A. (2016). Neural machine translation with recurrent attention modeling. arXiv preprint arXiv:1607.05108. [95] Cohn, T., Hoang, C. D. V., Vymolova, E., Yao, K., Dyer, C., & Haffari, G. (2016). Incorporating structural alignment biases into an attentional neural translation model. arXiv preprint arXiv:1601.01085. [96] Cheng, Y., Shen, S., He, Z., He, W., Wu, H., Sun, M., & Liu, Y. (2015). Agreement-based joint training for bidirectional attention-based neural machine translation. arXiv preprint arXiv:1512.04650. [97] Mi, H., Wang, Z., & Ittycheriah, A. (2016). Supervised attentions for neural machine translation. arXiv preprint arXiv:1608.00112. [98] Ballesteros, M., Dyer, C., & Smith, N. A. (2015). Improved transition- based parsing by modeling characters instead of words with LSTMs. arXiv preprint arXiv:1508.00657. [99] Luong, M. T., & Manning, C. D. (2016). Achieving open vocabulary neural machine translation with hybrid word-character models. arXiv preprint arXiv:1604.00788. [100] Dauphin, Y. N., Fan, A., Auli, M., & Grangier, D. (2017, August). Language modeling with gated convolutional networks. In Proceedings of the 34th International Conference on Machine Learning-Volume 70 (pp. 933-941). JMLR. org. [101] Ba, J. L., Kiros, J. R., & Hinton, G. E. (2016). Layer normalization. arXiv preprint arXiv:1607.06450. [102] Chen, J., Pan, X., Monga, R., Bengio, S., & Jozefowicz, R. (2016). Re- visiting distributed synchronous SGD. arXiv preprint arXiv:1604.00981. [103] Chen, M. X., Firat, O., Bapna, A., Johnson, M., Macherey, W., Foster, G., ... & Wu, Y. (2018). The best of both worlds: Combining recent ad- vances in neural machine translation. arXiv preprint arXiv:1804.09849. [104] Mi, H., Wang, Z.,& Ittycheriah, A. (2016). Vocabulary manipulation for neural machine translation. arXiv preprint arXiv:1605.03209. [105] Wang, L., Tu, Z., Way, A., & Liu, Q. (2017). Exploiting cross-sentence context for neural machine translation.arXiv preprint arXiv:1704.04347. [106] Weiss, R. J., Chorowski, J., Jaitly, N., Wu, Y., & Chen, Z. (2017). Sequence-to-sequence models can directly translate foreign speech.arXiv preprint arXiv:1703.08581. [107] Duong, L., Anastasopoulos, A., Chiang, D., Bird, S., & Cohn, T. (2016, June). An attentional model for speech translation without transcription. InProceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies(pp. 949-959). [108] Gu, J., Neubig, G., Cho, K., & Li, V. O. (2016). Learning to translate in real-time with neural machine translation.arXiv preprint arXiv:1610.00388. [109] Xu, K., Ba, J., Kiros, R., Cho, K., Courville, A., Salakhudinov, R., ... & Bengio, Y. (2015, June). Show, attend and tell: Neural image caption generation with visual attention. 
In International conference on machine learning (pp. 2048-2057). [110] Dong, D., Wu, H., He, W., Yu, D., & Wang, H. (2015, July). Multi- task learning for multiple language translation. InProceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)(pp. 1723-1732). [111] Firat, O., Cho, K., & Bengio, Y. (2016). Multi-way, multilingual neural machine translation with a shared attention mechanism.arXiv preprint arXiv:1601.01073. [112] Bojar, Ondrej, et al. Findings of the 2015 Workshop on Statistical Machine Translation. Proceedings of the Tenth Workshop on Statistical Machine Translation, 2015, pp. 146. [113] Cettolo, Mauro, et al. The IWSLT 2015 Evaluation Campaign. IWSLT 2015, International Workshop on Spoken Language Translation, 2015. [114] Junczys-Dowmunt, M., Dwojak, T., & Hoang, H. (2016). Is neural ma- chine translation ready for deployment? A case study on 30 translation directions. arXiv preprint arXiv:1610.01108. [115] Bentivogli, L., Bisazza, A., Cettolo, M., & Federico, M. (2016). Neural versus phrase-based machine translation quality: a case study. arXiv preprint arXiv:1608.04631. [116] Chaudhari, S., Polatkan, G., Ramanath, R., & Mithal, V. (2019). An attentive survey of attention models.arXiv preprint arXiv:1904.02874. [117] Galassi, A., Lippi, M., & Torroni, P. (2019). Attention, please! a critical review of neural attention models in natural language processing.arXiv preprint arXiv:1902.02181. [118] Domhan, T. (2018, July). How much attention do you need? a granular analysis of neural machine translation architectures. InProceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)(pp. 1799-1808). [119] Kaiser, ., & Sutskever, I. (2015). Neural gpus learn algorithms.arXiv preprint arXiv:1511.08228. [120] Lipton, Z. C., Berkowitz, J., & Elkan, C. (2015). A critical re- view of recurrent neural networks for sequence learning.arXiv preprint arXiv:1506.00019. [121] Koehn, P., & Knowles, R. (2017). Six challenges for neural machine translation.arXiv preprint arXiv:1706.03872. [122] He, D., Xia, Y., Qin, T., Wang, L., Yu, N., Liu, T. Y., & Ma, W. Y. (2016). Dual learning for machine translation. InAdvances in Neural Information Processing Systems(pp. 820-828). [123] Ramachandran, P., Liu, P. J., & Le, Q. V. (2016). Unsupervised pretrain- ing for sequence to sequence learning.arXiv preprint arXiv:1611.02683. [124] Artetxe, M., Labaka, G., Agirre, E., & Cho, K. (2017). Unsupervised neural machine translation.arXiv preprint arXiv:1710.11041. [125] Sennrich, R., & Zhang, B. (2019). Revisiting Low-Resource Neural Machine Translation: A Case Study.arXiv preprint arXiv:1905.11901. [126] Schwenk, H. (2012, December). Continuous space translation models for phrase-based statistical machine translation. InProceedings of COL- ING 2012: Posters(pp. 1071-1080). [127] Rosenfeld, R. (2000). Two decades of statistical language modeling: Where do we go from here?.Proceedings of the IEEE,88(8), 1270-1278. [128] Stolcke, A. (2002). SRILM-an extensible language modeling toolkit. InSeventh international conference on spoken language processing. [129] Teh, Y. W. (2006, July). A hierarchical Bayesian language model based on Pitman-Yor processes. InProceedings of the 21st International Con- ference on Computational Linguistics and the 44th annual meeting of the Association for Computational Linguistics(pp. 985-992). 
Association for Computational Linguistics. [130] Federico, M., Bertoldi, N., & Cettolo, M. (2008). IRSTLM: an open source toolkit for handling large scale language models. InNinth Annual Conference of the International Speech Communication Association. [131] Heafield, K. (2011, July). KenLM: Faster and smaller language model queries. InProceedings of the sixth workshop on statistical machine translation(pp. 187-197). Association for Computational Linguistics. [132] Son, L. H., Allauzen, A., & Yvon, F. (2012, June). Continuous space translation models with neural networks. InProceedings of the 2012 conference of the north american chapter of the association for computational linguistics: Human language technologies(pp. 39-48). Association for Computational Linguistics. [133] Collobert, R., Weston, J., Bottou, L., Karlen, M., Kavukcuoglu, K., & Kuksa, P. (2011). Natural language processing (almost) from scratch.Journal of machine learning research,12(Aug), 2493-2537. [134] Young, T., Hazarika, D., Poria, S., & Cambria, E. (2018). Recent trends in deep learning based natural language processing.ieee Computational intelligenCe magazine,13(3), 55-75. [135] Wallach, H. M. (2006, June). Topic modeling: beyond bag-of-words. InProceedings of the 23rd international conference on Machine learn- ing(pp. 977-984). ACM. language model for information retrieval. InProceedings of the eighth international conference on Information and knowledge management(pp. 316-321). ACM. [137] Krizhevsky, A., Sutskever, I., & Hinton, G. E. (2012). Imagenet classification with deep convolutional neural networks. InAdvances in neural information processing systems(pp. 1097-1105). [138] Maas, A. L., Hannun, A. Y., & Ng, A. Y. (2013, June). Rectifier nonlinearities improve neural network acoustic models. InProc. icml(Vol. 30, No. 1, p. 3). [139] Cheng, Y., Shen, S., He, Z., He, W., Wu, H., Sun, M., & Liu, Y. Agreement-Based Joint Training for Bidirectional Attention-Based Neural Machine Translation. [140] Oord, A. V. D., Kalchbrenner, N., & Kavukcuoglu, K. (2016). Pixel recurrent neural networks. arXiv preprint arXiv:1601.06759. [141] He, K., Zhang, X., Ren, S., & Sun, J. (2016). Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 770-778). [142] Dehghani, M., Gouws, S., Vinyals, O., Uszkoreit, J., & Kaiser, . (2018). Universal transformers. arXiv preprint arXiv:1807.03819. [143] Xiao, F., Li, J., Zhao, H., Wang, R., & Chen, K. (2019). Lattice-Based Transformer Encoder for Neural Machine Translation. arXiv preprint arXiv:1906.01282. [144] Hao, J., Wang, X., Yang, B., Wang, L., Zhang, J., & Tu, Z. (2019). Modeling recurrence for transformer. arXiv preprint arXiv:1904.03092. [145] Wang, Q., Li, B., Xiao, T., Zhu, J., Li, C., Wong, D. F., & Chao, L. S. (2019). Learning Deep Transformer Models for Machine Translation. arXiv preprint arXiv:1906.01787. [146] So, D. R., Liang, C., & Le, Q. V. (2019). The evolved transformer. arXiv preprint arXiv:1901.11117. [147] Shaw, P., Uszkoreit, J., & Vaswani, A. (2018). Self-attention with relative position representations. arXiv preprint arXiv:1803.02155. [148] Parikh, A. P., Tckstrm, O., Das, D., & Uszkoreit, J. (2016). A decomposable attention model for natural language inference. arXiv preprint arXiv:1606.01933. [149] Devlin, J., Chang, M. W., Lee, K., & Toutanova, K. (2018). Bert: Pre- training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805. [150] Peters, M. 
E., Neumann, M., Iyyer, M., Gardner, M., Clark, C., Lee, K., & Zettlemoyer, L. (2018). Deep contextualized word representations. arXiv preprint arXiv:1802.05365.
[151] Radford, A., Narasimhan, K., Salimans, T., & Sutskever, I. (2018). Improving language understanding by generative pre-training. URL https://s3-us-west-2.amazonaws.com/openai-assets/researchcovers/languageunsupervised/language_understanding_paper.pdf.
[152] Bapna, A., Chen, M. X., Firat, O., Cao, Y., & Wu, Y. (2018). Training deeper neural machine translation models with transparent attention. arXiv preprint arXiv:1808.07561.
[153] Guo, Q., Qiu, X., Liu, P., Shao, Y., Xue, X., & Zhang, Z. (2019). Star-transformer. arXiv preprint arXiv:1902.09113.
[154] Clinchant, S., Jung, K. W., & Nikoulina, V. (2019). On the use of BERT for Neural Machine Translation. arXiv preprint arXiv:1909.12744.
[155] Edunov, S., Baevski, A., & Auli, M. (2019). Pre-trained language model representations for language generation. arXiv preprint arXiv:1903.09722.
[156] Luong, M. T. (2017). Neural Machine Translation. Unpublished doctoral dissertation, Stanford University, Stanford, CA 94305.
{ "id": "1807.03819" }
2002.06823
Incorporating BERT into Neural Machine Translation
The recently proposed BERT has shown great power on a variety of natural language understanding tasks, such as text classification, reading comprehension, etc. However, how to effectively apply BERT to neural machine translation (NMT) lacks enough exploration. While BERT is more commonly used as fine-tuning instead of contextual embedding for downstream language understanding tasks, in NMT, our preliminary exploration of using BERT as contextual embedding is better than using for fine-tuning. This motivates us to think how to better leverage BERT for NMT along this direction. We propose a new algorithm named BERT-fused model, in which we first use BERT to extract representations for an input sequence, and then the representations are fused with each layer of the encoder and decoder of the NMT model through attention mechanisms. We conduct experiments on supervised (including sentence-level and document-level translations), semi-supervised and unsupervised machine translation, and achieve state-of-the-art results on seven benchmark datasets. Our code is available at \url{https://github.com/bert-nmt/bert-nmt}.
http://arxiv.org/pdf/2002.06823
Jinhua Zhu, Yingce Xia, Lijun Wu, Di He, Tao Qin, Wengang Zhou, Houqiang Li, Tie-Yan Liu
cs.CL
Accepted to ICLR-2020
null
cs.CL
20200217
20200217
0 2 0 2 b e F 7 1 ] L C . s c [ 1 v 3 2 8 6 0 . 2 0 0 2 : v i X r a Published as a conference paper at ICLR 2020 # INCORPORATING BERT INTO NEURAL MACHINE TRANSLATION Jinhua Zhu1,∗, Yingce Xia2,∗, Lijun Wu3, Di He4, Tao Qin2, Wengang Zhou1, Houqiang Li1, Tie-Yan Liu2 1CAS Key Laboratory of GIPAS, EEIS Department, University of Science and Technology of China; 2Microsoft Research; 3Sun Yat-sen University; 4Key Laboratory of Machine Perception (MOE), School of EECS, Peking University [email protected], {zhwg,lihq}@ustc.edu.cn [email protected], {taoqin,tyliu}@microsoft.com [email protected] 4di [email protected] # ABSTRACT The recently proposed BERT (Devlin et al., 2019) has shown great power on a va- riety of natural language understanding tasks, such as text classification, reading comprehension, etc. However, how to effectively apply BERT to neural machine translation (NMT) lacks enough exploration. While BERT is more commonly used as fine-tuning instead of contextual embedding for downstream language understanding tasks, in NMT, our preliminary exploration of using BERT as con- textual embedding is better than using for fine-tuning. This motivates us to think how to better leverage BERT for NMT along this direction. We propose a new algorithm named BERT-fused model, in which we first use BERT to extract rep- resentations for an input sequence, and then the representations are fused with each layer of the encoder and decoder of the NMT model through attention mech- anisms. We conduct experiments on supervised (including sentence-level and document-level translations), semi-supervised and unsupervised machine trans- lation, and achieve state-of-the-art results on seven benchmark datasets. Our code is available at https://github.com/bert-nmt/bert-nmt. # INTRODUCTION Recently, pre-training techniques, like ELMo (Peters et al., 2018), GPT/GPT-2 (Radford et al., 2018; 2019), BERT (Devlin et al., 2019), cross-lingual language model (briefly, XLM) (Lample & Con- neau, 2019), XLNet (Yang et al., 2019b) and RoBERTa (Liu et al., 2019) have attracted more and more attention in machine learning and natural language processing communities. The models are first pre-trained on large amount of unlabeled data to capture rich representations of the input, and then applied to the downstream tasks by either providing context-aware embeddings of an input se- quence (Peters et al., 2018), or initializing the parameters of the downstream model (Devlin et al., 2019) for fine-tuning. Such pre-training approaches lead to significant improvements on natural lan- guage understanding tasks. Among them, BERT is one of the most powerful techniques that inspires lots of variants like XLNet, XLM, RoBERTa and achieves state-of-the-art results for many language understanding tasks including reading comprehension, text classification, etc (Devlin et al., 2019). Neural Machine Translation (NMT) aims to translate an input sequence from a source language to a target language. An NMT model usually consists of an encoder to map an input sequence to hidden representations, and a decoder to decode hidden representations to generate a sentence in the target language. Given that BERT has achieved great success in language understanding tasks, a question worthy studying is how to incorporate BERT to improve NMT. Due to the computation resource limitation, training a BERT model from scratch is unaffordable for many researchers. 
Thus, we focus on the setting of leveraging a pre-trained BERT model (instead of training a BERT model from scratch) for NMT. ∗This work is conducted at Microsoft Research Asia. The first two authors contributed equally to this work. 1 Published as a conference paper at ICLR 2020 Given that there is limited work leveraging BERT for NMT, our first attempt is to try two previous strategies: (1) using BERT to initialize downstream models and then fine-tuning the models, and (2) using BERT as context-aware embeddings for downstream models. For the first strategy, follow- ing Devlin et al. (2019), we initialize the encoder of an NMT model with a pre-trained BERT model, and then finetune the NMT model on the downstream datasets. Unfortunately, we did not observe significant improvement. Using a pre-trained XLM (Lample & Conneau, 2019) model, a variant of BERT for machine translation, to warm up an NMT model is another choice. XLM has been ver- ified to be helpful for WMT’16 Romanian-to-English translation. But when applied to a language domain beyond the corpus for training XLM (such as IWSLT dataset (Cettolo et al., 2014), which is about spoken languages) or when large bilingual data is available for downstream tasks, no sig- nificant improvement is observed neither. For the second strategy, following the practice of (Peters et al., 2018), we use BERT to provide context-aware embeddings for the NMT model. We find that this strategy outperforms the first one (please refer to Section 3 for more details). This motivates us to go along this direction and design more effective algorithms. We propose a new algorithm, BERT-fused model, in which we exploit the representation from BERT by feeding it into all layers rather than served as input embeddings only. We use the attention mechanism to adaptively control how each layer interacts with the representations, and deal with the case that BERT module and NMT module might use different word segmentation rules, resulting in different sequence (i.e., representation) lengths. Compared to standard NMT, in addition to BERT, there are two extra attention modules, the BERT-encoder attention and BERT-decoder attention. An input sequence is first transformed into representations processed by BERT. Then, by the BERT- encoder attention module, each NMT encoder layer interacts with the representations obtained from BERT and eventually outputs fused representations leveraging both BERT and the NMT encoder. The decoder works similarly and fuses BERT representations and NMT encoder representations. We conduct 14 experiments on various NMT tasks to verify our approach, including supervised, semi-supervised and unsupervised settings. For supervised NMT, we work on five tasks of IWSLT datasets and two WMT datasets. Specifically, we achieve 36.11 BLEU score on IWSLT’14 German- to-English translation, setting a new record on this task. We also work on two document-level translations of IWSLT, and further boost the BLEU score of German-to-English translation to 36.69. On WMT’14 datasets, we achieve 30.75 BLEU score on English-to-German translation and 43.78 on English-to-French translation, significantly better over the baselines. For semi-supervised NMT, we boost BLEU scores of WMT’16 Romanian-to-English translation with back translation (Sennrich et al., 2016b), a classic semi-supervised algorithm, from 37.73 to 39.10, achieving the best result on this task. 
Finally, we verify our algorithm on unsupervised English↔French and unsupervised English↔Romanian translations and also achieve state-of-the-art results. # 2 BACKGROUND AND RELATED WORK We briefly introduce the background of NMT and review current pre-training techniques. NMT aims to translate an input sentence from the source language to the target one. An NMT model usually consists of an encoder, a decoder and an attention module. The encoder maps the input sequence to hidden representations and the decoder maps the hidden representations to the target sequence. The attention module is first introduced by Bahdanau et al. (2015), which is used to better align source words and target words. The encoder and decoder can be specialized as LSTM (Hochreiter & Schmidhuber, 1997; Sutskever et al., 2014; Wu et al., 2016), CNN (Gehring et al., 2017) and Transformer (Vaswani et al., 2017). A Transformer layer consists of three sub- layers, a self-attention layer that processes sequential data taking the context of each timestep into consideration, an optional encoder-decoder attention layer that bridges the input sequence and tar- get sequence which exists in decoder only, and a feed-forward layer for non-linear transformation. Transformer achieves the state-of-the-art results for NMT (Barrault et al., 2019). In this work, we will use Transformer as the basic architecture of our model. Pre-training has a long history in machine learning and natural language processing (Erhan et al., 2009; 2010). Mikolov et al. (2013) and Pennington et al. (2014) proposed to use distributional representations (i.e., word embeddings) for individual words. Dai & Le (2015) proposed to train a language model or an auto-encoder with unlabeled data and then leveraged the obtained model to finetune downstream tasks. Pre-training has attracted more and more attention in recent years 2 Published as a conference paper at ICLR 2020 and achieved great improvements when the data scale becomes large and deep neural networks are employed. ELMo was proposed in Peters et al. (2018) based on bidirectional LSTMs and its pre-trained models are fed into downstream tasks as context-aware inputs. In GPT (Radford et al., 2018), a Transformer based language model is pre-trained on unlabeled dataset and then finetuned on downstream tasks. BERT (Devlin et al., 2019) is one of the widely adopted pre-training approach for model initialization. The architecture of BERT is the encoder of Transformer (Vaswani et al., 2017). Two kinds of objective functions are used in BERT training: (1) Masked language modeling (MLM), where 15% words in a sentence are masked and BERT is trained to predict them with their surrounding words. (2) Next sentence prediction (NSP): Another task of pre-training BERT is to predict whether two input sequences are adjacent. For this purpose, the training corpus consists of tuples ([cls], input 1, [sep], input 2, [sep]), with learnable special tokens [cls] to classify whether input 1 and input 2 are adjacent and [sep] to segment two sentences, and with probability 50%, the second input is replaced with a random input. Variants of BERT have been proposed: In XLM (Lample & Conneau, 2019), the model is pre-trained based on multiple languages and NSP task is removed; in RoBERTa (Liu et al., 2019), more unlabeled data is leveraged without NSP task neither; in XLNet (Yang et al., 2019b), a permutation based modeling is introduced. 
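To make the two pre-training objectives described above concrete, the sketch below builds one ([cls], input 1, [sep], input 2, [sep]) pair for next sentence prediction and applies the 15% masking of the masked language modeling task. It is a simplified illustration in plain Python, not BERT's actual data pipeline (for instance, the real recipe also sometimes keeps or randomly replaces a selected token instead of always using [mask]); the function names and toy corpus are ours.

```python
import random

CLS, SEP, MASK = "[cls]", "[sep]", "[mask]"

def make_nsp_example(sent_a, sent_b, corpus, p_random=0.5):
    """Build a next-sentence-prediction pair: with probability 0.5 the
    second segment is replaced by a random sentence from the corpus."""
    if random.random() < p_random:
        sent_b, is_next = random.choice(corpus), 0
    else:
        is_next = 1
    return [CLS] + sent_a + [SEP] + sent_b + [SEP], is_next

def apply_mlm_masking(tokens, p_mask=0.15):
    """Mask roughly 15% of the non-special tokens; the model is trained
    to recover the original tokens at the masked positions."""
    inputs, targets = [], []
    for tok in tokens:
        if tok not in (CLS, SEP) and random.random() < p_mask:
            inputs.append(MASK)
            targets.append(tok)   # predict the original token here
        else:
            inputs.append(tok)
            targets.append(None)  # no loss at unmasked positions
    return inputs, targets

corpus = [["the", "cat", "sat"], ["hello", "world"], ["good", "morning"]]
pair, is_next = make_nsp_example(["how", "are", "you"], ["i", "am", "fine"], corpus)
masked, labels = apply_mlm_masking(pair)
print(pair, is_next)
print(masked, labels)
```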
# 3 A PRELIMINARY EXPLORATION While a few pieces of work (Lample & Conneau, 2019; Song et al., 2019) design specific pre- training methods for NMT, they are time and resource consuming given that they need to pre-train large models from scratch using large-scale data, and even one model for each language pair. In this work, we focus on the setting of using a pre-trained BERT model. Detailed model download links can be found in Appendix D. Considering that pre-trained models have been utilized in two different ways for other natural lan- guage tasks, it is straightforward to try them for NMT. Following previous practice, we make the following attempts. (I) Use pre-trained models to initialize the NMT model. There are different implementations for this approach. (1) Following (Devlin et al., 2019), we initialize the encoder of an NMT model with a pre- trained BERT. (2) Following (Lample & Conneau, 2019), we initialize the encoder and/or decoder of an NMT model with XLM. (II) Use pre-trained models as inputs to the NMT model. Inspired from (Peters et al., 2018), we feed the outputs of the last layer of BERT to an NMT model as its inputs. We conduct experiments on the IWSLT’14 English→German translation, a widely adopted dataset for machine translation consisting of 160k labeled sentence pairs. We choose Transformer (Vaswani et al., 2017) as the basic model architecture with transformer iwslt de en configuration (a six-layer model with 36.7M parameters). The translation quality is evaluated by BLEU (Papineni et al., 2002) score; the larger, the better. Both BERTbase and XLM models are pre-trained and we get them from the Web. More details about the experimental settings are included in Appendix A.2. # Table 1: Preliminary explorations on IWSLT’14 English→German translation. Algorithm BLEU score Standard Transformer 28.57 Use BERT to initialize the encoder of NMT Use XLM to initialize the encoder of NMT Use XLM to initialize the decoder of NMT Use XLM to initialize both the encoder and decoder of NMT 27.14 28.22 26.13 28.99 Leveraging the output of BERT as embeddings 29.67 The results are shown in Table 1. We have several observations: (1) Using BERT to initialize the en- coder of NMT can only achieve 27.14 BLEU score, which is even worse than standard Transformer without using BERT. That is, simply using BERT to warm up an NMT model is not a good choice. (2) Using XLM to initialize the encoder or decoder respectively, we get 28.22 or 26.13 BLEU score, which does not outperform the baseline. If both modules are initialized with XLM, the BLEU score 3 Published as a conference paper at ICLR 2020 is boosted to 28.99, slightly outperforming the baseline. Although XLM achieved great success on WMT’16 Romanian-to-English, we get limited improvement here. Our conjecture is that the XLM model is pre-trained on news data, which is out-of-domain for IWSLT dataset mainly about spoken languages and thus, leading to limited improvement. (3) When using the output of BERT as context-aware embeddings of the encoder, we achieve 29.67 BLEU, much better than using pre- trained models for initialization. This shows that leveraging BERT as a feature provider is more effective in NMT. This motivates us to take one step further and study how to fully exploit such features provided by pre-trained BERT models. # 4 ALGORITHM In this section, we first define the necessary notations, then introduce our proposed BERT-fused model and finally provide discussions with existing works. 
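Throughout the algorithm below, BERT acts as a fixed feature provider: an input sentence x is first mapped to per-wordpiece representations H_B that the encoder and decoder then attend over. As a point of reference, here is a minimal sketch of how such features could be extracted with the HuggingFace transformers library; it assumes a recent release whose forward pass returns an object with a last_hidden_state field, and the model name simply mirrors the uncased base model used for the IWSLT En→X tasks (Appendix D).

```python
import torch
from transformers import BertModel, BertTokenizer

# BERT is used as a fixed feature provider: it is never updated during NMT training.
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
bert = BertModel.from_pretrained("bert-base-uncased")
bert.eval()

def bert_features(sentence: str) -> torch.Tensor:
    """Return H_B, the last-layer BERT representation of every wordpiece
    in the input sentence (shape: [num_wordpieces, hidden_size])."""
    enc = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        out = bert(**enc)
    return out.last_hidden_state.squeeze(0)

h_b = bert_features("the cat sat on the mat")
print(h_b.shape)  # e.g. torch.Size([8, 768]): 6 wordpieces plus [CLS]/[SEP]
```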
Notations Let X and Y denote the source language domain and target language domain respectively, which are the collections of sentences in the corresponding languages. For any sentence x ∈ X and y ∈ Y, let lx and ly denote the number of units (e.g., words or sub-words) in x and y. The i-th unit in x/y is denoted as xi/yi. Denote the encoder, decoder and BERT as Enc, Dec and BERT respectively. For ease of reference, we call the encoder and decoder in our work the NMT module. W.l.o.g., we assume both the encoder and decoder consist of L layers. Let attn(q, K, V) denote the attention layer, where q, K and V indicate query, key and value respectively (Vaswani et al., 2017). We use the same feed-forward layer as that used in (Vaswani et al., 2017) and denote it as FFN. Mathematical formulations of the above layers are given in Appendix E.

4.1 BERT-FUSED MODEL

An illustration of our algorithm is shown in Figure 1. Any input x ∈ X is progressively processed by the BERT, encoder and decoder.

Figure 1: The architecture of BERT-fused model. The left and right parts represent the BERT, encoder and decoder respectively. Dash lines denote residual connections. H_B (red part) and H_E^L (green part) denote the output of the last layer from BERT and encoder.

Step-1: Given any input x ∈ X, BERT first encodes it into the representation H_B = BERT(x). H_B is the output of the last layer in BERT, and h_{B,i} ∈ H_B is the representation of the i-th wordpiece in x.

Step-2: Let H_E^l denote the hidden representation of the l-th layer in the encoder, and let H_E^0 denote the word embedding of sequence x. Denote the i-th element in H_E^l as h_i^l for any i ∈ [l_x]. In the l-th layer, l ∈ [L],

\tilde{h}_i^l = \frac{1}{2}\left(\mathrm{attn}_S(h_i^{l-1}, H_E^{l-1}, H_E^{l-1}) + \mathrm{attn}_B(h_i^{l-1}, H_B, H_B)\right), \quad \forall i \in [l_x], \qquad (1)

where attn_S and attn_B are attention models (see Eqn.(6)) with different parameters. Then each \tilde{h}_i^l is further processed by FFN(·) defined in Eqn.(7) and we get the output of the l-th layer: H_E^l = (FFN(\tilde{h}_1^l), ..., FFN(\tilde{h}_{l_x}^l)).

Step-3: Let S_{<t}^l denote the hidden states of the l-th decoder layer preceding time step t, i.e., S_{<t}^l = (s_1^l, ..., s_{t-1}^l). Note that the first state corresponds to a special token indicating the start of a sequence, and s_t^0 is the embedding of the predicted word at time-step t − 1. At the l-th layer, we have

\hat{s}_t^l = \mathrm{attn}_S(s_t^{l-1}, S_{<t+1}^{l-1}, S_{<t+1}^{l-1}); \quad \tilde{s}_t^l = \frac{1}{2}\left(\mathrm{attn}_B(\hat{s}_t^l, H_B, H_B) + \mathrm{attn}_E(\hat{s}_t^l, H_E^L, H_E^L)\right); \quad s_t^l = \mathrm{FFN}(\tilde{s}_t^l). \qquad (2)

Here attn_S, attn_B and attn_E represent the self-attention model, the BERT-decoder attention model and the encoder-decoder attention model respectively. Eqn.(2) iterates over layers and we eventually obtain s_t^L, which is mapped via a linear transformation and softmax to get the t-th predicted word ŷ_t. The decoding process continues until meeting the end-of-sentence token. In our framework, the output of BERT serves as an external sequence representation, and we use an attention model to incorporate it into the NMT model. This is a general way to leverage the pre-trained model regardless of the tokenization scheme.

4.2 DROP-NET TRICK

Inspired by dropout (Srivastava et al., 2014) and drop-path (Larsson et al., 2017), which can regularize network training, we propose a drop-net trick to ensure that the features output by BERT and the conventional encoder are fully utilized. The drop-net affects Eqn.(1) and Eqn.(2). Denote the drop-net rate as p_net ∈ [0, 1]. At each training iteration, for any layer l, we uniformly sample a random variable U^l from [0, 1]; then all the \tilde{h}_i^l in Eqn.(1) are calculated in the following way:

\tilde{h}^{l}_{i,\text{drop-net}} = \mathbb{I}\big(U^l < \tfrac{p_{net}}{2}\big)\cdot \mathrm{attn}_S(h_i^{l-1}, H_E^{l-1}, H_E^{l-1}) + \mathbb{I}\big(U^l > 1 - \tfrac{p_{net}}{2}\big)\cdot \mathrm{attn}_B(h_i^{l-1}, H_B, H_B) + \mathbb{I}\big(\tfrac{p_{net}}{2} \le U^l \le 1 - \tfrac{p_{net}}{2}\big)\cdot \tfrac{1}{2}\left(\mathrm{attn}_S(h_i^{l-1}, H_E^{l-1}, H_E^{l-1}) + \mathrm{attn}_B(h_i^{l-1}, H_B, H_B)\right), \qquad (3)

where I(·) is the indicator function.
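To make Eqn.(1) and the drop-net rule of Eqn.(3) concrete, the following PyTorch sketch implements one fused encoder layer. It is a simplified illustration rather than the released implementation: residual connections, layer normalization and masking are omitted, the module and argument names are ours, and it assumes a PyTorch version where nn.MultiheadAttention supports the batch_first, kdim and vdim arguments.

```python
import random
import torch
import torch.nn as nn

class FusedEncoderLayer(nn.Module):
    """One encoder layer that mixes self-attention over the previous
    encoder states with cross-attention over the (fixed) BERT output."""

    def __init__(self, d_model=512, d_bert=768, d_ffn=1024, n_heads=8, p_net=1.0):
        super().__init__()
        self.attn_s = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.attn_b = nn.MultiheadAttention(d_model, n_heads, batch_first=True,
                                            kdim=d_bert, vdim=d_bert)
        self.ffn = nn.Sequential(nn.Linear(d_model, d_ffn), nn.ReLU(),
                                 nn.Linear(d_ffn, d_model))
        self.p_net = p_net

    def forward(self, h_prev: torch.Tensor, h_bert: torch.Tensor) -> torch.Tensor:
        a_s, _ = self.attn_s(h_prev, h_prev, h_prev)   # attn_S(h, H_E, H_E)
        a_b, _ = self.attn_b(h_prev, h_bert, h_bert)   # attn_B(h, H_B, H_B)
        if self.training:
            u = random.random()                        # U^l ~ uniform[0, 1]
            if u < self.p_net / 2:                     # keep only self-attention
                fused = a_s
            elif u > 1 - self.p_net / 2:               # keep only BERT-encoder attention
                fused = a_b
            else:                                      # Eqn.(1): average of both
                fused = 0.5 * (a_s + a_b)
        else:                                          # inference: expected output
            fused = 0.5 * (a_s + a_b)
        return self.ffn(fused)

layer = FusedEncoderLayer()
h = torch.randn(2, 10, 512)      # previous encoder layer states
h_b = torch.randn(2, 12, 768)    # BERT output (possibly a different length)
out = layer(h, h_b)              # shape: [2, 10, 512]
```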
For any layer, with probability p_net/2 only the self-attention is used, with probability p_net/2 only the BERT-encoder attention is used, and with probability (1 − p_net) both attention models are used. For example, at a specific iteration, the first layer might use attn_S only while the second layer uses attn_B only. At inference time, the expected output of each layer is used, i.e., E_{U^l ∼ uniform[0,1]}(\tilde{h}^{l}_{i,\text{drop-net}}), which reduces to the average in Eqn.(1). Similarly, for training of the decoder, with the drop-net trick, we have

\tilde{s}^{l}_{t,\text{drop-net}} = \mathbb{I}\big(U^l < \tfrac{p_{net}}{2}\big)\cdot \mathrm{attn}_B(\hat{s}_t^l, H_B, H_B) + \mathbb{I}\big(U^l > 1 - \tfrac{p_{net}}{2}\big)\cdot \mathrm{attn}_E(\hat{s}_t^l, H_E^L, H_E^L) + \mathbb{I}\big(\tfrac{p_{net}}{2} \le U^l \le 1 - \tfrac{p_{net}}{2}\big)\cdot \tfrac{1}{2}\left(\mathrm{attn}_B(\hat{s}_t^l, H_B, H_B) + \mathrm{attn}_E(\hat{s}_t^l, H_E^L, H_E^L)\right). \qquad (4)

For inference, it is calculated in the same way as Eqn.(2). Using this technique can prevent the network from overfitting (see the second part of Section 6 for more details).

4.3 DISCUSSION

Comparison with ELMo As introduced in Section 2, ELMo (Peters et al., 2018) provides context-aware embeddings for the encoder in order to capture richer information about the input sequence. Our approach is a more effective way of leveraging the features from the pre-trained model: (1) the output features of the pre-trained model are fused into all layers of the NMT module, ensuring that the well-pre-trained features are fully exploited; (2) we use an attention model to bridge the NMT module and the pre-trained features of BERT, so that the NMT module can adaptively determine how to leverage the features from BERT.

Limitations We are aware that our approach has several limitations. (1) Additional storage cost: our approach leverages a BERT model, which results in additional storage cost. However, considering the BLEU improvement and the fact that we do not need additional training of BERT, we believe that the additional storage is acceptable. (2) Additional inference time: we use BERT to encode the input sequence, which takes about 45% additional time (see Appendix C for details). We leave addressing these two limitations as future work.

# 5 APPLICATION TO SUPERVISED NMT AND SEMI-SUPERVISED NMT

We first verify our BERT-fused model in the supervised setting, including low-resource and rich-resource scenarios. Then we conduct experiments on document-level translation to verify our approach. Finally, we combine the BERT-fused model with back translation (Sennrich et al., 2016b) to verify the effectiveness of our method on semi-supervised NMT.

# 5.1 SETTINGS

Dataset For the low-resource scenario, we choose IWSLT'14 English↔German (En↔De), English→Spanish (En→Es), IWSLT'17 English→French (En→Fr) and English→Chinese (En→Zh) translation. There are 160k, 183k, 236k and 235k bilingual sentence pairs for the En↔De, En→Es, En→Fr and En→Zh tasks respectively. Following common practice (Edunov et al., 2018), for En↔De we lowercase all words. All sentences are preprocessed by BPE (Sennrich et al., 2016c). The model configuration is transformer_iwslt_de_en, representing a six-layer model with embedding size 512 and FFN layer dimension 1024. For the rich-resource scenario, we work on WMT'14 En→De and En→Fr, whose corpus sizes are 4.5M and 36M respectively. We concatenate newstest2012 and newstest2013 as the validation set and use newstest2014 as the test set. The model configuration is transformer_big, another six-layer network with embedding size 1024 and FFN layer dimension 4096. More details about data and model are given in Appendix A.1.
We choose BERTbase for IWSLT tasks and BERTlarge for WMT tasks, which can ensure that the dimension of the BERT and NMT model almost match. The BERT models are fixed during training. Detailed BERT information for each task is in Appendix D. The drop-net rate pnet is set as 1.0. Training Strategy We first train an NMT model until convergence, then initialize the encoder and decoder of the BERT-fused model with the obtained model. The BERT-encoder attention and BERT- decoder attention are randomly initialized. Experiments on IWSLT and WMT tasks are conducted on 1 and 8 M40 GPUs respectively. The batchsize is 4k tokens per GPU. Following (Ott et al., 2018), for WMT tasks, we accumulate the gradient for 16 iterations and then update to simulate a 128-GPU environment. It takes 1, 8 and 14 days to obtain the pre-trained NMT models, and additional 1, 7 and 10 days to finish the whole training process. The optimization algorithm is Adam (Kingma & Ba, 2014) with initial learning rate 0.0005 and inverse sqrt learning rate scheduler (Vaswani et al., 2017). For WMT’14 En→De, we use beam search with width 4 and length penalty 0.6 for inference following (Vaswani et al., 2017). For other tasks, we use width 5 and length penalty 1.0. Evaluation We use multi-bleu.perl to evaluate IWSLT’14 En↔De and WMT translation tasks for fair comparison with previous work. For the remaining tasks, we use a more advance implementation of BLEU score, sacreBLEU for evaluation. Script urls are in Appendix A.1. 5.2 RESULTS The results of IWSLT translation tasks are reported in Ta- ble 2. We implemented standard Transformer as baseline. Our proposed BERT-fused model can improve the BLEU scores of the five tasks by 1.88, 1.47, 2.4, 1.9 and 2.8 points respectively, demonstrating the effectiveness of our method. The consistent improvements on various tasks shows that our method works well for low-resource trans- lations. We achieved state-of-the-art results on IWSLT’14 De→En translation, a widely investigated baseline in ma- Table 2: BLEU of all IWSLT tasks. Transformer BERT-fused En→De De→En En→Es En→Zh En→Fr 28.57 34.64 39.0 26.3 35.9 30.45 36.11 41.4 28.2 38.7 6 Published as a conference paper at ICLR 2020 chine translation. The comparison with previous methods are shown in Appendix B.4 due to space limitation. The results of WMT’14 En→De and En→Fr are shown in Table 3. Our reproduced Transformer matches the results reported in Ott et al. (2018), and we can see that our BERT-fused model can improve these two numbers to 30.75 and 43.78, achieving 1.63 and 0.82 points improvement. Our approach also outperforms the well-designed model DynamicConv (Wu et al., 2019) and a model obtained through neural architecture search (So et al., 2019). Table 3: BLEU scores of WMT’14 translation. Algorithm En→De En→Fr DynamicConv (Wu et al., 2019) Evolved Transformer (So et al., 2019) 29.7 29.8 43.2 41.3 Transformer + Large Batch (Ott et al., 2018) Our Reproduced Transformer Our BERT-fused model 29.3 29.12 30.75 43.0 42.96 43.78 5.3 TRANSLATION WITH DOCUMENT-LEVEL CONTEXTUAL INFORMATION BERT is able to capture the relation between two sentences, since the next sentence prediction (NSP) task is to predict whether two sentences are adjacent. We can leverage this property to improve translation with document-level contextual information (Miculicich et al., 2018), which is briefly denoted as document-level translation. The inputs are a couple of sentences extracted from a para- graph/document, xd T , where the T x’s are contextually correlated. 
We want to translate them into target language by considering the contextual information. Algorithm In our implementation, to translate a sentence x to target domain, we leverage the con- textual information by taking both x and its preceding sentence xprev as inputs. x is fed into Enc, which is the same as sentence-level translation. For the input of BERT, it is the concatenation of two sequences: ([cls], xprev, [sep], x, [sep]), where both [cls] and [sep] are special tokens of BERT. Setting We use IWSLT’14 En↔De dataset as introduced in Section 5.1. The data is a collection of TED talks, where each talk consists of several sequences. We can extract the adjacent sentences for training, validation and test sets. The training strategy, hyperparameter selection and evaluation metric are the same for sentence-level translation. En→De De→En Sentence-level 28.57 34.64 Our Document-level Miculicich et al. (2018) 28.90 27.94 34.95 33.97 Sentence-level + BERT Document-level + BERT 30.45 31.02 36.11 36.69 Results The results are shown in Table 4. We can see that introducing contextual information from an additional encoder can boost the sentence-level baselines, but the improvement is limited (0.33 for En→De and 0.31 for De→En). For Miculicich et al. (2018), the best results we obtain are 27.94 and 33.97 respectively, which are worse than the sentence-level baselines. Combining BERT-fused model and document-level information, we can eventually achieve 31.02 for En→De and 36.69 for De→En. We perform significant test1 between sentence-level and document-level translation. Our document-level BERT-fused model significantly outperforms sentence-level baseline with p-value less than 0.01. This shows that our approach not only works for sentence-level translation, but can also be generalized to document-level translation. # 1https://github.com/moses-smt/mosesdecoder/blob/master/scripts/ analysis/bootstrap-hypothesis-difference-significance.pl 7 Published as a conference paper at ICLR 2020 5.4 APPLICATION TO SEMI-SUPERVISED NMT We work on WMT’16 Romanian→English (Ro→En) translation to verify whether our approach can still make improvement over back translation (Sennrich et al., 2016b), the standard and powerful semi-supervised way to leverage monolingual data in NMT. The number of bilingual sentence pairs for Ro→En is 0.6M . Sennrich et al. (2016a) provided 2M back translated data2. We use newsdev2016 as validation set and newstest2016 as test set. Sentences were encoded using BPE with a shared source-target vocabulary of about 32k tokens. We use transformer big configuration. Considering there is no Romanian BERT, we use the cased multilingual BERT (please refer to Appendix D) to encode inputs. The drop-net rate pnet is set as 1.0. The translation quality is evaluated by multi-bleu.perl. The results are shown in Table 5. The Transformer baseline achieves 33.12 BLEU score. With back-translation, the performance is boosted to 37.73. We use the model obtained with back-translation to initialize BERT-fused model, and eventually reach 39.10 BLEU. Such a score surpasses the previous best result 38.5 achieved by XLM (Lample & Conneau, 2019) and sets a new record. This demonstrates that our proposed approach is effective and can still achieve improvement over strong baselines. # 6 ABLATION STUDY We conduct two groups of ablation studies on IWSLT’14 En→De translation to better understand our model. # Table 6: Ablation study on IWSLT’14 En→De. 
Standard Transformer BERT-fused model 28.57 30.45 Randomly initialize encoder/decoder of BERT-fused model Jointly tune BERT and encoder/decoder of BERT-fused model 27.03 28.87 Feed BERT feature into all layers without attention Replace BERT output with random vectors Replace BERT with the encoder of another Transformer model 29.61 28.91 28.99 Remove BERT-encoder attention Remove BERT-decoder attention 29.87 29.90 # Study for training strategy and network architecture We conduct ablation study to investigate the performance of each component of our model and training strategy. Results are reported in Table 6: (1) We randomly initialize the NMT module (i.e., encoder and decoder) of BERT-fused model in- stead of using a warm-start one as introduced in the training strategy of Section 5.1. In this way, we can only achieve 27.03 BLEU score, which cannot catch up with the baseline. We also jointly train BERT model with the NMT module. Although it can also boost the baseline from 28.57 to 28.87, it is not as good as fixing the BERT part, whose BLEU is 30.45. (2) We feed the output of BERT into all layers of the encoder without attention models. That is, the Eqn.(1) is revised to ˜hl , H l−1 B is learnable. In this case, the encoder and BERT have to share the same vocabulary. The BLEU score is 29.61, which is better than the standard Transformer but slightly worse than leveraging the output of BERT # 2Data at http://data.statmt.org/rsennrich/wmt16_backtranslations/ro-en/. 8 Published as a conference paper at ICLR 2020 as embedding. This shows that the output of BERT should not be fused into each layer directly, and using the attention model to bridge the relation is better than using simple transformation. More results on different languages are included in Appendix B.3. To illustrate the effectiveness of our method, we choose another two kinds of ways to encode the input sequence rather than using BERT: (1) Using a fixed and randomly initialized embedding; (2) Using the encoder from another NMT model. Their BLEU scores are 28.91 and 28.99 respectively, indicating that the BERT pre-trained on large amount of unlabeled data can provide more helpful features to NMT. (3) To verify where the output of BERT should be connected to, we remove the BERT-encoder atten- tion (i.e., attnB in Eqn.(1)) and the BERT-decoder attention (i.e,, attnB in Eqn.(2)) respectively. Correspondingly, the BLEU score drops from 30.45 to 29.87 and 29.90. This indicates that the out- put of BERT should be leveraged by both encoder and decoder to achieve better performances. At last, considering that there are two stacked encoders in our model, we also choose ensemble models and deeper NMT models as baselines. Our approach outperforms the above baselines. The results are left in Appendix B.2 due to space limitation. # Study on drop-net To investigate the effect of drop-net, we conduct experiments on IWSLT’14 En→De dataset with different drop-net probability, pnet ∈ {0, 0.2, 0.4, 0.6, 0.8, 1.0}. The results are shown in Figure 2. As can been seen, although larger pnet leads to larger training loss, it leads to smaller validation loss and so better BLUE scores. This shows that the drop-net trick can indeed improve the generalization ability of our model. We fix pnet = 1.0 in other experiments unless specially specified. (a) Training loss. (b) Validation loss. (c) Validation BLEU. # Figure 2: Training/validation curves with different pnet’s. # 7 APPLICATION TO UNSUPERVISED NMT We work on unsupervised En↔Fr and En↔Ro translation. 
The data processing, architecture selec- tion and training strategy is the same as Lample & Conneau (2019). Settings For En↔Fr, we use 190M monolingual English sentences and 62M monolingual French sentences from WMT News Crawl datasets, which is the same as that used in (Song et al., 2019).3 For unsupervised En↔Ro translation, we use 50M English sentences from News Crawl (sampled from the data for En→Fr) and collect 2.9M sentences for Romanian by concatenating News Crawl data sets and WMT’16 Romanian monolingual data following Lample et al. (2018). The data is preprocessed in the same way as Lample & Conneau (2019). We use the same model configuration as Lample & Conneau (2019), with details in Appendix A.3. The BERT is the pre-trained XLM model (see Appendix D). We first train an unsupervised NMT model following Lample & Conneau (2019) until convergence. Then we initialize our BERT-fused model with the obtained model and continue training. We train models on 8 M40 GPUs, and the batchsize is 2000 tokens per GPU. We use the same optimization hyper-parameters as that described in Lample & Conneau (2019). # 3Data source: https://modelrelease.blob.core.windows.net/mass/en-fr.tar.gz. 9 Published as a conference paper at ICLR 2020 # Table 7: BLEU scores of unsupervised NMT. En→Fr Fr→En En→Ro Ro→En Lample et al. (2018) XLM (Lample & Conneau, 2019) MASS (Song et al., 2019) Our BERT-fused model 27.6 33.4 37.50 38.27 27.7 33.3 34.90 35.62 25.1 33.3 35.20 36.02 23.9 31.8 33.10 33.20 Results The results of unsupervised NMT are shown in Table 7. With our proposed BERT-fused model, we can achieve 38.27, 35.62, 36.02 and 33.20 BLEU scores on the four tasks, setting state- of-the-art results on these tasks. Therefore, our BERT-fused model also benefits unsupervised NMT. # 8 CONCLUSION AND FUTURE WORK In this work, we propose an effective approach, BERT-fused model, to combine BERT and NMT, where the BERT is leveraged by the encoder and decoder through attention models. Experiments on supervised NMT (including sentence-level and document-level translations), semi-supervised NMT and unsupervised NMT demonstrate the effectiveness of our method. For future work, there are many interesting directions. First, we will study how to speed up the in- ference process. Second, we can apply such an algorithm to more applications, like questioning and answering. Third, how to compress BERT-fused model into a light version is another topic. There are some contemporary works leveraging knowledge distillation to combine pre-trained models with NMT (Yang et al., 2019a; Chen et al., 2019), which is a direction to explore. # REFERENCES Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly In 6th International Conference on Learning Representations, learning to align and translate. 2015. URL https://arxiv.org/pdf/1409.0473v7.pdf. Lo¨ıc Barrault, Ond˘rej Bojar, Marta R. Costa-juss´a, Christian Federmann, Mark Fishel, Yvette Gra- ham, Barry Haddow, Matthias Huck, Philipp Koehn, Shervin Malmasi, Christof Monz, Mathias M¨uller, Santanu Pal, Matt Post, and Marcos Zampieri. Findings of the 2019 conference on ma- In Proceedings of the Fourth Conference on Machine Translation chine translation (wmt19). (Volume 2: Shared Task Papers, Day 1), pp. 1–61, Florence, Italy, August 2019. Association for Computational Linguistics. URL http://www.aclweb.org/anthology/W19-5301. Mauro Cettolo, Jan Niehues, Sebastian St¨uker, Luisa Bentivogli, and Marcello Federico. 
Report on the 11th iwslt evaluation campaign, iwslt 2014. In Proceedings of the International Workshop on Spoken Language Translation, Hanoi, Vietnam, pp. 57, 2014. Yen-Chun Chen, Zhe Gan, Yu Cheng, Jingzhou Liu, and Jingjing Liu. Distilling the knowledge of bert for text generation. arXiv preprint arXiv:1911.03829, 2019. Andrew M Dai and Quoc V Le. Semi-supervised sequence learning. In Advances in neural infor- mation processing systems, pp. 3079–3087, 2015. Yuntian Deng, Yoon Kim, Justin Chiu, Demi Guo, and Alexander Rush. Latent alignment and In Advances in Neural Information Processing Systems, pp. 9712–9724, variational attention. 2018. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. NAACL, 2019. URL https://arxiv. org/pdf/1810.04805.pdf. Sergey Edunov, Myle Ott, Michael Auli, David Grangier, and Marcaurelio Ranzato. Classical struc- tured prediction losses for sequence to sequence learning. NAACL, 2018. 10 Published as a conference paper at ICLR 2020 Dumitru Erhan, Pierre-Antoine Manzagol, Yoshua Bengio, Samy Bengio, and Pascal Vincent. The difficulty of training deep architectures and the effect of unsupervised pre-training. In Artificial Intelligence and Statistics, pp. 153–160, 2009. Dumitru Erhan, Yoshua Bengio, Aaron Courville, Pierre-Antoine Manzagol, Pascal Vincent, and Journal of Machine Samy Bengio. Why does unsupervised pre-training help deep learning? Learning Research, 11(Feb):625–660, 2010. Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, and Yann N Dauphin. Convolutional sequence to sequence learning. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pp. 1243–1252. JMLR. org, 2017. Sepp Hochreiter and J¨urgen Schmidhuber. Long short-term memory. Neural Comput., 9(8):1735– 1780, November 1997. ISSN 0899-7667. doi: 10.1162/neco.1997.9.8.1735. URL http://dx. doi.org/10.1162/neco.1997.9.8.1735. Marcin Junczys-Dowmunt and Roman Grundkiewicz. Ms-uedin submission to the wmt2018 ape shared task: Dual-source transformer for automatic post-editing. EMNLP 2018 THIRD CON- FERENCE ON MACHINE TRANSLATION (WMT18), 2018. Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014. Guillaume Lample and Alexis Conneau. Cross-lingual language model pretraining. NeurIPS, 2019. Guillaume Lample, Myle Ott, Alexis Conneau, Ludovic Denoyer, and Marc’Aurelio Ranzato. Phrase-based & neural unsupervised machine translation. arXiv preprint arXiv:1804.07755, 2018. Gustav Larsson, Michael Maire, and Gregory Shakhnarovich. Fractalnet: Ultra-deep neural net- ICLR, 2017. URL https://arxiv.org/pdf/1605.07648. works without residuals. pdf. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692, 2019. Lesly Miculicich, Dhananjay Ram, Nikolaos Pappas, and James Henderson. Document-level neural machine translation with hierarchical attention networks. arXiv preprint arXiv:1809.01576, 2018. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. Distributed represen- tations of words and phrases and their compositionality. In Advances in neural information pro- cessing systems, pp. 3111–3119, 2013. Myle Ott, Sergey Edunov, David Grangier, and Michael Auli. Scaling neural machine translation. 
EMNLP 2018 third conference on machine translation (WMT18), 2018. Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting on association for computational linguistics, pp. 311–318. Association for Computational Linguistics, 2002. Jeffrey Pennington, Richard Socher, and Christopher Manning. Glove: Global vectors for word representation. In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP), pp. 1532–1543, 2014. Matthew E Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. Deep contextualized word representations. arXiv preprint arXiv:1802.05365, 2018. Improving language un- derstanding by generative pre-training. URL https://s3-us-west-2. amazonaws. com/openai- assets/research-covers/languageunsupervised/language understanding paper. pdf, 2018. Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. Language models are unsupervised multitask learners. OpenAI Blog, 1(8), 2019. 11 Published as a conference paper at ICLR 2020 Rico Sennrich, Barry Haddow, and Alexandra Birch. Edinburgh neural machine translation systems for wmt 16. In Proceedings of the First Conference on Machine Translation, volume 2, pp. 371– 376, 2016a. URL http://www.statmt.org/wmt16/pdf/W16-2323.pdf. Rico Sennrich, Barry Haddow, and Alexandra Birch. Improving neural machine translation mod- els with monolingual data. ACL, 2016b. URL https://aclweb.org/anthology/ P16-1009. Rico Sennrich, Barry Haddow, and Alexandra Birch. Neural machine translation of rare words with subword units. ACL, 2016c. David So, Quoc Le, and Chen Liang. The evolved transformer. In Kamalika Chaudhuri and Ruslan Salakhutdinov (eds.), Proceedings of the 36th International Conference on Machine Learning, volume 97 of Proceedings of Machine Learning Research, pp. 5877–5886, Long Beach, Cali- fornia, USA, 09–15 Jun 2019. PMLR. URL http://proceedings.mlr.press/v97/ so19a.html. Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, and Tie-Yan Liu. MASS: Masked sequence to sequence pre-training for language generation. In Kamalika Chaudhuri and Ruslan Salakhutdinov (eds.), Proceedings of the 36th International Conference on Machine Learning, volume 97 of Proceed- ings of Machine Learning Research, pp. 5926–5936, Long Beach, California, USA, 09–15 Jun 2019. PMLR. URL http://proceedings.mlr.press/v97/song19d.html. Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: a simple way to prevent neural networks from overfitting. The journal of machine learning research, 15(1):1929–1958, 2014. Ilya Sutskever, Oriol Vinyals, and Quoc V Le. Sequence to sequence learning with neural networks. In Advances in neural information processing systems, pp. 3104–3112, 2014. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. In Advances in neural information processing systems, pp. 5998–6008, 2017. Yiren Wang, Yingce Xia, Tianyu He, Fei Tian, Tao Qin, ChengXiang Zhai, and Tie-Yan Liu. Multi- agent dual learning. ICLR, 2019. Dirk Weissenborn, Douwe Kiela, Jason Weston, and Kyunghyun Cho. Contextualized role interac- tion for neural machine translation, 2019. URL https://openreview.net/forum?id= ryx3_iAcY7. Felix Wu, Angela Fan, Alexei Baevski, Yann Dauphin, and Michael Auli. 
Pay less attention with lightweight and dynamic convolutions. In International Conference on Learning Representations, 2019. URL https://openreview.net/forum?id=SkVhlh09tX. Lijun Wu, Fei Tian, Yingce Xia, Yang Fan, Tao Qin, Lai Jian-Huang, and Tie-Yan Liu. Learning to teach with dynamic loss functions. In Advances in Neural Information Processing Systems, pp. 6466–6477, 2018. Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. Google’s neural machine trans- arXiv preprint lation system: Bridging the gap between human and machine translation. arXiv:1609.08144, 2016. Yingce Xia, Tianyu He, Xu Tan, Fei Tian, Di He, and Tao Qin. Tied transformers: Neural machine translation with shared encoder and decoder. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pp. 5466–5473, 2019. Jiacheng Yang, Mingxuan Wang, Hao Zhou, Chengqi Zhao, Yong Yu, Weinan Zhang, and Lei Li. Towards making the most of bert in neural machine translation. arXiv preprint arXiv:1908.05672, 2019a. URL https://arxiv.org/pdf/1908.05672.pdf. Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, and Quoc V Le. Xlnet: Generalized autoregressive pretraining for language understanding. arXiv preprint arXiv:1906.08237, 2019b. 12 Published as a conference paper at ICLR 2020 A EXPERIMENT SETUP IWSLT’14 & WMT’14 SETTINGS We mainly follow the scripts below to preprocess the data: https://github.com/pytorch/ fairseq/tree/master/examples/translation . Dataset For the low-resource scenario, we choose IWSLT’14 English↔German (En↔De), English→Spanish (En→Es), IWSLT’17 English→French (En→Fr) and English→Chinese (En→Zh) translation. There are 160k, 183k, 236k, 235k bilingual sentence pairs for En↔De, En→Es, En→Fr and En→Zh tasks. Following the common practice (Edunov et al., 2018), for En↔De, we lowercase all words, split 7k sentence pairs from the training dataset for validation and concatenate dev2010, dev2012, tst2010, tst2011, tst2012 as the test set. For other tasks, we do not lowercase the words and use the official validation/test sets of the corresponding years. For rich-resource scenario, we work on WMT’14 En→De and En→Fr, whose corpus sizes are 4.5M and 36M respectively. We concatenate newstest2012 and newstest2013 as the validation set and use newstest2014 as the test set. We apply BPE (Sennrich et al., 2016c) to split words into sub-units. The numbers of BPE merge operation for IWSLT tasks, WMT’14 En→De and En→Fr are 10k, 32k and 40k respectively. We merge the source and target language sentences for all tasks to build the vocabulary except En→Zh. Model Configuration For IWSLT tasks, we use the transformer iwslt de en setting with dropout ratio 0.3. In this setting, the embedding dimension, FFN layer dimension and number of layers are 512, 1024 and 6. For WMT’14 En→De and En→Fr, we use transformer big setting (short for transformer vaswani wmt en de big) with dropout 0.3 and 0.1 respectively. In this setting, the aforementioned three parameters are 1024, 4096 and 6 respectively. Evaluation We use multi-bleu.perl4 to evaluate IWSLT’14 En↔De and WMT translation tasks for fair comparison with previous work. For the remaining tasks, we use a more advance implementation of BLEU score, detokenized sacreBLEU for evaluation5. A.2 DETAILED EXPERIMENT SETTING IN SECTION 3 The IWSLT’14 English-to-German data and model configuration is introduced in Section A.1. 
For the training stategy, we use Adam (Kingma & Ba, 2014) to optimize the network with β1 = 0.9, β2 = 0.98 and weight-decay = 0.0001. The learning rate scheduler is inverse sqrt, where warmup-init-lr = 10−7, warmup-updates = 4000 and max-lr = 0.0005. A.3 DETAILED MODEL CONFIGURATION IN UNSUPERVISED NMT We leverage one Transformer model with GELU activation function to work on translations of two directions, where each language is associated with a language tag. The embedding dimension, FFN layer dimension and number of layer are 1024, 4096 and 6. The BERT is initialized by the pre- trained XLM model provided by (Lample & Conneau, 2019). # B MORE EXPERIMENT RESULTS B.1 MORE RESULTS ON PRELIMINARY EXPLORATION OF LEVERAGING BERT We use XLM to initialize the model for WMT’14 English→German translation task, whose training corpus is relative large. We eventually obtain 28.09 after 90 epochs, which is still underperform the baseline, 29.12 as we got. Similar problem is also reported in https://github.com/ facebookresearch/XLM/issues/32. We leave the improvement of supervised NMT with XLM as future work. 4https://github.com/moses-smt/mosesdecoder/blob/master/scripts/generic/ multi-bleu.perl # 5https://github.com/mjpost/sacreBLEU. 13 Published as a conference paper at ICLR 2020 B.2 MORE ABLATION STUDY Part I: A different way to deal with multiple attention models Junczys-Dowmunt & Grundkiewicz (2018) proposed a new way to handle multiple attention models. Instead of using Eqn.(2), the input is processed by self-attention, encoder-decoder attention and BERT-decoder attention sequentially. Formally, t = attnS(sl−1 , Sl−1 <t+1, Sl−1 ˆsl t E, H L t, H L t = attnE(ˆsl ¯sl E); t = attnB(¯sl ˜sl t, HB, HB); t = FFN(˜sl sl t). <t+1); (5) The BLEU score is 29.35 for this setting, not as good as our proposed method. # Part II: More results on IWSLT’14 En→De translation Since our BERT-fused model contains two stacked encoders, we carry out two groups of additional baselines: (1) Considering that stacking the BERT and encoder can be seen as a deeper model, we also train another two NMT models with deeper encoders, one with 18 layers (since BERTbase consists of 12 layers) and the other with 12 layers (which achieved best validation performance ranging from 6 to 18 layers). (2) We also compare the results of our approach with ensemble methods. To get an M -model ensemble, we independently train M models with different random seeds (M ∈ Z+). We ensemble both standard Transformers and our BERT-fused models, which are denoted as M -model ensemble (standard) and M -model ensemble (BERT-fused) respectively. Please note that when we aggregate multiple BERT-fused models, we only need to store one replica of the BERT model because the BERT part is not optimized. Table 8: More ablation study on IWSLT’14 En→De. Algorithm BLEU Standard Transformer BERT-fused model 28.57 30.45 12-layer encoder 18-layer encoder 29.27 28.92 2-model ensemble (standard) 3-model ensemble (standard) 4-model ensemble (standard) 2-model ensemble (BERT-fused) 3-model ensemble (BERT-fused) 4-model ensemble (BERT-fused) 29.71 30.08 30.18 31.09 31.45 31.85 The results are shown in Table 8. We have the following observations: 1. Adding more layers can indeed boost the baseline, but still not as good as BERT-fused model. According to our experiments, when increasing the number of layers to 12, we achieve the best BLEU score, 29.27. 2. We also compare our results to ensemble methods. 
Indeed, ensemble significantly boost the baseline by more than one point. However, even if using ensemble of four models, the BLEU score is still lower than our BERT-fused model (30.18 v.s. 30.45), which shows the effectiveness of our method. We want to point out that our method is intrinsically different from ensemble. Ensemble approaches usually refer to “independently” train several different models for the same task, and then aggregate 14 Published as a conference paper at ICLR 2020 the output of each model to get the eventually task. In BERT-fused model, although we include a pre-trained BERT into our model, there is still only one model serving for the translation task. In this sense, we can also combine our BERT-fused model with ensemble. Our approach benefits from ensemble too. When ensembling two models, we can achieve 31.09 BLEU score. When adding the number of models to four, we eventually achieve 31.85 BLEU score, which is 1.67 point improvement over the ensemble of standard Transformer. # Part III: More results on IWSLT’14 De→En translation We report the ensemble results on IWSLT’14 De→En translation in Table 9. We can get similar conclusion compared to that of IWSLT’14 En→De. # Table 9: More ablation study on IWSLT’14 De→En. Algorithm BLEU Standard Transformer BERT-fused model 34.67 36.11 2-model ensemble (standard) 3-model ensemble (standard) 4-model ensemble (standard) 2-model ensemble (BERT-fused) 3-model ensemble (BERT-fused) 4-model ensemble (BERT-fused) 35.92 36.40 36.54 37.42 37.70 37.71 B.3 MORE RESULTS ON FEEDING BERT OUTPUT TO NMT MODULE The ablation study on more languages is shown in Table 10. Our method achieves the best results compared to all baselines. Table 10: BLEU scores of IWSLT translation tasks. Algorithm En→De De→En En→Es En→Zh En→Fr Standard Transformer Feed BERT feature into embedding Feed BERT feature into all layers of encoder Our BERT-fused model 28.57 29.67 29.61 30.45 34.64 34.90 34.84 36.11 39.0 39.5 39.9 41.4 26.3 28.1 28.1 28.2 35.9 37.3 37.4 38.7 B.4 MORE BASELINES OF IWSLT’14 GERMAN-TO-ENGLISH TRANSLATION We summarize the BLEU scores on IWSLT’14 De→En of existed works and our BERT-fused model approach in Table 11. Approach BLEU Multi-agent dual learning (Wang et al., 2019) Tied-Transformer (Xia et al., 2019) Loss to teach (Wu et al., 2018) Role-interactive layer (Weissenborn et al., 2019) Variational attention (Deng et al., 2018) 35.56 35.52 34.80 34.74 33.68 Our BERT-fused model 36.11 15 Published as a conference paper at ICLR 2020 # B.5 COMPARISON WITH BACK TRANSLATION When using unlabeled data to boost machine learning systems, one of the most notable approaches is back translation (briefly, BT) (Sennrich et al., 2016b): We first train a reversed translation model, use the obtained model to translate the unlabeled data in the target domain back to source domain, obtain a synthetic dataset where the source data is back-translated and finally train the forward model on the augmented dataset. Our method has two main differences with BT method. 1. In BT, the monolingual data from the target side is leveraged. In our proposed approach, we use a BERT of the source language, which indirectly leverages the monolingual data from the source side. In this way, our approach and BT are complementary to each other. In Section 5.4, we have already verified that our method can further improve the results of standard BT on Romanian-to-English translation. 2. 
To use BT, we have to train a reversed translation model and then back translate the mono- lingual data, which is time-cost due to the decoding process. In BERT-fused model, we only need to download a pre-trained BERT model, incorporate it into our model and con- tinue training. Besides, the BERT module is fixed during training. On IWSLT’14, we also implement BT on wikipedia data, which is a subset of the corpus of training BERT. The model used for back translation are standard Transformer baselines introduced in Sec- tion 5, whose BLEU scores are 28.57 and 34.64 respectively. We back translate 1M, 2M, 5M, 15M and 25M randomly selected German sentences. The results are reported in Table 12. The rows started with BT(·) represent the results of BT, and the numbers in the brackets are the number of sentences for back translation. # Table 12: BLEU scores IWSLT’14 En←De by BT. Algorithm En→De Standard Transformer BERT-fused model 28.57 30.45 BT (1M) BT (2M) BT (5M) BT (15M) BT (25M) 29.42 29.76 29.10 28.26 27.34 IWSLT dataset is a collection of spoken language, and the bilingual training corpus is small (160k). In Wikipedia, the sentences are relatively formal compared to the spoken language, which is out- of-domain of spoken languages. We can see that when using 1M or 2M monolingual data for BT, the BLEU scores can indeed improve from 28.57 to 29.42/29.76. However, simply adding more wikipedia data for BT does not result in more improvement. There is even a slight drop when adding more than 15M monolingual sentences. However, our BERT-fused model can achieve better performances than BT with wikipedia data. # C COMPARISON OF INFERENCE TIME Table 13: Comparisons on inference time (seconds), ‘+’ is the increased ratio of inference time. Dataset Transformer Ours (+) IWSLT’14 En→De IWSLT’14 De→En WMT’14 En→De WMT’14 En→Fr 70 69 67 89 97 103 99 128 38.6% 49.3% 47.8% 43.8% 16 Published as a conference paper at ICLR 2020 We compare the inference time of our approach to the baselines. The results are shown in Table 13, where from the second column to the last column, the numbers are the inference time of standard Transformer, BERT-fused model, and the increase of inference time. Indeed, introducing BERT to encode the input brings additional inference time, resulting in about 40% to 49% increase. But considering the significant improvement of BLEU score, it is acceptable of such extra cost. We will study how to reduce inference time in the future. # D DOWNLOAD LINK OF PRE-TRAINED BERT MODELS We leverage the pre-trained models provided by PyTorch-Transformers6. For IWSLT’14 tasks, we choose BERTbase model with 12 layers and hidden dimension 768. 1. IWSLT14 En→{De, Es, Fr, Zh}, we choose bert-base-uncased. 2. IWSLT14 De→En, we choose bert-base-german-cased. For WMT14 En→{Fr, De}, we choose bert-large-uncased, which is a BERTlarge model with 24 layers and hidden dimension 1024. For WMT16 Ro→En, we choose bert-base-multilingual-cased, because there is no BERT specially trained for the Romanian. For unsupervised En↔Fr and unsupervised En↔Ro, we choose xlm-mlm-enfr1024 and xlm-mlm-enro1024 respectively. The download links are summarized as follows: bert-base-uncased: https://s3.amazonaws.com/models.huggingface.co/ bert/bert-base-uncased.tar.gz. • bert-large-uncased: https://s3.amazonaws.com/models.huggingface. co/bert/bert-large-uncased.tar.gz. • bert-base-multilingual-cased: https://s3.amazonaws.com/models. huggingface.co/bert/bert-base-multilingual-cased.tar.gz. 
• bert-base-german-cased: https://int-deepset-models-bert.s3.eu-central-1.amazonaws.com/pytorch/bert-base-german-cased.tar.gz.
• xlm-mlm-enfr1024: https://s3.amazonaws.com/models.huggingface.co/bert/xlm-mlm-enfr-1024-pytorch_model.bin.
• xlm-mlm-enro1024: https://s3.amazonaws.com/models.huggingface.co/bert/xlm-mlm-enro-1024-pytorch_model.bin.

6 https://github.com/huggingface/pytorch-transformers

# E DETAILS OF THE NOTATIONS

Let attn(q, K, V) denote the attention layer, where q, K and V indicate query, key and value respectively. Here q is a dq-dimensional vector (dq ∈ Z), and K and V are two sets with |K| = |V|. Each ki ∈ K and vi ∈ V is a dk- and dv-dimensional vector respectively (dq, dk and dv can be different), i ∈ [|K|]. The attention model works as follows:

\mathrm{attn}(q, K, V) = \sum_{i=1}^{|V|} \alpha_i W_v v_i, \quad \alpha_i = \frac{\exp\big((W_q q)^\top (W_k k_i)\big)}{Z}, \quad Z = \sum_{j=1}^{|K|} \exp\big((W_q q)^\top (W_k k_j)\big), \qquad (6)

where Wq, Wk and Wv are the parameters to be learned. In Vaswani et al. (2017), attn is implemented as a multi-head attention model and we omit the details here to increase readability. Following Vaswani et al. (2017), we define the non-linear transformation layer as

\mathrm{FFN}(x) = W_2 \max(W_1 x + b_1, 0) + b_2, \qquad (7)

where x is the input; W1, W2, b1 and b2 are the parameters to be learned; max is an element-wise operator. Layer normalization is also applied following Transformer (Vaswani et al., 2017).
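As a complement to Eqn.(6) and Eqn.(7), here is a small single-head PyTorch sketch of the two layers; multi-head attention, scaling and layer normalization are intentionally omitted, and all dimensions and names are illustrative.

```python
import torch
import torch.nn as nn

class SingleHeadAttn(nn.Module):
    """attn(q, K, V) = sum_i alpha_i * W_v v_i, with softmax weights alpha_i
    computed from (W_q q)^T (W_k k_i), as in Eqn.(6)."""

    def __init__(self, d_q: int, d_k: int, d_v: int, d_out: int):
        super().__init__()
        self.w_q = nn.Linear(d_q, d_out, bias=False)
        self.w_k = nn.Linear(d_k, d_out, bias=False)
        self.w_v = nn.Linear(d_v, d_out, bias=False)

    def forward(self, q, K, V):
        scores = self.w_k(K) @ self.w_q(q)        # (W_q q)^T (W_k k_i) for every i
        alpha = torch.softmax(scores, dim=0)      # alpha_i = exp(score_i) / Z
        return (alpha.unsqueeze(-1) * self.w_v(V)).sum(dim=0)

class FFN(nn.Module):
    """FFN(x) = W_2 max(W_1 x + b_1, 0) + b_2, as in Eqn.(7)."""

    def __init__(self, d_model: int, d_hidden: int):
        super().__init__()
        self.lin1 = nn.Linear(d_model, d_hidden)
        self.lin2 = nn.Linear(d_hidden, d_model)

    def forward(self, x):
        return self.lin2(torch.relu(self.lin1(x)))

attn = SingleHeadAttn(d_q=512, d_k=768, d_v=768, d_out=512)
q = torch.randn(512)                 # a single query vector
K = V = torch.randn(12, 768)         # 12 key/value vectors, e.g. from BERT
print(attn(q, K, V).shape)           # torch.Size([512])
print(FFN(512, 1024)(torch.randn(512)).shape)
```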
{ "id": "1907.11692" }
2002.06305
Fine-Tuning Pretrained Language Models: Weight Initializations, Data Orders, and Early Stopping
Fine-tuning pretrained contextual word embedding models to supervised downstream tasks has become commonplace in natural language processing. This process, however, is often brittle: even with the same hyperparameter values, distinct random seeds can lead to substantially different results. To better understand this phenomenon, we experiment with four datasets from the GLUE benchmark, fine-tuning BERT hundreds of times on each while varying only the random seeds. We find substantial performance increases compared to previously reported results, and we quantify how the performance of the best-found model varies as a function of the number of fine-tuning trials. Further, we examine two factors influenced by the choice of random seed: weight initialization and training data order. We find that both contribute comparably to the variance of out-of-sample performance, and that some weight initializations perform well across all tasks explored. On small datasets, we observe that many fine-tuning trials diverge part of the way through training, and we offer best practices for practitioners to stop training less promising runs early. We publicly release all of our experimental data, including training and validation scores for 2,100 trials, to encourage further analysis of training dynamics during fine-tuning.
http://arxiv.org/pdf/2002.06305
Jesse Dodge, Gabriel Ilharco, Roy Schwartz, Ali Farhadi, Hannaneh Hajishirzi, Noah Smith
cs.CL, cs.LG
null
null
cs.CL
20200215
20200215
0 2 0 2 b e F 5 1 ] L C . s c [ 1 v 5 0 3 6 0 . 2 0 0 2 : v i X r a # Fine-Tuning Pretrained Language Models: Weight Initializations, Data Orders, and Early Stopping Jesse Dodge 1 2 Gabriel Ilharco 3 Roy Schwartz 2 3 Ali Farhadi 2 3 4 Hannaneh Hajishirzi 2 3 Noah Smith 2 3 # Abstract Fine-tuning pretrained contextual word embed- ding models to supervised downstream tasks has become commonplace in natural language pro- cessing. This process, however, is often brittle: even with the same hyperparameter values, dis- tinct random seeds can lead to substantially differ- ent results. To better understand this phenomenon, we experiment with four datasets from the GLUE benchmark, fine-tuning BERT hundreds of times on each while varying only the random seeds. We find substantial performance increases compared to previously reported results, and we quantify how the performance of the best-found model varies as a function of the number of fine-tuning trials. Further, we examine two factors influenced by the choice of random seed: weight initializa- tion and training data order. We find that both contribute comparably to the variance of out-of- sample performance, and that some weight ini- tializations perform well across all tasks explored. On small datasets, we observe that many fine- tuning trials diverge part of the way through train- ing, and we offer best practices for practitioners to stop training less promising runs early. We publicly release all of our experimental data, in- cluding training and validation scores for 2,100 trials, to encourage further analysis of training dynamics during fine-tuning. MRPC RTE CoLA SST 90.7 70.0 62.1 92.5 88.0 70.4 60.6 93.2 91.4 77.3 67.6 95.1 90.9 83.4 62.1 93.2 89.2 83.8 63.6 95.6 90.9 86.6 68.0 96.4 90.9 89.2 71.4 96.9 BERT (Phang et al., 2018) BERT (Liu et al., 2019) BERT (ours) STILTs (Phang et al., 2018) XLNet (Yang et al., 2019) RoBERTa (Liu et al., 2019) ALBERT (Lan et al., 2019) Table 1. Fine-tuning BERT multiple times while varying only ran- dom seeds leads to substantial improvements over previously pub- lished validation results with the same model and experimental setup (top rows), on four tasks from the GLUE benchmark. On some tasks, BERT even becomes competitive with more modern models (bottom rows). Best results with standard BERT fine- tuning regime are indicated in bold, best overall results are under- scored. accuracy on natural language understanding tasks in pop- ular NLP benchmarks such as GLUE (Wang et al., 2018) and SuperGLUE (Wang et al., 2019), and variants of this model have since seen adoption in ever-wider applications (Schwartz et al., 2019; Lu et al., 2019). Typically, these models are first pretrained on large corpora, then fine-tuned on downstream tasks by reusing the model’s parameters as a starting point, while adding one task-specific layer trained from scratch. Despite its simplicity and ubiquity in modern NLP, this process has been shown to be brittle (Devlin et al., 2019; Phang et al., 2018; Zhu et al., 2019; Raffe et al., 2019), where fine-tuning performance can vary substantially across different training episodes, even with fixed hyperparameter values. # 1. Introduction The advent of large-scale self-supervised pretraining has contributed greatly to progress in natural language process- ing (Devlin et al., 2019; Liu et al., 2019; Radford et al., 2019). 
In particular, BERT (Devlin et al., 2019) advanced 1Language Technologies Institute, School of Computer Sci- ence, Carnegie Mellon University 2Allen Institute for Artificial Intelligence 3Paul G. Allen School of Computer Science and Engi- neering, University of Washington 4XNOR.AI. Correspondence to: Jesse Dodge <[email protected]>. In this work, we investigate this variation by conducting a series of fine-tuning experiments on four tasks in the GLUE benchmark (Wang et al., 2018). Changing only training data order and the weight initialization of the fine-tuning layer—which contains only 0.0006% of the total number of parameters in the model—we find substantial variance in performance across trials. We explore how validation performance of the best found model varies with the number of fine-tuning experiments, finding that, even after hundreds of trials, performance has not fully converged. With the best found performance across Weight Initializations, Data Orders, and Early Stopping all the conducted experiments of fine-tuning BERT, we ob- serve substantial improvements compared to previous pub- lished work with the same model (Table 1). On MRPC (Dolan & Brockett, 2005), BERT performs better than more recent models such as XLNet (Yang et al., 2019), RoBERTa (Liu et al., 2019) and ALBERT (Lan et al., 2019). More- over, on RTE (Wang et al., 2018) and CoLA (Warstadt et al., 2019), we observe a 7% (absolute) improvement over previ- ous results with the same model. It is worth highlighting that in our experiments only random seeds are changed—never the fine-tuning regime, hyperparameter values, or pretrained weights. These results demonstrate how model comparisons that only take into account reported performance in a bench- mark can be misleading, and serve as a reminder of the value of more rigorous reporting practices (Dodge et al., 2019). order as two sources of randomness in fine-tuning by varying random seeds that control them, finding that 1) they are comparable as sources of variance in per- formance; 2) in a given dataset, some data orders and weight initializations are consistently better than oth- ers; and 3) some weight initializations perform well across multiple different tasks. • We demonstrate how a simple early stopping algorithm can effectively be used to improve expected perfor- mance using a given computational budget. • We release all of our collected data of 2,100 fine- tuning episodes on four popular datasets from the GLUE benchmark to incentivize further analyses of fine-tuning dynamics. To better understand the high variance across fine-tuning episodes, we separate two factors that affect it: the weight initialization for the task-specific layer; and the training data order resulting from random shuffling. The contribu- tions of each of these have previously been conflated or overlooked, even by works that recognize the importance of multiple trials or random initialization (Phang et al., 2018). By conducting experiments with multiple combinations of random seeds that control each of these factors, we quantify their contribution to the variance across runs. Moreover, we present evidence that some seeds are consistently better than others in a given dataset for both weight initializations and data orders. Surprisingly, we find that some weight initializations perform well across all studied tasks. # 2. Methodology Our experiments consist of fine-tuning pretrained BERT to four downstream tasks from the GLUE benchmark. 
For a given task, we experiment multiple times with the same model using the same hyperparameter values, while modify- ing only the random seeds that control weight initialization (WI) of the final classification layer and training data order (DO). In this section we describe in detail the datasets and settings for our experiments. # 2.1. Data By frequently evaluating the models through training, we empirically observe that worse performing models can often be distinguished from better ones early in training, moti- vating investigations of early stopping strategies. We show that a simple early stopping algorithm (described in Section 5) is an effective strategy for reducing the computational resources needed to reach a given validation performance and include practical recommendations for a wide range of computational budgets. We examine four datasets from the GLUE benchmark, de- scribed below and summarized in Table 2. The data is publicly available and can be download from the repository jiant.1 Three of our datasets are relatively small (MRPC, RTE, and CoLA), and one relatively large (SST). Since all datasets are framed as binary classification, the model struc- ture for each is the same, as only a single classification layer with two output units is appended to the pretrained BERT. To encourage further research in analyzing training dynam- ics during fine-tuning, we publicly release all of our experi- mental data. This includes, for each of the 2,100 fine-tuning episodes, the training loss at every weight update, and vali- dation performance on at least 30 points in training. Our main contributions are: • We show that running multiple trials with different random seeds can lead to substantial gains in perfor- mance on four datasets from the GLUE benchmark. Further, we present how the performance of the best- found model changes as a function of the number of trials. Microsoft Research Paraphrase Corpus (MRPC; Dolan & Brockett, 2005) contains pairs of sentences, labeled as either nearly semantically equivalent, or not. The dataset is evaluated using the average of F1 and accuracy. Recognizing Textual Entailment (RTE; Wang et al., 2018) combines data from a series of datasets (Dagan et al., 2005; Bar-Haim et al., 2006; Giampiccolo et al., 2007; Ben- tivogli et al., 2009). Each example in RTE is a pair of sentences, and the task is to predict whether the first (the premise) entails the second (the hypothesis). Corpus of Linguistic Acceptability (CoLA; Warstadt et al., 2019) is comprised of English sentences labeled as • We investigate weight initialization and training data 1https://github.com/nyu-mll/jiant Weight Initializations, Data Orders, and Early Stopping evaluation metric majority baseline # training samples # validation samples MRPC RTE CoLA SST Acc./F1 Acc. MCC Acc. 0.53 0.00 0.51 2.5k 8.6k 67k 277 1,043 873 0.75 3.7k 409 Table 2. The datasets used in this work, which comprise four out of nine of the tasks in the GLUE benchmark (Wang et al., 2018). either grammatical or ungrammatical. Models are evaluated on Matthews correlation (MCC; Matthews, 1975), which ranges between –1 and 1, with random guessing being 0. found model from multiple experiments is substantially higher than the expected performance of a single trial. In particular, in Table 1 we report the performance of the best model from all conducted experiments, which represents substantial gains compared to previous work that uses the same model and optimization procedure. 
On some datasets, we observe numbers competitive with more recent mod- els which have improved pretraining regimes (Phang et al., 2018; Yang et al., 2019; Liu et al., 2019; Lan et al., 2019); compared to BERT, these approaches pretrain on more data, and some utilize more sophisticated modeling or optimiza- tion strategies. We leave it to future work to analyze the variance from random seeds on these other models, and note that running analogous experiments would likely also lead to performance improvements. Stanford Sentiment Treebank (SST; Socher et al., 2013) consists of sentences annotated as expressing positive or neg- ative sentiment (we use the binary version of the annotation), collected from movie reviews. In light of these overall gains and the computational bur- den of running a large number of experiments, we explore how the number of trials influences the expected validation performance. # 2.2. Fine-tuning Following standard practice, we fine-tune BERT (BERT- large, uncased) for three epochs (Phang et al., 2018; Devlin et al., 2019). We fine-tune the entire model (340 million parameters), of which the vast majority start as pretrained weights and the final layer (2048 parameters) is randomly initialized. The weights in the final classification layer are initialized using the standard approach used when fine- tuning pretrained transformers like BERT, RoBERTa, and ALBERT (Devlin et al., 2019; Liu et al., 2019; Lan et al., 2019): sampling from a normal distribution with mean 0 and standard deviation 0.02. All experiments were run on P100 GPUs with 16 GB of RAM. We train with a batch size of 16, a learning rate of 0.00002, and dropout of 0.1; the open source implementation, pretrained weights, and full hyperparameter values and experimental details can be found in the HuggingFace transformer library (Wolf et al., 2019).2 Each experiment is repeated N 2 times, with all possible combinations of N distinct random seeds for WI and N for DO.3 For the datasets MRPC, RTE, and CoLA, we run a total of 625 experiments each (N =25). For the larger SST, we run 225 experiments (N =15). # 3.1. Expected validation performance To quantify the improvement found from running more experiments, we turn to expected validation performance as introduced by Dodge et al. (2019). The standard machine learning experimental setup involves a practitioner training x models, evaluating each of them on validation data, then taking the model which has the best validation performance and evaluating it on test data. Intuitively, as the number of trained models x increases, the best of those x models will improve; expected validation performance calculates the expected value of the best validation performance as a function of x.4 We plot expected validation curves for each dataset in Fig- ure 1 with (plus or minus) the standard deviation shaded.5 The leftmost point on each of these curves (x = 1) shows the expected performance for a budget of a single training run. For all datasets, Figure 1 shows, unsurprisingly, that expected validation performance increases as more compu- tational resources are used. This rising trend continues even up to our largest budget, suggesting even larger budgets could lead to improvements. On the three smaller datasets (MRPC, RTE, and CoLA) there is significant variance at smaller budgets, which indicates that individual runs can have widely varying performance. # 3. 
The large impact of random seeds

Our large set of fine-tuning experiments evidences the sizable variance in performance across trials varying only random seeds. This effect is especially pronounced on the smaller datasets; the validation performance of the best-

In the most common setup for fine-tuning on these datasets, models are evaluated on the validation data after each epoch, or once after training for multiple epochs (Phang et al., 2018; Devlin et al., 2019). In Figure 1 we show expected performance as we vary the number of evaluations on validation data during training (all models trained for three epochs): once after training (green), after each of the three epochs (orange), and frequently throughout training (ten times per epoch, blue).6 Considering the benefits of more frequent evaluations as shown in Figure 1, we thus recommend this practice in similar scenarios.

2 https://github.com/huggingface/transformers
3 Although any random numbers would have sufficed, for completeness: we use the numbers {1, . . . , N} as seeds.
4 A full derivation can be found in Dodge et al. (2019).
5 We shade between the observed minimum and maximum.

[Figure 1: four panels (MRPC, RTE, CoLA, SST); x-axis: random seed assignments; y-axis: expected validation performance, with curves for evaluating 10x per epoch, 1x per epoch, and 1x in training.]

Figure 1. Expected validation performance (Dodge et al., 2019), plus and minus one standard deviation, as the number of experiments increases. The x-axis represents the budget (e.g., x = 10 indicates a budget large enough to train 10 models). The y-axis is the expected performance of the best of the x models trained. Each plot shows three evaluation scenarios: in the first, the model is frequently evaluated on the validation set during training (blue); in the second, at the end of each epoch (orange); and in the third, only at the end of training (green). As we increase the number of evaluations per run we see higher expected performance and smaller variances. Further, more frequently evaluating the model on validation data leads to higher expected validation values.

# 4. Weight initialization and data order

Agg. over WI    .058
Agg. over DO    .059
Total           .061

Table 3. Expected (average) standard deviation in validation performance across runs. The expected standard deviations given WI and DO random seeds are close in magnitude, and only slightly below the overall standard deviation.

To better understand the high variance in performance across trials, we analyze two sources of randomness: the weight initialization of the final classification layer and the order the training data is presented to the model. While previous work on fine-tuning pretrained contextual representation models (Devlin et al., 2019; Phang et al., 2018) has generally used a single random seed to control these two factors, we analyze them separately. Our experiments are conducted with every combination of a set of weight initialization seeds (WI) and a set of data order (DO) seeds that control these factors. One data order can be viewed as one sample from the set of permutations of the training data. Similarly, one weight initialization can be viewed as a specific set of samples from the normal distribution from which we draw them.
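The per-seed aggregation reported in Table 3 (and detailed in Section 4.1) can be reproduced directly from the released per-trial results. The following is a minimal sketch, not the authors' analysis code; it assumes a hypothetical list of run records with fields wi_seed, do_seed, and score.

```python
from collections import defaultdict
from statistics import mean, pstdev

def expected_std_by_seed(runs, key):
    """Group runs by one seed type ('wi_seed' or 'do_seed'), compute the
    standard deviation of validation scores within each group, and return
    the average of those standard deviations (one row of Table 3)."""
    groups = defaultdict(list)
    for run in runs:
        groups[run[key]].append(run["score"])
    return mean(pstdev(scores) for scores in groups.values())

# runs: one dict per fine-tuning trial, e.g. {"wi_seed": 3, "do_seed": 17, "score": 0.884}
def table3_rows(runs):
    return {
        "agg_over_wi": expected_std_by_seed(runs, "wi_seed"),
        "agg_over_do": expected_std_by_seed(runs, "do_seed"),
        "total": pstdev([r["score"] for r in runs]),
    }
```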
An overview of the collected data is presented in Figure 2, where each colored cell represents the validation performance for a single experiment. In the plots, each row represents a single weight initialization and each column represents a single data order. We sort the rows and columns by their averages; the top row contains experiments with the WI with the highest average performance, and the rightmost column contains experiments with the DO with the highest average performance.7

6 Compared to training, evaluation is typically cheap, since the validation set is smaller than the training set and evaluation requires only a forward pass. Moreover, evaluating on the validation data can be done in parallel to training, and thus does not necessarily slow down training.

For MRPC, RTE, and CoLA, a fraction of the trained models diverge, yielding performance close to that of predicting the most frequent label (see Table 2). This partially explains the large variance found in the expected validation curves for those three datasets in Figure 1.

# 4.1. Decoupling

From Figure 2, it is clear that different random seed combinations can lead to substantially different validation performance. In this section, we investigate the sources of this variance, decoupling the distribution of performance based on each of the factors that control randomness.

For each dataset, we compute for each WI and each DO seed the standard deviation in validation performance across all trials with that seed. We then compute the expected (average) standard deviation, aggregated under all WI or all DO seeds, which are shown in Table 3; we show the distribution of standard deviations in the appendix. Although their magnitudes vary significantly between the datasets,

7 Each cell represents an independent sample, so the rows and columns can be reordered.

[Figure 2: heatmaps for MRPC (Acc./F1), RTE (Accuracy), CoLA (MCC), and SST (Accuracy); x-axis: data order random seeds; y-axis: weight initialization random seeds.]

Figure 2. A visualization of validation performance for all experiments, where each colored cell represents the performance of a training run with a specific WI and DO seed. Rows and columns are sorted by their average, such that the best WI seed corresponds to the top row of each plot, and the best DO seed corresponds to the right-most column. Especially on smaller datasets a large variance in performance is observed across different seed combinations, and on MRPC and RTE models frequently diverge, performing close to the majority baselines (listed in Table 2).

[Figure 3: kernel density estimates for MRPC, RTE, CoLA, and SST; x-axis: validation performance (Acc./F1 or Accuracy).]

Figure 3. Some seeds are better than others. Plots show the kernel density estimation of the distribution of validation performance for best and worst WI and DO seeds. Curves for DO seeds are shown in dashed lines and for WI in solid lines. MRPC and RTE exhibit pronounced bimodal shapes, where one of the modes represents divergence; models trained with the worst WI and DO are more likely to diverge than learn to predict better than random guessing.
Compared to the best seeds, the worst seeds are conspicuously more densely populated in the lower performing regions, for all datasets. the expected standard deviation from the WI and DO seeds is comparable, and are slightly below the overall standard deviation inside a given task. WI DO MRPC RTE CoLA SST 2.0×10−6 2.8×10−4 7.0×10−3 3.3×10−2 8.3×10−3 3.2×10−3 1.1×10−2 1.3×10−5 # 4.2. Some random seeds are better than others To investigate whether some WI or DO seeds are better than their counterparts, Figure 3 plots the random seeds with the best and worst average performance. The best and worst seeds exhibit quite different behavior: compared to the best, the worst seeds have an appreciably higher density on lower performance ranges, indicating that they are generally inferior. On MRPC, RTE, and CoLA the performance of the best and worst WIs are more dissimilar than the best and worst DOs, while on SST the opposite is true. This could be related to the size of the data; MRPC, RTE, and CoLA are smaller datasets, whereas SST is larger, so SST has more data to order and more weight updates to move away from the initialization. Table 4. p-values from ANOVA indicate that there is evidence to reject the null hypothesis that the performance of the best and worst WIs and DOs have distributions with the same means (p < 0.05). Using ANOVA (Fisher, 1935) to test for statistical signif- icance, we examine whether the performance of the best and worst DOs and WIs have distributions with different means. The results are shown in Table 4. For all datasets, we find the best and worst DOs and WIs are significantly different in their expected performance (p < 0.05). We include a discussion of the assumptions behind ANOVA in the appendix. Weight Initializations, Data Orders, and Early Stopping # 4.3. Globally good initializations A natural question that follows is whether some random seeds are good across datasets. While the data order is dataset specific, the same weight initialization can be ap- plied to multiple classifiers trained with different datasets: since all tasks studied are binary classification, models for all datasets share the same architecture, including the classi- fication layer that is being randomly initialized and learned. We compare the different weight initializations across datasets. We find that some initializations perform con- sistently well. For instance, WI seed 12 has the best perfor- mance on CoLA and RTE, the second best on MRPC, and third best on SST. This suggests that, perhaps surprisingly, some weight initializations perform well across tasks. Studying the properties of good weight initializations and data orders is an important question that could lead to sig- nificant empirical gains and enhanced understanding of the fine-tuning process. We defer this question to future work, and release the results of our 2,100 fine-tuning experiments to facilitate further study of this question by the community. # 5. Early stopping Our analysis so far indicates a high variance in the fine- tuning performance of BERT when using different random seeds, where some models fail to converge.8 In this sec- tion we show that better performance can be achieved with the same computational resources by using early stopping algorithms that stop the least promising trials early in train- ing. We also include recommendations for practitioners for setting up experiments meeting a variety of computational budgets. 
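Since many of these recommendations are phrased in terms of a computational budget x, it may help to note that the expected validation performance curves of Figure 1 (Section 3.1) can be estimated from any pool of completed trials. The sketch below is a simple Monte Carlo approximation, not the closed-form estimator derived in Dodge et al. (2019), and the variable names are illustrative only.

```python
import random
from statistics import mean

def expected_best_of_x(scores, x, n_samples=10_000, rng=None):
    """Monte Carlo estimate of E[max validation score over x trials],
    sampling x trials (without replacement) from the observed pool."""
    rng = rng or random.Random(0)
    return mean(max(rng.sample(scores, x)) for _ in range(n_samples))

def expected_validation_curve(scores, max_budget):
    """One point per budget x = 1 .. max_budget (cf. the x-axis of Figure 1)."""
    return [expected_best_of_x(scores, x) for x in range(1, max_budget + 1)]

# Example: scores = [0.82, 0.90, 0.53, ...]  # one validation score per completed trial
```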
Early discovery of failed experiments Figure 4 shows that performance divergence can often be recognized early in training. These plots show the performance values of 20 randomly chosen models at different times across train- ing. In many of the curves, continuing training of lower performing models all the way through can be a waste of computation. In turn, this suggests the potential of early stopping least promising trials as a viable means of saving computation without large decreases in expected perfor- mance. For instance, after training halfway through the first epoch on CoLA the models which diverged could be stopped. Spearman’s rank correlation between performance at iter- ation i and iteration j across trials. High rank correlation means that the ranking of the models is similar between the two evaluation points, and suggests we can stop the worst performing models early, as they would likely continue to underperform.9 On MRPC, RTE and CoLA, there exists a high correlation between the models’ performance early on (part way through the first epoch) and their final perfor- mance. On the larger SST dataset, we see high correlation between the performance after training for two epochs and the final performance. Early stopping Considering the evidence from the train- ing curves and correlation plots, we analyze a simple al- gorithm for early stopping. Our algorithm is inspired by existing approaches to making hyperparameter search more efficient by stopping some of the least promising experi- ments early (Jamieson & Talwalkar, 2016; Li et al., 2018).10 Here we apply an early stopping algorithm to select the best performing random seed.11 The algorithm has three parameters: t, f , and p. We start by training t trials, and partially through training (f , a fraction of the total number of epochs) evaluate all of them and only continue to fully train the p most promising ones, while discarding the rest. This algorithm takes a total of (tf + p(1 − f ))s steps, where s is the number of steps to fully train a model.12 Start many, stop early, continue some As shown earlier, the computational budget of running this algorithm can be computed directly from an assignment to the parameters t, f , and p. Note that there are different ways to assign these parameters that lead to the same computational budget, and those can lead to significantly distinct performance in expec- tation; to estimate the performance for each configuration we simulate this algorithm by sampling 50,000 times from from our full set of experiments. In Figure 6 we show the best observed assignment of these parameters for budgets between 3 and 90 total epochs of training, or the equivalent of 1 to 30 complete training trials. There are some surpris- ingly consistent trends across datasets and budgets – the number of trials started should be significantly higher than the number trained fully, and the number of trials to train fully should be around x/2. On three out of four datasets, stopping least promising trials after 20–30% of training (less than one epoch) yielded the best results—and on the fourth 9Similar plots with Pearson correlation can be found in the appendix. We further examine the correlation of validation perfor- mances at different points throughout training, shown in Figure 5. One point in one of these plots represents the 10“Early stopping” can also relate to stopping a single training run if the loss hasn’t decreased for a given number of epochs. 
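A minimal simulation of the (t, f, p) procedure described above can be run directly on a set of completed trials. This is an illustrative sketch, not the authors' released code; it assumes that for each trial we have a validation score recorded at fraction f of training ("partial") and at the end ("final"), and the budget follows the (tf + p(1 - f))s accounting, expressed here in units of full training runs.

```python
import random
from statistics import mean

def early_stopping_run(trials, t, p, rng):
    """One simulated application: start t trials, rank them by their partial
    score, finish only the p most promising, and return the best final score."""
    started = rng.sample(trials, t)
    survivors = sorted(started, key=lambda tr: tr["partial"], reverse=True)[:p]
    return max(tr["final"] for tr in survivors)

def simulate(trials, t, f, p, n_samples=10_000, seed=0):
    rng = random.Random(seed)
    budget = t * f + p * (1 - f)  # in units of full (s-step) training runs
    score = mean(early_stopping_run(trials, t, p, rng) for _ in range(n_samples))
    return budget, score

# trials: e.g. [{"partial": 0.61, "final": 0.88}, ...], one entry per (WI, DO) run
```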
Here we refer to the notion of stopping a subset of multiple trials.

8 This was also observed by Phang et al. (2018), who showed that their proposed STILTs approach reduced the number of diverging models.
11 Our approach does not distinguish between DO and WI. While initial results suggest that this distinction could inspire more sophisticated early-stopping criteria, we defer this to future work.
12 In our experiments, s = 3 epochs.

[Figure 4: training curves for MRPC, RTE, CoLA, and SST; x-axis: epochs; y-axis: validation performance.]

Figure 4. Some promising seeds can be distinguished early in training. The plots show training curves for 20 random WI and DO combinations for each dataset. Models are evaluated every 10th of an epoch (except SST, which was evaluated every 100 steps, equivalent to 42 times per epoch). For the smaller datasets, training is unstable, and a non-negligible portion of the models yields poor performance, which can be identified early on.

[Figure 5: per-dataset correlation matrices; both axes: epochs; color scale from 1.00 to -1.00.]

Figure 5. Performance early in training is highly correlated with performance late in training. Each figure shows the Spearman's rank correlation between the validation performance at different points in training; the axes represent epochs. A point at coordinates i and j in the plots indicates the correlation between the best found performances after i and after j evaluations. Note that the plots are symmetric.

Early stopping works  We compare this algorithm with our baseline of running multiple experiments all the way through training, without any early stopping (f = 1, t = p) and using the same amount of computation. Specifically, for a given computational budget equivalent to fully training t models, we measure improvement as the relative error reduction from using early stopping with the best found settings for that computational budget. Figure 7 shows the relative error reduction for each dataset as the computational budget varies, where we observe small but reasonably consistent improvements on all tasks.

# 6. Related work

Most work on hyperparameter optimization tunes a number of impactful hyperparameters, such as the learning rate, the width of the layers in the model, and the strength of the regularization (Li et al., 2018; Bergstra et al., 2011). For modern machine learning models such tuning has proven to have a large impact on the performance; in this work we only examine two oft-overlooked choices that can be cast as hyperparameters and still find room for optimization.

Melis et al. (2018) heavily tuned the hyperparameters of an LSTM language model, for some experiments running 1,500 rounds of Bayesian optimization (thus, training 1,500 models). They showed that an LSTM, when given such a large budget for hyperparameter tuning, can outperform more complicated neural models. While such work informs the community about the best performance found after expending very large budgets, it is difficult for future researchers to build on this without some measure of how the performance changes as a function of computational budget. Our work similarly presents the best-found performance using a large budget (Table 1), but also includes estimates of how performance changes as a function of budget (Figure 1).

A line of research has addressed the distribution from which initializations are drawn.
The Xavier initialization (Glorot & Bengio, 2010) and Kaiming initialization (He et al., 2015) initialize weights by sampling from a uniform distribution or normal distribution with variance scaled so as to preserve gradient magnitudes through backpropagation. Similarly, orthogonal initializations (Saxe et al., 2014) aim to prevent exploding or vanishing gradients. In our work, we instead examine how different samples from an initialization distribution behave, and we hope future work which introduces new initialization schemes will provide a similar analysis.

Active learning techniques, which choose a data order using a criterion such as the model's uncertainty (Lewis & Gale, 1994), have a rich history. Recently, it has even been shown that training on mini-batches which are diverse in terms of data or labels (Zhang et al., 2017) can be more sample efficient. The tools we present here can be used to evaluate different seeds for a stochastic active learning algorithm, or to compare different active learning algorithms.

[Figure 6: per-dataset plots (MRPC, RTE, CoLA, SST) of the number of experiments started, the number trained fully, and the fraction of the training budget, against a computational budget sufficient to fully train X models.]

Figure 6. Best observed early stopping parameters on each dataset. For a given budget large enough to fully train x models (each trained for 3 epochs), this plot shows the optimal parameters for early stopping. For instance, in MRPC with a budget large enough for 20 trials, the best observed performance came by starting 41 trials (blue), then continuing only the 11 most promising trials (orange) after 30% of training (green).

[Figure 7: relative error reduction vs. computational budget (number of trials).]

Figure 7. Relative error reduction from the early stopping approach in Figure 6, compared to the baseline of training x models on the full training budget. Performance on RTE and SST is measured using accuracy, on MRPC it is the average of accuracy and F1, and on CoLA it is MCC. "Error" here refers to one-minus-performance for each of these datasets. As the budget increases, the absolute performance on all four datasets increases, and the absolute improvement from early stopping is fairly consistent.

# 7. Conclusion

In this work we study the impact of random seeds on fine-tuning contextual embedding models, the currently dominant paradigm in NLP. We conduct a large set of experiments on four datasets from the GLUE benchmark and observe significant variance across these trials. Overall, these experiments lead to substantial performance gains on all tasks. By observing how the expected performance changes as we allocate more computational resources, we expect that further gains would come from an even larger set of trials.
More- over, we examine the two sources of variance across trials, weight initialization and training data order, finding that in expectation, they contribute comparably to the variance in performance. Perhaps surprisingly, we find that some data orders and initializations are better than others, and the lat- ter can even be observed even across tasks. A simple early Weight Initializations, Data Orders, and Early Stopping stopping strategy along with practical recommendations is included to alleviate the computational costs of running multiple trials. All of our experimental data containing thousands of fine-tuning episodes is publicly released. # References Lewis, D. D. and Gale, W. A. A sequential algorithm for training text classifiers. In Proc. of SIGIR, 1994. Li, L., Jamieson, K., DeSalvo, G., Rostamizadeh, A., and Talwalkar, A. Hyperband: A novel bandit-based approach to hyperparameter optimization. The Journal of Machine Learning Research, 2018. Bar-Haim, R., Dagan, I., Dolan, B., Ferro, L., Giampiccolo, D., Magnini, B., and Szpektor, I. The second pascal recognising textual entailment challenge. In Proc. of the II PASCAL challenge, 2006. Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., and Stoyanov, V. Roberta: A robustly optimized bert pretraining approach. arXiv:1907.11692, 2019. Bentivogli, L., Clark, P., Dagan, I., and Giampiccolo, D. The fifth pascal recognizing textual entailment challenge. In TAC, 2009. Lu, J., Batra, D., Parikh, D., and Lee, S. Vilbert: Pretraining task-agnostic visiolinguistic representations for vision- and-language tasks. In Proc. of NeurIPS, 2019. Bergstra, J., Bardenet, R., Bengio, Y., and Kegl, B. Al- gorithms for hyper-parameter optimization. In Proc. of NeurIPS, 2011. Matthews, B. W. Comparison of the predicted and observed secondary structure of t4 phage lysozyme. Biochimica et Biophysica Acta (BBA)-Protein Structure, 1975. Dagan, I., Glickman, O., and Magnini, B. The pascal recog- nising textual entailment challenge. In Machine Learning Challenges Workshop, 2005. Melis, G., Dyer, C., and Blunsom, P. On the state of the art of evaluation in neural language models. In Proc. of ICLR, 2018. Devlin, J., Chang, M.-W., Lee, K., and Toutanova, K. BERT: Pre-training of deep bidirectional transformers for lan- guage understanding. In Proc. of the ACL, 2019. Phang, J., F´evry, T., and Bowman, S. R. Sentence encoders on stilts: Supplementary training on intermediate labeled- data tasks. arXiv:1811.01088, 2018. Dodge, J., Gururangan, S., Card, D., Schwartz, R., and Smith, N. A. Show your work: Improved reporting of experimental results. In Proc. of EMNLP, 2019. Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., and Sutskever, I. Language models are unsupervised multitask learners. OpenAI Blog, 2019. Dolan, B. and Brockett, C. Automatically constructing a corpus of sentential paraphrases. In Proc. of IWP, 2005. Fisher, R. A. Statistical methods for research workers. Oliver & Boyd (Edinburgh), 1935. Raffe, C., Shazeer, N., Roberts, A., Lee, K. L., Narang, S., Matena, M., Zhou, Y., Li, W., and Liu, P. J. Exploring the limits of transfer learning with a unified text-to-text transformer. arXiv:1910.10683, 2019. Giampiccolo, D., Magnini, B., Dagan, I., and Dolan, B. The third pascal recognizing textual entailment challenge. In Proc. of the ACL-PASCAL workshop on textual entailment and paraphrasing, 2007. Glorot, X. and Bengio, Y. Understanding the difficulty of training deep feedforward neural networks. In Proc. 
of AISTATS, 2010. He, K., Zhang, X., Ren, S., and Sun, J. Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In Proc. of ICCV, 2015. Saxe, A. M., McClelland, J. L., and Ganguli, S. Exact solutions to the nonlinear dynamics of learning in deep linear neural networks. In Proc. of ICLR, 2014. Schwartz, D., Toneva, M., and Wehbe, L. Inducing brain- relevant bias in natural language processing models. In Proc. of NeurIPS, 2019. Socher, R., Perelygin, A., Wu, J., Chuang, J., Manning, C. D., Ng, A., and Potts, C. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of EMNLP, 2013. Jamieson, K. and Talwalkar, A. Non-stochastic best arm identification and hyperparameter optimization. In Proc. of AISTATS, 2016. Wang, A., Singh, A., Michael, J., Hill, F., Levy, O., and Bowman, S. GLUE: A multi-task benchmark and analysis platform for natural language understanding. In Proc. of the EMNLP Workshop BlackboxNLP, 2018. Lan, Z., Chen, M., Goodman, S., Gimpel, K., Sharma, P., and Soricut, R. Albert: A lite bert for self-supervised learning of language representations. arXiv:1909.11942, 2019. Wang, A., Pruksachatkun, Y., Nangia, N., Singh, A., Michael, J., Hill, F., Levy, O., and Bowman, S. R. Super- glue: A stickier benchmark for general-purpose language understanding systems. In Proc. of NeuRIPS, 2019. Weight Initializations, Data Orders, and Early Stopping Warstadt, A., Singh, A., and Bowman, S. R. Neural network acceptability judgments. TACL, 7:625–641, 2019. Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue, C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., and Brew, J. Huggingface’s transformers: State-of-the- art natural language processing. ArXiv, abs/1910.03771, 2019. Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R., and Le, Q. V. Xlnet: Generalized autoregressive pretrain- ing for language understanding. In Proc. of NeuRIPS, 2019. Zhang, C., Kjellstrm, H., and Mandt, S. Determinantal point processes for mini-batch diversification. In Proc. of UAI, 2017. Zhu, C., Cheng, Y., Gan, Z., Sun, S., Goldstein, T., and Liu, J. Freelb: Enhanced adversarial training for language understanding. arXiv:1909.11764, 2019. Weight Initializations, Data Orders, and Early Stopping # A. Appendix We plot the distribution of standard deviations in final validation performance across multiple runs, aggregated under a fixed random seed, either for weight initialization or data order. The results are shown in Figure 8, indicating that the inter-seed aggregated variances are comparable in magnitude, considering aggregation over both WI and DO. MRPC RTE CoLA SST H 80 1200 ” e 6 14 12 1000 2 6 60 10 800 40 600 ROO ow 400 20 N 3 200 Kernel Density Estimation O03 0.04 0.05 0.06 0.07 ° o.ba 0.05 0.06 0.07 0.08 0.09 ° 0.00 0.05 010 0.15 0.20 o 0.001 0.002 0.003 0.004 Acc./F1 (standard dev.) Accuracy (standard dev.) MCC (standard dev.) Accuracy (standard dev.) Figure 8. Kernel density estimation of the distribution of standard deviation in validation performance aggregated under fixed random seeds, either for weight initialization (blue) or data order (orange). The red dashed line shows the overall standard deviation for each dataset. The DO and WI curves have expected standard deviation values of similar magnitude, which are also comparable with the overall standard deviation. # B. 
ANOVA assumptions

ANOVA makes three assumptions: 1) independence of the samples, 2) homoscedasticity (roughly equal variance across groups), and 3) normally distributed data. ANOVA is not robust to violations of independence, but each DO and WI is an I.I.D. sample, and thus independent. ANOVA is generally robust to groups with somewhat differing variance if the groups are the same size, which is true in our experiments. ANOVA is more robust to non-normally distributed data for larger sample sizes; our SST experiments are quite close to normally distributed, and the distribution of performance on the smaller datasets is less like a normal distribution, but we have larger sample sizes.

# C. Pearson Correlation

In Figure 9 we include the Pearson correlation between different points in training, whereas Figure 5 showed the rank correlation of the same data. One point in one of these plots represents the Pearson correlation between performance at iteration i and iteration j across trials. High correlation means that the performance of the models is similar between the two evaluation points.

[Figure 9: per-dataset correlation matrices; both axes: epochs; color scale from 1.00 to -1.00.]

Figure 9. Performance early in training is highly correlated with performance late in training. Each figure shows the Pearson correlation between the validation performance at different points in training; the axes represent epochs. A point at coordinates i and j in the plots indicates the correlation between the best found performances after i and after j evaluations. Note that the plots are symmetric.
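The correlation matrices of Figures 5 and 9 can be recomputed from the released per-trial validation curves. The following is a minimal sketch using SciPy, with hypothetical array names (one row per trial, one column per evaluation point); it is not the authors' plotting code.

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

def checkpoint_correlations(scores, method="pearson"):
    """scores: array of shape (n_trials, n_evals). Returns an
    (n_evals, n_evals) matrix of correlations across trials."""
    corr = pearsonr if method == "pearson" else spearmanr
    n_evals = scores.shape[1]
    out = np.ones((n_evals, n_evals))
    for i in range(n_evals):
        for j in range(i + 1, n_evals):
            out[i, j] = out[j, i] = corr(scores[:, i], scores[:, j])[0]
    return out
```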
{ "id": "1910.10683" }
2002.06353
UniVL: A Unified Video and Language Pre-Training Model for Multimodal Understanding and Generation
With the recent success of the pre-training technique for NLP and image-linguistic tasks, some video-linguistic pre-training works are gradually developed to improve video-text related downstream tasks. However, most of the existing multimodal models are pre-trained for understanding tasks, leading to a pretrain-finetune discrepancy for generation tasks. This paper proposes UniVL: a Unified Video and Language pre-training model for both multimodal understanding and generation. It comprises four components, including two single-modal encoders, a cross encoder, and a decoder with the Transformer backbone. Five objectives, including video-text joint, conditioned masked language model (CMLM), conditioned masked frame model (CMFM), video-text alignment, and language reconstruction, are designed to train each of the components. We further develop two pre-training strategies, stage by stage pre-training (StagedP) and enhanced video representation (EnhancedV), to make the training process of the UniVL more effective. The pre-train is carried out on a sizeable instructional video dataset HowTo100M. Experimental results demonstrate that the UniVL can learn strong video-text representation and achieves state-of-the-art results on five downstream tasks.
http://arxiv.org/pdf/2002.06353
Huaishao Luo, Lei Ji, Botian Shi, Haoyang Huang, Nan Duan, Tianrui Li, Jason Li, Taroon Bharti, Ming Zhou
cs.CV, cs.CL, cs.LG, eess.AS, eess.IV
null
null
cs.CV
20200215
20200915
0 2 0 2 p e S 5 1 ] V C . s c [ 3 v 3 5 3 6 0 . 2 0 0 2 : v i X r a # UniVL: A Unified Video and Language Pre-Training Model for Multimodal Understanding and Generation Huaishao Luo1∗, Lei Ji2,3,4, Botian Shi5, Haoyang Huang2, Nan Duan2, Tianrui Li1, Jason Li6, Taroon Bharti6, Ming Zhou2 1Southwest Jiaotong University, Chengdu, China 2Microsoft Research Asia, Beijing, China 3Institute of Computing Technology, Chinese Academy of Science, Beijing, China 4University of Chinese Academy of Sciences, Beijing, China 5Beijing Institute of Technology, Beijing, China 6Microsoft STCA, Beijing, China [email protected], [email protected] # Abstract With the recent success of the pre-training technique for NLP and image-linguistic tasks, some video-linguistic pre-training works are gradually developed to improve video-text re- lated downstream tasks. However, most of the existing multimodal models are pre-trained for understanding tasks, leading to a pretrain- finetune discrepancy for generation tasks. This paper proposes UniVL: a Unified Video and Language pre-training model for both multi- modal understanding and generation. It com- prises four components, including two single- modal encoders, a cross encoder, and a de- coder with the Transformer backbone. Five objectives, including video-text joint, condi- tioned masked language model (CMLM), con- ditioned masked frame model (CMFM), video- text alignment, and language reconstruction, are designed to train each of the components. We further develop two pre-training strategies, stage by stage pre-training (StagedP) and en- hanced video representation (EnhancedV), to make the training process of the UniVL more effective. The pre-train is carried out on a size- able instructional video dataset HowTo100M. Experimental the UniVL can learn strong video-text represen- tation and achieves state-of-the-art results on five downstream tasks. Video Clip {Caption j place the bacon slices. ! onabaking pan and | cook them in an oven _/ Pre-Trained Model {toast the bread slices in the toaster} Figure 1: A showcase of video and language pre-train based model for multimodal understanding (e.g., re- trieval) and generation (e.g., captioning). using task-specific labeled data. Inspired by the BERT (Devlin et al., 2019) model’s success for NLP tasks, numerous multimodal image-language pre-training models (Lu et al., 2019; Li et al., 2019a,b) have been proposed. Their results have demonstrated the effectiveness of pre-training on various visual and language tasks such as visual question answering. Different from previous text pre-training or image-language pre-training, we fo- cus on video-linguistic pre-training in this paper. # Introduction With the recent advances of self-supervised learn- ing, pre-training techniques play a vital role in learning visual and language representation. The paradigm is to pre-train the model on a large scale unlabeled data and fine-tune the downstream tasks ∗This work was done during the first author’s internship in MSR Asia Videos contain rich visual, acoustic, and lan- guage information for people to acquire knowledge or learn how to perform a task. This motivates researchers to investigate whether AI agents can learn task completion from videos like humans with both low-level visual and high-level semantic language signals. Therefore, multimodal video- language tasks are of great importance to inves- tigate for both research and applications. 
In this work, we first propose to pre-train a unified video- language model using video and acoustic speech recognition (ASR) transcript in instructional videos to learn a joint representation of both video and language. Then, we fine-tune this model on five typical multimodal tasks, including understanding and generation targets. Figure 1 presents a show- case of our pre-training and fine-tuning flow. Take multimodal video captioning as an example. The model inputs video and ASR transcript and predicts a captioning sentence. VideoBERT (Sun et al., 2019b) and CBT (Sun et al., 2019a) are the first pioneers to investigate video-language pre-training with regard to video representation on instructional videos. They have demonstrated the effectiveness of the BERT based model for capturing video temporal and language sequential features. Besides the above two works, there is some concurrent progress to our model. ActBERT (Zhu and Yang, 2020) leverages global action information to catalyze mutual interactions between linguistic texts and local regional objects. Moreover, a transformer block is introduced to en- code global actions, local regional objects, and lin- guistic descriptions. HERO (Li et al., 2020) hierar- chically encodes multimodal inputs. Furthermore, two new pre-training tasks, video-subtitle matching and frame order modeling, are designed to improve the representation learning. VideoAsMT (Korbar et al., 2020) takes a generative modeling approach that poses the objective as a translation problem between modalities. However, most of previous models only pre-train the model on understanding tasks. In this paper, we pre-train on both understanding and genera- tion tasks through an encoder-decoder paradigm. Although the concurrent work VideoAsMT has a similar encoder-decoder as ours, it is not flexible for downstream tasks with only one single unified framework. In this paper, we develop a flexible approach to learn video and language joint repre- sentation and adapt downstream multimodal tasks. We propose UniVL: a Unified Video and Language pre-training model for multimodal un- derstanding and generation. Our UniVL model adopts Transformer (Vaswani et al., 2017) as the backbone and has four components, including two single-modal encoders, a cross encoder, and a de- coder. In detail, we first encode the text and visual separately by two single-modal encoders. A video- text joint objective performs on these two encoders, which aims to learn better representation for each modality before fusing them. Such a two-stream de- sign is natural to retrieval tasks due to its scalability to very large datasets. The proposed representation can be indexed and has linear complexity in the number of videos. Then we adopt the Transformer based encoder-decoder model to perform the under- standing and generation pre-training by four tasks: conditioned masked language model (CMLM for language corruption), conditioned masked frame model (CMFM for video corruption), video-text alignment, and language reconstruction. Furthermore, we design two pre-training strate- gies, including stage by stage pre-training strategy (StagedP) and Enhanced video representation (En- hancedV), to promote the UniVL pre-training. The StagedP has two parts in our setting. We only pre- train the text encoder and video encoder by the video-text joint objective for the first stage. Then all modules will be pre-trained under the whole objectives in the second stage. 
Besides, we adopt an entire masking strategy EnhancedV on text to enhance video representation. Our contributions are summarized as follows: 1) We propose a multimodal video-language pre- training model trained on a large-scale instructional video dataset. It is a flexible model for both video- language understanding and generation tasks. 2) The pre-training consists of five objectives, including video-text joint, conditioned masked lan- guage model, conditioned masked frame model, video-text alignment, and language reconstruction. Two pre-training strategies are proposed to make these objectives work harmoniously. 3) We fine-tune our pre-trained model on five typ- ical multimodal video-language tasks: text-based video retrieval, multimodal video captioning, ac- tion segmentation, action step localization, and multimodal sentiment analysis. Extensive exper- iments demonstrate our model’s effectiveness on downstream tasks and achieve state-of-the-art re- sults. # 2 Related Works # 2.1 Single Modal Pre-Training Self-supervised representation learning has been shown to be effective for sequential data, including language and video. Language pre-training mod- els, including BERT (Devlin et al., 2019), GPT (Radford et al., 2018), RoBERTa (Liu et al., 2019), XLNet (Yang et al., 2019), MASS (Song et al., 2019), UniLM (Dong et al., 2019), and BART (Lewis et al., 2019), have achieved great success os oii Cross-modal Encoder { Encoder Encoder Encoder (Text & Vision) . - Encoder ~ ( Encoder ) Ceemrienie)} Geiee) SS) (eet) f Text & Vision Text Vision Text Vision (a) Share Type (b) Cross Type (c) Joint Type Figure 2: Various paradigms for multimodal pre-training. on NLP tasks. BERT (Devlin et al., 2019) is a de- noising auto-encoder network using Transformer with MLM (masked language model) and NSP (next sentence prediction) as pre-training tasks. It has a strong performance for understanding tasks. MASS (Song et al., 2019) focuses on pre-training for generation tasks. UniLM (Dong et al., 2019) and BART (Lewis et al., 2019) continuously study a unified pre-training model for both understanding and generation tasks. Video representation learning mostly focuses on the video sequence reconstruction or future frames prediction as pre-training (pretext) tasks. Early works like (Mathieu et al., 2015; Srivastava et al., 2015; Han et al., 2019) aim to synthetic video frames through the image patches. Similarly, Wang and Gupta (2015) adopt a siamese-triplet net- work to rank continuous patches more similar than patches of different videos. Other works predict the feature vectors in latent space using auto-regressive models with the noise-contrastive estimation (NCE) (Lotter et al., 2016; Oord et al., 2018). Sun et al. (2019a) adopt NCE to predict corrupted (masked) latent space using the auto-encoder model. # 2.2 Multimodal Pre-Training Recently, numerous visual-linguistic pre-training models are proposed for multimodal tasks. For image and text pre-training, ViLBERT (Lu et al., 2019), LXMERT (Tan and Bansal, 2019) adopt two separate Transformers for image and text encod- ing independently. Other models like Visualbert (Li et al., 2019b), Unicoder-VL (Li et al., 2019a), VL-BERT (Su et al., 2020), UNITER (Zhou et al., 2019) use one shared BERT model. These mod- els employ MLM and image-text matching as pre- training tasks which are effective for downstream multimodal tasks. VLP (Zhou et al., 2019) pro- poses a unified image-language model for under- standing and generation tasks. 
Different from these works, we focus on video and text pre-training for universal representation. For video and text pre-training, VideoBERT (Sun et al., 2019b) and CBT (Sun et al., 2019a) are the first works to explore the capability of pre- training models. Although VideoBERT and CBT pre-train the model on multimodal data, the down- stream tasks mainly take video representation for further prediction. ActBERT (Zhu and Yang, 2020) leverages global action information to catalyze mu- tual interactions between linguistic texts and lo- cal regional objects, and introduces a transformer block to encode global actions, local regional ob- jects, and linguistic descriptions. HERO (Li et al., 2020) encodes multimodal inputs in a hierarchi- cal fashion. Besides, two new pre-training tasks, video-subtitle matching and frame order modeling, are designed to improve the representation learning. However, ActBERT and HERO are only pre-train the models on understanding tasks. VideoAsMT (Korbar et al., 2020) takes a generative modeling approach that poses the objective as a translation problem between modalities. The difference be- tween our work with VideoAsMT is that our model contains two more separate encoders instead of one unified encoder-decoder, while VideoAsMT is inflexible for downstream tasks due to one single unified framework. We summarize three pre-training paradigms to cover the previous vision-text pre-training model considering different encoder architecture in lit- erature, as presented in Figure 2. Unicoder-VL (Li et al., 2019a), VL-BERT (Su et al., 2020), UNITER (Zhou et al., 2019), VLP (Zhou et al., 2019), VideoBERT (Sun et al., 2019b), ActBERT (Zhu and Yang, 2020), and VideoAsMT (Korbar et al., 2020) belong to share-type in Figure 2(a), where the text and vision sequences are combined as the input of one shared Transformer encoder. ViLBERT (Lu et al., 2019) and LXMERT (Tan and Bansal, 2019) are cross-type shown in Figure 2(b). CBT (Sun et al., 2019a) and HERO (Li et al., 2020) are joint-type shown in Figure 2(c). The cross- type and joint-type architectures have two-stream input, and the difference is the interaction across both modalities. Compared with the single-stream input in the share-type, the two-stream input can accommodate each modality’s different processing needs and interact at varying representation depths (Lu et al., 2019). Besides, the joint-type structure has one cross-modal encoder for full interaction between the two streams comparing with the cross- type. We adopt the joint-type as our encoder in this paper. # 3 Method The problem is defined as: given the input video and the corresponding ASR transcript pairs, pre- train a model to learn joint video and text repre- sentation with the self-supervision approach, and fine-tune downstream tasks. In this section, we describe the architecture and pre-training tasks in detail. # 3.1 Model Architecture Figure 3 presents the UniVL as an encoder-decoder architecture. First, the model extracts representa- tions of the input text tokens and the video frame sequences using various feature extractors. A text encoder then adopts the BERT model to embed the text, and a video encoder utilizes the Trans- former encoder to embed the video frames. Next, we employ a Transformer based cross encoder for interacting between the text and the video. Finally, a Transformer decoder is used to reconstruct the input text. # 3.1.1 Pre-processing. We first pre-process video and language before feeding to the UniVL. 
For the input text, we to- kenize all words by WordPieces (Wu et al., 2016) following the pre-processing method in BERT to obtain the token sequence t = {tli € [1,nJ]}, where ¢; is the i-th token and n is the length of the token sequence. For each video clip, we sample a frame sequence v = {v;|j € [1, m]} and adopt them to extract features, where v; is the j-th group of video frames and m is the group length of the frame sequence. # 3.1.2 Single Modal Encoders. We encode the text and video separately. Such a two-stream design has two advantages: module reusing and retrieval orienting. The module reusing means the text module can benefit from the exist- ing text-based pretrained model, e.g., BERT. The retrieval orienting means the two-stream design is natural to retrieval tasks due to its scalability to ex- tensive datasets. The extracted representation can be indexed, and the calculation of similarity has linear complexity in the number of videos. In this paper, we adopt the BERT-Base uncased model to generate the text representation T ∈ Rn×d after feeding the token sequence t, T = BERT(t), (1) where d is the hidden size of text representation. For the video frame sequence v, we adopt the off-the-shelf image feature extractors, e.g., S3D (Xie et al., 2018), to generate video feature Fv ∈ Rm×df v is the hidden size. A Trans- former encoder is utilized to embed the contextual information of video as follows, V = Transformer(Fv). (2) The dimension of V is Rm×d. # 3.1.3 Cross Encoder. The text encoder and video encoder mainly focus on individual modality. To make the text and video fully interact, we design across encoder, which takes both the text and video modality features as input. Specifically, we first combine the text encoding T and the video encoding V to get the encoding M ∈ R(n+m)×d. Then, a Transformer encoder takes the encoding M as input to generate the attended encoding M ∈ R(n+m)×d, M = Transformer([T; V]), (3) where [; ] denotes the combination operation. It is noted that the combination is operated along with the dimension of sequence, not the dimension of hidden size. One reason is that the text length n and video clip length m are always different. Another reason is that the semantic between text and video are not absolutely aligned. People are likely to describe an event after or before performing it in the video (Miech et al., 2020). ey 1 | [CLS] toast the [MASK] | Text Encoder | [MASK] in the toaster [SEP] | (ansformer Encodes) JV > CMLM bread Alignment o/1 slices — Deen n ences n nanan een . Cross Encoder Decoder Generation Transcript Joint (Transformer Encoder) (Transformer Decoder) toast the bread slices TEE EHEE o/L in the toaster [SEP] ideo Encoder Video Clip CMFM a ay “feature” Retrieval Caption Action Tasks Multimodal Classification segmentation & step localization ‘a ia aa ope j hy’ TepOoTy CHEE) > ) Aloe SUP camera > shale i aldol. OW. caxmamms Figure 3: The main structure of our UniVL, which comprises four components, including two single-modal en- coders, a cross encoder, and a decoder. The model is flexible for many text and video downstream tasks. Four possible tasks are listed. 3.1.4 Decoder. We empower our pre-trained model to have the ca- pability of learning from and then benefiting for generation tasks by attaching a decoder, which is usually a unidirectional recurrent/attention model to generate tokens one by one. Such a decoder mod- ule is proved useful in text-based pre-training tasks, e.g., T5 (Raffel et al., 2019) and BART (Lewis et al., 2020). 
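Putting the four components of Section 3.1 together, the forward pass can be outlined roughly as follows. This is an illustrative PyTorch-style sketch with hypothetical module and variable names (and masking omitted), not the released implementation.

```python
import torch
import torch.nn as nn

class UniVLSketch(nn.Module):
    """Rough outline: two single-modal encoders, a cross encoder, a decoder."""
    def __init__(self, text_encoder, video_encoder, cross_encoder, decoder, d_video, d):
        super().__init__()
        self.text_encoder = text_encoder    # e.g. a BERT-Base encoder, output (n, d)
        self.video_proj = nn.Linear(d_video, d)
        self.video_encoder = video_encoder  # Transformer encoder, output (m, d)
        self.cross_encoder = cross_encoder  # Transformer encoder, output (n+m, d)
        self.decoder = decoder              # Transformer decoder, output (l, d)

    def forward(self, tokens, video_feats, target_embeds):
        T = self.text_encoder(tokens)                          # Eq. (1)
        V = self.video_encoder(self.video_proj(video_feats))   # Eq. (2)
        M = self.cross_encoder(torch.cat([T, V], dim=1))       # Eq. (3), concat along sequence
        D = self.decoder(target_embeds, M)                     # Eq. (4)
        return T, V, M, D
```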
It is noted that the decoder has a different target at different phases. The decoder learns to reconstruct the input text (e.g., transcripts) during pre-training because no text labels are available. When fine-tuning, the decoder is used to generate results, e.g., video captions, where the inputs are transcripts and video and the output is the caption. The input is the attended encoding M of text and video. We again exploit a Transformer to get the decoded feature D ∈ R^{l×d} from M,

D = Transformer(M), (4)

where l is the decoder length.

# 3.2 Pre-training Objectives

We have five pre-training objectives: 1) video-text joint, 2) conditioned masked language model (for text corruption), 3) conditioned masked frame model (for video corruption), 4) video-text alignment, and 5) language reconstruction.

# 3.2.1 Video-Text Joint.

As our text encoder, the BERT-Base uncased model is a robust extractor of text representations. So, we utilize a video-text joint objective to enhance the capability of the video encoder. It is essentially a retrieval-orienting operation, which aligns the representation spaces of text and video. Considering the misalignment between the text and video clips in narrated videos, we adopt MIL-NCE (Miech et al., 2020) on T and V as our joint objective,

L_Joint(θ) = −E_{(t,v)∼B} log MIL-NCE(t, v), (5)

MIL-NCE(t, v) = Σ_{(v̂,t̂)∈P_{v,t}} exp(v̂ t̂^⊤) / Z, (6)

Z = Σ_{(v̂,t̂)∈P_{v,t}} exp(v̂ t̂^⊤) + Σ_{(ṽ,t̃)∈N_{v,t}} exp(ṽ t̃^⊤), (7)

where P_{v,t} is a set of positive video-transcript pairs, e.g., {(v, t), (v, t−1), (v, t+1)}, where t−1 and t+1 are the two transcripts closest in time to t. The negative pairs N_{v,t} take negative transcripts (or video clips) from other instances within the batch B after fixing v (or t). v̂, ṽ and t̂, t̃ are generated through mean-pooling on V and T, respectively. θ denotes the trainable parameters.

# 3.2.2 CMLM: Conditioned Masked Language Model.

Following BERT, we randomly mask 15% of the tokens in the sentence with the special token [MASK] and reproduce the masked tokens under the condition of the video input and the known tokens. This loss function is defined on the feature matrix of the text part in M as:

L_CMLM(θ) = −E_{t_m∼t} log P_θ(t_m | t_¬m, v), (8)

where t_¬m denotes the contextual tokens surrounding the masked token t_m, and θ denotes the trainable parameters.

# 3.2.3 CMFM: Conditioned Masked Frame Model.

Similarly, we propose a masked frame model to predict the correct frames given the contextual frames and the input text as semantic constraints. However, it is hard to reconstruct the original RGB frames. We therefore adopt a contrastive learning method to maximize the mutual information (MI) between the masked output features and the original features; this loss function is an NCE (Sun et al., 2019a). We randomly mask 15% of the feature vectors (i.e., 15% of the frames) with zeros. The objective is to identify the correct frame compared to negative distractors. The loss is defined as:

L_CMFM(θ) = −E_{v_m∼v} log NCE(v_m | v_¬m, t), (9)

NCE(v_m | v_¬m, t) = exp(f_{v_m} m_{v_m}^⊤) / Z, (10)

Z = exp(f_{v_m} m_{v_m}^⊤) + Σ_{v_j∈N(v_m)} exp(f_{v_j} m_{v_m}^⊤), (11)

where v_¬m denotes the surrounding frames except v_m, f_{v_m} ∈ R^{1×d} is a linear output of f^v_{v_m} ∈ F_v, F_v are the real-valued video feature vectors, m_{v_m} ∈ M^{(v)}, and M^{(v)} is the feature matrix of the video part in M. We take other frames in the same batch as negative cases, denoted N(v_m).
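As an illustration of the conditioned masking objectives above, the snippet below sketches a CMFM-style NCE loss in PyTorch. It is a hedged reconstruction: the function name, the batching scheme, and treating all other in-batch frames as negatives are our simplifying assumptions rather than the paper's exact implementation.

```python
import torch
import torch.nn.functional as F

def cmfm_nce_loss(frame_feats, cross_out_video, mask_prob=0.15):
    """Sketch of a Conditioned Masked Frame Model (CMFM) loss.

    frame_feats:      (B, m, d) linear projections of the raw video features F_v
    cross_out_video:  (B, m, d) video part of the cross-encoder output M
    Masked positions must be zeroed in the encoder input upstream; here we only
    pick which positions count as masked and score them against all frames in
    the batch (other frames act as negative distractors).
    """
    B, m, d = frame_feats.shape
    mask = torch.rand(B, m, device=frame_feats.device) < mask_prob   # (B, m) masked positions

    f = frame_feats.reshape(B * m, d)        # candidate frames (positive + negatives)
    q = cross_out_video.reshape(B * m, d)    # contextual predictions for each position

    logits = q @ f.t()                       # (B*m, B*m) dot-product scores
    targets = torch.arange(B * m, device=logits.device)

    masked = mask.reshape(-1)
    if masked.sum() == 0:
        return logits.new_zeros(())
    # NCE: the original frame at each masked position should win against
    # every other frame in the batch.
    return F.cross_entropy(logits[masked], targets[masked])
```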
# 3.2.4 Video-Text Alignment.

We use the fused representation that corresponds to the special token [CLS] to predict scores for video-text alignment, which is similar to the BERT sentence-pair classification task. We adopt an NCE loss to learn to discriminate the positive from the negative video-text pairs. To enhance this capability, we not only randomly sample negative cases but also re-sample video clips from the same video (Han et al., 2019). The reason is that frames inside the same video are more similar to each other than frames from different videos. This loss function is defined as follows,

L_Align(θ) = −E_{(t,v)∼B} log ( exp(s(t, v)) / Z ), (12)

Z = exp(s(t, v)) + Σ_{u∈N(v)} exp(s(t, u)), (13)

where s(·) denotes two linear layers with a Tanh activation function between them, applied to the first hidden state of M. We take other video clips in the same batch B as negative cases N(v).

# 3.2.5 Language Reconstruction.

To reconstruct the input sentence and endow the pre-trained model with generation capability, we employ an auto-regressive decoder with a reconstruction objective, whose loss function is

L_Decoder(θ) = −E_{t_i∼ˆt} log P_θ(t_i | t_{<i}, t, v), (14)

where t is the masked version of the ground-truth text ˆt during pre-training. As shown in BART (Lewis et al., 2019), pre-training the decoder benefits generation tasks.

We jointly optimize our model with the combined loss:

L_UniVL = L_Joint + L_CMLM + L_CMFM + L_Align + L_Decoder. (15)

# 3.3 Pre-training Strategies

We develop two pre-training strategies to train the UniVL model effectively.

# 3.3.1 StagedP: Stage by Stage Pre-training.

The UniVL can benefit from the pre-trained BERT-Base uncased model in the text encoder module. A natural idea is to train the video encoder to become a peer to the BERT-Base text encoder. We adopt a two-stage training fashion. In the first stage, we only preserve the text BERT and the video Transformer and learn their weights using the video-text joint loss of Eq. (5). Next, we decrease the learning rate and continue to pre-train the full UniVL with all five objectives. One advantage is to speed up pre-training, and the other is to make the pre-training progress smoother with respect to the weights.

# 3.3.2 EnhancedV: Enhanced Video Representation.

To further enhance the video representation, we adopt a masked-modality strategy that forces the video to generate transcripts without text input. Specifically, we mask all of the text tokens with a 15% probability. In other words, in each mini-batch 15% of the text-video pairs have their entire text tokens masked, and the model has to rely on the video information to complete the generation. Such a strategy is a more challenging task that pushes the model to learn a better video representation.

# 4 Experiments

We first pre-train our model on a large-scale dataset. We download videos with ASR transcripts from the Howto100M dataset (Miech et al., 2019) (https://www.di.ens.fr/willow/research/howto100m/). After filtering out the unavailable ones, we get 1.2M videos for pre-training our model. On average, the duration of each video is 6.5 minutes with 110 clip-text pairs. Then, we fine-tune our pre-trained model on five diverse downstream tasks using five datasets, covering text-based video retrieval, multimodal video captioning, action segmentation, action step localization, and multimodal sentiment analysis.

# 4.1 Datasets

# 4.1.1 Youcook2

Youcook2 (Zhou et al., 2018a) contains 2,000 cooking videos on 89 recipes with 14K video clips. The overall duration is 176 hours (5.26 minutes on average). Each video clip is annotated with one captioning sentence. We evaluate both the text-based video retrieval and multimodal video captioning tasks on this dataset.
For the text-based video retrieval task, we follow the same experimental setting as in (Miech et al., 2019) and use the captions as the input text queries to find the corresponding video clips. For the video captioning task, we use the same setting as in (Shi et al., 2019). We filter the data to make sure there is no overlap between the pre-training and evaluation data. In all, we have 1,261 training videos and 439 test videos, that is, 9,776 training clip-text pairs and 3,369 test clip-text pairs.

# 4.1.2 MSR-VTT

MSR-VTT (Xu et al., 2016) is an open-domain dataset for video retrieval tasks. Each of its video clips has 20 captioning sentences labeled by humans. In all, there are 200K clip-text pairs from 10K videos in 20 categories, including sports, music, etc. Following JSFusion (Yu et al., 2018), we randomly sample 1,000 clip-text pairs as test data to evaluate the performance of our model on the text-based video retrieval task.

# 4.1.3 COIN

COIN (Tang et al., 2019) is used to evaluate the action segmentation task. It contains 180 different tasks and 11,827 videos. Each video is labeled with 3.91 step segments on average. In total, the dataset contains 476 hours of video with 46,354 annotated segments.

# 4.1.4 CrossTask

CrossTask (Zhukov et al., 2019) is used to evaluate the action step localization task. It contains 83 different tasks and 4.7k videos. For each task, an ordered list of steps with manual descriptions is provided.

# 4.1.5 CMU-MOSI

Multimodal Opinion Sentiment and Emotion Intensity (Zadeh et al., 2016) is a dataset for sentence-level sentiment analysis and emotion recognition in online videos. CMU-MOSI contains 2,199 opinion video clips, each annotated with a real-valued sentiment intensity in the range [-3, +3]. We evaluate the performance of our model on multimodal sentiment analysis on this dataset.

# 4.2 Experimental Details

For text encoding, we apply WordPiece embeddings (Wu et al., 2016) with a 30,000-token vocabulary as input to the BERT model. We exploit the BERT-Base model (Devlin et al., 2019) with 12 layers of Transformer blocks. Each block has 12 attention heads and the hidden size is 768. For video encoding, we first extract 3D features from video clips using the S3D model pre-trained by Miech et al. (2020); our preliminary experiments show that the choice of basic visual feature can significantly affect the results. The fps of the 3D feature extractor is 16 and the feature dimension is 1,024. We then employ a Transformer encoder with 6 layers to capture the sequential information on top of the 3D features. Each block has 12 attention heads and the hidden size is 768. The model consumes clip-text pairs. The maximal number of input text tokens is 32 and the maximal number of video features is 48. For short sentences and clips, we concatenate contextual tokens and frames. For the cross encoder and decoder, we use a 2-layer Transformer encoder as the encoder and a 3-layer Transformer decoder as the decoder, with 12 heads and hidden size 768. For the generation task at the inference stage, we use beam search with a beam size of 5. As previously mentioned, the generated sequence is the ground-truth input transcript in the pre-training phase; its target is to sequentially learn the full information from the masked transcripts and the video features.
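For reference, the configuration just described can be summarized in a small settings dictionary. This is purely an illustrative summary of the numbers quoted in Section 4.2; the key names are ours and do not come from any released configuration file.

```python
# Illustrative summary of the Section 4.2 configuration; key names are ours.
UNIVL_CONFIG = {
    "text_encoder": {"layers": 12, "heads": 12, "hidden": 768,
                     "vocab": 30000, "max_tokens": 32},          # BERT-Base uncased
    "video_encoder": {"layers": 6, "heads": 12, "hidden": 768,
                      "feature_dim": 1024, "feature_fps": 16,    # S3D features
                      "max_frames": 48},
    "cross_encoder": {"layers": 2, "heads": 12, "hidden": 768},
    "decoder": {"layers": 3, "heads": 12, "hidden": 768},
    "inference": {"beam_size": 5},
}
```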
We pre-train our model on 8 NVIDIA Tesla GPUs. There are two sets of hyper-parameters, corresponding to the stage-by-stage pre-training strategy. In the first stage, the batch size is set to 600 and the model is trained for 50 epochs over 1.5 days. In the second stage, the batch size is set to 48 and the model is trained for 50 epochs over 12 days. We use the Adam optimizer (Kingma and Ba, 2015) with an initial learning rate of 1e-3 in the first stage and 1e-4 in the second stage, and employ a linear-decay learning rate schedule with a warm-up strategy.

# 4.3 Main Results

# 4.3.1 Text-based Video Retrieval.

Text-based video retrieval is defined as retrieving a relevant video/clip given an input text query. As shown in Figure 3 (retrieval block), the model encodes the input text query and the candidate video clips through the text encoder and video encoder respectively. We then calculate the matching scores using two different approaches: one is UniVL (FT-Joint), which calculates the score through a dot product as in Eq. (6) and uses L_Joint as the loss during the fine-tuning stage; the other is UniVL (FT-Align), which feeds the encodings to both the single-modal encoders and the cross encoder to get a unified representation and predicts the match score through s(·) of Eq. (12) on the first token '[CLS]'. During the fine-tuning stage, its loss is L_Align. We use the Adam optimizer with an initial learning rate of 3e-5 and a batch size of 32 video-caption pairs for Youcook2, and an initial learning rate of 5e-5 and a batch size of 128 video-caption pairs for MSR-VTT, and fine-tune for 5 epochs.

We fine-tune our pre-trained model for the text-based video retrieval task on both the Youcook2 and MSR-VTT datasets. The evaluation metrics are Recall@n (R@n) and Median R. Tables 1 and 2 list the retrieval results of all baselines and our model on Youcook2 and MSR-VTT respectively.

Table 1: Results of text-based video retrieval on the Youcook2 dataset.

Methods                           R@1    R@5    R@10   Median R
Random                            0.03   0.15   0.3    1675
HGLMM (Klein et al., 2015)        4.6    14.3   21.6   75
HowTo100M (Miech et al., 2019)    8.2    24.5   35.3   24
MIL-NCE (Miech et al., 2020)      15.1   38.0   51.2   10
ActBERT (Zhu and Yang, 2020)      9.6    26.7   38.0   19
VideoAsMT (Korbar et al., 2020)   11.6   -      43.9   -
UniVL (FT-Joint)                  22.2   52.2   66.2   5
UniVL (FT-Align)                  28.9   57.6   70.0   4

Table 2: Results of text-based video retrieval on the MSR-VTT dataset.

Methods                           R@1    R@5    R@10   Median R
Random                            0.1    0.5    1.0    500
C+LSTM+SA (Torabi et al., 2016)   4.2    12.9   19.9   55
VSE (Kiros et al., 2014)          3.8    12.7   17.1   66
SNUVL (Yu et al., 2016)           3.5    15.9   23.8   44
Kaufman et al. (2017)             4.7    16.6   24.1   41
CT-SAN (Yu et al., 2017)          4.4    16.6   22.3   35
JSFusion (Yu et al., 2018)        10.2   31.2   43.2   13
HowTo100M (Miech et al., 2019)    14.9   40.2   52.8   9
MIL-NCE (Miech et al., 2020)      9.9    24.0   32.4   29.5
ActBERT (Zhu and Yang, 2020)      8.6    23.4   33.1   36
VideoAsMT (Korbar et al., 2020)   14.7   -      52.8   -
UniVL (FT-Joint)                  20.6   49.1   62.9   6
UniVL (FT-Align)                  21.2   49.6   63.1   6
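Before discussing the numbers in Tables 1 and 2, the two scoring heads compared there (FT-Joint and FT-Align) can be sketched as follows. This is a hedged PyTorch illustration of the description in Section 4.3.1; the function and module names are our own choices, not the released implementation.

```python
import torch
import torch.nn as nn

def ft_joint_scores(text_pooled, video_pooled):
    """FT-Joint: dot product between mean-pooled text and video embeddings.
    text_pooled: (1, d) query embedding; video_pooled: (N, d) candidate clips."""
    return video_pooled @ text_pooled.t()            # (N, 1) similarity scores

class AlignHead(nn.Module):
    """FT-Align scoring head: two linear layers with a Tanh in between,
    applied to the first ([CLS]) hidden state of the cross-encoder output M."""
    def __init__(self, d=768):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(d, d), nn.Tanh(), nn.Linear(d, 1))

    def forward(self, cross_output):                 # cross_output: (N, n+m, d)
        return self.net(cross_output[:, 0])          # (N, 1) match scores

# FT-Joint can rank many clips with a single matrix product, whereas FT-Align
# needs one cross-encoder pass per query-clip pair, which is why the paper
# reports it as much slower at inference time.
```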
We can see that our model achieves the best performance, outperforming all baselines by a large margin. We report several baseline methods with and without pre-training. Our model outperforms the HowTo100M and VideoAsMT models pre-trained on the same dataset on all metrics. Besides, the experimental results show a large performance gain from pre-training.

We also notice that UniVL (FT-Align) performs better than UniVL (FT-Joint), which demonstrates that the fused representation generated by the cross encoder is stronger. Nevertheless, the inference of UniVL (FT-Joint) is 50 times faster for Youcook2 and 10 times faster for MSR-VTT than that of UniVL (FT-Align). It is therefore a trade-off between performance and efficiency in practical applications. In the following ablation experiments, we use UniVL (FT-Joint) for the retrieval task.

# 4.3.2 Multimodal Video Captioning.

Multimodal video captioning aims to generate a sequence of descriptive sentences. As shown in Figure 3 (caption block), the model encodes the input video frames as well as the transcripts inside the clips through the video encoder and text encoder respectively, then feeds the encodings to the cross encoder to get a unified representation, and finally generates the token sequence with the decoder. We use L_Decoder as the loss during the fine-tuning stage. The hyper-parameters are an initial learning rate of 3e-5, a batch size of 32 samples, and fine-tuning for 5 epochs.

Table 3 lists the captioning results of all baselines and our models on Youcook2. This generation task adopts corpus-level generation evaluation metrics computed with the open-source tool (https://github.com/Maluuba/nlg-eval), including BLEU (BLEU-3, B-3; BLEU-4, B-4) (Papineni et al., 2002), METEOR (M) (Banerjee and Lavie, 2005), ROUGE-L (R-L) (Lin and Och, 2004), and CIDEr (Vedantam et al., 2015). We compare our pre-trained model with several baseline methods, grouped by whether the input is video-only or video+transcript. Zhou et al. (2018a) propose an end-to-end model for both procedural segmentation and captioning. Sun et al. (2019b,a); Zhu and Yang (2020); Korbar et al. (2020) adopt the pre-training strategy and evaluate captioning with only video as input. Shi et al. (2019) and Hessel et al. (2019) discuss multimodal input with both video and transcript. Our pre-trained model achieves state-of-the-art results and outperforms the existing pre-trained models, even when only video is used as input.

Table 3: Multimodal video captioning results on the Youcook2 dataset. 'V' means video and 'T' means transcript.

Methods                           Input   B-3    B-4    M      R-L    CIDEr
Bi-LSTM (Zhou et al., 2018a)      V       -      0.87   8.15   -      -
EMT (Zhou et al., 2018b)          V       -      4.38   11.55  27.44  0.38
VideoBERT (Sun et al., 2019b)     V       6.80   4.04   11.01  27.50  0.49
CBT (Sun et al., 2019a)           V       -      5.12   12.97  30.44  0.64
ActBERT (Zhu and Yang, 2020)      V       8.66   5.41   13.30  30.56  0.65
VideoAsMT (Korbar et al., 2020)   V       -      5.3    13.4   -      -
AT (Hessel et al., 2019)          T       -      8.55   16.93  35.54  1.06
DPC (Shi et al., 2019)            V + T   7.60   2.76   18.08  -      -
AT+Video (Hessel et al., 2019)    V + T   -      9.01   17.77  36.65  1.12
UniVL                             V       16.46  11.17  17.57  40.09  1.27
UniVL                             T       20.32  14.70  19.39  41.10  1.51
UniVL                             V + T   23.87  17.35  22.35  46.52  1.81

# 4.3.3 Action Segmentation.

We fine-tune our pre-trained model on the action segmentation task using the COIN dataset, where the goal is to predict one pre-defined label for each frame of a given video. As shown in Figure 3 (action tasks block), the model encodes the input video frames through the video encoder, followed by a linear classifier on top of the output encodings for frame labeling. We do not use the text encoder since the dataset provides no text descriptions.
The evaluation metric is frame-wise accuracy (FA). The hyper-parameters are an initial learning rate of 3e-5, a batch size of 32 samples, and fine-tuning for 5 epochs. The results are shown in Table 4. UniVL significantly outperforms the baselines, with more than 14% improvement. This shows that the pre-trained UniVL learns a good visual representation, even in the absence of linguistic descriptions.

Table 4: Action segmentation results on COIN.

Methods                                   Frame Accuracy (%)
NN-Viterbi (Richard et al., 2018)         21.17
VGG (Simonyan and Zisserman, 2014)        25.79
TCFPN-ISBA (Ding and Xu, 2018)            34.30
CBT (Sun et al., 2019a)                   53.90
MIL-NCE (Miech et al., 2020)              61.00
ActBERT (Zhu and Yang, 2020)              56.95
UniVL                                     70.02

# 4.3.4 Action Step Localization.

We evaluate action step localization on the CrossTask dataset. As shown in Figure 3 (action tasks block), the model encodes the step description (action) and the video clip through the text encoder and the video encoder respectively, and then calculates relevance scores through a dot product, similar to the retrieval task. To compare fairly with (Miech et al., 2019, 2020; Zhu and Yang, 2020), we do not fine-tune on the CrossTask dataset. We follow the evaluation process of the official project (https://github.com/DmZhukov/CrossTask) and report the average recall (CTR) metric for the localization task. The results are shown in Table 5. Our results are even better than the supervised baseline, which demonstrates that our UniVL model learns a better joint text-video representation.

Table 5: Action step localization results on CrossTask.

Methods                                   Average Recall (%)
Alayrac et al. (2016)                     13.3
Zhukov et al. (2019)                      22.4
Supervised (Zhukov et al., 2019)          31.6
HowTo100M (Miech et al., 2019)            33.6
MIL-NCE (Miech et al., 2020)              40.5
ActBERT (Zhu and Yang, 2020)              41.4
UniVL                                     42.0

# 4.3.5 Multimodal Sentiment Analysis.

We evaluate multimodal sentiment analysis on the CMU-MOSI dataset, the goal of which is to identify the sentiment of a speaker based on the speaker's verbal and nonverbal behaviors. We employ the video and the corresponding transcripts to accomplish this task. As shown in Figure 3 (multimodal classification block), the model encodes the input video frames as well as the transcripts inside the clips through the video encoder and text encoder respectively, then feeds the encodings to the cross encoder to get a unified representation, and finally predicts the sentiment score with a linear layer on the first token '[CLS]'. The hyper-parameters are an initial learning rate of 1e-5, a batch size of 32, and fine-tuning for 3 epochs. The results are shown in Table 6. Following (Zadeh et al., 2019), the evaluation metrics are binary accuracy (BA), F1 score, Mean Absolute Error (MAE), and Pearson Correlation Coefficient (Corr). Compared with the baselines that use video, transcript, and audio inputs, our model trained with only video and language still achieves the best results, without audio information.

# 4.4 Ablation Studies

We analyze the effectiveness of our model design choices for the pre-training objectives and strategies through ablation studies on the text-based video retrieval and multimodal video captioning tasks. We also discuss the effectiveness of different visual features.
Table 6: Multimodal sentiment analysis results on the CMU-MOSI dataset. BA means binary accuracy, MAE is Mean Absolute Error, and Corr is Pearson Correlation Coefficient. For BA and F1, we report two numbers following Zadeh et al. (2019): the number on the left side of / is calculated based on the approach from Zadeh et al. (2018b), and the one on the right side follows Tsai et al. (2019).

Methods                               BA          F1          MAE    Corr
MV-LSTM (Rajagopalan et al., 2016)    73.9/-      74.0/-      1.019  0.601
TFN (Zadeh et al., 2017)              73.9/-      73.4/-      1.040  0.633
MARN (Zadeh et al., 2018b)            77.1/-      77.0/-      0.968  0.625
MFN (Zadeh et al., 2018a)             77.4/-      77.3/-      0.965  0.632
RMFN (Liang et al., 2018)             78.4/-      78.0/-      0.922  0.681
RAVEN (Wang et al., 2019)             78.0/-      -/-         0.915  0.691
MulT (Tsai et al., 2019)              -/83.0      -/82.8      0.870  0.698
FMT (Zadeh et al., 2019)              81.5/83.5   81.4/83.5   0.837  0.744
UniVL                                 83.2/84.6   83.3/84.6   0.781  0.767

Table 7: Ablation study on the retrieval task. '-w/o' means further removing the listed condition on top of the previous line.

Methods            Dataset    R@1    R@5    R@10   Median R
UniVL              Youcook2   22.2   52.2   66.2   5
-w/o Joint         Youcook2   19.5   48.0   62.7   6
-w/o Alignment     Youcook2   16.3   42.3   56.2   8
-w/o EnhancedV     Youcook2   16.1   41.3   55.8   8
-w/o Decoder       Youcook2   14.6   40.3   55.5   8
-w/o StagedP       Youcook2   11.9   35.0   48.9   11
-w/o Pre-training  Youcook2   7.7    23.9   34.7   21
UniVL              MSR-VTT    20.6   49.1   62.9   6
-w/o Joint         MSR-VTT    19.6   45.9   62.6   6
-w/o Alignment     MSR-VTT    19.3   44.6   60.1   7
-w/o EnhancedV     MSR-VTT    18.0   45.3   59.3   7
-w/o Decoder       MSR-VTT    18.9   44.9   57.8   7
-w/o StagedP       MSR-VTT    18.0   44.3   57.7   8
-w/o Pre-training  MSR-VTT    16.7   44.0   55.9   8

Table 8: Ablation study on the captioning task of the Youcook2 dataset. '-w/o' means further removing the listed condition on top of the previous line.

Methods            B-3    B-4    M      R-L    CIDEr
UniVL              23.87  17.35  22.35  46.52  1.81
-w/o Joint         23.96  17.54  22.48  46.77  1.84
-w/o Alignment     23.51  17.24  22.02  45.90  1.77
-w/o EnhancedV     23.15  17.04  21.83  45.89  1.76
-w/o Decoder       19.01  13.22  19.43  43.62  1.53
-w/o StagedP       18.13  12.49  18.78  42.64  1.46
-w/o Pre-training  14.23  9.46   16.27  37.44  1.15

Table 9: Ablation study of visual features for the retrieval task. RS152 denotes ResNet-152, RX101 denotes ResNeXt-101.

Method              Visual Feature   R@1    R@5    R@10   Median R
UniVL on Youcook2   RS152 + RX101    11.5   29.1   40.1   17
UniVL on Youcook2   S3D              22.2   52.2   66.2   5
UniVL on MSR-VTT    RS152 + RX101    18.7   44.4   58.9   7
UniVL on MSR-VTT    S3D              20.6   49.1   62.9   6

Table 10: Ablation study of visual features for multimodal video captioning on the Youcook2 dataset. RS152 denotes ResNet-152, RX101 denotes ResNeXt-101.

Method   Visual Feature   B-3    B-4    M      R-L    CIDEr
UniVL    RS152 + RX101    20.42  14.31  19.92  42.35  1.47
UniVL    S3D              23.87  17.35  22.35  46.52  1.81

# 4.4.1 Modules and Strategies.

Table 7 shows the effectiveness of each objective or strategy on the retrieval task; the results are reported on both the Youcook2 and MSR-VTT datasets. Likewise, Table 8 demonstrates the effectiveness of each objective or strategy on the captioning task. For the retrieval task, we use the UniVL (FT-Joint) fine-tuning strategy to study the objectives (Joint loss, Alignment loss, and Decoder loss) and the strategies (StagedP and EnhancedV), all of which show consistent improvements. From the results, we can see that the cross encoder and decoder modules promote the joint representation of video and text. For the captioning task, we find that the decoder module shows a great advantage and achieves more than a 3-point gain on the BLEU-4 metric. Another finding is that the Joint loss slightly hurts the generation task, although it performs well in the retrieval task: excessive emphasis on coarse-grained matching can affect the fine-grained descriptions required by the generation task.
# 4.4.2 Visual Features.

We compare the S3D video features pre-trained on HowTo100M with ResNet-152 plus ResNeXt-101 features pre-trained on labeled ImageNet and Kinetics respectively. ResNet-152 (RS152) and ResNeXt-101 (RX101) are used to extract 2D and 3D features from video clips, similar to Miech et al. (2019)'s work. As shown in Table 9 and Table 10, the visual feature is important both in our pre-training model and in the downstream tasks. It is worth studying end-to-end training from raw videos instead of pre-extracted, fixed video features in the future. However, the time and memory costs are enormous. The key bottleneck is the visual representation, and we see two possible approaches: designing a lightweight training scheme, e.g., training on key frames of the video, or using a small feature dimension size.

# 5 Conclusion and Discussion

This paper proposes UniVL, a model with self-supervised learning for video and language representation on large-scale videos. UniVL is designed with four modules and five objectives for both video-language understanding and generation tasks. It is a flexible model for most multimodal downstream tasks, considering both efficiency and effectiveness. We conduct extensive experiments evaluating our model on five downstream tasks, e.g., text-based video retrieval and multimodal video captioning. The experimental results demonstrate that our pre-trained model improves performance over the baseline models by a large margin and achieves state-of-the-art results on five typical multimodal tasks. For future work, we will investigate our model's performance on larger datasets and more downstream tasks.

# References

Jean-Baptiste Alayrac, Piotr Bojanowski, Nishant Agrawal, Josef Sivic, Ivan Laptev, and Simon Lacoste-Julien. 2016. Unsupervised learning from narrated instruction videos. In CVPR, pages 4575–4583.

Satanjeev Banerjee and Alon Lavie. 2005. METEOR: An automatic metric for MT evaluation with improved correlation with human judgments. In Proceedings of the ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization, pages 65–72.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In NAACL-HLT, pages 4171–4186.

Li Ding and Chenliang Xu. 2018. Weakly-supervised action segmentation with iterative soft boundary assignment. In CVPR, pages 6508–6516.

Li Dong, Nan Yang, Wenhui Wang, Furu Wei, Xiaodong Liu, Yu Wang, Jianfeng Gao, Ming Zhou, and Hsiao-Wuen Hon. 2019. Unified language model pre-training for natural language understanding and generation. arXiv preprint arXiv:1905.03197.

Tengda Han, Weidi Xie, and Andrew Zisserman. 2019. Video representation learning by dense predictive coding. In Proceedings of the IEEE International Conference on Computer Vision Workshops.

Jack Hessel, Bo Pang, Zhenhai Zhu, and Radu Soricut. 2019. A case study on combining ASR and visual features for generating instructional video captions. In CoNLL.

Dotan Kaufman, Gil Levi, Tal Hassner, and Lior Wolf. 2017. Temporal tessellation: A unified approach for video analysis. In ICCV, pages 94–104.

Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In ICLR.

Ryan Kiros, Ruslan Salakhutdinov, and Richard S. Zemel. 2014. Unifying visual-semantic embeddings with multimodal neural language models. arXiv preprint arXiv:1411.2539.
Benjamin Klein, Guy Lev, Gil Sadeh, and Lior Wolf. 2015. Associating neural word embeddings with deep image representations using fisher vectors. In CVPR, pages 4437–4446. Bruno Korbar, Fabio Petroni, Rohit Girdhar, and Video understand- arXiv preprint Lorenzo Torresani. 2020. ing as machine translation. arXiv:2006.07203. Mike Lewis, Yinhan Liu, Naman Goyal, Mar- jan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov, and Luke Zettlemoyer. 2019. Bart: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. arXiv preprint arXiv:1910.13461. Mike Lewis, Yinhan Liu, Naman Goyal, Mar- jan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. BART: Denoising sequence-to-sequence pre- training for natural language generation, translation, and comprehension. In ACL, pages 7871–7880. Gen Li, Nan Duan, Yuejian Fang, Daxin Jiang, and Ming Zhou. 2019a. Unicoder-vl: A universal en- coder for vision and language by cross-modal pre- training. arXiv preprint arXiv:1908.06066. Linjie Li, Yen-Chun Chen, Yu Cheng, Zhe Gan, Licheng Yu, and Jingjing Liu. 2020. Hero: Hi- for video+ language omni- erarchical encoder arXiv preprint representation pre-training. arXiv:2005.00200. Liunian Harold Li, Mark Yatskar, Da Yin, Cho-Jui Hsieh, and Kai-Wei Chang. 2019b. Visualbert: A simple and performant baseline for vision and lan- guage. arXiv preprint arXiv:1908.03557. Paul Pu Liang, Ziyin Liu, Amir Zadeh, and Louis- Philippe Morency. 2018. Multimodal language anal- In EMNLP, ysis with recurrent multistage fusion. pages 150–161. Chin-Yew Lin and Franz Josef Och. 2004. Auto- matic evaluation of machine translation quality us- ing longest common subsequence and skip-bigram statistics. In ACL, page 605. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Man- dar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining ap- proach. arXiv preprint arXiv:1907.11692. William Lotter, Gabriel Kreiman, and David Cox. 2016. Deep predictive coding networks for video predic- arXiv preprint tion and unsupervised learning. arXiv:1605.08104. Jiasen Lu, Dhruv Batra, Devi Parikh, and Stefan Lee. 2019. Vilbert: Pretraining task-agnostic visi- olinguistic representations for vision-and-language tasks. In NeurIPS, pages 13–23. Michael Mathieu, Camille Couprie, and Yann Le- Deep multi-scale video predic- arXiv preprint Cun. 2015. tion beyond mean square error. arXiv:1511.05440. Antoine Miech, Jean-Baptiste Alayrac, Lucas Smaira, Ivan Laptev, Josef Sivic, and Andrew Zisserman. 2020. End-to-End Learning of Visual Represen- tations from Uncurated Instructional Videos. In CVPR. Antoine Miech, Dimitri Zhukov, Jean-Baptiste Alayrac, Makarand Tapaswi, Ivan Laptev, and Josef Sivic. 2019. Howto100m: Learning a text-video embed- ding by watching hundred million narrated video clips. ICCV. Aaron van den Oord, Yazhe Li, and Oriol Vinyals. 2018. Representation learning with contrastive pre- dictive coding. arXiv preprint arXiv:1807.03748. Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. Bleu: a method for automatic eval- uation of machine translation. In ACL, pages 311– 318. Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language understanding by generative pre-training. URL https://s3-us-west-2. com/openai- assets/researchcovers/languageunsupervised/language understanding paper. pdf. 
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2019. Exploring the limits of transfer learning with a unified text-to-text trans- former. arXiv e-prints. Shyam Sundar Rajagopalan, Louis-Philippe Morency, Tadas Baltrusaitis, and Roland Goecke. 2016. Ex- tending long short-term memory for multi-view structured learning. In ECCV, pages 338–353. Alexander Richard, Hilde Kuehne, Ahsan Iqbal, and Neuralnetwork-viterbi: A Juergen Gall. 2018. framework for weakly supervised video learning. In CVPR, pages 7386–7395. Botian Shi, Lei Ji, Yaobo Liang, Nan Duan, Peng Chen, Zhendong Niu, and Ming Zhou. 2019. Dense proce- dure captioning in narrated instructional videos. In ACL, pages 6382–6391. Karen Simonyan and Andrew Zisserman. 2014. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556. Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, and Tie- Yan Liu. 2019. Mass: Masked sequence to sequence pre-training for language generation. arXiv preprint arXiv:1905.02450. Nitish Srivastava, Elman Mansimov, and Ruslan Salakhudinov. 2015. Unsupervised learning of video representations using lstms. In ICML, pages 843–852. Weijie Su, Xizhou Zhu, Yue Cao, Bin Li, Lewei Lu, Furu Wei, and Jifeng Dai. 2020. VL-BERT: pre- training of generic visual-linguistic representations. In ICLR. Chen Sun, Fabien Baradel, Kevin Murphy, and Cordelia Schmid. 2019a. Contrastive bidirectional transformer for temporal representation learning. arXiv preprint arXiv:1906.05743. Chen Sun, Austin Myers, Carl Vondrick, Kevin Mur- phy, and Cordelia Schmid. 2019b. Videobert: A joint model for video and language representation learning. ICCV. Hao Tan and Mohit Bansal. 2019. Lxmert: Learning cross-modality encoder representations from trans- formers. arXiv preprint arXiv:1908.07490. Yansong Tang, Dajun Ding, Yongming Rao, Yu Zheng, Danyang Zhang, Lili Zhao, Jiwen Lu, and Jie Zhou. 2019. COIN: A large-scale dataset for comprehen- In CVPR, pages sive instructional video analysis. 1207–1216. Atousa Torabi, Niket Tandon, and Leonid Sigal. 2016. Learning language-visual embedding for movie un- derstanding with natural-language. arXiv preprint arXiv:1609.08124. Yao-Hung Hubert Tsai, Shaojie Bai, Paul Pu Liang, J. Zico Kolter, Louis-Philippe Morency, and Ruslan Salakhutdinov. 2019. Multimodal transformer for unaligned multimodal language sequences. In ACL, pages 6558–6569. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In NeurIPS, pages 5998–6008. Ramakrishna Vedantam, C Lawrence Zitnick, and Devi Parikh. 2015. Cider: Consensus-based image de- scription evaluation. In CVPR, pages 4566–4575. Xiaolong Wang and Abhinav Gupta. 2015. Unsuper- vised learning of visual representations using videos. In ICCV, pages 2794–2802. Yansen Wang, Ying Shen, Zhun Liu, Paul Pu Liang, Amir Zadeh, and Louis-Philippe Morency. 2019. Words can shift: Dynamically adjusting word rep- In AAAI, resentations using nonverbal behaviors. pages 7216–7223. Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. 2016. Google’s neural machine translation system: Bridging the gap between hu- arXiv preprint man and machine translation. arXiv:1609.08144. Saining Xie, Chen Sun, Jonathan Huang, Zhuowen Tu, and Kevin Murphy. 2018. 
Rethinking spatiotempo- ral feature learning: Speed-accuracy trade-offs in video classification. In ECCV, pages 318–335. Jun Xu, Tao Mei, Ting Yao, and Yong Rui. 2016. Msr- vtt: A large video description dataset for bridging video and language. In CVPR, pages 5288–5296. Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Car- bonell, Ruslan Salakhutdinov, and Quoc V Le. 2019. Xlnet: Generalized autoregressive pretrain- arXiv preprint ing for language understanding. arXiv:1906.08237. Youngjae Yu, Jongseok Kim, and Gunhee Kim. 2018. A joint sequence fusion model for video question an- swering and retrieval. In ECCV, pages 487–503. Youngjae Yu, Hyungjin Ko, Jongwook Choi, and Gun- hee Kim. 2016. Video captioning and retrieval mod- els with semantic attention. In ECCVLSMDC2016 Workshop. Youngjae Yu, Hyungjin Ko, Jongwook Choi, and Gun- hee Kim. 2017. End-to-end concept word detection for video captioning, retrieval, and question answer- ing. In CVPR, pages 3261–3269. Amir Zadeh, Minghai Chen, Soujanya Poria, Erik Cam- bria, and Louis-Philippe Morency. 2017. Tensor fu- sion network for multimodal sentiment analysis. In EMNLP, pages 1103–1114. Amir Zadeh, Paul Pu Liang, Navonil Mazumder, Soujanya Poria, Erik Cambria, and Louis-Philippe Morency. 2018a. Memory fusion network for multi- view sequential learning. AAAI. Amir Zadeh, Paul Pu Liang, Soujanya Poria, Pra- teek Vij, Erik Cambria, and Louis-Philippe Morency. 2018b. Multi-attention recurrent network for human communication comprehension. In AAAI. Amir Zadeh, Chengfeng Mao, Kelly Shi, Yiwei Zhang, Paul Pu Liang, Soujanya Poria, and Louis-Philippe Morency. 2019. Factorized multimodal transformer for multimodal sequential learning. arXiv preprint arXiv:1911.09826. Amir Zadeh, Rowan Zellers, Eli Pincus, and Louis- Philippe Morency. 2016. Multimodal sentiment in- tensity analysis in videos: Facial gestures and verbal messages. IEEE Intelligent Systems, 31(6):82–88. Luowei Zhou, Hamid Palangi, Lei Zhang, Houdong Hu, Jason J Corso, and Jianfeng Gao. 2019. Uni- fied vision-language pre-training for image caption- ing and vqa. arXiv preprint arXiv:1909.11059. Luowei Zhou, Chenliang Xu, and Jason J Corso. 2018a. Towards automatic learning of procedures from web instructional videos. In AAAI. Luowei Zhou, Yingbo Zhou, Jason J Corso, Richard Socher, and Caiming Xiong. 2018b. End-to-end dense video captioning with masked transformer. In CVPR, pages 8739–8748. Linchao Zhu and Yi Yang. 2020. Actbert: Learning global-local video-text representations. In CVPR. Dimitri Zhukov, Jean-Baptiste Alayrac, Ramazan Gok- berk Cinbis, David Fouhey, Ivan Laptev, and Josef Sivic. 2019. Cross-task weakly supervised learning from instructional videos. In CVPR.
{ "id": "1807.03748" }
2002.06275
TwinBERT: Distilling Knowledge to Twin-Structured BERT Models for Efficient Retrieval
Pre-trained language models like BERT have achieved great success in a wide variety of NLP tasks, while the superior performance comes with high demand in computational resources, which hinders the application in low-latency IR systems. We present TwinBERT model for effective and efficient retrieval, which has twin-structured BERT-like encoders to represent query and document respectively and a crossing layer to combine the embeddings and produce a similarity score. Different from BERT, where the two input sentences are concatenated and encoded together, TwinBERT decouples them during encoding and produces the embeddings for query and document independently, which allows document embeddings to be pre-computed offline and cached in memory. Thereupon, the computation left for run-time is from the query encoding and query-document crossing only. This single change can save large amount of computation time and resources, and therefore significantly improve serving efficiency. Moreover, a few well-designed network layers and training strategies are proposed to further reduce computational cost while at the same time keep the performance as remarkable as BERT model. Lastly, we develop two versions of TwinBERT for retrieval and relevance tasks correspondingly, and both of them achieve close or on-par performance to BERT-Base model. The model was trained following the teacher-student framework and evaluated with data from one of the major search engines. Experimental results showed that the inference time was significantly reduced and was firstly controlled around 20ms on CPUs while at the same time the performance gain from fine-tuned BERT-Base model was mostly retained. Integration of the models into production systems also demonstrated remarkable improvements on relevance metrics with negligible influence on latency.
http://arxiv.org/pdf/2002.06275
Wenhao Lu, Jian Jiao, Ruofei Zhang
cs.IR, cs.LG, Natural language processing
null
null
cs.IR
20200214
20200214
0 2 0 2 b e F 4 1 ] R I . s c [ 1 v 5 7 2 6 0 . 2 0 0 2 : v i X r a TwinBERT: Distilling Knowledge to Twin-Structured BERT Models for Efficient Retrieval Jian Jiao Bing Ads of AI & Research Group Microsoft One Microsoft Way Redmond, WA 98052-6399 [email protected] ABSTRACT Pre-trained language models like BERT have achieved great success in a wide variety of NLP tasks, while the superior performance comes with high demand in computational resources, which hin- ders the application in low-latency IR systems. We present Twin- BERT model for effective and efficient retrieval, which has twin- structured BERT-like encoders to represent query and document respectively and a crossing layer to combine the embeddings and produce a similarity score. Different from BERT, where the two input sentences are concatenated and encoded together, TwinBERT decouples them during encoding and produces the embeddings for query and document independently, which allows document embeddings to be pre-computed offline and cached in memory. Thereupon, the computation left for run-time is from the query encoding and query-document crossing only. This single change can save large amount of computation time and resources, and therefore significantly improve serving efficiency. Moreover, a few well-designed network layers and training strategies are proposed to further reduce computational cost while at the same time keep the performance as remarkable as BERT model. Lastly, we develop two versions of TwinBERT for retrieval and relevance tasks corre- spondingly, and both of them achieve close or on-par performance to BERT-Base model. The model was trained following the teacher-student framework and evaluated with data from one of the major search engines. Ex- perimental results showed that the inference time was significantly reduced and was firstly controlled around 20ms on CPUs while at the same time the performance gain from fine-tuned BERT-Base model was mostly retained. Integration of the models into pro- duction systems also demonstrated remarkable improvements on relevance metrics with negligible influence on latency. CCS CONCEPTS •Computing methodologies → Massively parallel algorithms; Machine learning algorithms; Supervised learning; Neural networks; Learning latent representations; •Information sys- tems → Document representation; Query representation; Spon- sored search advertising; 1 INTRODUCTION Pre-trained language models such as BERT [6] and GPT [25] have led a series of breakthroughs in a broad variety of NLP tasks in- cluding question answering, natural language inference, sentiment classification and others, and more impressively, they even sur- passed human performance on some of them 1. However, to serve these deep-structured models in a production system, besides accu- racy, latency is also an important factor to consider. A BERT-Base model, for instance, has 110 million parameters and 12 stacked multi-head attention networks, which is extremely computation- ally intensive and makes it challenging to deploy such a model in a real-world system. In the age of information explosion, to meet people’s informa- tion needs, a variety of modern applications have been developed including web search, online advertising, product recommendation, digital assistant and personalized feed. At the heart of these sys- tems, information retrieval (IR) plays an important role in handling the increasingly growing volume of information. 
The quality of an IR system crucially depends on the deep understanding of queries and documents, which fundamentally is an NLP problem and could benefit from the state-of-the-art pre-trained models. However, con- sidering the large-scale and low-latency nature of IR systems, the long inference time of these models becomes a bottleneck for their applications in the area. Most of the prior knowledge distillation efforts on BERT [13], [22], [30], [31] showed effectiveness in com- pressing the complex models and reducing the inference time to a certain degree. Nevertheless, few of them could meet the latency requirement of IR systems. To address the latency problem brought by the advanced NLP techniques, this paper proposes a novel language representation model for efficient retrieval, called TwinBERT. The model has twin- structured BERT-like encoders to encode the query and document respectively and a crossing layer to combine the two embeddings and produce a similarity score. The model was evaluated using data collected from one of the major search engines and the accuracy performance is close to a complex BERT-Base model. More impor- tantly, the implementation with PyTorch, although not as efficient as C/C++ and others, showed considerable reduction in inference time, and the average time cost on CPU over 1,000 random queries was only around 20 milliseconds for scoring 100 documents. KEYWORDS Deep Learning; Deep Neural Network (DNN); Semantic Embedding; Information Retrieval; k-NN; CDSSM; PyTorch; Sponsored Search; BERT; Knowledge Distillation Our contributions are summarized as below. 1) a twin-BERT structure which separates BERT inputs when encoding and allows embeddings to be pre-computed 2) an efficient retrieval model based on cosine similarity supporting ANN search 3) an efficient relevance # 1https://gluebenchmark.com/leaderboard/ prediction model based on residual network with performance close to BERT-Base. The rest of the paper is organized as follows. Section 2 presents a literature review on related works. Section 3 briefly introduces the context of paid search. Section 4 discusses details of TwinBERT including network architecture, model training and online serving. Section 5 reports the experimental results of TwinBERT compared to baseline models. Section 6 introduces how TwinBERT is deployed and used in production system. In the end, Section 7 concludes the work and lists future directions to explore. # 2 RELATED WORK Learning Representations through Language Models Language representations, as the building blocks of NLP models, are impressively effective in improving model performance on NLP tasks, and therefore have become an important research area over the years. According to how the representations are employed in downstream tasks, prior works in the area can be broadly grouped into two categories: feature-based approaches and fine-tuning ap- proaches. Word representations, sentence-based representations, and most recently contextual word representations are three direc- tions of the feature-based representations. Word2Vec [21], GloVe [23] and FastText [1] focus on learning word representations and different senses of a word are all combined into one vector while Skip-thought [15], FastSent [10], Quick-thought [20], Universal sen- tence encoder [3] and other works [4], [29] extract sentence-level representations. 
Unlike the previous works, ELMo [24] derives word representation based on the entire sentence and captures representations of words on multiple granularity by combining vectors from intermediate layers of a multi-layer BiLSTM. All these methods only require one round of training before used in any downstream tasks. In recent two years, pre-trained models such as GPT [25] and BERT [6] demonstrated superior performance. In contrast to previous works, the representations from these models are learned in two phases. In the first phase, a language model is learned in an unsupervised manner. In the second phase, the model is fine-tuned with task-specific label data to produce representa- tions used in downstream tasks. BERT stands for bidirectional encoder representations from transformers and achieved the state-of-the-art performance on a broad variety of NLP tasks. BERT is pre-trained on a large corpus of unlabelled text data including the entire Wikipedia and BooksCor- pus [41]. It has two pre-training tasks: masked language model (MLM) and next sentence prediction (NSP). MLM enforces the model to learn parameters by optimizing the prediction of masked tokens. To better serve the downstream binary classification tasks such as question answering (QA) and natural language inference (NLI), NSP is introduced to jointly train with MLM, which requires a pair of sentences as input. Through the multi-layer bidirectional structure, tokens from the two sentences deeply interact with each other and as a result, model performance is effectively improved for binary classification tasks. As a side effect, the computational cost is also highly increased, especially in the area of information retrieval, where one query needs to be paired with a large number of docu- ment candidates. BERT has overwhelming influence and to extend the work, a few variants have been developed including MTDNN 2 [18], XLNet [37], ERNIE [39], RoBERTa [19], ALBERT [16] and T5 [26]. # Distilling Knowledge to Compact Models With limited computational resources and strict latency require- ment, expensive models such as BERT normally cannot be directly deployed in real-world applications, and knowledge distillation (KD) [2],[11] is typically adopted to address the issue. The idea is to transfer the knowledge learnt from an expensive high-performance teacher model to a compact student model without significant per- formance loss. In contrast to traditional machine learning tasks, a loss function is defined on soften probabilities produced by the teacher model instead of hard labels, which are so-called soft la- bels, and soft labels supposedly have higher entropy which could provide more information and less variance. Prior efforts in KD on deep-structured models like BERT mainly focused on the transfer techniques. [31] augmented the training data for distillation with synthetic examples. BERT-PKD [30] learned distilled knowledge from intermediate layers besides the output layer. [32] demon- strated pre-trained student had better performance than random initialization. TinyBERT [13] further expanded distillation to trans- former layers. [22] proposed teacher assistant to bridge the gap between student and teacher. MT-DNN ensemble [17], [40] and MKDN [38] improved the student model performance via learning from multiple teachers. 
To the best of our knowledge, none of these works have at- tempted to decouple the two-sentence input, which could therefore reduce the inference time complexity for two-input cases from quadratic to linear time complexity. 3 SPONSORED SEARCH TwinBERT is developed in the context of sponsored search. Readers can refer to [7] for a comprehensive introduction of the topic. In short, sponsored search engine delivers ads alongside the organic search results. There are often three parties involved in the spon- sored search ecosystem: the user, the advertiser and the search platform. The goal of the platform is to display a list of ads that best match user’s intent. Below is the minimum set of key concepts for discussions that follow. Query: A short text string that contains user’s intent. Users enter queries in a search box to look for related informa- tion. Keyword: A short text string that expresses advertiser’s in- tent. Keywords are provided by advertisers and are not visible to end users but they are pivotal in that search engine relies on them to match user intents. Impression: An ad being displayed to the end user, on the result page of the search engine. On the backend of a paid search engine, the number of keywords created by advertisers are typically at the scale of billions. Fast IR techniques are firstly applied to reduce the number of keywords to a much smaller matched subset and then sent to downstream components, where more complex and less efficient algorithms are used to finalize the ads to display. To be consistent with the above context, keywords are used instead of documents throughout the paper. # Output Pooling Layer Multi-Head Self-Attention 1g Token Embedding Token Embedding Query Keyword Figure 1: TwinBERT Architecture 4 TWINBERT The architecture of TwinBERT is presented in this section with a few well-designed network layers to balance the effectiveness and efficiency. Other topics including model training and online serving are also discussed in detail. 4.1 Model Architecture As shown in Figure 1, the architecture of TwinBERT consists of two multi-layer transformer encoders and a crossing layer to combine the vector outputs of encoders and produce the final output. It is noteworthy that the parameters of the two encoders of query and keyword could be shared or different. The detailed comparison of the two styles is discussed in Section 5. Similar to BERT model architecture, at the bottom of each en- coder is the embedding layer, where the query and keyword sen- tences are represented separately as embeddings and then fed into corresponding encoders. The middle part of each encoder is a stack of transformer encoders with the same implementation as described in [33] but a different setting. Following the notations in BERT, the number of layers is denoted as L, the hidden size is H , and the number of self-attention heads is A. In this work, the performance is mainly reported with the following model setting: L = 6, H = 512 and A = 8 (the size of the feed-forward intermediate layer is also set to equal to H ). The last and top layer of the encoder is the weighted 3 pooling layer which applies a weighted sum of the final hidden vectors and produces a single embedding for each input sentence. 4.2 Input Representation In TwinBERT, the two input sentences are decoupled and encoded separately, with each encoder only taking care of one single sen- tence. 
Different from BERT, there is no need to introduce a separator token [SEP] to separate the two segments, and the input sequence length is roughly reduced by half. According to [33], the per-layer complexity of self-attention is O(n^2) in the sequence length, and the other operations are O(n). As a result of the cut in sequence length, the overall inference cost is correspondingly decreased. The other special token, the classification token [CLS] in BERT, is dropped in weighted-average pooling and reserved only in classification token pooling, which will be discussed in the pooling layer section.

For token embeddings, TwinBERT uses the tri-letter based word embeddings introduced in [28]. Compared to the 30K-dimensional WordPiece embeddings [36] in BERT, tri-letter based embeddings have a larger vocabulary size (50K), and therefore can carry more information for better performance. On the other hand, they are more efficient to extract at inference time since the extraction of each token is independent, while WordPiece extraction is a recursive process.

BERT embeddings are combinations of three components: token embeddings, segment embeddings and position embeddings. However, the input of a TwinBERT encoder contains only a single sentence, so segment embeddings are unnecessary. Therefore, the input embeddings consist only of the sum of token embeddings and position embeddings.

4.3 Pooling Layer

The output of the encoder is a sequence of vectors, each corresponding to an input token, with position information implied by its index in the input sentence. To provide a unique fixed-length vector representation for both inputs, a pooling layer is added to unify all token vectors into a single sentence-level embedding. Specifically, two pooling methods are experimented with: weighted-average pooling and classification token pooling.

Compared to standard average pooling, weighted-average pooling introduces a weight for each token vector and the output is the weighted average of all token vectors. The weight parameters are learned as part of the entire network. The second method is inspired by the special classification token ([CLS]) in BERT and is therefore called classification token pooling. The implementation involves prefixing the sequence with [CLS] at the input layer; the output of the encoder is simply the final hidden vector of [CLS]. Comparison results of the two methods are presented in the experiment section.

4.4 Crossing Layer

Given the sentence embeddings of query and keyword, the question becomes: how to combine the two? Two versions of TwinBERT are proposed to address the problem, denoted as TwinBERTcos and TwinBERTres respectively:

Cosine similarity Cosine similarity is an intuitive approach for combining two vectors of the same length. Formally, cosine similarity is defined as

cos(q, k) = (q · k) / (||q|| · ||k||), (1)

where q and k correspond to the embedding vectors of the query and the keyword. The output falls in the range [−1, 1], while the soft targets from the teacher model are between 0 and 1. In order to align the two, an additional logistic regression layer is applied to the cosine similarity score to convert it to [0, 1]. Cosine similarity projects the two embeddings into the same vector space, and when both vectors are normalized, it can be easily transformed into Euclidean distance. Thereupon, approximate nearest neighbor (ANN) algorithms can be naturally applied for retrieval tasks [12].
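As a concrete illustration of this crossing variant, below is a hedged PyTorch sketch of a TwinBERTcos-style layer. The scalar weight and bias used to parameterize the logistic calibration are our assumption about how the "logistic regression layer" is implemented, not a detail stated in the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CosineCrossing(nn.Module):
    """Sketch of a TwinBERTcos-style crossing layer: cosine similarity between
    the pooled query and keyword embeddings, followed by a logistic layer that
    maps the score from [-1, 1] into (0, 1) so it can be matched against the
    teacher's soft labels."""

    def __init__(self):
        super().__init__()
        self.w = nn.Parameter(torch.ones(1))   # assumed scalar affine parameters
        self.b = nn.Parameter(torch.zeros(1))

    def forward(self, q, k):
        # q, k: (B, H) pooled sentence embeddings from the two encoders
        cos = F.cosine_similarity(q, k, dim=-1)        # (B,), in [-1, 1]
        return torch.sigmoid(self.w * cos + self.b)    # (B,), in (0, 1)
```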
Residual network Residual networks were first proposed in [9] to solve the image recognition problem. Inspired by [27], where residual layers were used in a non-convolutional network in the NLP domain, they are adopted here to overcome over-fitting and gradient vanishing problems. Specifically, the embeddings of query and keyword are first combined by a max operator and then fed into the residual connection. The formal definition of the residual function is as follows:

y = F(x, W, b) + x,

where x is the element-wise max of the query vector q and the keyword vector k, and F is the mapping function from x to the residual with parameters W and b. Using concatenation instead of the max operator is another option. The motivation behind choosing max over concatenation is that it provides a down-sampling effect and also softly maps the two embeddings to a closer vector space. Similarly, a logistic regression layer is applied to the output vector of the residual function y to predict the binary relevance label. Compared to cosine similarity, the deep-structured network can model more complex problems and therefore produce better performance, but as a trade-off, it is less efficient in computation and cannot easily work with ANN algorithms.

4.5 Knowledge Distillation

TwinBERT is trained following the teacher-student framework via knowledge distillation since, compared to learning from scratch, student models usually have better performance [11]. For simplicity, Google's 12-layer BERT-Base model is fine-tuned using editorial query-keyword relevance labels as the teacher model and is then used to score a collection of impressed query-keyword pairs. The logits z are converted into soft labels using the following equation:

exp(z_i / T) / Σ_j exp(z_j / T),

where T is the temperature parameter controlling the softness of the labels. When T = 1, it is equivalent to the standard softmax function. As T grows, the target values become softer and hence provide more information. Specifically, in TwinBERT, T is set to 2.

The cross-entropy loss function for binary classification is defined as

loss = − Σ_{i=1}^{N} ( y_i log(p_i) + (1 − y_i) log(1 − p_i) ),

where N is the number of samples and p_i is the predicted probability. It was claimed in [31] that mean squared error (MSE) produced better results, but our experiments showed the opposite.

[Figure 2: Online Serving. The online path computes the query representation and performs matching and ranking against a keyword index store built offline from keyword representations.]

4.6 Online Serving

TwinBERT is designed to bring the latest NLP breakthroughs to IR systems, particularly the paid search engine. Figure 2 outlines the high-level architecture of the TwinBERT-based information retrieval system.

The keywords that advertisers entered are stored in a distributed database. As the offline part of the system, keywords are extracted from the database and represented as embeddings {k_j | j < m} through the keyword-side TwinBERT encoder. For efficient retrieval, ANN techniques such as locality-sensitive hashing [5] and k-d trees [8] are typically employed to improve search performance. Specifically, in TwinBERT, the keyword embeddings are stored and organized in a graph structure as described in [34].

At run-time, when a user enters a search query, the query embedding q is generated on-the-fly by the query-side TwinBERT encoder and an ANN search is performed to find the top results from the pre-built keyword indices, which are normally pre-loaded in memory.
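Before moving to the experiments, the distillation recipe of Section 4.5 can be summarized in a few lines of PyTorch. This is a hedged sketch of our own: the helper names and the use of binary cross-entropy against the soft target reflect our reading of the section, not the authors' released code.

```python
import torch
import torch.nn.functional as F

def soft_labels(teacher_logits, T=2.0):
    """Temperature-softened teacher probabilities; T = 2 as stated in Section 4.5.
    teacher_logits: (B, 2) logits for the bad / non-bad classes."""
    probs = F.softmax(teacher_logits / T, dim=-1)
    return probs[:, 1]                       # probability of the positive (non-bad) class

def distillation_loss(student_scores, teacher_logits, T=2.0):
    """Binary cross-entropy between the student's [0, 1] score and the soft target."""
    y = soft_labels(teacher_logits, T)       # (B,) soft targets in [0, 1]
    return F.binary_cross_entropy(student_scores, y)

# usage sketch: student_scores could come from, e.g., the CosineCrossing module above
teacher_logits = torch.randn(8, 2)
student_scores = torch.rand(8)
loss = distillation_loss(student_scores, teacher_logits)
```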
5 EXPERIMENTS This section presents training details and experimental results of TwinBERT models. Section 5.1 introduces the data and hyper- parameters used in teacher and student model training. Sections 5.2 and 5.3 give evaluation results on relevance and retrieval tasks. In the relevance experiment, a few baseline models and two versions of TwinBERT models, TwinBERTcos and TwinBERTres which differ in the design of the crossing layer, were evaluated. In the retrieval experiment, where ANN was used when searching keywords in a vector space, TwinBERTcos was picked and compared with C-DSSM [28]. In 5.4, a few effective training strategies are discussed. 5.5 gives the overall evaluation results. Lastly, in 5.6, inference time of TwinBERT models with different configurations are reported for better understanding on the design of TwinBERT models. # 5.1 Teacher and Student Model Training 5.1.1 Training teacher model. The teacher model used in this paper was a BERT-Base (BERT12) model fine-tuned from the un- cased checkpoint trained and released by the authors of [6]. 5.8 million query-keyword data was used for fine-tuning. In the data, query-keywords were given labels which indicate 4 different lev- els of relevance: bad, fair, good and excellent. In the fine-tuning process, fair, good and excellent were mapped into one level which was non-bad and the model learnt query-keyword relevance from binary labels (bad vs. non-bad) based on cross-entropy loss. Hyper- parameters of fine-tuning were the same as what suggested in [6]. Batch size was set to 2, 048 and model was trained for 5 epochs. 5.1.2 Training student models. The student models in this paper were distilled from the same teacher model. 500 million impressions were sampled from log and scored by the teacher model to generate soft targets for student model training. In the training process of TwinBERT, model parameters were randomly initialized and Adam [14] was used for optimization. Training was done on four V100 GPUs and hyper-parameters were adopted from BERT pre-training [6]: learning rate = 1e − 4, β1 = 0.9, β2 = 0.999, L2 weight decay = 0.01. The model was trained for 10 epochs with batch size set to 2, 048. The two encoders in TwinBERT models were trained with shared parameters. # 5.2 Evaluation on Relevance Task 5.2.1 Experiment setup. In the relevance experiment, C-DSSM and 3-layer BERT (BERT3) were chosen for baseline student models. The former proved to be effective for information retrieval tasks and here for fair comparison, the hidden size was set to be the same as what used in TwinBERT, which was 512. The latter, as a student model, was used in multiple knowledge distillation works [30], [38]. In addition, BERT3 has about 46 million parameters, which is comparable to TwinBERT model (35 million). To evaluate the relevance performance of teacher and student models, two test sets were sampled from logs in a major sponsored search system. There are roughly 600,000 and 700,000 instances in test set 1 and 2 respectively. The two sets were sampled from differ- ent components in the system, so describe different perspectives of query-keyword relevance in the system. Both test sets are held-out. 5 Table 1: ROC-AUC of different settings on test set 1 Token. 1 Tri-letter 2 Tri-letter 3 Tri-letter 4 Tri-letter 5 WordPiece 6 Tri-letter 7 Tri-letter Pos. 
√ √ √ √ √ √ Pooling Crossing Weighted Cos Weighted Max + Res Weighted Max + Res Weighted Concat + Res Weighted Max + Res Weighted Max + Res Max + Res CLS L AUC1 0.8883 6 0.9010 6 0.8994 3 0.8995 6 0.8987 6 0.8989 6 0.8897 6 5.2.2 Effects of Design Choices. In the design of TwinBERT, a few choices were experimented before the model was finalized. The results are summarized in Table 1. Number of layers: Comparing Setting 2 vs. Setting 3 suggests that reducing the number of layers by half results in around 0.16% drop in performance, which is significant when talking about hun- dreds of millions of impressions. If latency capacity allows, it is better to have deeper structure. Crossing layer: Using max to combine the query and keyword embeddings has better performance than the naive concatenation with about 0.15% gain (Setting 2 vs. Setting 4). Again, it is signif- icant considering the scale of search. Max function produces an abstraction of the two representations of query and keyword, and helps with the generalization. Token embedding: Character-level trigram representation out- performs WordPiece by 0.26% (Setting 2 vs. Setting 5). Compared to WordPiece, trigrams could map different forms of a word to a similar representation and have more dimensions. In the context of sponsored search where query and keyword tend to have more out of vocabulary words (e.g., typo words or invented names), these pros are shown to be effective in boosting the performance. Besides, character-level trigrams are more efficient at extraction. Position embedding: In sponsored search, both the query and keyword are often short phrases but the order of words is still important and meaningful for understanding. Position embedding helps to improve the performance by about 0.23% by comparing Setting 2 to Setting 6. Classification token: Although in BERT, the signal from clas- sification token hidden vector proves to be effective in many down- stream tasks, it is less effective than weighted average when there is a need to combine two embeddings. The difference between Setting 7 and Setting 2 is as high as 1%, more significant than other changes. 5.2.3 Model accuracy. Table 2 shows the ROC-AUC of Twin- BERT models comparing with C-DSSM, BERT3 and BERT12. The AUC comparison of different models is consistent on two test sets. First of all, both TwinBERTcos and TwinBERTres outperform C- DSSM model by 1.9% and 3.4% on test set 1 while 2.0% and 6.3% on test set 2, which exhibits the advance of TwinBERT model’s archi- tecture. However, the performance gap between TwinBERTcos and TwinBERTres suggests that the current cosine version is still not effective enough to express the interaction between query and key- word but the more complex residual network can. Compared with BERT3, TwinBERTres achieves higher AUC (+0.17% and +0.07%), and most impressively, its performance is close to BERT12 with Table 2: ROC-AUC of TwinBERT models comparing with C- DSSM, BERT3 and BERT12 on two test sets Model C-DSSM BERT3 TwinBERTcos TwinBERTres BERT12 AUC1 AUC2 0.8571 0.8713 0.9107 0.8995 0.8743 0.8883 0.9113 0.9010 0.9137 0.9011 Table 3: Density differences of all 4 labels by comparing top 5 results from TwinBERTcos and C-DSSM bad fair good excellent -7.4% -2.6% 1.9% 18.8% only -0.01% and -0.26% differences, which proves the effectiveness of TwinBERT model in distilling knowledge from a BERT-like teacher model. # 5.3 Evaluation on Retrieval Task 5.3.1 Experiment setup. 
In the retrieval experiment, C-DSSM was selected as the baseline and compared with TwinBERTcos. Both models were trained on the same training data with the same hyper-parameters as described in the relevance experiment. The evaluation was conducted in three steps. Firstly, embeddings of queries and keywords were generated with the model, and a keyword index was built based on the keyword embeddings. Secondly, ANN search was performed to find the top results from the pre-built keyword indices. Lastly, the top N results for each query were collected, and nDCG (normalized Discounted Cumulative Gain) was evaluated for each model based on the editorial labels. This time, all 4 labels were used for evaluation. In the experiment, the query set had 2,000 randomly sampled queries and the keyword set had 100 million randomly sampled keywords. The top 5 results were collected for nDCG evaluation.

5.3.2 Model accuracy. nDCGs of TwinBERTcos and C-DSSM at different positions are presented in Figure 3. The solid lines give the nDCGs at different positions for the TwinBERTcos and C-DSSM models. At all positions, TwinBERTcos is consistently better than C-DSSM by at least 5.3%. The dashed lines show another group of nDCGs obtained by converting the 4-level labels back to binary labels. Similarly, TwinBERTcos outperforms C-DSSM by at least 3.6%. Both results indicate that TwinBERTcos embeddings capture more information about query and keyword.

Figure 3: nDCGs of TwinBERTcos and C-DSSM. (Line chart of nDCG at positions 1-5; solid lines use the 4-level labels and dashed lines the binary labels, for both TwinBERTcos and C-DSSM.)

Table 3 gives the density differences of all 4 labels obtained by comparing the top 5 results from both models. The density differences show that TwinBERTcos recalls 18.8% more excellent and 7.4% fewer bad query-keyword pairs, which demonstrates its superiority in retrieval tasks.

5.4 Effective Training Strategies
Two training strategies, actual label fine-tuning and asymmetric training, were tested on top of the standard training process and will be discussed in this section independently, as they are orthogonal to the design of the TwinBERT model.

Table 4: ROC-AUC of TwinBERT w/ actual label fine-tuning (FT) and asymmetric training (ASYM) on two test sets

Model                    AUC1    AUC2
TwinBERTcos              0.8883  0.8743
TwinBERTres              0.9010  0.9113
TwinBERTcos w/ FT        0.8926  0.8953
TwinBERTres w/ FT        0.9030  0.9140
TwinBERTcos w/ ASYM+FT   0.8982  0.9057
TwinBERTres w/ ASYM+FT   0.9033  0.9127

5.4.1 Actual label fine-tuning. In the standard training process, the TwinBERT model learns parameters from soft labels generated by a teacher model. Actual label fine-tuning adds a round of fine-tuning based on the editorial labels after the standard training process. The learning rate was further tuned down to 2e-5, and the fine-tuning step took 2 epochs to converge. The first 4 rows of Table 4 give the AUC of TwinBERT models with and without actual label fine-tuning on the same test sets used in the relevance experiment. On TwinBERTres, the improvements are 0.22% and 0.3%, while on TwinBERTcos the improvements are much more significant (0.48% and 2.4%). The gains demonstrate the positive effect of actual label fine-tuning on both models. Often, a teacher model establishes the upper bound of the performance of its student models, and it is worth pointing out that the fine-tuned TwinBERTres has already beaten the teacher model on both sets, by 0.21% and 0.03%, which indicates that by introducing actual labels, the fine-tuning step can bring in additional information.

5.4.2 Asymmetric training.
In the current architecture of Twin- BERT model, to keep the structure simple, parameters are shared between encoders, while asymmetric parameter training could po- tentially bring higher performance. To further explore the effect, TwinBERT models were retrained with independent parameters between the encoders. All other training parameters for both stan- dard training and label fine-tuning stayed the same. The last 4 rows of Table 4 give the AUC of models w/ and wo/ asymmetric training. On TwinBERTcos, asymmetric training brings 0.63% and 1.2% AUC gains on two test sets, which means TwinBERTcos does benefit from the more complex configuration. However, on TwinBERTres, even though training loss has slightly drop, AUC shows +0.03% and -0.14% differences on two test sets suggesting asymmetric is barely effective when crossing layer is more complex. 5.5 Overall Results In summary, the best TwinBERT model (TwinBERTres) achieves 3.4% and 0.17% AUC improvements over C-DSSM and BERT3 stu- dent models following the teacher-student framework, which demon- strates its effectiveness in distilling knowledge from a BERT-like teacher model. On top of that, actual label fine-tuning and asym- metric training help boosting the performance with another 0.26% incremental gain. Overall, the best TwinBERT model outper- forms C-DSSM and BERT3 models by 3.7% and 0.42% while also beats the teacher model, BERT12, by 0.24%. 5.6 Inference Time To test the inference time, we implemented TwinBERT and the baseline models using PyTorch based on [35] and ran benchmarks on a workstation with the following configuration: Intel® Core™ i7- 4790 CPU @ 3.6GHz and 32.0GB memory. To eliminate the impact of noise on queries, we evaluated the average inference time on 1,000 queries and the results are summarized in Table 5. One of the benefits of TwinBERT compared to BERT is that the two inputs are decoupled and if the query stays the same, there is no need to regenerate the query embedding. This could be more clearly explained by the time complexity of TwinBERT and BERT w.r.t number of queries (Nq ) and number of keywords (Nk ). The time complexity of TwinBERT is O(Te Nq (1 + Nk ) +Tc Nq Nk ), while it is O(TB Nq Nk ) for a BERT model. Here, we useTe , Tc , TB to denote the time cost of a single encoder in TwinBERT model, the crossing layer in TwinBERT model and BERT model respectively. Another benefit of TwinBERT is that, in certain scenarios like sponsored search, the keyword embeddings could be pre-computed and loaded in memory so there’s no computation for keyword encoding at run- time. Thus, the time complexity of TwinBERT during serving could be even simplified to O(Te Nq + Tc Nq Nk ). In Table 5, QEL refers to the number of query encoding loops and the boolean factor (Memory) indicates if the keyword embeddings are in memory. The number of keywords is another important factor to consider when talking about efficiency for the initial retrieval phase and refinement/ranking phase after. More specifically, it impacts the time complexity of the evaluation of crossing layers in TwinBERT model O(Tc Nq Nk ) and the evaluation of BERT model O(TB Nq Nk ). In the test, the average number of keywords per query is designed to be 100. The first two rows in Table 5 correspond to the inference time for TwinBERTcos and TwinBERTres to score 100 keywords per query assuming only the query-side encoder and crossing layer are performed. 
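The complexity expressions above can be made concrete with a small illustrative cost model. This is a sketch only: the function names and the numeric per-call costs below are placeholders, not the timings reported in Table 5, and the three formulas are taken directly from the expressions O(Te·Nq·(1+Nk) + Tc·Nq·Nk), O(Te·Nq + Tc·Nq·Nk) (keyword embeddings cached in memory) and O(TB·Nq·Nk).

```python
# Illustrative cost model for the inference-time complexity comparison above.
# Per-call costs (te_ms, tc_ms, tb_ms) are placeholder values.

def twinbert_cost(te_ms: float, tc_ms: float, n_q: int, n_k: int, keywords_cached: bool) -> float:
    """Approximate TwinBERT inference cost in milliseconds.

    te_ms: one TwinBERT encoder call; tc_ms: one crossing-layer evaluation;
    n_q: number of queries; n_k: keywords evaluated per query.
    """
    if keywords_cached:
        # Keyword embeddings are pre-computed offline and held in memory.
        return te_ms * n_q + tc_ms * n_q * n_k
    return te_ms * n_q * (1 + n_k) + tc_ms * n_q * n_k


def bert_cost(tb_ms: float, n_q: int, n_k: int) -> float:
    """BERT scores every (query, keyword) pair jointly: O(TB * Nq * Nk)."""
    return tb_ms * n_q * n_k


if __name__ == "__main__":
    te, tc, tb = 10.0, 0.1, 17.0   # placeholder per-call costs in ms
    for cached in (True, False):
        print("TwinBERT, keywords cached =", cached, "->",
              twinbert_cost(te, tc, n_q=1, n_k=100, keywords_cached=cached), "ms")
    print("BERT ->", bert_cost(tb, n_q=1, n_k=100), "ms")
```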
The computation cost of cosine similarity is much lower than residual network, which leads to the 8ms difference. Compared 7 Table 5: Average inference time for TwinBERT, BERT3 and BERT12 over 1,000 queries. QEL refers to the number of query encoding loops. Model TwinBERTcos TwinBERTres TwinBERTres TwinBERTres BERT3 BERT12 QEL Memory √ √ 1 1 1 100 100 100 Inf. time (ms) 14 22 1,077 2,144 1,699 9,282 to the last two rows of BERT3 and BERT12, where the query and keyword are concatenated and encoded as a whole, the efficiency of TwinBERTres is 77 and 422 times faster. With TwinBERTcos, the efficiency is even 121 and 663 times faster. Moreover, if the keyword embeddings are generated at run-time, the overall inference time is still better than BERT3 according to Row 3 and Row 5. However, if the query embedding is also repeatedly generated, the inference time of TwinBERT becomes higher than BERT3 as listed in Row 4. 6 TWINBERT IN PRODUCTION SYSTEM TwinBERT models have been successfully deployed in the backend of a major sponsored search system and are proved to be very effective and efficient in both retrieval and relevance tasks with acceptable latency. The models achieved 90+% of the incremental gains observed from a fine-tuned BERT12 model and bad ads in online impressions decreased by 10+% in production. However, the additional serving cost is minimum even on CPUs, compared to the service of a BERT12 model which is not feasible on CPUs but requires hundreds of GPUs. On query side, model is served with onnx runtime online and the latency of TwinBERT inference is less than 10ms on average, which could be shadowed by other serving components if it is not on a critical path. On document side, embeddings are prepared offline and indexed in a distributed database for serving. At run-time, only the latency of crossing layers is introduced to the overall latency, which is subtle in a distributed serving system. In addition, the embeddings could be mapped to a lower dimension to further improve the efficiency of crossing layers and also reduce the cost on storage in practise. 7 CONCLUSION AND FUTURE WORK The TwinBERT model presented in this paper successfully adapts the technical advances from the pre-trained language models to the area of information retrieval. Decoupling the two inputs and pre-computing embeddings offline improve efficiency by 77+ and 422+ times on BERT3 and BERT12 on a presumable number of 100 queries, which enables real-time online serving on CPUs. The innovations on network layers manage to keep majority of the performance gain from BERT12, which makes TwinBERT effective in both retrieval and relevance tasks. TwinBERT models have demonstrated to be as effective as BERT- Base model. Going forward, more experiments need to be con- ducted to evaluate the performance with a teacher model that has larger capacity such as BERT-Large. Furthermore, TwinBERT mod- els are developed based on original Transformer. As the increas- ing demand of model performance, further improvement could be achieved through innovations on Transformer. To make the presentation pragmatic and intuitive, TwinBERT is introduced in the context of information retrieval. However, Twin- BERT is not constrained by a specific problem domain. Looking forward, efforts will be spent on other domains such as question answering. REFERENCES [1] Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. 
Transactions of the Association for Computational Linguistics 5 (2017), 135–146. [2] Cristian Buciluffj, Rich Caruana, and Alexandru Niculescu-Mizil. 2006. Model compression. In Proceedings of the 12th ACM SIGKDD international conference on Knowledge discovery and data mining. ACM, 535–541. [3] Daniel Cer, Yinfei Yang, Sheng-yi Kong, Nan Hua, Nicole Limtiaco, Rhomni St. John, Noah Constant, Mario Guajardo-Cespedes, Steve Yuan, Chris Tar, Yun- Hsuan Sung, Brian Strope, and Ray Kurzweil. 2018. Universal Sentence Encoder. CoRR abs/1803.11175 (2018). arXiv:1803.11175 http://arxiv.org/abs/1803.11175 [4] Alexis Conneau, Douwe Kiela, Holger Schwenk, Loic Barrault, and Antoine Bordes. 2017. Supervised learning of universal sentence representations from natural language inference data. arXiv preprint arXiv:1705.02364 (2017). [5] Mayur Datar, Nicole Immorlica, Piotr Indyk, and Vahab S Mirrokni. 2004. Locality- sensitive hashing scheme based on p-stable distributions. In Proceedings of the twentieth annual symposium on Computational geometry. ACM, 253–262. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018). [6] [7] Benjamin Edelman, Michael Ostrovsky, and Michael Schwarz. 2007. Internet advertising and the generalized second-price auction: Selling billions of dollars worth of keywords. American economic review 97, 1 (2007), 242–259. Jerome H Friedman, Jon Louis Bentley, and Raphael Ari Finkel. 1976. An algo- rithm for finding best matches in logarithmic time. ACM Trans. Math. Software 3, SLAC-PUB-1549-REV. 2 (1976), 209–226. [8] [9] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition. 770–778. [10] Felix Hill, Kyunghyun Cho, and Anna Korhonen. 2016. Learning Distributed Representations of Sentences from Unlabelled Data. CoRR abs/1602.03483 (2016). arXiv:1602.03483 http://arxiv.org/abs/1602.03483 [11] Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. 2015. Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531 (2015). [12] Po-Sen Huang, Xiaodong He, Jianfeng Gao, Li Deng, Alex Acero, and Larry Heck. 2013. Learning deep structured semantic models for web search using clickthrough data. In Proceedings of the 22nd ACM international conference on Information & Knowledge Management. ACM, 2333–2338. [13] Xiaoqi Jiao, Yichun Yin, Lifeng Shang, Xin Jiang, Xiao Chen, Linlin Li, Fang Wang, and Qun Liu. 2019. TinyBERT: Distilling BERT for Natural Language Understanding. arXiv preprint arXiv:1909.10351 (2019). [14] Diederik P. Kingma and Jimmy Ba. 2015. Adam: A Method for Stochastic Optimization. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings. http: //arxiv.org/abs/1412.6980 [15] Ryan Kiros, Yukun Zhu, Ruslan R Salakhutdinov, Richard Zemel, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. 2015. Skip-thought vectors. In Advances in neural information processing systems. 3294–3302. [16] Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. 2019. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. arXiv preprint arXiv:1909.11942 (2019). [17] Xiaodong Liu, Pengcheng He, Weizhu Chen, and Jianfeng Gao. 2019. 
Improv- ing Multi-Task Deep Neural Networks via Knowledge Distillation for Natural Language Understanding. arXiv preprint arXiv:1904.09482 (2019). [18] Xiaodong Liu, Pengcheng He, Weizhu Chen, and Jianfeng Gao. 2019. Multi- task deep neural networks for natural language understanding. arXiv preprint arXiv:1901.11504 (2019). [19] Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692 (2019). [20] Lajanugen Logeswaran and Honglak Lee. 2018. An efficient framework for learning sentence representations. CoRR abs/1803.02893 (2018). arXiv:1803.02893 http://arxiv.org/abs/1803.02893 8 [21] Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013). [22] Seyed-Iman Mirzadeh, Mehrdad Farajtabar, Ang Li, and Hassan Ghasemzadeh. 2019. Improved knowledge distillation via teacher assistant: Bridging the gap between student and teacher. arXiv preprint arXiv:1902.03393 (2019). Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP). 1532–1543. [24] Matthew E Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. arXiv preprint arXiv:1802.05365 (2018). [23] [25] Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language understanding with unsupervised learning. Technical Report. Technical report, OpenAI. [26] Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2019. Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer. arXiv:cs.LG/1910.10683 [27] Ying Shan, T Ryan Hoens, Jian Jiao, Haijing Wang, Dong Yu, and JC Mao. 2016. Deep crossing: Web-scale modeling without manually crafted combinatorial features. In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining. ACM, 255–262. [28] Yelong Shen, Xiaodong He, Jianfeng Gao, Li Deng, and Gr´egoire Mesnil. 2014. Learning semantic representations using convolutional neural networks for web search. In Proceedings of the 23rd International Conference on World Wide Web. ACM, 373–374. [29] Sandeep Subramanian, Adam Trischler, Yoshua Bengio, and Christopher J. Pal. 2018. Learning General Purpose Distributed Sentence Representations via Large Scale Multi-task Learning. CoRR abs/1804.00079 (2018). arXiv:1804.00079 http: //arxiv.org/abs/1804.00079 [30] Siqi Sun, Yu Cheng, Zhe Gan, and Jingjing Liu. 2019. Patient knowledge distilla- tion for bert model compression. arXiv preprint arXiv:1908.09355 (2019). [31] Raphael Tang, Yao Lu, Linqing Liu, Lili Mou, Olga Vechtomova, and Jimmy Lin. 2019. Distilling Task-Specific Knowledge from BERT into Simple Neural Networks. arXiv preprint arXiv:1903.12136 (2019). Iulia Turc, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Well- read students learn better: The impact of student initialization on knowledge distillation. arXiv preprint arXiv:1908.08962 (2019). [32] [33] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. 
In Advances in neural information processing systems. 5998–6008. Jingdong Wang and Shipeng Li. 2012. Query-driven iterated neighborhood graph search for large scale indexing. In Proceedings of the 20th ACM international conference on Multimedia. ACM, 179–188. [34] [35] Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement De- langue, Anthony Moi, Pierric Cistac, Tim Rault, R’emi Louf, Morgan Funtowicz, and Jamie Brew. 2019. HuggingFace’s Transformers: State-of-the-art Natural Language Processing. ArXiv abs/1910.03771 (2019). [36] Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. 2016. Google’s neural machine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144 (2016). [37] Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, and Quoc V Le. 1906. XLNet: Generalized autoregressive pretraining for language understanding, 2019. URL https://www. arxiv. org/abs (1906). [38] Ze Yang, Linjun Shou, Ming Gong, Wutao Lin, and Daxin Jiang. 2019. Model Compression with Multi-Task Knowledge Distillation for Web-scale Question Answering System. arXiv preprint arXiv:1904.09636 (2019). [39] Zhengyan Zhang, Xu Han, Zhiyuan Liu, Xin Jiang, Maosong Sun, and Qun Liu. 2019. ERNIE: Enhanced Language Representation with Informative Entities. arXiv preprint arXiv:1905.07129 (2019). [40] Wei Zhu, Xiaofeng Zhou, Keqiang Wang, Xun Luo, Xiepeng Li, Yuan Ni, and Guotong Xie. 2019. PANLP at MEDIQA 2019: Pre-trained Language Models, Transfer Learning and Knowledge Distillation. In Proceedings of the 18th BioNLP Workshop and Shared Task. 380–388. [41] Yukun Zhu, Ryan Kiros, Richard S. Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. 2015. Aligning Books and Movies: Towards Story-like Visual Explanations by Watching Movies and Reading Books. CoRR abs/1506.06724 (2015). arXiv:1506.06724 http://arxiv.org/abs/1506.06724
{ "id": "1810.04805" }
2002.05829
HULK: An Energy Efficiency Benchmark Platform for Responsible Natural Language Processing
Computation-intensive pretrained models have been taking the lead of many natural language processing benchmarks such as GLUE. However, energy efficiency in the process of model training and inference becomes a critical bottleneck. We introduce HULK, a multi-task energy efficiency benchmarking platform for responsible natural language processing. With HULK, we compare pretrained models' energy efficiency from the perspectives of time and cost. Baseline benchmarking results are provided for further analysis. The fine-tuning efficiency of different pretrained models can differ a lot among different tasks and fewer parameter number does not necessarily imply better efficiency. We analyzed such phenomenon and demonstrate the method of comparing the multi-task efficiency of pretrained models. Our platform is available at https://sites.engineering.ucsb.edu/~xiyou/hulk/.
http://arxiv.org/pdf/2002.05829
Xiyou Zhou, Zhiyu Chen, Xiaoyong Jin, William Yang Wang
cs.CL
7 pages, 4 figures
null
cs.CL
20200214
20200214
0 2 0 2 b e F 4 1 ] L C . s c [ 1 v 9 2 8 5 0 . 2 0 0 2 : v i X r a # HULK: An Energy Efficiency Benchmark Platform for Responsible Natural Language Processing Xiyou Zhou, Zhiyu Chen, Xiaoyong Jin, William Yang Wang Department of Computer Science, University of California Santa Barbara {xiyou, zhiyuchen, x jin, william}@cs.ucsb.edu # Abstract Computation-intensive pretrained models have been taking the lead of many natural language processing benchmarks such as GLUE (Wang et al., 2018). However, energy efficiency in the process of model training and inference becomes a critical bottleneck. We introduce HULK, a multi-task energy efficiency bench- marking platform for responsible natural lan- guage processing. With HULK, we compare pretrained models’ energy efficiency from the perspectives of time and cost. Baseline bench- marking results are provided for further anal- ysis. The fine-tuning efficiency of different pretrained models can differ a lot among dif- ferent tasks and fewer parameter number does not necessarily imply better efficiency. We analyzed such phenomenon and demonstrate the method of comparing the multi-task effi- ciency of pretrained models. Our platform is available at https://sites.engineering. ucsb.edu/˜xiyou/hulk/. 1 # 1 Introduction Environmental concerns of machine learning re- search has been rising as the carbon emission of certain tasks like neural architecture search reached an exceptional “ocean boiling” level (Strubell et al., 2019). Increased carbon emission has been one of the key factors to aggravate global warming 1. Research and development process like parame- ter search further increase the environment impact. When using cloud-based machines, the environ- ment impact is strongly correlated with budget. The recent emergence of leaderboards such as SQuAD (Rajpurkar et al., 2016), GLUE (Wang et al., 2018) and SuperGLUE (Wang et al., 2019) has greatly boosted the development of advanced models in the NLP community. Pretrained models have proven to be the key ingredient for achieving state of the art in conventional metrics. However, such models can be extremely expensive to train. For example, XLNet-Large (Yang et al., 2019) was trained on 512 TPU v3 chips for 500K steps, which costs around 61,440 dollars2, let alone staggeringly large carbon emission. Moreover, despite impressive performance gain, the fine-tuning and inference efficiency of NLP models remain under-explored. As recently men- tioned in a tweet3, the popular AI text adventure game AI Dungeon has reached 100 million infer- ences. The energy efficiency of inference cost could be critical to both business planning and en- vironment impact. Previous work (Schwartz et al., 2019; Dodge et al., 2019) on this topic proposed new metrics like FPO (floating point operations) and new prac- tice to report experimental results based on com- puting budget. Other benchmarks like (Coleman et al., 2017) and (Mattson et al., 2019) compares the efficiency of models on the classic reading com- prehension task SQuAD and machine translation tasks. However, there has not been a concrete or practical reference for accurate estimation on NLP model pretraining, fine-tunning and inference con- sidering multi-task energy efficiency. Energy efficiency can be reflected in many met- rics including carbon emission, electricity usage, time consumption, number of parameters and FPO as shown in (Schwartz et al., 2019). Carbon emis- sion and electricity are intuitive measures yet ei- ther hard to track or hardware-dependent. 
Number of parameteres does not reflect the acutal cost for model training and inference. FPO is steady for models but cannot be directly used for cost estima- tion. Here in order to provide a practical reference # 1Source: https://climate.nasa.gov/causes/ # 2Source: https://bit.ly/301qUMo 3Source: https://bit.ly/2GAFBNO Model Hardware Time Cost Params BERTBASE (Devlin et al., 2018) BERTLARGE (Devlin et al., 2018) XLNetBASE (Yang et al., 2019) XLNetLARGE (Yang et al., 2019) RoBERTaBASE (Liu et al., 2019) RoBERTaLARGE (Liu et al., 2019) ALBERTBASE (Lan et al., 2019) ALBERTLARGE (Lan et al., 2019) ALBERTXLARGE (Lan et al., 2019) ALBERTXXLARGE (Lan et al., 2019) DistilBERT* (Sanh et al., 2019) 4 days 4 days – 2.5 days 1 day 1 day – – – 32 hours 8×16G V100 GPU 90 hours 4 TPU Pods 16 TPU Pods – 512 TPU v3 1024 V100 GPUs 1024 V100 GPUs 64 TPU v3 – – 1024 TPU v3 $1,728 $6,912 – $61,440 $75,203 $75,203 – – – $65,536 $2203.2 108M 334M 117M 361M 125M 356M 12M 18M 59M 223M 66M Table 1: Pretraining costs of baseline models. Hardware and pretraining time are collected from original papers, with which costs are estimated with current TPU price at $8 per hour with 4 core TPU v3 chips and V100 GPU at $3.06 per hour. DistilBERT model is trained upon a pretrained BERT model. Parameter numbers are esti- mated using the pretrained models implemented in the Transformers (https://github.com/huggingface/ transformers) library (Wolf et al., 2019), shown in million. for model selection for real applications, especially model development outside of academia, we keep track of the time consumption and acutal budget for comparison. Cloud based machines are employed for cost estimation as they are easily accessible and consistent in hardware configuration and per- formance. In the following sections, we would use time and cost to denote the time elapsed and the acutal budget in model pretraining / training / inference. In most NLP pretrained model setting, there are three phases: pretraining, fine-tuning and inference. If a model is trained from scratch, we consider such model has no pretraining phase but fine-tuned from scratch. Typically pretraining takes several days and hundreds of dollars, according to Table 1. Fine- tuning takes a few minutes to hours, costing a lot less than pretraining phase. Inference takes several milli-seconds to seconds, costing much less than fine-tuning phase. Meanwhile, pretraining is done before fine-tuning once for all, while fine-tuning could be performed multiple times as training data updates. Inference is expected to be called numer- ous times for downstream applications. Such char- acteristics make it an intuitive choice to separate different phases during benchmarking. Our HULK benchmark, as shown in Figure 1, utilizes several classic datasets that have been widely adopted in the community as benchmark- ing tasks to benchmark energy efficiency and com- pares pretrained models in a multi-task fashion. The tasks include natural language inference task MNLI (Williams et al., 2017), sentiment analy- sis task SST-2 (Socher et al., 2013) and Named Entity Recognition Task CoNLL-2003 (Sang and De Meulder, 2003). Such tasks are selected to pro- vide a thourough comparison of end-to-end energy efficiency in pretraining, fine-tuning and inference. With the HULK benchmark, we quantify the en- ergy efficiency of model pretraining, fine-tuning and inference phase by comparing the time and cost they require to reach certain overall task-specific performance level on selected datasets. 
The design principle and benchmarking process are detailed in section 2. We also explore the relation between model parameter and fine-tuning efficiency and demonstrate consistency of energy efficiency be- tween tasks for different pretrained models. # 2 Benchmark Overview For pretraining phase, the benchmark is designed to favor energy efficient models in terms of time and cost that each model takes to reach certain multi-task performance pretrained from scratch. For example, we keep track of the time and cost of a BERT model pretrained from scratch. After every thousand of pretraining steps, we clone the model for fine-tuning and see if the final perfor- mance can reach our cut-off level. When the level is reached, time and cost for pretraining is used for comparison. Models faster or cheaper to pretrain are recommended. For fine-tuning phase, we consider the time and cost each model requires to reach certain multi- CoNLL 2003 MNLI SST-2 Train Size Dev Size 14,041 3,250 392,702 19,647 67,349 872 Cut-off Metric SOTA 91 F1 93.5 85 Acc 91.85 90 Acc 97.4 Table 2: Dataset Information task performance fine-tuned from given pretrained models because for each single task with different difficulty and instance number, the fine-tuning char- acteristics may differ a lot. When pretrained mod- els are used to deal with non-standard downstream task, especially ad hoc application in industry, the training set’s difficulty cannot be accurately esti- mated. Therefore, it’s important to compare the multi-task efficiency for model choice. For inference phase, the time and cost of each model making inference for single instance on mul- tiple tasks are considered in the similar fashion as the fine-tuning phase. # 2.1 Dataset Overview The datasets we used are widely adopted in NLP community. Quantitative details of datasets can be found in Table 2. The selected tasks are shown below: CoNLL 2003 The Conference on Com- putational Natural Learning (CoNLL-2003) shared task concerns language- independent named entity recognition (Sang and De Meulder, 2003). The task concentrates on four types of named entities: persons, loca- tions, organizations and other miscellaneous entities. Here we only use the English dataset. The English data is a collection of news wire articles from the Reuters Corpus. Result is reflected as F1 score considering the label accuracy and recall on dev set. MNLI The Multi-Genre Natural Language Inference Corpus (Williams et al., 2017) is a crowdsourced collection of sentence pairs with textual entailment annotations. Given a premise sentence and a hypothesis sentence, the task is to predict whether the premise entails the hypothesis (entailment), contradicts the hypoth- esis (contradiction), or neither (neutral). The premise sentences are gathered from ten differ- ent sources, including transcribed speech, fic- tion, and government reports. The accuracy score is reported as the average of performance on matched and mismatched dev sets. SST-2 The Stanford Sentiment Treebank (Socher et al., 2013) consists of sentences from movie reviews and human annotations of their sentiment. The task is to predict the senti- ment of a given sentence. Following the set- ting of GLUE, we also use the two-way (posi- tive/negative) class split, and use only sentence- level labels. The tasks are selected based on how represen- titve the dataset is. CoNLL 2003 has been a widely used dataset for named entity recognition and acu- tally requires output of token level labeling. 
NER is a core NLP task and CoNLL 2003 has been a clas- sic dataset in this area. SST-2 and MNLI are part of the GLUE benchmark, representing sentence level labeling tasks. SST-2 has been frequently used in sentiment analysis across different genera- tions of models. MNLI is a newly introduced large dataset for natural language inference. The train- ing time for MNLI is relatively long and the task requires a lot more training instances. We select the three tasks for a diverse yet practical bench- mark for pretrained models without constrain the models to sentence level classification tasks. In addition, their efficiency differ significantly in the fine-tuning and inference phase. Such difference can still be reflected on the final score after nor- malization as shown in Table 3. Provided with more computing resource , we can bring in more datasets for even more thorough benchmarking in the furture. We illustrate the evaluation criteria in the following subsection. # 2.2 Evaluation Criteria In machine learning model training and inference, slight parameter change can have subtle impact on the final result. In order to make a practical refer- ence for pretrained model selection, we compare models’ end-to-end performance with respect to the pretraining time, pretraining cost, training time, training cost, inference time, infernce latency and cost following the setting of (Coleman et al., 2017). For pretraining phase, we design the process to explore how much computing resource is re- quired to reach certain multi-task performance by fine-tuning after the pretraining. Therefore, during HULK Save the world, one flop at a time. An Energy Efficiency Benchmark Platform for Responsible Natural Language Processing Named Entity Recognition - CONLL 2003 Rank Time to 90 Test F1 Model 1 90.26 Nov 2019 BERT-Large-Cased BERT Baseline 2 155.43, Nov 2019 RoBERTa-LARGE RoBERTa Baseline Hardware Framework GTX 2080TI Pytorch 0.3.1 post2 GTX 2080Ti Pytorch 0.3.1 post2 Rank Time to 90 Test F1 Model 1 90.26 Nov 2019 BERT-Large-Cased BERT Baseline 2 155.43, Nov 2019 RoBERTa-LARGE RoBERTa Baseline Hardware Framework GTX 2080TI Pytorch 0.3.1 post2 GTX 2080Ti Pytorch 0.3.1 post2 Figure 1: Screenshot of the leaderboard of website. Datasets CoNLL 2003 SST-2 MNLI Model Time Score Time Score BERTBASE BERTLARGE XLNetBASE XLNetLARGE RoBERTaBASE RoBERTaLARGE ALBERTBASE ALBERTLARGE 43.43 90.26 67.14 243.00 70.57 155.43 340.64 844.85 2.08 1.00 1.34 0.37 1.28 0.58 0.26 0.11 207.15 92.45 102.45 367.11 38.45 57.65 2,767.90 3,708.49 0.45 1.00 0.90 0.25 2.40 1.60 0.03 0.02 N/A 9,106.72 7,704.71 939.62 274.87 397.12 N/A N/A 0.00 1.00 1.18 9.69 7.14 22.93 0.00 0.00 2.53 3.00 3.42 10.31 10.82 25.11 0.29 0.13 Table 3: Multi-task Baseline Fine-tuning Costs. Time is given in seconds and score is computed by the division of TimeBERTLARGE /Timemodel.The experiments are conducted on a single GTX 2080 Ti GPU following the evaluation ceriteria. The overall score is computed by summing up scores of each individual task. For cost based leaderboads, we also use the budget to compute a new score for each task and summarize similarly. “N/A” means fail to reach the given performance after 5 epochs. model pretraining, after a number of steps, we use the half-pretrained model for fine-tuning and see if the fine-tuned model can reach our cut-off perfor- mance. When it does, we count the time and cost in the pretraining process for benchmarking and analysis. 
compute the ratio of BERTLARGE’s time and cost to that of each model as the normalized measure as shown in Table 3 and Table 4. For inference phase, we follow the principles in fune-tuning except we use the time and cost of inference for benchmarking. For fine-tuning phase, we want to compare the general efficiency of pretrained model reaching cut-off performance on selected dataset. During fine-tuning, we evaluate the half-fine-tuned model on development set after a certain number of steps. When the performance reach our cut-off perfor- mance, we count the time and cost in this fine- tuning process for benchmarking and analysis. To be specific, for a single pretrained model, the effi- ciency score on different tasks is defined as the sum of normalized time and cost. Here we normalize the time and cost because they vary dramatically between tasks. In order to simplify the process, we # 2.3 Performance Cut-off Selection The selection of performance cutoff could be very critical because we consider certrain models being qualified after reaching certrain performance on development set. Meanwhile, certrain tasks can reach a “sweet point” where after relatively smaller amount of training time, the model reaches perfor- mance close to the final results despite negelagi- ble difference. We select the cut-off performance threshold by obersvering the recent state-of-the-art performance on selected tasks. Datasets CoNLL 2003 SST-2 MNLI Model Time Score Time Score BERTBASE BERTLARGE XLNetBASE XLNetLARGE RoBERTaBASE RoBERTaLARGE ALBERTBASE ALBERTLARGE 2.68 8.51 5.16 14.84 2.65 8.35 2.65 8.49 3.18 1.00 1.65 0.57 3.21 1.02 3.21 1.00 2.70 8.46 5.01 14.69 2.68 8.36 2.68 8.44 3.13 1.00 1.69 0.58 3.16 1.01 3.18 1.00 2.67 8.53 5.10 15.27 2.70 8.70 2.72 8.78 3.19 1.00 1.67 0.56 3.16 0.98 3.14 0.97 9.5 3.00 5.01 1.71 9.53 3.01 9.53 2.97 Table 4: Multi-task Baseline Inference Costs. Time is given in milliseconds and score is computed by the division of TimeBERTLARGE /Timemodel.The experiments are conducted on a single GTX 2080 Ti GPU following the evaluation ceriteria similar to fine-tuning part. It’s clear that the inference time between tasks is more consistent compared to fine-tuning phase. # 2.4 Submission to Benchmark Submissions can be made to our benchmark through sending code and results to our HULK benchmark CodaLab competition4 following the guidelines in both our FAQ part of website and competition introduction. We require the submis- sions to include detailed end-to-end model training information including model run time, cost(cloud based machine only), parameter number and part of the development set output for result validation. A training / fine-tuning log including time consump- tion and dev set performance after certain steps is also required. For inference, development set output, time consumption and hardware / software details should be provided. In order for model re- producity, source code is required. for BERTBASE to make sure the model converges and can reach expected performance as soon as possible with parameter searching. As shown in Figure 2, the fine-tuning perfor- mance curve differs a lot among pretrained models. The x-axis denoting time consumed is shown in log-scale for better comparison of different models. None of the models acutally take the lead in all tasks. However, if two pretrained models are in the same family, such as BERTBASE and BERTLARGE, the model with smaller number of parameters tend to converge a bit faster than the other in the NER and SST-2 task. 
In the MNLI task, such trend does not apply possibly due to increased diffculty level and training instance number which favor larger model capacity. # 3 Baseline Settings and Analysis For computation-heavy tasks, we adopt the re- ported resource requirements in the original papers as the pretraining phase baselines. For fine-tuning and inference phase, we conduct extensive experiments on given hardware (GTX 2080Ti GPU) with different model settings as shown in Table 3 and Table 4. We also collect the devlopment set performance with time in fine- tuning to investigate in how the model are fine- tuned for different tasks. Even though ALBERT model has a lot less pa- rameters than BERT, according to Table 1, the fine-tuning time of ALBERT model is significantly more than BERT models. This is probably because ALBERT uses large hidden size and more expen- sive matrix computation. The parameter sharing technique actually makes it harder to fine-tune the model. RoBERTaLARGE model relatively stable in all tasks. # 4 Related Work In our fine-tuning setting, we are given a specific hardware and software configuration, we adjust the hyper-parameter to minimize the time required for fine-tuning towards cut-off performance. For exam- ple, we choose proper batchsize and learning rate 4The CodaLab competition is accessible from the website. GLUE benchmark (Wang et al., 2018) is a popular multi-task benchmarking and diagnosis platform providing score evaluating multi-task NLP mod- els considering multiple single task performance. SuperGLUE (Wang et al., 2019) further develops the task and enriches the dataset used in evalua- tion, making the task more challenging. These —— BERT-BASE —— RoBERTa-LARGE —— BERT-LARGE —— XLNet-BASE —— RoBERTa-BASE —— XLNet-LARGE —— ALBERT-BASE —— ALBERT-LARGE 0.8 0.6 F1(Dev) 0.4 0.2 107 107 10? Time-sec BERT-BASE BERT-LARGE RoBERTa-BASE —— RoBERTa-LARGE —— XLNet-BASE —— XLNet-LARGE —— ALBERT-BASE —— ALBERT-LARGE —— —— —— 0.9 Accuracy(Dev) 2 2 q ® © a 0.5 102 10? 10? Time-sec —— BERT-BASE —— RoBERTa-LARGE —— BERT-LARGE —— XLNet-LARGE —— RoBERTa-BASE —— XLNet-BASE —— ALBERT-BASE —— ALBERT-LARGE 0.8 © u © a Accuracy(Dev) © a 0.4 107 10? 10% Time-sec Figure 2: The comparison between different pretrained models for CoNLL 2003, SST-2 and MNLI datasets trained on a single GTX 2080Ti GPU. The curves are smoothed by computing average with 2 adjacent data points. The experiments are conducted by selecting hyper-parameters to minimize the time consumption yet making sure the model can converge after certain amount of time. Results are demonstrated using per- formance on development score after certain steps fine- tuned on the training dataset. multi-task benchmarks does not take computation efficiency into consideration but still innovates the development of pretrained models. MLPerf (Mattson et al., 2019) compares training and inference efficiency from hardware perspective, providing helpful resources on hardware selection and model training. Their benchmark is limited to focusing on several typical applications including image classification and machine translation. Previous work (Schwartz et al., 2019; Dodge et al., 2019) on related topic working towards “Green AI” proposes new metrics like FPO and new principle in efficiency evaluation. We further make more detailed and practical contributions towards model energy efficiency benchmarking. 
Other work like DAWNBenchmark (Coleman et al., 2017) looks into the area of end-to-end model efficiency comparison for both computer vision and NLP task SQuAD. The benchmark does not com- pare multi-task efficiency performance and covered only one NLP task. The Efficient NMT shared task of The 2nd Work- shop on Neural Machine Translation and Genera- tion proposed efficiency track to compare neural machine translation models’ inference time. Our platform covers more phases and support multi-task comparison. # 5 Conclusion We developed the HULK platform focusing on the energy efficiency evaluation of NLP models based on their end-to-end performance on selected NLP tasks. The HULK platform compares models in pretraining, fine-tuning and inference phase, mak- ing it clear to follow and propose more training and inference efficient models. We have compared the fine-tuning efficiency of given models during baseline testing and demonstrated more parame- ters lead to slower fine-tuning when using same model but does not hold when model changes.We expect more submissions in the future to flourish and enrich our benchmark. # Acknowledgments This work is supported by the Institute of Energy Efficiency (IEE) at UCSB’s seed grant in Summer 2019 to improve the energy efficiency of AI and machine learning.5. 5https://iee.ucsb.edu/news/making-ai-more-energy- efficient # References Cody Coleman, Deepak Narayanan, Daniel Kang, Tian Zhao, Jian Zhang, Luigi Nardi, Peter Bailis, Kunle Olukotun, Chris R´e, and Matei Zaharia. 2017. Dawnbench: An end-to-end deep learning bench- mark and competition. Training, 100(101):102. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understand- ing. arXiv preprint arXiv:1810.04805. Jesse Dodge, Suchin Gururangan, Dallas Card, Roy Schwartz, and Noah A Smith. 2019. Show your work: Improved reporting of experimental results. arXiv preprint arXiv:1909.03004. Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. 2019. Albert: A lite bert for self-supervised learn- arXiv preprint ing of language representations. arXiv:1909.11942. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Man- dar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining ap- proach. arXiv preprint arXiv:1907.11692. Peter Mattson, Christine Cheng, Cody Coleman, Greg Diamos, Paulius Micikevicius, David Patterson, Hanlin Tang, Gu-Yeon Wei, Peter Bailis, Victor Bit- torf, et al. 2019. Mlperf training benchmark. arXiv preprint arXiv:1910.01500. Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. Squad: 100,000+ questions for machine comprehension of text. arXiv preprint arXiv:1606.05250. Intro- duction to the conll-2003 shared task: Language- arXiv independent named entity recognition. preprint cs/0306050. Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. 2019. Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108. Roy Schwartz, Jesse Dodge, Noah A Smith, and arXiv preprint Oren Etzioni. 2019. Green ai. arXiv:1907.10597. Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D Manning, Andrew Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment tree- In Proceedings of the 2013 conference on bank. empirical methods in natural language processing, pages 1631–1642. 
Emma Strubell, Ananya Ganesh, and Andrew Mc- Energy and policy considera- arXiv preprint Callum. 2019. tions for deep learning in nlp. arXiv:1906.02243. Alex Wang, Yada Pruksachatkun, Nikita Nangia, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R Bowman. 2019. Super- glue: A stickier benchmark for general-purpose arXiv preprint language understanding systems. arXiv:1905.00537. Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R Bowman. 2018. Glue: A multi-task benchmark and analysis platform for natural language understanding. arXiv preprint arXiv:1804.07461. Adina Williams, Nikita Nangia, and Samuel R Bow- man. 2017. A broad-coverage challenge corpus for arXiv sentence understanding through inference. preprint arXiv:1704.05426. Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pier- ric Cistac, Tim Rault, R’emi Louf, Morgan Funtow- icz, and Jamie Brew. 2019. Huggingface’s trans- formers: State-of-the-art natural language process- ing. ArXiv, abs/1910.03771. Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Car- bonell, Ruslan Salakhutdinov, and Quoc V Le. 2019. Xlnet: Generalized autoregressive pretrain- arXiv preprint ing for language understanding. arXiv:1906.08237.
{ "id": "1910.01108" }
2002.05709
A Simple Framework for Contrastive Learning of Visual Representations
This paper presents SimCLR: a simple framework for contrastive learning of visual representations. We simplify recently proposed contrastive self-supervised learning algorithms without requiring specialized architectures or a memory bank. In order to understand what enables the contrastive prediction tasks to learn useful representations, we systematically study the major components of our framework. We show that (1) composition of data augmentations plays a critical role in defining effective predictive tasks, (2) introducing a learnable nonlinear transformation between the representation and the contrastive loss substantially improves the quality of the learned representations, and (3) contrastive learning benefits from larger batch sizes and more training steps compared to supervised learning. By combining these findings, we are able to considerably outperform previous methods for self-supervised and semi-supervised learning on ImageNet. A linear classifier trained on self-supervised representations learned by SimCLR achieves 76.5% top-1 accuracy, which is a 7% relative improvement over previous state-of-the-art, matching the performance of a supervised ResNet-50. When fine-tuned on only 1% of the labels, we achieve 85.8% top-5 accuracy, outperforming AlexNet with 100X fewer labels.
http://arxiv.org/pdf/2002.05709
Ting Chen, Simon Kornblith, Mohammad Norouzi, Geoffrey Hinton
cs.LG, cs.CV, stat.ML
ICML'2020. Code and pretrained models at https://github.com/google-research/simclr
null
cs.LG
20200213
20200701
0 2 0 2 l u J 1 ] G L . s c [ 3 v 9 0 7 5 0 . 2 0 0 2 : v i X r a # A Simple Framework for Contrastive Learning of Visual Representations # Ting Chen 1 Simon Kornblith 1 Mohammad Norouzi 1 Geoffrey Hinton 1 # Abstract This paper presents SimCLR: a simple framework for contrastive learning of visual representations. We simplify recently proposed contrastive self- supervised learning algorithms without requiring specialized architectures or a memory bank. In order to understand what enables the contrastive prediction tasks to learn useful representations, we systematically study the major components of our framework. We show that (1) composition of data augmentations plays a critical role in defining effective predictive tasks, (2) introducing a learn- able nonlinear transformation between the repre- sentation and the contrastive loss substantially im- proves the quality of the learned representations, and (3) contrastive learning benefits from larger batch sizes and more training steps compared to supervised learning. By combining these findings, we are able to considerably outperform previous methods for self-supervised and semi-supervised learning on ImageNet. A linear classifier trained on self-supervised representations learned by Sim- CLR achieves 76.5% top-1 accuracy, which is a 7% relative improvement over previous state-of- the-art, matching the performance of a supervised ResNet-50. When fine-tuned on only 1% of the labels, we achieve 85.8% top-5 accuracy, outper- forming AlexNet with 100× fewer labels. 1 # 1. Introduction x% Supervised i 3eSimCLR e® __RSIMELR (2x) > oo eCPCv2-L Q . § 70F xsimcir some MoCo (4x) 8 ePIRL-c2x AMDIM - 65 a eMoCo (2x) a acPcv2 PIRL-ens. ec PIRL eBigBiGAN % 60 @MoCo oD LA S £ eRotation 55 elnstDisc 25 50 100 200 Number of Parameters (Millions) 400 626 Figure 1. ImageNet Top-1 accuracy of linear classifiers trained on representations learned with different self-supervised meth- ods (pretrained on ImageNet). Gray cross indicates supervised ResNet-50. Our method, SimCLR, is shown in bold. However, pixel-level generation is computationally expen- sive and may not be necessary for representation learning. Discriminative approaches learn representations using objec- tive functions similar to those used for supervised learning, but train networks to perform pretext tasks where both the in- puts and labels are derived from an unlabeled dataset. Many such approaches have relied on heuristics to design pretext tasks (Doersch et al., 2015; Zhang et al., 2016; Noroozi & Favaro, 2016; Gidaris et al., 2018), which could limit the generality of the learned representations. Discriminative approaches based on contrastive learning in the latent space have recently shown great promise, achieving state-of-the- art results (Hadsell et al., 2006; Dosovitskiy et al., 2014; Oord et al., 2018; Bachman et al., 2019). Learning effective visual representations without human supervision is a long-standing problem. Most mainstream approaches fall into one of two classes: generative or dis- criminative. Generative approaches learn to generate or otherwise model pixels in the input space (Hinton et al., 2006; Kingma & Welling, 2013; Goodfellow et al., 2014). 1Google Research, Brain Team. Correspondence to: Ting Chen <[email protected]>. In this work, we introduce a simple framework for con- trastive learning of visual representations, which we call SimCLR. 
Not only does SimCLR outperform previous work (Figure 1), but it is also simpler, requiring neither specialized architectures (Bachman et al., 2019; Hénaff et al., 2019) nor a memory bank (Wu et al., 2018; Tian et al., 2019; He et al., 2019; Misra & van der Maaten, 2019).

In order to understand what enables good contrastive representation learning, we systematically study the major components of our framework and show that:

• Composition of multiple data augmentation operations is crucial in defining the contrastive prediction tasks that yield effective representations. In addition, unsupervised contrastive learning benefits from stronger data augmentation than supervised learning.

• Introducing a learnable nonlinear transformation between the representation and the contrastive loss substantially improves the quality of the learned representations.

• Representation learning with contrastive cross entropy loss benefits from normalized embeddings and an appropriately adjusted temperature parameter.

• Contrastive learning benefits from larger batch sizes and longer training compared to its supervised counterpart. Like supervised learning, contrastive learning benefits from deeper and wider networks.

We combine these findings to achieve a new state-of-the-art in self-supervised and semi-supervised learning on ImageNet ILSVRC-2012 (Russakovsky et al., 2015). Under the linear evaluation protocol, SimCLR achieves 76.5% top-1 accuracy, which is a 7% relative improvement over previous state-of-the-art (Hénaff et al., 2019). When fine-tuned with only 1% of the ImageNet labels, SimCLR achieves 85.8% top-5 accuracy, a relative improvement of 10% (Hénaff et al., 2019). When fine-tuned on other natural image classification datasets, SimCLR performs on par with or better than a strong supervised baseline (Kornblith et al., 2019) on 10 out of 12 datasets.

1Code available at https://github.com/google-research/simclr.

[Figure 2: diagram of the framework. An input x is transformed by two augmentations t ∼ T and t′ ∼ T into x̃_i and x̃_j, encoded by f(·) into representations h_i and h_j, then projected by g(·) into z_i and z_j, whose agreement is maximized.]

Figure 2. A simple framework for contrastive learning of visual representations. Two separate data augmentation operators are sampled from the same family of augmentations (t ∼ T and t′ ∼ T) and applied to each data example to obtain two correlated views. A base encoder network f(·) and a projection head g(·) are trained to maximize agreement using a contrastive loss. After training is completed, we throw away the projection head g(·) and use encoder f(·) and representation h for downstream tasks.

# 2. Method

# 2.1. The Contrastive Learning Framework

Inspired by recent contrastive learning algorithms (see Section 7 for an overview), SimCLR learns representations by maximizing agreement between differently augmented views of the same data example via a contrastive loss in the latent space. As illustrated in Figure 2, this framework comprises the following four major components.

• A stochastic data augmentation module that transforms any given data example randomly, resulting in two correlated views of the same example, denoted x̃_i and x̃_j, which we consider as a positive pair. In this work, we sequentially apply three simple augmentations: random cropping followed by resize back to the original size, random color distortions, and random Gaussian blur. As shown in Section 3, the combination of random crop and color distortion is crucial to achieve a good performance.

• A neural network base encoder f(·) that extracts representation vectors from augmented data examples. Our framework allows various choices of the network architecture without any constraints. We opt for simplicity and adopt the commonly used ResNet (He et al., 2016) to obtain h_i = f(x̃_i) = ResNet(x̃_i), where h_i ∈ R^d is the output after the average pooling layer.

• A small neural network projection head g(·) that maps representations to the space where contrastive loss is applied. We use a MLP with one hidden layer to obtain z_i = g(h_i) = W^(2) σ(W^(1) h_i), where σ is a ReLU nonlinearity. As shown in Section 4, we find it beneficial to define the contrastive loss on z_i's rather than h_i's.

• A contrastive loss function defined for a contrastive prediction task. Given a set {x̃_k} including a positive pair of examples x̃_i and x̃_j, the contrastive prediction task aims to identify x̃_j in {x̃_k}_{k≠i} for a given x̃_i.

We randomly sample a minibatch of N examples and define the contrastive prediction task on pairs of augmented examples derived from the minibatch, resulting in 2N data points. We do not sample negative examples explicitly. Instead, given a positive pair, similar to (Chen et al., 2017), we treat the other 2(N − 1) augmented examples within a minibatch as negative examples. Let sim(u, v) = u⊤v / (‖u‖‖v‖) denote the dot product between ℓ2 normalized u and v (i.e. cosine similarity). Then the loss function for a positive pair of examples (i, j) is defined as

ℓ_{i,j} = − log [ exp(sim(z_i, z_j)/τ) / Σ_{k=1}^{2N} 1_{[k≠i]} exp(sim(z_i, z_k)/τ) ]    (1)

where 1_{[k≠i]} ∈ {0, 1} is an indicator function evaluating to 1 iff k ≠ i and τ denotes a temperature parameter. The final loss is computed across all positive pairs, both (i, j) and (j, i), in a mini-batch. This loss has been used in previous work (Sohn, 2016; Wu et al., 2018; Oord et al., 2018); for convenience, we term it NT-Xent (the normalized temperature-scaled cross entropy loss).

Algorithm 1 SimCLR's main learning algorithm.
  input: batch size N, constant τ, structure of f, g, T.
  for sampled minibatch {x_k}_{k=1}^N do
    for all k ∈ {1, . . . , N} do
      draw two augmentation functions t ∼ T, t′ ∼ T
      # the first augmentation
      x̃_{2k−1} = t(x_k)
      h_{2k−1} = f(x̃_{2k−1})    # representation
      z_{2k−1} = g(h_{2k−1})     # projection
      # the second augmentation
      x̃_{2k} = t′(x_k)
      h_{2k} = f(x̃_{2k})        # representation
      z_{2k} = g(h_{2k})         # projection
    end for
    for all i ∈ {1, . . . , 2N} and j ∈ {1, . . . , 2N} do
      s_{i,j} = z_i⊤ z_j / (‖z_i‖‖z_j‖)    # pairwise similarity
    end for
    define ℓ(i, j) = − log [ exp(s_{i,j}/τ) / Σ_{k=1}^{2N} 1_{[k≠i]} exp(s_{i,k}/τ) ]
    L = 1/(2N) Σ_{k=1}^{N} [ ℓ(2k−1, 2k) + ℓ(2k, 2k−1) ]
    update networks f and g to minimize L
  end for
  return encoder network f(·), and throw away g(·)

Algorithm 1 summarizes the proposed method.

# 2.2. Training with Large Batch Size

To keep it simple, we do not train the model with a memory bank (Wu et al., 2018; He et al., 2019). Instead, we vary the training batch size N from 256 to 8192. A batch size of 8192 gives us 16382 negative examples per positive pair from both augmentation views. Training with large batch size may be unstable when using standard SGD/Momentum with linear learning rate scaling (Goyal et al., 2017). To stabilize the training, we use the LARS optimizer (You et al., 2017) for all batch sizes.
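As a concrete reference, Eq. (1) and the per-minibatch loss of Algorithm 1 can be written in a few lines. The sketch below is ours, in PyTorch, and is not the authors' TensorFlow implementation (available at the repository linked above); the function name and the masking strategy are illustrative assumptions.

import torch
import torch.nn.functional as F

def nt_xent_loss(z1, z2, temperature=0.5):
    # z1, z2: [N, d] projections of the two augmented views of the same N examples
    n = z1.size(0)
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # l2-normalize -> cosine similarity
    sim = torch.mm(z, z.t()) / temperature               # [2N, 2N] pairwise similarities
    # exclude self-similarities from the softmax denominator (the 1[k != i] term)
    mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(mask, float('-inf'))
    # the positive for example i is its other view: i <-> i + N
    pos = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    # cross-entropy over the remaining 2N - 1 candidates equals -log softmax(positive)
    return F.cross_entropy(sim, pos)

The cross-entropy over the 2N − 1 remaining candidates is exactly the negative log-softmax of the positive in Eq. (1), averaged over both orderings (i, j) and (j, i); the diagonal is masked so that an example is never contrasted with itself.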
We train our model with Cloud TPUs, using 32 to 128 cores depending on the batch size.2 Global BN. Standard ResNets use batch normaliza- tion (Ioffe & Szegedy, 2015). In distributed training with data parallelism, the BN mean and variance are typically aggregated locally per device. In our contrastive learning, as positive pairs are computed in the same device, the model can exploit the local information leakage to improve pre- diction accuracy without improving representations. We ad- dress this issue by aggregating BN mean and variance over all devices during the training. Other approaches include shuffling data examples across devices (He et al., 2019), or replacing BN with layer norm (Hénaff et al., 2019). D A B C (a) Global and local views. (b) Adjacent views. Figure 3. Solid rectangles are images, dashed rectangles are ran- dom crops. By randomly cropping images, we sample contrastive prediction tasks that include global to local view (B → A) or adjacent view (D → C) prediction. # 2.3. Evaluation Protocol Here we lay out the protocol for our empirical studies, which aim to understand different design choices in our framework. Dataset and Metrics. Most of our study for unsupervised pretraining (learning encoder network f without labels) is done using the ImageNet ILSVRC-2012 dataset (Rus- sakovsky et al., 2015). Some additional pretraining experi- ments on CIFAR-10 (Krizhevsky & Hinton, 2009) can be found in Appendix B.9. We also test the pretrained results on a wide range of datasets for transfer learning. To evalu- ate the learned representations, we follow the widely used linear evaluation protocol (Zhang et al., 2016; Oord et al., 2018; Bachman et al., 2019; Kolesnikov et al., 2019), where a linear classifier is trained on top of the frozen base net- work, and test accuracy is used as a proxy for representation quality. Beyond linear evaluation, we also compare against state-of-the-art on semi-supervised and transfer learning. Default setting. Unless otherwise specified, for data aug- mentation we use random crop and resize (with random flip), color distortions, and Gaussian blur (for details, see Appendix A). We use ResNet-50 as the base encoder net- work, and a 2-layer MLP projection head to project the representation to a 128-dimensional latent space. As the loss, we use NT-Xent, optimized using LARS with learning rate of 4.8 (= 0.3 × BatchSize/256) and weight decay of 10−6. We train at batch size 4096 for 100 epochs.3 Fur- thermore, we use linear warmup for the first 10 epochs, and decay the learning rate with the cosine decay schedule without restarts (Loshchilov & Hutter, 2016). # 3. Data Augmentation for Contrastive Representation Learning 2With 128 TPU v3 cores, it takes ∼1.5 hours to train our ResNet-50 with a batch size of 4096 for 100 epochs. Data augmentation defines predictive tasks. While data augmentation has been widely used in both supervised and unsupervised representation learning (Krizhevsky et al., 3Although max performance is not reached in 100 epochs, rea- sonable results are achieved, allowing fair and efficient ablations. A Simple Framework for Contrastive Learning of Visual Representations (a) Original (b) Crop and resize (c) Crop, resize (and flip) (d) Color distort. (drop) (e) Color distort. (jitter) (f) Rotate {90◦, 180◦, 270◦} # (g) Cutout (h) Gaussian noise # (i) Gaussian blur (j) Sobel filtering Figure 4. Illustrations of the studied data augmentation operators. Each augmentation can transform data stochastically with some internal parameters (e.g. 
rotation degree, noise level). Note that we only test these operators in ablation, the augmentation policy used to train our models only includes random crop (with flip and resize), color distortion, and Gaussian blur. (Original image cc-by: Von.grzanka) 2012; Hénaff et al., 2019; Bachman et al., 2019), it has not been considered as a systematic way to define the con- trastive prediction task. Many existing approaches define contrastive prediction tasks by changing the architecture. For example, Hjelm et al. (2018); Bachman et al. (2019) achieve global-to-local view prediction via constraining the receptive field in the network architecture, whereas Oord et al. (2018); Hénaff et al. (2019) achieve neighboring view prediction via a fixed image splitting procedure and a con- text aggregation network. We show that this complexity can be avoided by performing simple random cropping (with resizing) of target images, which creates a family of predic- tive tasks subsuming the above mentioned two, as shown in Figure 3. This simple design choice conveniently decouples the predictive task from other components such as the neural network architecture. Broader contrastive prediction tasks can be defined by extending the family of augmentations and composing them stochastically. Crop Cutout Color Sobel Noise 1st transformation Blur Rotate nN 2 q e oho ov oo 2nd transformation Figure 5. Linear evaluation (ImageNet top-1 accuracy) under in- dividual or composition of data augmentations, applied only to one branch. For all columns but the last, diagonal entries corre- spond to single transformation, and off-diagonals correspond to composition of two transformations (applied sequentially). The last column reflects the average over the row. # 3.1. Composition of data augmentation operations is crucial for learning good representations To systematically study the impact of data augmentation, we consider several common augmentations here. One type of augmentation involves spatial/geometric transformation of data, such as cropping and resizing (with horizontal flipping), rotation (Gidaris et al., 2018) and cutout (De- Vries & Taylor, 2017). The other type of augmentation involves appearance transformation, such as color distortion (including color dropping, brightness, contrast, saturation, hue) (Howard, 2013; Szegedy et al., 2015), Gaussian blur, and Sobel filtering. Figure 4 visualizes the augmentations that we study in this work. To understand the effects of individual data augmentations and the importance of augmentation composition, we in- vestigate the performance of our framework when applying augmentations individually or in pairs. Since ImageNet images are of different sizes, we always apply crop and re- size images (Krizhevsky et al., 2012; Szegedy et al., 2015), which makes it difficult to study other augmentations in the absence of cropping. To eliminate this confound, we consider an asymmetric data transformation setting for this ablation. Specifically, we always first randomly crop im- ages and resize them to the same resolution, and we then apply the targeted transformation(s) only to one branch of the framework in Figure 2, while leaving the other branch as the identity (i.e. t(xi) = xi). Note that this asymmet- A Simple Framework for Contrastive Learning of Visual Representations AANA a Lila a # (a) Without color distortion. (b) With color distortion. Figure 6. Histograms of pixel intensities (over all channels) for different crops of two different images (i.e. two rows). 
The image for the first row is from Figure 4. All axes have the same range. Methods 1/8 Color distortion strength 1/4 1/2 1 1 (+Blur) AutoAug SimCLR Supervised 59.6 77.0 61.0 76.7 62.6 76.5 63.2 75.7 64.5 75.4 61.1 77.1 Table 1. Top-1 accuracy of unsupervised ResNet-50 using linear evaluation and supervised ResNet-505, under varied color distor- tion strength (see Appendix A) and other data transformations. Strength 1 (+Blur) is our default data augmentation policy. ric data augmentation hurts the performance. Nonetheless, this setup should not substantively change the impact of individual data augmentations or their compositions. Figure 5 shows linear evaluation results under individual and composition of transformations. We observe that no single transformation suffices to learn good representations, even though the model can almost perfectly identify the positive pairs in the contrastive task. When composing aug- mentations, the contrastive prediction task becomes harder, but the quality of representation improves dramatically. Ap- pendix B.2 provides a further study on composing broader set of augmentations. 80 suis RSO(2x) Sup. R50(4x) Sup. R50 se FR50(AX)* 75 ee oe PRBO(2x)* ° “ R50(4x) e @R101(axFt52(2%) 70) seasox © R50(2x) ®R34(4x) ®R152 a R101 265 R18(4x) ° *R50 ec R34(2x) 60 R18(2x) 55 R34 50 | ®R18 ie) 50 100 150 200 250 300 350 400 450 Number of Parameters # (Millions) Figure 7. Linear evaluation of models with varied depth and width. Models in blue dots are ours trained for 100 epochs, models in red stars are ours trained for 1000 epochs, and models in green crosses are supervised ResNets trained for 90 epochs7 (He et al., 2016). shown in Table 1. Stronger color augmentation substan- tially improves the linear evaluation of the learned unsuper- vised models. In this context, AutoAugment (Cubuk et al., 2019), a sophisticated augmentation policy found using su- pervised learning, does not work better than simple cropping + (stronger) color distortion. When training supervised mod- els with the same set of augmentations, we observe that stronger color augmentation does not improve or even hurts their performance. Thus, our experiments show that unsu- pervised contrastive learning benefits from stronger (color) data augmentation than supervised learning. Although pre- vious work has reported that data augmentation is useful for self-supervised learning (Doersch et al., 2015; Bachman et al., 2019; Hénaff et al., 2019; Asano et al., 2019), we show that data augmentation that does not yield accuracy benefits for supervised learning can still help considerably with contrastive learning. One composition of augmentations stands out: random crop- ping and random color distortion. We conjecture that one serious issue when using only random cropping as data augmentation is that most patches from an image share a similar color distribution. Figure 6 shows that color his- tograms alone suffice to distinguish images. Neural nets may exploit this shortcut to solve the predictive task. There- fore, it is critical to compose cropping with color distortion in order to learn generalizable features. # 3.2. Contrastive learning needs stronger data augmentation than supervised learning To further demonstrate the importance of the color aug- mentation, we adjust the strength of color augmentation as # 4. Architectures for Encoder and Head # 4.1. 
Unsupervised contrastive learning benefits (more) from bigger models

Figure 7 shows, perhaps unsurprisingly, that increasing depth and width both improve performance. While similar findings hold for supervised learning (He et al., 2016), we find the gap between supervised models and linear classifiers trained on unsupervised models shrinks as the model size increases, suggesting that unsupervised learning benefits more from bigger models than its supervised counterpart.

5Supervised models are trained for 90 epochs; longer training improves performance of stronger augmentation by ∼ 0.5%.

7Training longer does not improve supervised ResNets (see Appendix B.3).

Name | Negative loss function | Gradient w.r.t. u
NT-Xent | u⊤v+/τ − log Σ_{v∈{v+,v−}} exp(u⊤v/τ) | (1 − exp(u⊤v+/τ)/Z(u))/τ · v+ − Σ_{v−} (exp(u⊤v−/τ)/Z(u))/τ · v−
NT-Logistic | log σ(u⊤v+/τ) + log σ(−u⊤v−/τ) | σ(−u⊤v+/τ)/τ · v+ − σ(u⊤v−/τ)/τ · v−
Margin Triplet | −max(u⊤v− − u⊤v+ + m, 0) | v+ − v− if u⊤v+ − u⊤v− < m else 0

Table 2. Negative loss functions and their gradients, where Z(u) = Σ_{v∈{v+,v−}} exp(u⊤v/τ). All input vectors, i.e. u, v+, v−, are ℓ2 normalized. NT-Xent is an abbreviation for "Normalized Temperature-scaled Cross Entropy". Different loss functions impose different weightings of positive and negative examples.

[Figure 8: bar chart of ImageNet top-1 accuracy for linear, non-linear, and no projection heads across projection output dimensionalities.]

Figure 8. Linear evaluation of representations with different projection heads g(·) and various dimensions of z = g(h). The representation h (before projection) is 2048-dimensional here.

What to predict? | Random guess | Representation h | Representation g(h)
Color vs grayscale | 80 | 99.3 | 97.4
Rotation | 25 | 67.6 | 25.6
Orig. vs corrupted | 50 | 99.5 | 59.6
Orig. vs Sobel filtered | 50 | 96.6 | 56.3

Table 3. Accuracy of training additional MLPs on different representations to predict the transformation applied. Other than crop and color augmentation, we additionally and independently add rotation (one of {0◦, 90◦, 180◦, 270◦}), Gaussian noise, and Sobel filtering transformation during the pretraining for the last three rows. Both h and g(h) are of the same dimensionality, i.e. 2048.

# 4.2. A nonlinear projection head improves the representation quality of the layer before it

We then study the importance of including a projection head, i.e. g(h). Figure 8 shows linear evaluation results using three different architectures for the head: (1) identity mapping; (2) linear projection, as used by several previous approaches (Wu et al., 2018); and (3) the default nonlinear projection with one additional hidden layer (and ReLU activation), similar to Bachman et al. (2019). We observe that a nonlinear projection is better than a linear projection (+3%), and much better than no projection (>10%). When a projection head is used, similar results are observed regardless of output dimension. Furthermore, even when nonlinear projection is used, the layer before the projection head, h, is still much better (>10%) than the layer after, z = g(h), which shows that the hidden layer before the projection head is a better representation than the layer after.

We conjecture that the importance of using the representation before the nonlinear projection is due to loss of information induced by the contrastive loss. In particular, z = g(h) is trained to be invariant to data transformation. Thus, g can remove information that may be useful for the downstream task, such as the color or orientation of objects. By leveraging the nonlinear transformation g(·), more information can be formed and maintained in h.
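For reference, the base encoder f(·) and the nonlinear projection head g(·) studied here can be sketched as below in PyTorch. The 2048-dimensional backbone output and the 128-dimensional projection follow the default setting described earlier; the class name and the use of torchvision's ResNet-50 are our own illustrative assumptions, not the released TensorFlow code.

import torch.nn as nn
from torchvision.models import resnet50

class SimCLRNet(nn.Module):            # hypothetical name, not from the released code
    def __init__(self, proj_dim=128):
        super().__init__()
        backbone = resnet50()                 # randomly initialized ResNet-50
        feat_dim = backbone.fc.in_features    # 2048
        backbone.fc = nn.Identity()           # h = f(x) is the output after average pooling
        self.f = backbone
        # g(h) = W(2) sigma(W(1) h): one hidden layer with ReLU, projecting to 128-d
        self.g = nn.Sequential(
            nn.Linear(feat_dim, feat_dim),
            nn.ReLU(inplace=True),
            nn.Linear(feat_dim, proj_dim),
        )

    def forward(self, x):
        h = self.f(x)   # representation kept for downstream tasks
        z = self.g(h)   # projection used only by the contrastive loss
        return h, z

At evaluation time only h (the backbone output) is used; z is consumed solely by the contrastive loss during pretraining.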
To verify this hypothesis, we conduct experiments that use either h or g(h) to learn to predict the transformation applied during the pretraining. Here we set g(h) = W^(2) σ(W^(1) h), with the same input and output dimensionality (i.e. 2048). Table 3 shows h contains much more information about the transformation applied, while g(h) loses information. Further analysis can be found in Appendix B.4.

# 5. Loss Functions and Batch Size

# 5.1. Normalized cross entropy loss with adjustable temperature works better than alternatives

We compare the NT-Xent loss against other commonly used contrastive loss functions, such as logistic loss (Mikolov et al., 2013), and margin loss (Schroff et al., 2015). Table 2 shows the objective function as well as the gradient to the input of the loss function. Looking at the gradient, we observe 1) ℓ2 normalization (i.e. cosine similarity) along with temperature effectively weights different examples, and an appropriate temperature can help the model learn from hard negatives; and 2) unlike cross-entropy, other objective functions do not weigh the negatives by their relative hardness. As a result, one must apply semi-hard negative mining (Schroff et al., 2015) for these loss functions: instead of computing the gradient over all loss terms, one can compute the gradient using semi-hard negative terms (i.e., those that are within the loss margin and closest in distance, but farther than positive examples).

To make the comparisons fair, we use the same ℓ2 normalization for all loss functions, and we tune the hyperparameters and report their best results.8 Table 4 shows that, while (semi-hard) negative mining helps, the best result is still much worse than our default NT-Xent loss.

8Details can be found in Appendix B.10. For simplicity, we only consider the negatives from one augmentation view.

Margin | NT-Logi. | Margin (sh) | NT-Logi. (sh) | NT-Xent
50.9 | 51.6 | 57.5 | 57.9 | 63.9

Table 4. Linear evaluation (top-1) for models trained with different loss functions. "sh" means using semi-hard negative mining.

ℓ2 norm? | τ | Entropy | Contrastive acc. | Top 1
Yes | 0.05 | 1.0 | 90.5 | 59.7
Yes | 0.1 | 4.5 | 87.8 | 64.4
Yes | 0.5 | 8.2 | 68.2 | 60.7
Yes | 1 | 8.3 | 59.1 | 58.0
No | 10 | 0.5 | 91.7 | 57.2
No | 100 | 0.5 | 92.1 | 57.0

Table 5. Linear evaluation for models trained with different choices of ℓ2 norm and temperature τ for NT-Xent loss. The contrastive distribution is over 4096 examples.

Method | Architecture | Param (M) | Top 1 | Top 5
Methods using ResNet-50:
Local Agg. | ResNet-50 | 24 | 60.2 | -
MoCo | ResNet-50 | 24 | 60.6 | -
PIRL | ResNet-50 | 24 | 63.6 | -
CPC v2 | ResNet-50 | 24 | 63.8 | 85.3
SimCLR (ours) | ResNet-50 | 24 | 69.3 | 89.0
Methods using other architectures:
Rotation | RevNet-50 (4×) | 86 | 55.4 | -
BigBiGAN | RevNet-50 (4×) | 86 | 61.3 | 81.9
AMDIM | Custom-ResNet | 626 | 68.1 | -
CMC | ResNet-50 (2×) | 188 | 68.4 | 88.2
MoCo | ResNet-50 (4×) | 375 | 68.6 | -
CPC v2 | ResNet-161 (∗) | 305 | 71.5 | 90.1
SimCLR (ours) | ResNet-50 (2×) | 94 | 74.2 | 92.0
SimCLR (ours) | ResNet-50 (4×) | 375 | 76.5 | 93.2

Table 6. ImageNet accuracies of linear classifiers trained on representations learned with different self-supervised methods.

[Figure 9: bar chart of linear-evaluation top-1 accuracy for training epochs 100–1000 and batch sizes 256–8192.]

Figure 9. Linear evaluation models (ResNet-50) trained with different batch size and epochs.
Each bar is a single run from scratch.10 Method Architecture Label fraction 10% 1% Top 5 Supervised baseline ResNet-50 48.4 80.4 Methods using other label-propagation: ResNet-50 Pseudo-label ResNet-50 VAT+Entropy Min. UDA (w. RandAug) ResNet-50 FixMatch (w. RandAug) ResNet-50 S4L (Rot+VAT+En. M.) ResNet-50 (4×) 51.6 47.0 - - - 82.4 83.4 88.5 89.1 91.2 Methods using representation learning only: InstDisc BigBiGAN PIRL CPC v2 SimCLR (ours) SimCLR (ours) SimCLR (ours) ResNet-50 RevNet-50 (4×) ResNet-50 ResNet-161(∗) ResNet-50 ResNet-50 (2×) ResNet-50 (4×) 39.2 55.2 57.2 77.9 75.5 83.0 85.8 77.4 78.8 83.8 91.2 87.8 91.2 92.6 We next test the importance of the fj normalization (i.e. cosine similarity vs dot product) and temperature 7 in our default NT-Xent loss. Table 5 shows that without normal- ization and proper temperature scaling, performance is sig- nificantly worse. Without £2 normalization, the contrastive task accuracy is higher, but the resulting representation is worse under linear evaluation. # 5.2. Contrastive learning benefits (more) from larger batch sizes and longer training Table 7. ImageNet accuracy of models trained with few labels. supervised learning (Goyal et al., 2017), in contrastive learn- ing, larger batch sizes provide more negative examples, facilitating convergence (i.e. taking fewer epochs and steps for a given accuracy). Training longer also provides more negative examples, improving the results. In Appendix B.1, results with even longer training steps are provided. Figure 9 shows the impact of batch size when models are trained for different numbers of epochs. We find that, when the number of training epochs is small (e.g. 100 epochs), larger batch sizes have a significant advantage over the smaller ones. With more training steps/epochs, the gaps between different batch sizes decrease or disappear, pro- vided the batches are randomly resampled. In contrast to # 6. Comparison with State-of-the-art In this subsection, similar to Kolesnikov et al. (2019); He et al. (2019), we use ResNet-50 in 3 different hidden layer widths (width multipliers of 1×, 2×, and 4×). For better convergence, our models here are trained for 1000 epochs. 10A linear learning rate scaling is used here. Figure B.1 shows using a square root learning rate scaling can improve performance of ones with small batch sizes. Linear evaluation. Table 6 compares our results with previ- ous approaches (Zhuang et al., 2019; He et al., 2019; Misra & van der Maaten, 2019; Hénaff et al., 2019; Kolesnikov et al., 2019; Donahue & Simonyan, 2019; Bachman et al., A Simple Framework for Contrastive Learning of Visual Representations Food CIFAR10 CIFAR100 Birdsnap SUN397 Cars Aircraft VOC2007 DTD Pets Caltech-101 Flowers Linear evaluation: SimCLR (ours) 76.9 75.2 Supervised 95.3 95.7 80.2 81.2 48.4 56.4 65.9 64.9 60.0 68.8 61.2 63.8 84.2 83.8 78.9 89.2 78.7 92.3 93.9 94.1 95.0 94.2 Fine-tuned: SimCLR (ours) 89.4 88.7 Supervised 88.3 Random init 98.6 98.3 96.0 89.0 88.7 81.9 78.2 77.8 77.0 68.1 67.0 53.7 92.1 91.4 91.3 87.0 88.0 84.8 86.6 86.5 69.4 77.8 92.1 78.8 93.2 64.1 82.7 94.1 94.2 72.5 97.6 98.0 92.5 Table 8. Comparison of transfer learning performance of our self-supervised approach with supervised baselines across 12 natural image classification datasets, for ResNet-50 (4×) models pretrained on ImageNet. Results not significantly worse than the best (p > 0.05, permutation test) are shown in bold. See Appendix B.8 for experimental details and results with standard ResNet-50. 
2019; Tian et al., 2019) in the linear evaluation setting (see Appendix B.6). Table 1 shows more numerical compar- isons among different methods. We are able to use standard networks to obtain substantially better results compared to previous methods that require specifically designed archi- tectures. The best result obtained with our ResNet-50 (4×) can match the supervised pretrained ResNet-50. Semi-supervised learning. We follow Zhai et al. (2019) and sample 1% or 10% of the labeled ILSVRC-12 training datasets in a class-balanced way (∼12.8 and ∼128 images per class respectively). 11 We simply fine-tune the whole base network on the labeled data without regularization (see Appendix B.5). Table 7 shows the comparisons of our results against recent methods (Zhai et al., 2019; Xie et al., 2019; Sohn et al., 2020; Wu et al., 2018; Donahue & Simonyan, 2019; Misra & van der Maaten, 2019; Hénaff et al., 2019). The supervised baseline from (Zhai et al., 2019) is strong due to intensive search of hyper-parameters (including augmentation). Again, our approach significantly improves over state-of-the-art with both 1% and 10% of the labels. Interestingly, fine-tuning our pretrained ResNet-50 (2×, 4×) on full ImageNet are also significantly better then training from scratch (up to 2%, see Appendix B.2). Transfer learning. We evaluate transfer learning perfor- mance across 12 natural image datasets in both linear evalu- ation (fixed feature extractor) and fine-tuning settings. Fol- lowing Kornblith et al. (2019), we perform hyperparameter tuning for each model-dataset combination and select the best hyperparameters on a validation set. Table 8 shows results with the ResNet-50 (4×) model. When fine-tuned, our self-supervised model significantly outperforms the su- pervised baseline on 5 datasets, whereas the supervised baseline is superior on only 2 (i.e. Pets and Flowers). On the remaining 5 datasets, the models are statistically tied. Full experimental details as well as results with the standard ResNet-50 architecture are provided in Appendix B.8. 11The details of sampling and exact subsets can be found in https://www.tensorflow.org/datasets/catalog/imagenet2012_subset. # 7. Related Work The idea of making representations of an image agree with each other under small transformations dates back to Becker & Hinton (1992). We extend it by leveraging recent ad- vances in data augmentation, network architecture and con- trastive loss. A similar consistency idea, but for class label prediction, has been explored in other contexts such as semi- supervised learning (Xie et al., 2019; Berthelot et al., 2019). Handcrafted pretext tasks. The recent renaissance of self- supervised learning began with artificially designed pretext tasks, such as relative patch prediction (Doersch et al., 2015), solving jigsaw puzzles (Noroozi & Favaro, 2016), coloriza- tion (Zhang et al., 2016) and rotation prediction (Gidaris et al., 2018; Chen et al., 2019). Although good results can be obtained with bigger networks and longer train- ing (Kolesnikov et al., 2019), these pretext tasks rely on somewhat ad-hoc heuristics, which limits the generality of learned representations. Contrastive visual representation learning. Dating back to Hadsell et al. (2006), these approaches learn represen- tations by contrasting positive pairs against negative pairs. Along these lines, Dosovitskiy et al. (2014) proposes to treat each instance as a class represented by a feature vector (in a parametric form). Wu et al. 
(2018) proposes to use a memory bank to store the instance class representation vector, an approach adopted and extended in several recent papers (Zhuang et al., 2019; Tian et al., 2019; He et al., 2019; Misra & van der Maaten, 2019). Other work explores the use of in-batch samples for negative sampling instead of a memory bank (Doersch & Zisserman, 2017; Ye et al., 2019; Ji et al., 2019). Recent literature has attempted to relate the success of their methods to maximization of mutual information between latent representations (Oord et al., 2018; Hénaff et al., 2019; Hjelm et al., 2018; Bachman et al., 2019). However, it is not clear if the success of contrastive approaches is determined by the mutual information, or by the specific form of the contrastive loss (Tschannen et al., 2019). A Simple Framework for Contrastive Learning of Visual Representations We note that almost all individual components of our frame- work have appeared in previous work, although the specific instantiations may be different. The superiority of our frame- work relative to previous work is not explained by any single design choice, but by their composition. We provide a com- prehensive comparison of our design choices with those of previous work in Appendix C. Chen, T., Sun, Y., Shi, Y., and Hong, L. On sampling strategies for neural network-based collaborative filtering. In Proceed- ings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 767–776, 2017. Chen, T., Zhai, X., Ritter, M., Lucic, M., and Houlsby, N. Self- supervised gans via auxiliary rotation loss. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 12154–12163, 2019. # 8. Conclusion In this work, we present a simple framework and its in- stantiation for contrastive visual representation learning. We carefully study its components, and show the effects of different design choices. By combining our findings, we improve considerably over previous methods for self- supervised, semi-supervised, and transfer learning. Cimpoi, M., Maji, S., Kokkinos, I., Mohamed, S., and Vedaldi, A. Describing textures in the wild. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 3606– 3613. IEEE, 2014. Cubuk, E. D., Zoph, B., Mane, D., Vasudevan, V., and Le, Q. V. Autoaugment: Learning augmentation strategies from data. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 113–123, 2019. Our approach differs from standard supervised learning on ImageNet only in the choice of data augmentation, the use of a nonlinear head at the end of the network, and the loss func- tion. The strength of this simple framework suggests that, despite a recent surge in interest, self-supervised learning remains undervalued. Improved regularization of convolutional neural networks with cutout. arXiv preprint arXiv:1708.04552, 2017. Doersch, C. and Zisserman, A. Multi-task self-supervised visual learning. In Proceedings of the IEEE International Conference on Computer Vision, pp. 2051–2060, 2017. # Acknowledgements We would like to thank Xiaohua Zhai, Rafael Müller and Yani Ioannou for their feedback on the draft. We are also grateful for general support from Google Research teams in Toronto and elsewhere. Doersch, C., Gupta, A., and Efros, A. A. Unsupervised visual representation learning by context prediction. In Proceedings of the IEEE International Conference on Computer Vision, pp. 1422–1430, 2015. Donahue, J. and Simonyan, K. 
Large scale adversarial representa- tion learning. In Advances in Neural Information Processing Systems, pp. 10541–10551, 2019. # References Asano, Y. M., Rupprecht, C., and Vedaldi, A. A critical analysis of self-supervision, or what we can learn from a single image. arXiv preprint arXiv:1904.13132, 2019. Donahue, J., Jia, Y., Vinyals, O., Hoffman, J., Zhang, N., Tzeng, E., and Darrell, T. Decaf: A deep convolutional activation feature for generic visual recognition. In International Conference on Machine Learning, pp. 647–655, 2014. Bachman, P., Hjelm, R. D., and Buchwalter, W. Learning rep- resentations by maximizing mutual information across views. In Advances in Neural Information Processing Systems, pp. 15509–15519, 2019. Dosovitskiy, A., Springenberg, J. T., Riedmiller, M., and Brox, T. Discriminative unsupervised feature learning with convolutional neural networks. In Advances in neural information processing systems, pp. 766–774, 2014. Becker, S. and Hinton, G. E. Self-organizing neural network that discovers surfaces in random-dot stereograms. Nature, 355 (6356):161–163, 1992. Everingham, M., Van Gool, L., Williams, C. K., Winn, J., and Zisserman, A. The pascal visual object classes (voc) challenge. International Journal of Computer Vision, 88(2):303–338, 2010. Berg, T., Liu, J., Lee, S. W., Alexander, M. L., Jacobs, D. W., and Belhumeur, P. N. Birdsnap: Large-scale fine-grained visual categorization of birds. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 2019–2026. IEEE, 2014. Fei-Fei, L., Fergus, R., and Perona, P. Learning generative visual models from few training examples: An incremental bayesian approach tested on 101 object categories. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshop on Generative-Model Based Vision, 2004. Berthelot, D., Carlini, N., Goodfellow, I., Papernot, N., Oliver, A., and Raffel, C. A. Mixmatch: A holistic approach to semi- supervised learning. In Advances in Neural Information Pro- cessing Systems, pp. 5050–5060, 2019. Gidaris, S., Singh, P., and Komodakis, N. Unsupervised represen- tation learning by predicting image rotations. arXiv preprint arXiv:1803.07728, 2018. Bossard, L., Guillaumin, M., and Van Gool, L. Food-101–mining discriminative components with random forests. In European conference on computer vision, pp. 446–461. Springer, 2014. Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde- Farley, D., Ozair, S., Courville, A., and Bengio, Y. Generative adversarial nets. In Advances in neural information processing systems, pp. 2672–2680, 2014. A Simple Framework for Contrastive Learning of Visual Representations Goyal, P., Dollár, P., Girshick, R., Noordhuis, P., Wesolowski, L., Kyrola, A., Tulloch, A., Jia, Y., and He, K. Accurate, large minibatch sgd: Training imagenet in 1 hour. arXiv preprint arXiv:1706.02677, 2017. Loshchilov, I. and Hutter, F. Sgdr: Stochastic gradient descent with warm restarts. arXiv preprint arXiv:1608.03983, 2016. Maaten, L. v. d. and Hinton, G. Visualizing data using t-sne. Jour- nal of machine learning research, 9(Nov):2579–2605, 2008. Hadsell, R., Chopra, S., and LeCun, Y. Dimensionality reduction by learning an invariant mapping. In 2006 IEEE Computer So- ciety Conference on Computer Vision and Pattern Recognition (CVPR’06), volume 2, pp. 1735–1742. IEEE, 2006. Maji, S., Kannala, J., Rahtu, E., Blaschko, M., and Vedaldi, A. Fine-grained visual classification of aircraft. Technical report, 2013. He, K., Zhang, X., Ren, S., and Sun, J. 
Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770–778, 2016. Mikolov, T., Chen, K., Corrado, G., and Dean, J. Efficient esti- mation of word representations in vector space. arXiv preprint arXiv:1301.3781, 2013. He, K., Fan, H., Wu, Y., Xie, S., and Girshick, R. Momentum contrast for unsupervised visual representation learning. arXiv preprint arXiv:1911.05722, 2019. Misra, I. and van der Maaten, L. ing of pretext-invariant representations. arXiv:1912.01991, 2019. Self-supervised learn- arXiv preprint Hénaff, O. J., Razavi, A., Doersch, C., Eslami, S., and Oord, A. v. d. Data-efficient image recognition with contrastive predictive coding. arXiv preprint arXiv:1905.09272, 2019. Nilsback, M.-E. and Zisserman, A. Automated flower classification over a large number of classes. In Computer Vision, Graphics & Image Processing, 2008. ICVGIP’08. Sixth Indian Conference on, pp. 722–729. IEEE, 2008. Hinton, G. E., Osindero, S., and Teh, Y.-W. A fast learning al- gorithm for deep belief nets. Neural computation, 18(7):1527– 1554, 2006. Noroozi, M. and Favaro, P. Unsupervised learning of visual repre- sentations by solving jigsaw puzzles. In European Conference on Computer Vision, pp. 69–84. Springer, 2016. Hjelm, R. D., Fedorov, A., Lavoie-Marchildon, S., Grewal, K., Bachman, P., Trischler, A., and Bengio, Y. Learning deep repre- sentations by mutual information estimation and maximization. arXiv preprint arXiv:1808.06670, 2018. Oord, A. v. d., Li, Y., and Vinyals, O. Representation learning with contrastive predictive coding. arXiv preprint arXiv:1807.03748, 2018. Howard, A. G. Some improvements on deep convolutional neural network based image classification. arXiv preprint arXiv:1312.5402, 2013. Parkhi, O. M., Vedaldi, A., Zisserman, A., and Jawahar, C. Cats and dogs. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 3498–3505. IEEE, 2012. Ioffe, S. and Szegedy, C. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167, 2015. Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al. Imagenet large scale visual recognition challenge. International journal of computer vision, 115(3):211–252, 2015. Ji, X., Henriques, J. F., and Vedaldi, A. Invariant information clustering for unsupervised image classification and segmenta- tion. In Proceedings of the IEEE International Conference on Computer Vision, pp. 9865–9874, 2019. Schroff, F., Kalenichenko, D., and Philbin, J. Facenet: A unified In Proceed- embedding for face recognition and clustering. ings of the IEEE conference on computer vision and pattern recognition, pp. 815–823, 2015. Kingma, D. P. and Welling, M. Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114, 2013. Kolesnikov, A., Zhai, X., and Beyer, L. Revisiting self-supervised In Proceedings of the IEEE visual representation learning. conference on Computer Vision and Pattern Recognition, pp. 1920–1929, 2019. Simonyan, K. and Zisserman, A. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014. Sohn, K. Improved deep metric learning with multi-class n-pair loss objective. In Advances in neural information processing systems, pp. 1857–1865, 2016. Kornblith, S., Shlens, J., and Le, Q. V. Do better ImageNet models In Proceedings of the IEEE conference on transfer better? 
computer vision and pattern recognition, pp. 2661–2671, 2019. Sohn, K., Berthelot, D., Li, C.-L., Zhang, Z., Carlini, N., Cubuk, E. D., Kurakin, A., Zhang, H., and Raffel, C. Fixmatch: Simplifying semi-supervised learning with consistency and confidence. arXiv preprint arXiv:2001.07685, 2020. Krause, J., Deng, J., Stark, M., and Fei-Fei, L. Collecting a large-scale dataset of fine-grained cars. In Second Workshop on Fine-Grained Visual Categorization, 2013. Krizhevsky, A. and Hinton, G. Learning multiple layers of features from tiny images. Technical report, University of Toronto, 2009. URL https://www.cs.toronto.edu/~kriz/ learning-features-2009-TR.pdf. Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., and Rabinovich, A. Going deeper with convolutions. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 1–9, 2015. Tian, Y., Krishnan, D., and Isola, P. Contrastive multiview coding. arXiv preprint arXiv:1906.05849, 2019. Krizhevsky, A., Sutskever, I., and Hinton, G. E. Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems, pp. 1097–1105, 2012. Tschannen, M., Djolonga, J., Rubenstein, P. K., Gelly, S., and Lucic, M. On mutual information maximization for representation learning. arXiv preprint arXiv:1907.13625, 2019.

# A. Data Augmentation Details

In our default pretraining setting (which is used to train our best models), we utilize random crop (with resize and random flip), random color distortion, and random Gaussian blur as the data augmentations. The details of these three augmentations are provided below.

Random crop and resize to 224x224  We use standard Inception-style random cropping (Szegedy et al., 2015). The crop of random size (uniform from 0.08 to 1.0 in area) of the original size and a random aspect ratio (default: of 3/4 to 4/3) of the original aspect ratio is made.
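Assuming torchvision, this cropping scheme corresponds to the call below (a sketch; the paper's own preprocessing is the TensorFlow routine named in the next paragraph, and the 50% horizontal flip is described there as well).

from torchvision import transforms

# area in [0.08, 1.0] of the original image, aspect ratio in [3/4, 4/3],
# resized to 224x224, then a horizontal flip with 50% probability
random_crop_and_flip = transforms.Compose([
    transforms.RandomResizedCrop(224, scale=(0.08, 1.0), ratio=(3/4, 4/3)),
    transforms.RandomHorizontalFlip(p=0.5),
])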
This crop is finally resized to the original size. This has been imple- mented in Tensorflow as “slim.preprocessing.inception_preprocessing.distorted_bounding_box_crop”, or in Pytorch as “torchvision.transforms.RandomResizedCrop”. Additionally, the random crop (with resize) is always followed by a random horizontal/left-to-right flip with 50% probability. This is helpful but not essential. By removing this from our default augmentation policy, the top-1 linear evaluation drops from 64.5% to 63.4% for our ResNet-50 model trained in 100 epochs. Color distortion Color distortion is composed by color jittering and color dropping. We find stronger color jittering usually helps, so we set a strength parameter. A pseudo-code for color distortion using TensorFlow is as follows. import tensorflow as tf def color_distortion(image, s=1.0): # image is a tensor with value range in [0, 1]. # s is the strength of color distortion. def color_jitter(x): # one can also shuffle the order of following augmentations # each time they are applied. x = tf.image.random_brightness(x, max_delta=0.8*s) x = tf.image.random_contrast(x, lower=1-0.8*s, upper=1+0.8*s) x = tf.image.random_saturation(x, lower=1-0.8*s, upper=1+0.8*s) x = tf.image.random_hue(x, max_delta=0.2*s) x = tf.clip_by_value(x, 0, 1) return x def color_drop(x): image = tf.image.rgb_to_grayscale(image) image = tf.tile(image, [1, 1, 3]) # randomly apply transformation with probability p. image = random_apply(color_jitter, image, p=0.8) image = random_apply(color_drop, image, p=0.2) return image A pseudo-code for color distortion using Pytorch is as follows 12. from torchvision import transforms def get_color_distortion(s=1.0): # s is the strength of color distortion. color_jitter = transforms.ColorJitter(0.8*s, 0.8*s, 0.8*s, 0.2*s) rnd_color_jitter = transforms.RandomApply([color_jitter], p=0.8) rnd_gray = transforms.RandomGrayscale(p=0.2) color_distort = transforms.Compose([ rnd_color_jitter, rnd_gray]) 12Our code and results are based on Tensorflow, the Pytorch code here is a reference. A Simple Framework for Contrastive Learning of Visual Representations return color_distort Gaussian blur This augmentation is in our default policy. We find it helpful, as it improves our ResNet-50 trained for 100 epochs from 63.2% to 64.5%. We blur the image 50% of the time using a Gaussian kernel. We randomly sample σ ∈ [0.1, 2.0], and the kernel size is set to be 10% of the image height/width. # B. Additional Experimental Results # B.1. Batch Size and Training Steps Figure B.1 shows the top-5 accuracy on linear evaluation when trained with different batch sizes and training epochs. The conclusion is very similar to top-1 accuracy shown before, except that the differences between different batch sizes and training steps seems slightly smaller here. In both Figure 9 and Figure B.1, we use a linear scaling of learning rate similar to (Goyal et al., 2017) when training with different batch sizes. Although linear learning rate scaling is popular with SGD/Momentum optimizer, we find a square root learning rate scaling is more desirable with LARS optimizer. With square root learning rate scaling, we have LearningRate = 0.075 × BatchSize, instead of LearningRate = 0.3 × BatchSize/256 in the linear scaling case, but the learning rate is the same under both scaling methods when batch size of 4096 (our default batch size). 
A comparison is presented in Table B.1, where we observe that square root learning rate scaling improves the performance for models trained with small batch sizes and in smaller number of epochs. Batch size \ Epochs 100 200 400 800 256 512 1024 2048 4096 8192 57.5 / 62.8 60.7 / 63.8 62.8 / 64.3 64.0 / 64.7 64.6 / 64.5 64.8 / 64.8 61.9 / 64.3 64.0 / 65.6 65.3 / 66.1 66.1 / 66.8 66.5 / 66.8 66.6 / 67.0 64.7 / 65.7 66.2 / 66.7 67.2 / 67.2 68.1 / 67.9 68.2 / 68.0 67.8 / 68.3 66.6 / 66.5 67.8 / 67.4 68.5 / 68.3 68.9 / 68.8 68.9 / 69.1 69.0 / 69.1 Table B.1. Linear evaluation (top-1) under different batch sizes and training epochs. On the left side of slash sign are models trained with linear LR scaling, and on the right are models trained with square root LR scaling. The result is bolded if it is more than 0.5% better. Square root LR scaling works better for smaller batch size trained in fewer epochs (with LARS optimizer). We also train with larger batch size (up to 32K) and longer (up to 3200 epochs), with the square root learning rate scaling. A shown in Figure B.2, the performance seems to saturate with a batch size of 8192, while training longer can still significantly improve the performance. 90.0 87.5 85.0 82.5 in § 80.0 - Batch size 775 mm 256 mmm 512 75.0 mmm 1024 mmm 2048 72.5 mm 4096 mmm 8192 100 200 300 400 500 600 700 800 900 1000 Training epochs 72 Batch size mmm 256 70 mm 512 mmm 1024 6g = 2048 mmm 4096 da mmm 8192 ey 66 16384 F mm 32768 64 62 Il | 50 100 200 400 800 1600 3200 Training epochs Figure B.1. Linear evaluation (top-5) of ResNet-50 trained with different batch sizes and epochs. Each bar is a single run from scratch. See Figure 9 for top-1 accuracy. Figure B.2. Linear evaluation (top-1) of ResNet-50 trained with different batch sizes and longer epochs. Here a square root learn- ing rate, instead of a linear one, is utilized. A Simple Framework for Contrastive Learning of Visual Representations # B.2. Broader composition of data augmentations further improves performance Our best results in the main text (Table 6 and 7) can be further improved when expanding the default augmentation policy to include the following: (1) Sobel filtering, (2) additional color distortion (equalize, solarize), and (3) motion blur. For linear evaluation protocol, the ResNet-50 models (1×, 2×, 4×) trained with broader data augmentations achieve 70.0 (+0.7), 74.4 (+0.2), 76.8 (+0.3), respectively. Table B.2 shows ImageNet accuracy obtained by fine-tuning the SimCLR model (see Appendix B.5 for the details of fine-tuning procedure). Interestingly, when fine-tuned on full (100%) ImageNet training set, our ResNet (4×) model achieves 80.4% top-1 / 95.4% top-5 13, which is significantly better than that (78.4% top-1 / 94.2% top-5) of training from scratch using the same set of augmentations (i.e. random crop and horizontal flip). For ResNet-50 (2×), fine-tuning our pre-trained ResNet-50 (2×) is also better than training from scratch (77.8% top-1 / 93.9% top-5). There is no improvement from fine-tuning for ResNet-50. Architecture Label fraction 10% Top 1 Top 5 Top 1 Top 5 Top 1 Top 5 1% 100% ResNet-50 ResNet-50 (2×) ResNet-50 (4×) 49.4 59.4 64.1 76.6 83.7 86.6 66.1 71.8 74.8 88.1 91.2 92.8 76.0 79.1 80.4 93.1 94.8 95.4 Table B.2. Classification accuracy obtained by fine-tuning the SimCLR (which is pretrained with broader data augmentations) on 1%, 10% and full of ImageNet. As a reference, our ResNet-50 (4×) trained from scratch on 100% labels achieves 78.4% top-1 / 94.2% top-5. # B.3. 
Effects of Longer Training for Supervised Models Here we perform experiments to see how training steps and stronger data augmentation affect supervised training. We test ResNet-50 and ResNet-50 (4×) under the same set of data augmentations (random crops, color distortion, 50% Gaussian blur) as used in our unsupervised models. Figure B.3 shows the top-1 accuracy. We observe that there is no significant benefit from training supervised models longer on ImageNet. Stronger data augmentation slightly improves the accuracy of ResNet-50 (4×) but does not help on ResNet-50. When stronger data augmentation is applied, ResNet-50 generally requires longer training (e.g. 500 epochs 14) to obtain the optimal result, while ResNet-50 (4×) does not benefit from longer training. Model Training epochs Crop Top 1 +Color +Color+Blur ResNet-50 90 500 1000 76.5 76.2 75.8 75.6 76.5 75.2 75.3 76.7 76.4 ResNet-50 (4×) 90 500 1000 78.4 78.3 77.9 78.9 78.4 78.2 78.7 78.5 78.3 Table B.3. Top-1 accuracy of supervised models trained longer under various data augmentation procedures (from the same set of data augmentations for contrastive learning). # B.4. Understanding The Non-Linear Projection Head Figure B.3 shows the eigenvalue distribution of linear projection matrix W ∈ R2048×2048 used to compute z = W h. This matrix has relatively few large eigenvalues, indicating that it is approximately low-rank. Figure B.4 shows t-SNE (Maaten & Hinton, 2008) visualizations of h and z = g(h) for randomly selected 10 classes by our best ResNet-50 (top-1 linear evaluation 69.3%). Classes represented by h are better separated compared to z. 13It is 80.1% top-1 / 95.2% top-5 without broader augmentations for pretraining SimCLR. 14With AutoAugment (Cubuk et al., 2019), optimal test accuracy can be achieved between 900 and 500 epochs. A Simple Framework for Contrastive Learning of Visual Representations (a) Y-axis in uniform scale. (b) Y-axis in log scale. (a) h (b) z = g(h) Figure B.3. Squared real eigenvalue distribution of linear projection matrix W ∈ R2048×2048 used to compute g(h) = W h. Figure B.4. t-SNE visualizations of hidden vectors of images from a randomly selected 10 classes in the validation set. # B.5. Semi-supervised Learning via Fine-Tuning Fine-tuning Procedure We fine-tune using the Nesterov momentum optimizer with a batch size of 4096, momentum of 0.9, and a learning rate of 0.8 (following LearningRate = 0.05 × BatchSize/256) without warmup. Only random cropping (with random left-to-right flipping and resizing to 224x224) is used for preprocessing. We do not use any regularization (including weight decay). For 1% labeled data we fine-tune for 60 epochs, and for 10% labeled data we fine-tune for 30 epochs. For the inference, we resize the given image to 256x256, and take a single center crop of 224x224. Table B.4 shows the comparisons of top-1 accuracy for different methods for semi-supervised learning. Our models significantly improve state-of-the-art. Method Architecture Label fraction 10% 1% Top 1 Supervised baseline ResNet-50 25.4 56.4 Methods using label-propagation: UDA (w. RandAug) FixMatch (w. RandAug) S4L (Rot+VAT+Ent. Min.) ResNet-50 (4×) ResNet-50 ResNet-50 - - - 68.8 71.5 73.2 Methods using self-supervised representation learning only: CPC v2 SimCLR (ours) SimCLR (ours) SimCLR (ours) ResNet-161(∗) ResNet-50 ResNet-50 (2×) ResNet-50 (4×) 52.7 48.3 58.5 63.0 73.1 65.6 71.7 74.4 Table B.4. ImageNet top-1 accuracy of models trained with few labels. See Table 7 for top-5 accuracy. # B.6. 
Linear Evaluation For linear evaluation, we follow similar procedure as fine-tuning (described in Appendix B.5), except that a larger learning rate of 1.6 (following LearningRate = 0.1 × BatchSize/256) and longer training of 90 epochs. Alternatively, using LARS optimizer with the pretraining hyper-parameters also yield similar results. Furthermore, we find that attaching the linear classifier on top of the base encoder (with a stop_gradient on the input to linear classifier to prevent the label information from influencing the encoder) and train them simultaneously during the pretraining achieves similar performance. # B.7. Correlation Between Linear Evaluation and Fine-Tuning Here we study the correlation between linear evaluation and fine-tuning under different settings of training step and network architecture. Figure B.5 shows linear evaluation versus fine-tuning when training epochs of a ResNet-50 (using batch size of 4096) are varied from 50 to 3200 as in Figure B.2. While they are almost linearly correlated, it seems fine-tuning on a small fraction A Simple Framework for Contrastive Learning of Visual Representations of labels benefits more from training longer. 50.0 66 ° S475 BS qa a c c 64 © 45.0 3 2 D c E425 = 62 3 3 2 40.0 & iz - 37.5 ir 60 35.0 62 64 66 68 70 62 64 66 68 70 Linear eval Linear eval Figure B.5. Top-1 accuracy of models trained in different epochs (from Figure B.2), under linear evaluation and fine-tuning. Figure B.6 shows shows linear evaluation versus fine-tuning for different architectures of choice. 54 Width * 69 Width — e ix “x + e ix 51] @ 2x e@ x bad e 4x 66] @ 4x on 48 Depth ° . ~ Depth + s e 18 Sez] @ 18 bed 345) 2 34 x 3 % 34 * o i c gw 50 a @ 50 3°] e101 56°) & 101 239| @ 152 . 2 @ 152 ° no e is7 36 rs * 33 54 30; @ 51 e 50 55 60 65 70 50 55 60 65 70 Linear eval Linear eval Figure B.6. Top-1 accuracy of different architectures under linear evaluation and fine-tuning. # B.8. Transfer Learning We evaluated the performance of our self-supervised representation for transfer learning in two settings: linear evaluation, where a logistic regression classifier is trained to classify a new dataset based on the self-supervised representation learned on ImageNet, and fine-tuning, where we allow all weights to vary during training. In both cases, we follow the approach described by Kornblith et al. (2019), although our preprocessing differs slightly. # B.8.1. METHODS Datasets We investigated transfer learning performance on the Food-101 dataset (Bossard et al., 2014), CIFAR-10 and CIFAR-100 (Krizhevsky & Hinton, 2009), Birdsnap (Berg et al., 2014), the SUN397 scene dataset (Xiao et al., 2010), Stanford Cars (Krause et al., 2013), FGVC Aircraft (Maji et al., 2013), the PASCAL VOC 2007 classification task (Everingham et al., 2010), the Describable Textures Dataset (DTD) (Cimpoi et al., 2014), Oxford-IIIT Pets (Parkhi et al., 2012), Caltech-101 (Fei-Fei et al., 2004), and Oxford 102 Flowers (Nilsback & Zisserman, 2008). We follow the evaluation protocols in the papers introducing these datasets, i.e., we report top-1 accuracy for Food-101, CIFAR-10, CIFAR-100, Birdsnap, SUN397, Stanford Cars, and DTD; mean per-class accuracy for FGVC Aircraft, Oxford-IIIT Pets, Caltech-101, and Oxford 102 Flowers; and the 11-point mAP metric as defined in Everingham et al. (2010) for PASCAL VOC 2007. For DTD and SUN397, the dataset creators defined multiple train/test splits; we report results only for the first split. 
Caltech-101 defines no train/test split, so we randomly chose 30 images per class and test on the remainder, for fair comparison with previous work (Donahue et al., 2014; Simonyan & Zisserman, 2014). We used the validation sets specified by the dataset creators to select hyperparameters for FGVC Aircraft, PASCAL VOC A Simple Framework for Contrastive Learning of Visual Representations 2007, DTD, and Oxford 102 Flowers. For other datasets, we held out a subset of the training set for validation while performing hyperparameter tuning. After selecting the optimal hyperparameters on the validation set, we retrained the model using the selected parameters using all training and validation images. We report accuracy on the test set. Transfer Learning via a Linear Classifier We trained an ¢-regularized multinomial logistic regression classifier on features extracted from the frozen pretrained network. We used L-BFGS to optimize the softmax cross-entropy objective and we did not apply data augmentation. As preprocessing, all images were resized to 224 pixels along the shorter side using bicubic resampling, after which we took a 224 x 224 center crop. We selected the ¢2 regularization parameter from a range of 45 logarithmically spaced values between 10~° and 10°. Transfer Learning via Fine-Tuning We fine-tuned the entire network using the weights of the pretrained network as initialization. We trained for 20,000 steps at a batch size of 256 using SGD with Nesterov momentum with a momentum parameter of 0.9. We set the momentum parameter for the batch normalization statistics to max(1 − 10/s, 0.9) where s is the number of steps per epoch. As data augmentation during fine-tuning, we performed only random crops with resize and flips; in contrast to pretraining, we did not perform color augmentation or blurring. At test time, we resized images to 256 pixels along the shorter side and took a 224 × 224 center crop. (Additional accuracy improvements may be possible with further optimization of data augmentation, particularly on the CIFAR-10 and CIFAR-100 datasets.) We selected the learning rate and weight decay, with a grid of 7 logarithmically spaced learning rates between 0.0001 and 0.1 and 7 logarithmically spaced values of weight decay between 10−6 and 10−3, as well as no weight decay. We divide these values of weight decay by the learning rate. Training from Random Initialization We trained the network from random initialization using the same procedure as for fine-tuning, but for longer, and with an altered hyperparameter grid. We chose hyperparameters from a grid of 7 logarithmically spaced learning rates between 0.001 and 1.0 and 8 logarithmically spaced values of weight decay between 10−5 and 10−1.5. Importantly, our random initialization baselines are trained for 40,000 steps, which is sufficiently long to achieve near-maximal accuracy, as demonstrated in Figure 8 of Kornblith et al. (2019). On Birdsnap, there are no statistically significant differences among methods, and on Food-101, Stanford Cars, and FGVC Aircraft datasets, fine-tuning provides only a small advantage over training from random initialization. However, on the remaining 8 datasets, pretraining has clear advantages. Supervised Baselines We compare against architecturally identical ResNet models trained on ImageNet with standard cross-entropy loss. These models are trained with the same data augmentation as our self-supervised models (crops, strong color augmentation, and blur) and are also trained for 1000 epochs. 
We found that, although stronger data augmentation and longer training time do not benefit accuracy on ImageNet, these models performed significantly better than a supervised baseline trained for 90 epochs and ordinary data augmentation for linear evaluation on a subset of transfer datasets. The supervised ResNet-50 baseline achieves 76.3% top-1 accuracy on ImageNet, vs. 69.3% for the self-supervised counterpart, while the ResNet-50 (4×) baseline achieves 78.3%, vs. 76.5% for the self-supervised model. Statistical Significance Testing We test for the significance of differences between model with a permutation test. Given predictions of two models, we generate 100,000 samples from the null distribution by randomly exchanging predictions for each example and computing the difference in accuracy after performing this randomization. We then compute the percentage of samples from the null distribution that are more extreme than the observed difference in predictions. For top-1 accuracy, this procedure yields the same result as the exact McNemar test. The assumption of exchangeability under the null hypothesis is also valid for mean per-class accuracy, but not when computing average precision curves. Thus, we perform significance testing for a difference in accuracy on VOC 2007 rather than a difference in mAP. A caveat of this procedure is that it does not consider run-to-run variability when training the models, only variability arising from using a finite sample of images for evaluation. # B.8.2. RESULTS WITH STANDARD RESNET The ResNet-50 (4×) results shown in Table 8 of the text show no clear advantage to the supervised or self-supervised models. With the narrower ResNet-50 architecture, however, supervised learning maintains a clear advantage over self-supervised learning. The supervised ResNet-50 model outperforms the self-supervised model on all datasets with linear evaluation, and most (10 of 12) datasets with fine-tuning. The weaker performance of the ResNet model compared to the ResNet (4×) A Simple Framework for Contrastive Learning of Visual Representations Food CIFAR10 CIFAR100 Birdsnap SUN397 Cars Aircraft VOC2007 DTD Pets Caltech-101 Flowers Linear evaluation: SimCLR (ours) 68.4 72.3 Supervised 90.6 93.6 71.6 78.3 37.4 53.7 58.8 61.9 50.3 66.7 50.3 61.0 80.5 82.8 74.5 83.6 74.9 91.5 90.3 94.5 Fine-tuned: SimCLR (ours) 88.2 88.3 Supervised 86.9 Random init 97.7 97.5 95.9 85.9 86.4 80.2 75.9 75.8 76.1 63.5 64.3 53.6 91.3 92.1 91.4 88.1 86.0 85.9 84.1 85.0 67.3 73.2 89.2 74.6 92.1 64.8 81.5 92.1 93.3 72.6 91.2 94.7 97.0 97.6 92.0 Table B.5. Comparison of transfer learning performance of our self-supervised approach with supervised baselines across 12 natural image datasets, using ImageNet-pretrained ResNet models. See also Figure 8 for results with the ResNet (4×) architecture. model may relate to the accuracy gap between the supervised and self-supervised models on ImageNet. The self-supervised ResNet gets 69.3% top-1 accuracy, 6.8% worse than the supervised model in absolute terms, whereas the self-supervised ResNet (4×) model gets 76.5%, which is only 1.8% worse than the supervised model. # B.9. CIFAR-10 While we focus on using ImageNet as the main dataset for pretraining our unsupervised model, our method also works with other datasets. We demonstrate it by testing on CIFAR-10 as follows. 
Setup As our goal is not to optimize CIFAR-10 performance, but rather to provide further confirmation of our observations on ImageNet, we use the same architecture (ResNet-50) for CIFAR-10 experiments. Because CIFAR-10 images are much smaller than ImageNet images, we replace the first 7x7 Conv of stride 2 with 3x3 Conv of stride 1, and also remove the first max pooling operation. For data augmentation, we use the same Inception crop (flip and resize to 32x32) as ImageNet,15 and color distortion (strength=0.5), leaving out Gaussian blur. We pretrain with learning rate in {0.5, 1.0, 1.5}, temperature in {0.1, 0.5, 1.0}, and batch size in {256, 512, 1024, 2048, 4096}. The rest of the settings (including optimizer, weight decay, etc.) are the same as our ImageNet training. Our best model trained with batch size 1024 can achieve a linear evaluation accuracy of 94.0%, compared to 95.1% from the supervised baseline using the same architecture and batch size. The best self-supervised model that reports linear evaluation result on CIFAR-10 is AMDIM (Bachman et al., 2019), which achieves 91.2% with a model 25× larger than ours. We note that our model can be improved by incorporating extra data augmentations as well as using a more suitable base network. Performance under different batch sizes and training steps Figure B.7 shows the linear evaluation performance under different batch sizes and training steps. The results are consistent with our observations on ImageNet, although the largest batch size of 4096 seems to cause a small degradation in performance on CIFAR-10. Batch size mm 256 mmm 512 mmm 1024 mmm 2048 === 4096 "100 200 300 400 500 600 700 800 900 1000 Training epochs © N © o co i) Top1 co © g co N | as soem co Figure B.7. Linear evaluation of ResNet-50 (with ad- justed stem) trained with different batch size and epochs on CIFAR-10 dataset. Each bar is averaged over 3 runs with different learning rates (0.5, 1.0, 1.5) and temperature τ = 0.5. Error bar denotes standard deviation. 15It is worth noting that, although CIFAR-10 images are much smaller than ImageNet images and image size does not differ among examples, cropping with resizing is still a very effective augmentation for contrastive learning. A Simple Framework for Contrastive Learning of Visual Representations Optimal temperature under different batch sizes Figure B.8 shows the linear evaluation of model trained with three different temperatures under various batch sizes. We find that when training to convergence (e.g. training epochs > 300), the optimal temperature in {0.1, 0.5, 1.0} is 0.5 and seems consistent regardless of the batch sizes. However, the performance with τ = 0.1 improves as batch size increases, which may suggest a small shift of optimal temperature towards 0.1. (a) Training epochs ≤ 300 (b) Training epochs > 300 # fs Figure B.8. Linear evaluation of the model (ResNet-50) trained with three temperatures on different batch sizes on CIFAR-10. Each bar is averaged over multiple runs with different learning rates and total train epochs. Error bar denotes standard deviation. # B.10. Tuning For Other Loss Functions The learning rate that works best for NT-Xent loss may not be a good learning rate for other loss functions. To ensure a fair comparison, we also tune hyperparameters for both margin loss and logistic loss. Specifically, we tune learning rate in {0.01, 0.1, 0.3, 0.5, 1.0} for both loss functions. 
We further tune the margin in {0, 0.4, 0.8, 1.6} for margin loss, the temperature in {0.1, 0.2, 0.5, 1.0} for logistic loss. For simplicity, we only consider the negatives from one augmentation view (instead of both sides), which slightly impairs performance but ensures fair comparison. # C. Further Comparison to Related Methods As we have noted in the main text, most individual components of SimCLR have appeared in previous work, and the improved performance is a result of a combination of these design choices. Table C.1 provides a high-level comparison of the design choices of our method with those of previous methods. Compared with previous work, our design choices are generally simpler. Model Data Augmentation —_ Base Encoder Projection Head —_ Loss Batch Size — Train Epochs CPC v2 = Custom ResNet-161 (modified) | PixelCNN Xent 512# ~200 AMDIM - Fast AutoAug. Custom ResNet Non-linear MLP Xent w/clip,reg 1008 150 CMC Fast AutoAug. ResNet-50 (2x, Lt+tab) Linear layer Xent w/ 2,7 156* 280 MoCo Crop+color ResNet-50 (4x) Linear layer Xent w/ 2,7 256* 200 PIRL Crop+color ResNet-50 (2x) Linear layer Xent w/ 2,7 1024* 800 SimCLR — Crop+color+blur ResNet-50 (4x) Non-linear MLP_— Xent w/ £2, 7 4096 1000 Table C.1. A high-level comparison of design choices and training setup (for best result on ImageNet) for each method. Note that descriptions provided here are general; even when they match for two methods, formulations and implementations may differ (e.g. for color augmentation). Refer to the original papers for more details. #Examples are split into multiple patches, which enlarges the effective batch size. ∗A memory bank is employed. In below, we provide an in-depth comparison of our method to the recently proposed contrastive representation learning methods: • DIM/AMDIM (Hjelm et al., 2018; Bachman et al., 2019) achieve global-to-local/local-to-neighbor prediction by predicting the middle layer of ConvNet. The ConvNet is a ResNet that has bewen modified to place significant constraints on the receptive fields of the network (e.g. replacing many 3x3 Convs with 1x1 Convs). In our framework, we decouple the prediction task and encoder architecture, by random cropping (with resizing) and using the final A Simple Framework for Contrastive Learning of Visual Representations representations of two augmented views for prediction, so we can use standard and more powerful ResNets. Our NT-Xent loss function leverages normalization and temperature to restrict the range of similarity scores, whereas they use a tanh function with regularization. We use a simpler data augmentation policy, while they use FastAutoAugment for their best result. • CPC v1 and v2 (Oord et al., 2018; Hénaff et al., 2019) define the context prediction task using a deterministic strategy to split examples into patches, and a context aggregation network (a PixelCNN) to aggregate these patches. The base encoder network sees only patches, which are considerably smaller than the original image. We decouple the prediction task and the encoder architecture, so we do not require a context aggregation network, and our encoder can look at the images of wider spectrum of resolutions. In addition, we use the NT-Xent loss function, which leverages normalization and temperature, whereas they use an unnormalized cross-entropy-based objective. We use simpler data augmentation. • InstDisc, MoCo, PIRL (Wu et al., 2018; He et al., 2019; Misra & van der Maaten, 2019) generalize the Exemplar approach originally proposed by Dosovitskiy et al. 
(2014) and leverage an explicit memory bank. We do not use a memory bank; we find that, with a larger batch size, in-batch negative example sampling suffices. We also utilize a nonlinear projection head, and use the representation before the projection head. Although we use similar types of augmentations (e.g., random crop and color distortion), we expect the specific parameters to differ. • CMC (Tian et al., 2019) uses a separate network for each view, while we simply use a single network shared across all randomly augmented views. The data augmentation, projection head, and loss function are also different. We use a larger batch size instead of a memory bank. • Whereas Ye et al. (2019) maximize similarity between augmented and unaugmented copies of the same image, we apply data augmentation symmetrically to both branches of our framework (Figure 2). We also apply a nonlinear projection on the output of the base feature network, and use the representation before the projection network, whereas Ye et al. (2019) use the linearly projected final hidden vector as the representation. When training with large batch sizes using multiple accelerators, we use global BN to avoid shortcuts that can greatly decrease representation quality.
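The comparisons above repeatedly refer to the NT-Xent objective (normalized temperature-scaled cross entropy) computed over in-batch negatives. As a point of reference, below is a minimal NumPy sketch of that loss for a batch of 2N projected representations, two augmented views per image; it is a simplified illustration rather than the released SimCLR implementation, and names such as `nt_xent_loss`, `z`, and `temperature`, as well as the pairing convention (rows 2k and 2k+1 are the two views of image k), are our own.

```python
import numpy as np

def nt_xent_loss(z, temperature=0.5):
    """NT-Xent loss for 2N embeddings; rows 2k and 2k+1 are the two augmented
    views of image k. Returns the average loss over all 2N anchors."""
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # l2-normalize the projections
    sim = z @ z.T / temperature                        # pairwise cosine similarities / tau
    n2 = z.shape[0]
    np.fill_diagonal(sim, -np.inf)                     # exclude self-similarity from the denominator
    pos = np.arange(n2) ^ 1                            # index of the positive for each anchor: 2k <-> 2k+1
    log_prob = sim[np.arange(n2), pos] - np.log(np.exp(sim).sum(axis=1))
    return -log_prob.mean()

# toy usage: 4 images -> 8 embeddings of dimension 128
rng = np.random.default_rng(0)
z = rng.normal(size=(8, 128))
print(nt_xent_loss(z, temperature=0.5))
```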
{ "id": "1807.03748" }
2002.04809
Lookahead: A Far-Sighted Alternative of Magnitude-based Pruning
Magnitude-based pruning is one of the simplest methods for pruning neural networks. Despite its simplicity, magnitude-based pruning and its variants demonstrated remarkable performances for pruning modern architectures. Based on the observation that magnitude-based pruning indeed minimizes the Frobenius distortion of a linear operator corresponding to a single layer, we develop a simple pruning method, coined lookahead pruning, by extending the single layer optimization to a multi-layer optimization. Our experimental results demonstrate that the proposed method consistently outperforms magnitude-based pruning on various networks, including VGG and ResNet, particularly in the high-sparsity regime. See https://github.com/alinlab/lookahead_pruning for codes.
http://arxiv.org/pdf/2002.04809
Sejun Park, Jaeho Lee, Sangwoo Mo, Jinwoo Shin
cs.LG, stat.ML
ICLR 2020, camera ready
null
cs.LG
20200212
20200212
0 2 0 2 b e F 2 1 ] G L . s c [ 1 v 9 0 8 4 0 . 2 0 0 2 : v i X r a Published as a conference paper at ICLR 2020 # LOOKAHEAD: A FAR-SIGHTED ALTERNATIVE OF MAGNITUDE-BASED PRUNING Sejun Park∗†, Jaeho Lee∗†‡, Sangwoo Mo† and Jinwoo Shin†‡ † KAIST EE {sejun.park,jaeho-lee,swmo,jinwoos}@kaist.ac.kr # ‡ KAIST AI # ABSTRACT Magnitude-based pruning is one of the simplest methods for pruning neural net- works. Despite its simplicity, magnitude-based pruning and its variants demon- strated remarkable performances for pruning modern architectures. Based on the observation that magnitude-based pruning indeed minimizes the Frobenius distor- tion of a linear operator corresponding to a single layer, we develop a simple prun- ing method, coined lookahead pruning, by extending the single layer optimization to a multi-layer optimization. Our experimental results demonstrate that the pro- posed method consistently outperforms magnitude-based pruning on various net- works, including VGG and ResNet, particularly in the high-sparsity regime. See https://github.com/alinlab/lookahead_pruning for codes. # INTRODUCTION The “magnitude-equals-saliency” approach has been long underlooked as an overly simplistic base- line among all imaginable techniques to eliminate unnecessary weights from over-parametrized neural networks. Since the early works of LeCun et al. (1989); Hassibi & Stork (1993) which provided more theoretically grounded alternatives of magnitude-based pruning (MP) based on sec- ond derivatives of the loss function, a wide range of methods including Bayesian / information- theoretic approaches (Neal, 1996; Louizos et al., 2017; Molchanov et al., 2017; Dai et al., 2018), £,-regularization (Wen et al., 2016; Liu et al., 2017; Louizos et al., 2018), sharing redundant chan- nels (Zhang et al., 2018; Ding et al., 2019), and reinforcement learning approaches (Lin et al., 2017; Bellec et al., 2018; He et al., 2018) have been proposed as more sophisticated alternatives. On the other hand, the capabilities of MP heuristics are gaining attention once more. Combined with minimalistic techniques including iterative pruning (Han et al., 2015) and dynamic reestablishment of connections (Zhu & Gupta, 2017), a recent large-scale study by Gale et al. (2019) claims that MP can achieve a state-of-the-art trade-off between sparsity and accuracy on ResNet-50. The unreason- able effectiveness of magnitude scores often extends beyond the strict domain of network pruning; a recent experiment by Frankle & Carbin (2019) suggests the existence of an automatic subnetwork discovery mechanism underlying the standard gradient-based optimization procedures of deep, over- parametrized neural networks by showing that the MP algorithm finds an efficient trainable subnet- work. These observations constitute a call to revisit the “magnitude-equals-saliency” approach for a better understanding of the deep neural network itself. As an attempt to better understand the nature of MP methods, we study a generalization of magnitude scores under a functional approximation framework; by viewing MP as a relaxed minimization of distortion in layerwise operators introduced by zeroing out parameters, we consider a multi-layer extension of the distortion minimization problem. Minimization of the newly suggested distortion measure, which ‘looks ahead’ the impact of pruning on neighboring layers, gives birth to a novel pruning strategy, coined lookahead pruning (LAP). 
In this paper, we focus on the comparison of the proposed LAP scheme to its MP counterpart. We empirically demonstrate that LAP consistently outperforms MP under various setups, including lin- ear networks, fully-connected networks, and deep convolutional and residual networks. In particular, LAP consistently enables more than ×2 gain in the compression rate of the considered models, with # ∗equal contribution 1 Published as a conference paper at ICLR 2020 (a) MP (b) LAP O O PVP GOGO Figure 1: An illustration of magnitude-based pruning (MP) and lookahead pruning (LAP). MP only considers a single weight while LAP also considers the effects of neighboring edges. increasing benefits under the high-sparsity regime. Apart from its performance, lookahead pruning enjoys additional attractive properties: • Easy-to-use: Like magnitude-based pruning, the proposed LAP is a simple score-based approach agnostic to model and data, which can be implemented by computationally light elementary tensor operations. Unlike most Hessian-based methods, LAP does not rely on the availability of training data except for the retraining phase. It also has no hyper-parameter to tune, in contrast to other sophisticated training-based and optimization-based schemes. • Versatility: As our method simply replaces the “magnitude-as-saliency” criterion with a looka- head alternative, it can be deployed jointly with algorithmic tweaks developed for magnitude- based pruning, such as iterative pruning and retraining (Han et al., 2015) or joint pruning and training with dynamic reconnections (Zhu & Gupta, 2017; Gale et al., 2019). The remainder of this manuscript is structured as follows: In Section 2, we introduce a functional approximation perspective toward MP and motivate LAP and its variants as a generalization of MP for multiple layer setups; in Section 3 we explore the capabilities of LAP and its variants with simple models, then move on to apply LAP to larger-scale models. # 2 LOOKAHEAD: A FAR-SIGHTED LAYER APPROXIMATION We begin by a more formal description of the magnitude-based pruning (MP) algorithm (Han et al., 2015). Given an L-layer neural network associated with weight tensors W1, . . . , WL, the MP al- gorithm removes connections with the smallest absolute weights from each weight tensor until the desired level of sparsity has been achieved. This layerwise procedure is equivalent to finding a mask M whose entries are either 0 or 1, incurring a smallest Frobenius distortion, measured by min W-MOW\,, 1 M:||Mllo=s ! Ilr (1) where © denotes the Hadamard product, || - ||o denotes the entrywise fo-norm, and s is a sparsity constraint imposed by some operational criteria. Aiming to minimize the Frobenius distortion (Eq. (1)), the MP algorithm naturally admits a func- tional approximation interpretation. For the case of a fully-connected layer, the maximal difference between the output from a pruned and an unpruned layer can be bounded as # Wa —(MOW)alle < |W - MO Wlo- |allo < |W -MOW|e- llalle. Namely, the product of the layerwise Frobenius distortion upper bounds the output distortion of the network incurred by pruning weights. Note that this perspective on MP as a worst-case distortion minimization was already made in Dong et al. (2017), which inspired an advent of the layerwise optimal brain surgery (L-OBS) procedure. A similar idea holds for convolutional layers. 
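Because the Frobenius distortion in Eq. (1) decomposes over the entries of W, the minimizing mask under the constraint ‖M‖_0 = s simply keeps the s entries of largest magnitude, which is exactly the magnitude-based pruning rule described above. The following NumPy sketch makes this layerwise criterion explicit; the function name `magnitude_prune` and the per-layer interface are illustrative choices, not the paper's code.

```python
import numpy as np

def magnitude_prune(weight, num_keep):
    """Return a 0/1 mask keeping the `num_keep` largest-magnitude entries of
    `weight`; this minimizes the Frobenius distortion ||W - M * W||_F under
    the constraint ||M||_0 = num_keep (Eq. (1)).  Ties at the threshold may
    keep a few extra entries."""
    flat = np.abs(weight).ravel()
    if num_keep >= flat.size:
        return np.ones_like(weight)
    threshold = np.partition(flat, -num_keep)[-num_keep]   # num_keep-th largest magnitude
    return (np.abs(weight) >= threshold).astype(weight.dtype)

# toy usage: prune a 4x4 layer down to 5 surviving weights
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 4))
M = magnitude_prune(W, num_keep=5)
print(M.sum(), np.linalg.norm(W - M * W))   # number of survivors and the induced distortion
```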
For the case of a two-dimensional convolution with a single input and a single output channel, the corresponding linear operator takes a form of a doubly block circulant matrix constructed from the associated kernel tensor (see, e.g., Goodfellow et al. (2016)). Here, the Frobenius distortion of doubly block circulant matrices can be controlled by the Frobenius distortion of the weight tensor of the convolutional layer.1 1The case of multiple input/output channels or non-circular convolution can be dealt with similarly using channel-wise circulant matrices as a block. We refer the interested readers to Sedghi et al. (2019). 2 (2) Published as a conference paper at ICLR 2020 Algorithm 1 Lookahead Pruning (LAP) 1: Input: Weight tensors Wi,..., W, of a trained network, desired sparsities s),..., 51 : Output: Pruned weight tensors W1,..., Wz : fori=1,..., L do Compute £;(w) according to Eq. (4) for all entry w of W; Set ws, as a s;-th smallest element of {£;(w) : w is an entry of W;} Set M; + 1{W; — ws, > 0} Set W; + M; © W; end for Sry aAMEYDN # 2.1 LOOKAHEAD DISTORTION AS A BLOCK APPROXIMATION ERROR The myopic optimization (Eq. (1)) based on the per-layer Frobenius distortion falls short even in the simplest case of the two-layer linear neural network with one-dimensional output, where we consider predictors taking form Y¥ =u'Wea and try to minimize the Frobenius distortion of u'! W (equivalent to £2 distortion in this case). Here, if u; is extremely large, pruning any nonzero element in the i-th row of W may incur a significant Frobenius distortion. Motivated by this observation, we consider a block approximation analogue of the magnitude-based pruning objective Eq. (1). Consider an L-layer neural network associated with weight tensors W1, . . . , WL, and assume linear activation for simplicity (will be extended to nonlinear cases later in this section). Let J (Wi) denote the Jacobian matrix corresponding to the linear operator charac- terized by Wi. For pruning the i-th layer, we take into account the weight tensors of adjacent layers Wi−1, Wi+1 in addition to the original weight tensor Wi. In particular, we propose to minimize the Frobenius distortion of the operator block J (Wi+1)J (Wi)J (Wi−1), i.e., arin, ITT WIT Wir) — TWiT (Mi ®W)TIWiadle GB) An explicit minimization of the block distortion (Eq. (3)), however, is computationally intractable in general (see Appendix D for a more detailed discussion). To avoid an excessive computational overhead, we propose to use the following score-based pruning algorithm, coined lookahead pruning (LAP), for approximating Eq. (3): For each tensor W;, we prune the weights w with the smallest value of lookahead distortion (in a single step), defined as Li(w) := \|T (Wisi) T (Wi) TF (Wi-1) — F (Wisi) I (Wilw=0)T (Wi-1) lle (4) where W;|,,=0 denotes the tensor whose entries are equal to the entries of W; except for having zeroed out w. We let both Wo and W;,+, to be tensors consisting of ones. In other words, lookahead distortion (Eq. (4)) measures the distortion (in Frobenius norm) induced by pruning w while all other weights remain intact. For three-layer blocks consisting only of fully-connected layers and convolutional layers, Eq. (4) reduces to the following compact formula: for an edge w connected to the j-th input neuron/channel and the k-th output neuron/channel of the i-th layer, where its formal derivation is presented in Appendix E. Li(w) = |w . ial. | (5) Wisals. A] F l. 
where |w| denotes the magnitude of the weight w, W[j, :] denotes the slice of W composed of the weights connected to the j-th output neuron/channel, and W[:, k] denotes the same for the k-th input neuron/channel. In LAP, we compute the lookahead distortion for all weights, and then remove the weights with the smallest distortions in a single step (as done in MP). A formal description of LAP is presented in Algorithm 1. We also note that the running time of LAP is comparable to that of MP (see Appendix G).

LAP on linear networks. To illustrate the benefit of lookahead, we evaluate the performance of MP and LAP on a linear fully-connected network with a single hidden layer of 1,000 nodes, trained on the MNIST image classification dataset. Figs. 2a and 2b depict the test accuracy of models pruned with each method, before and after the retraining step. As can be expected from the discrepancy between the minimization objectives (Eqs. (1) and (3)), networks pruned with LAP outperform networks pruned with MP at every sparsity level in terms of their performance before a retraining phase. Remarkably, we observe that the test accuracy of models pruned with LAP monotonically increases from 91.2% to 92.3% as the sparsity level increases, until the fraction of surviving weights reaches 1.28%. At the same sparsity level, models pruned with MP achieve only 71.9% test accuracy. We also observe that LAP leads MP at every sparsity level even after a retraining phase, with an increasing margin as we consider higher levels of sparsity.

Figure 2: Test accuracy of the pruned linear network under varying levels of sparsity, (a) before and (b) after a retraining phase. MP denotes magnitude-based pruning and LAP denotes lookahead pruning. All reported points are averaged over 5 trials.

Understanding LAP with nonlinear activations. Most neural network models in practice deploy nonlinear activation functions, e.g., rectified linear units (ReLU). Although the lookahead distortion was initially derived using linear activation functions, LAP can also be used for nonlinear networks, as the quantity L_i(w) remains relevant to the original block approximation point of view. This is especially true when the network is severely over-parametrized. To see this, consider the case where one aims to prune a connection in the first layer of a two-layer fully-connected network with ReLU, i.e.,

x ↦ W_2 σ(W_1 x),     (6)

where σ(x) = max{0, x} is applied entrywise. In the over-parametrized scenario, zeroing out a single weight alters the activation pattern of the connected neurons with only negligible probability, which allows one to decouple the probability of activation of each neuron from the act of pruning each connection. This enables us to approximate the root mean square distortion of the network output introduced by pruning w of W_1 by √(p_k) · L_1(w), where k is the index of the output neuron that w is connected to, and p_k denotes the probability of activation of the k-th neuron. In this sense, LAP (Algorithm 1) can be understood as assuming i.i.d. activations of neurons, due to a lack of additional access to training data. In other words, LAP admits a natural extension to the regime where we assume additional access to training data during the pruning phase. This variant, coined LAP-act, will be formally described in Appendix F, with experimental comparisons to another data-dependent baseline, optimal brain damage (OBD) (LeCun et al., 1989).
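To make the compact score of Eq. (5) concrete, the sketch below computes the lookahead score of every weight of a fully-connected layer sitting between two neighboring fully-connected layers and keeps the highest-scoring entries, mirroring the scoring and masking steps of Algorithm 1. It is a simplified illustration for fully-connected layers only (no convolutions, no batch normalization), and the names `lap_scores_fc` and `lap_prune_fc` are ours.

```python
import numpy as np

def lap_scores_fc(w_prev, w_cur, w_next):
    """Lookahead scores (Eq. (5)) for a fully-connected layer w_cur between
    w_prev and w_next.  Entry [k, j] of each weight matrix connects input j to
    output k, so
        score[k, j] = |w_cur[k, j]| * ||w_prev[j, :]||_2 * ||w_next[:, k]||_2.
    For the first/last layer, pass an all-ones matrix (W_0, W_{L+1} in the text)."""
    in_norms = np.linalg.norm(w_prev, axis=1)    # one norm per input neuron j of the current layer
    out_norms = np.linalg.norm(w_next, axis=0)   # one norm per output neuron k of the current layer
    return np.abs(w_cur) * out_norms[:, None] * in_norms[None, :]

def lap_prune_fc(w_prev, w_cur, w_next, num_keep):
    """Keep the num_keep entries of w_cur with the largest lookahead scores."""
    scores = lap_scores_fc(w_prev, w_cur, w_next)
    threshold = np.partition(scores.ravel(), -num_keep)[-num_keep]
    mask = (scores >= threshold).astype(w_cur.dtype)
    return mask * w_cur

# toy usage: a 3-layer linear network with layer sizes 6 -> 5 -> 4 -> 3
rng = np.random.default_rng(0)
W1, W2, W3 = rng.normal(size=(5, 6)), rng.normal(size=(4, 5)), rng.normal(size=(3, 4))
W2_pruned = lap_prune_fc(W1, W2, W3, num_keep=8)
print((W2_pruned != 0).sum())   # 8 surviving weights in the middle layer
```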
Another theoretical justification for using the lookahead distortion (Eq. (5)) with nonlinear activation functions comes from recent discoveries regarding the implicit bias imposed by training via stochastic gradient descent (Du et al., 2018). See Appendix M for a detailed discussion. As will be empirically shown in Section 3.1, LAP is an effective pruning strategy for sigmoid and tanh activations, which, unlike ReLU, are not piecewise linear.

# 2.2 LOOKAHEAD PRUNING WITH BATCH NORMALIZATION

Batch normalization (BN), introduced by Ioffe & Szegedy (2015), normalizes the output of a layer per batch and then scales and shifts it with trainable parameters. From our functional approximation perspective, the presence of batch normalization layers is not an issue for MP, which relies on the magnitudes of weights; batch normalization only affects the distribution of the input to each layer, not the layer itself. On the other hand, as the lookahead distortion (Eq. (3)) characterizes the distortion of a multi-layer block, one must take batch normalization into account when assessing the importance of each connection. The revision of lookahead pruning in the presence of batch normalization is fairly simple. Note that such a normalization process can be expressed as

x ↦ a ⊙ x + b,     (7)
Sequential variants will be marked with a suffix “-seq”. # 3 EXPERIMENTS In this section, we compare the empirical performance of LAP with that of MP. More specifically, we validate the applicability of LAP to nonlinear activation functions in Section 3.1. In Section 3.2, we test LAP variants from Section 2.3. In Section 3.3, we test LAP on VGG (Simonyan & Zisserman, 2015), ResNet (He et al., 2016), and Wide ResNet (WRN, Zagoruyko & Komodakis (2016)). Experiment setup. We consider five neural network architectures: (1) The fully-connected net- work (FCN) under consideration is consist of four hidden layers, each with 500 neurons. (2) The convolutional network (Conv-6) consists of six convolutional layers, followed by a fully-connected classifier with two hidden layers with 256 neurons each; this model is identical to that appearing in the work of Frankle & Carbin (2019) suggested as a scaled-down variant of VGG.2 (3) VGG-19 is used, with an addition of batch normalization layers after each convolutional layers, and a reduced number of fully-connected layers from three to one.3 (4) ResNets of depths {18, 50} are used. (5) WRN of 16 convolutional layers and widening factor 8 (WRN-16-8) is used. All networks used ReLU activation function, except for the experiments in Section 3.1. We mainly consider image clas- sification tasks. In particular, FCN is trained on MNIST dataset (Lecun et al., 1998), Conv-6, VGG, and ResNet are trained on CIFAR-10 dataset (Krizhevsky & Hinton, 2009), and VGG, ResNet, and WRN are trained on Tiny-ImageNet.4 We focus on the one-shot pruning of MP and LAP, i.e., mod- els are trained with a single training-pruning-retraining cycle. All results in this section are averaged over five independent trials. We provide more details on setups in Appendix A. 2Convolutional layers are organized as [64, 64] − MaxPool − [128, 128] − MaxPool − [256, 256]. 3This is a popular configuration of VGG for CIFAR-10 (Liu et al., 2019; Frankle & Carbin, 2019) 4Tiny-ImageNet visual recognition challenge, https://tiny-imagenet.herokuapp.com. 5 Published as a conference paper at ICLR 2020 (a) sigmoid (b) tanh (c) ReLU (d) before retraining Figure 3: Test accuracy of FCN with (a) sigmoid, (b) tanh, (c) ReLU activations; (d) test accuracy of FCN with ReLU activation before retraining, for the MNIST dataset. 3.1 NETWORKS WITH NONLINEAR ACTIVATION FUNCTIONS We first compare the performance of LAP with that of MP on FCN using three different types of activation functions: sigmoid, and tanh, and ReLU. Figs. 3a to 3c depict the performance of models pruned with LAP (Green) and MP (Red) under various levels of sparsity. Although LAP was motivated primarily from linear networks and partially justified for positive- homogenous activation functions such as ReLU, the experimental results show that LAP consis- tently outperforms MP even on networks using sigmoidal activation functions. We remark that LAP outperforms MP by a larger margin as fewer weights survive (less than 1%). Such a pattern will be observed repeatedly in the remaining experiments of this paper. In addition, we also check whether LAP still exhibits better test accuracy before retraining under the usage of nonlinear activation functions, as in the linear network case (Fig. 2b). Fig. 3d illustrates the test accuracy of pruned FCN using ReLU on the MNIST dataset before retraining. 
We observe that the network pruned by LAP continues to perform better than MP in this case; the network pruned by LAP retains the original test accuracy until only 38% of the weights survive, and shows less than 1% performance drop with only 20% of the weights remaining. On the other hand, MP requires 54% and 30% to achieve the same level of performance, respectively. In other words, the models pruned with MP requires about 50% more survived parameters than the models pruned with LAP to achieve a similar level of performance before being retrained using additional training batches. 3.2 EVALUATING LAP VARIANTS Now we evaluate LAP and its variants introduced in Section 2.3 on FCN and Conv-6, each trained on MNIST and CIFAR-10, respectively. Table 1 summarizes the experimental results on FCN and Table 2 summarizes the results on Conv-6. In addition to the baseline comparison with MP, we also compare with random pruning (RP), where the connection to be pruned was decided completely independently. We observe that LAP performs consistently better than MP and RP with similar or smaller variance in any case. In the case of an extreme sparsity, LAP enjoys a significant perfor- mance gain; over 75% gain on FCN and 14% on Conv-6. This performance gain comes from a better training accuracy, instead of a better generalization; see Appendix L for more information. Comparing mono-directional lookahead variants, we observe that LFP performs better than LBP in the low-sparsity regime, while LBP performs better in the high-sparsity regime; in any case, LAP performed better than both methods. Intriguingly, the same pattern appeared in the case of the ordered pruning. Here, LAP-forward can be considered an analogue of LBP in the sense that they both consider layers closer to the input to be more critical. Likewise, LAP-backward can be considered an analogue of LFP. We observe that LAP-forward performs better than LAP-backward in the high-sparsity regime, and vice versa in the low-sparsity regime. Our interpretation is as follows: Whenever the sparsity level is low, carefully curating the input signal is not important due to high redundancies in the natural image signal. This causes a relatively low margin of increment by looking backward in comparison to looking forward. When the sparsity level is high, the input signal is scarce, and the relative importance of preserving the input signal is higher. Finally, we observe that employing forward/backward ordering and sequential methods leads to better performance, especially in the high-sparsity regime. There is no clear benefit of adopting directional methods in the low-sparsity regime. The relative gain in performance with respect to LAP is either marginal or unreliable. 6 Published as a conference paper at ICLR 2020 Table 1: Test error rates of FCN on MNIST. Subscripts denote standard deviations, and bracketed numbers denote relative gains with respect to MP. Unpruned models have 1.98% error rate. 
6.36% 3.21% 1.63% 0.84% 0.43% 0.23% 0.12% MP (baseline) RP 1.75±0.11 2.36±0.13 2.11±0.14 2.72±0.16 2.53±0.09 3.64±0.17 3.32±0.27 17.54±7.07 4.77±0.22 82.48±4.03 19.85±8.67 88.65±0.00 67.62±9.91 88.65±0.00 LFP LBP LAP 1.63±0.08 (-6.41%) 1.75±0.17 (+0.69%) 1.67±0.11 (-4.24%) 1.89±0.11 (-10.60%) 2.04±0.12 (-3.31%) 1.89±0.12 (-10.61%) 2.43±0.10 (-3.95%) 2.61±0.15 (+3.00%) 2.48±0.13 (-2.05%) 3.32±0.13 (-0.12%) 3.62±0.17 (+8.97%) 3.29±0.06 (-1.08%) 4.23±0.38 (-11.40%) 4.19±0.31 (-12.23%) 3.93±0.26 (-17.72%) 9.59±1.70 (-51.70%) 9.09±1.41 (-54.21%) 6.72±0.44 (-66.15%) 50.11±12.99 (-25.91%) 28.51±14.85 (-57.84%) 16.45±5.61 (-75.68%) LAP-forward LAP-backward 1.60±0.08 (-8.25%) 1.63±0.11 (-6.64%) 1.93±0.15 (-8.43%) 1.88±0.07 (-10.80%) 2.51±0.11 (-0.95%) 2.35±0.02 (-7.03%) 3.56±0.19 (+7.03%) 3.12±0.08 (-6.08%) 4.47±0.20 (-6.41%) 3.87±0.18 (-19.02%) 6.58±0.33 (-66.81%) 5.62±0.17 (-71.71%) 12.00±0.73 (-82.26%) 13.00±3.30 (-80.78%) LAP-forward-seq LAP-backward-seq 1.68±0.11 (-3.66%) 1.57±0.08 (-10.08%) 1.92±0.10 (-9.09%) 1.84±0.10 (-12.41%) 2.49±0.14 (-1.42%) 2.20±0.10 (-13.27%) 3.39±0.24 (+1.93%) 3.13±0.16 (-5.90%) 4.21±0.06 (-11.86%) 3.62±0.14 (-24.13%) 6.20±0.32 (-68.73%) 5.42±0.27 (-72.71%) 10.98±1.03 (-83.76%) 11.92±4.61 (-82.36%) Table 2: Test error rates of Conv-6 on CIFAR-10. Subscripts denote standard deviations, and brack- eted numbers denote relative gains with respect to MP. Unpruned models have 11.97% error rate. 10.62% 8.86% 7.39% 6.18% 5.17% 4.32% 3.62% MP (baseline) RP 11.86±0.33 26.85±1.23 12.20±0.21 29.72±1.13 13.30±0.30 32.98±1.10 15.81±0.59 35.92±1.08 20.19±2.35 39.13±1.05 24.43±1.48 41.20±1.19 28.60±2.10 43.60±0.82 LFP LBP LAP 11.81±0.35 (-0.39%) 12.08±0.17 (+1.84%) 11.76±0.24 (-0.83%) 12.18±0.23 (-0.20%) 12.34±0.36 (-1.15%) 12.16±0.27 (-0.34%) 13.27±0.44 (-0.26%) 13.26±0.16 (-0.33%) 13.05±0.14 (-1.86%) 15.04±0.43 (-4.87%) 14.93±0.85 (-5.57%) 14.39±0.44 (-8.99%) 18.50±0.80 (-8.37%) 18.11±1.27 (-10.31%) 17.10±1.26 (-15.30%) 22.86±1.66 (-6.40%) 22.57±0.94 (-7.59%) 21.24±1.16 (-13.04%) 26.65±1.33 (-6.83%) 26.34±1.60 (-7.91%) 24.52±1.11 (-14.29%) LAP-forward LAP-backward 11.82±0.16 (-0.33%) 11.82±0.25 (-0.32%) 12.35±0.34 (+1.24%) 12.29±0.06 (+0.68%) 13.09±0.36 (-1.62%) 12.93±0.38 (-2.78%) 14.42±0.45 (-8.79%) 14.55±0.58 (-7.98%) 17.05±1.30 (-15.57%) 17.00±0.84 (-15.78%) 20.28±1.40 (-16.98%) 20.00±0.82 (-18.11%) 22.80±0.51 (-20.30%) 23.37±1.16 (-18.30%) LAP-forward-seq LAP-backward-seq 12.01±0.17 (+1.28%) 11.81±0.16 (-0.39%) 12.47±0.37 (+2.21%) 12.35±0.26 (+1.25%) 13.19±0.19 (-0.81%) 13.25±0.21 (-0.41%) 14.12±0.28 (-10.70%) 14.17±0.44 (-10.37%) 16.73±0.95 (-17.13%) 16.99±0.97 (-15.87%) 19.63±1.81 (-19.62%) 19.94±1.02 (-18.38%) 22.44±1.31 (-21.54%) 23.15±1.12 (-19.08%) 3.3 DEEPER NETWORKS: VGG, RESNET, AND WRN We also compare empirical performances of MP with LAP on deeper networks. We trained VGG-19 and ResNet-18 on CIFAR-10 (Tables 3 and 4), and VGG-19, ResNet-50, and WRN-16-8 on Tiny- ImageNet (Tables 5 to 7). For models trained on CIFAR-10, we also test LAP-forward to verify the observation that it outperforms LAP in the high-sparsity regime on such deeper models. We also report additional experimental results on VGG-{11, 16} trained on CIFAR-10 in Appendix B. For models trained on Tiny-ImageNet, top-1 error rates are reported in Appendix C. From Tables 3 to 7, we make the following two observations: First, as in Section 3.2, the models pruned with LAP consistently achieve a higher or similar level of accuracy compared to models pruned with MP, at all sparsity levels. 
In particular, test accuracies tend to decay at a much slower rate with LAP. In Table 3, for instance, we observe that the models pruned by LAP retain test accuracies of 70∼80% even with less than 2% of weights remaining. In contrast, the performance of models pruned with MP falls drastically, to below 30% accuracy. This observation is consistent on both CIFAR-10 and Tiny-ImageNet datasets. Second, the advantages of considering an ordered pruning method (LAP-forward) over LAP is lim- ited. While we observe from Table 3 that LAP-forward outperforms both MP and LAP in the high- sparsity regime, the gain is marginal considering standard deviations. LAP-forward is consistently worse than LAP (by at most 1% in absolute scale) in the low-sparsity regime. 7 Published as a conference paper at ICLR 2020 Table 3: Test error rates of VGG-19 on CIFAR-10. Subscripts denote standard deviations, and bracketed numbers denote relative gains with respect to MP. Unpruned models have 9.02% error rate. 12.09% 8.74% 6.31% 4.56% 3.30% 2.38% 1.72% 1.24% MP (baseline) 8.99±0.12 9.90±0.09 11.43±0.24 15.62±1.68 29.10±8.78 40.27±11.51 63.27±11.91 77.90±7.94 LAP LAP-forward 8.89±0.14 (-1.07%) 9.63±0.25 (+7.16%) 9.51±0.22 (-3.96%) 10.31±0.23 (+4.12%) 10.56±0.28 (-7.63%) 11.10±0.22 (-2.89%) 12.11±0.44 (-22.48%) 12.24±0.33 (-21.66%) 13.64±0.77 (-53.13%) 13.54±0.28 (-53.46%) 16.38±1.47 (-59.31%) 16.03±0.46 (-60.18%) 20.88±1.71 (-67.00%) 19.33±1.14 (-69.44%) 22.82±0.81 (-70.71%) 21.59±0.32 (-72.29%) Table 4: Test error rates of ResNet-18 on CIFAR-10. Subscripts denote standard deviations, and bracketed numbers denote relative gains with respect to MP. Unpruned models have 8.68% error rate. 10.30% 6.33% 3.89% 2.40% 1.48% 0.92% 0.57% 0.36% MP (baseline) 8.18±0.33 8.74±0.15 9.82±0.18 11.28±0.30 14.31±0.18 18.56±0.36 22.93±0.93 26.77±1.04 LAP LAP-forward 8.09±0.10 (-1.08%) 8.19±0.15 (+0.12%) 8.97±0.22 (+2.59%) 9.17±0.07 (+4.85%) 9.74±0.15 (-0.81%) 10.32±0.27 (+5.09%) 11.35±0.20 (+0.64%) 12.38±0.30 (+9.79%) 13.73±0.24 (-4.08%) 15.31±0.62 (+6.96%) 16.29±0.29 (-12.23%) 18.56±0.88 (-0.02%) 20.22±0.53 (-11.82%) 21.09±0.53 (-8.04%) 22.45±0.64 (-15.82%) 23.89±0.46 (-10.44%) Table 5: Top-5 test error rates of VGG-19 on Tiny-ImageNet. Subscripts denote standard deviations, and bracketed numbers denote relative gains with respect to MP. Unpruned models have 36.89% error rate. Top-1 test error rates are presented in Table 10. 12.16% 10.34% 8.80% 7.48% 6.36% 5.41% 4.61% 3.92% MP (baseline) 36.40±1.31 37.37±1.08 38.40±1.30 40.23±1.26 42.68±1.97 45.83±2.76 49.79±2.67 56.15±5.14 LAP LAP-forward 36.01±1.31 (-1.07%) 36.98±1.04 (+1.58%) 37.03±0.90 (-0.90%) 37.35±0.90 (-0.04%) 38.20±1.61 (-0.52%) 38.49±1.10 (+0.24%) 39.36±1.30 (-2.16%) 39.57±0.97 (-1.65%) 40.95±1.46 (-4.05%) 40.94±1.49 (-4.06%) 43.14±1.33 (-5.87%) 43.30±1.57 (-5.53%) 45.29±1.80 (-9.02%) 45.76±1.37 (-8.08%) 48.34±0.30 (-13.92%) 48.95±1.70 (-12.84%) Table 6: Top-5 test error rates of ResNet-50 on Tiny-ImageNet. Subscripts denote standard de- viations, and bracketed numbers denote relative gains with respect to MP. Unpruned models have 23.19% error rate. Top-1 test error rates are presented in Table 11. 
6.52% 4.74% 3.45% 2.51% 1.83% 1.34% 0.98% 0.72% 23.88±0.27 24.99±0.34 26.84±0.39 29.54±0.58 34.04±0.48 40.19±0.36 45.13±0.57 59.18±16.31 23.64±0.40 (-1.00%) 24.26±0.48 (+1.57%) 24.91±0.25 (-0.34%) 24.92±0.41 (-0.30%) 26.52±0.38 (-1.17%) 27.66±0.55 (+3.08%) 28.84±0.43 (-2.38%) 30.93±0.81 (+4.71%) 33.71±0.58 (-0.98%) 35.90±1.24 (+5.46%) 39.07±0.45 (-2.79%) 39.99±0.58 (-0.48%) 43.05±0.97 (-4.61%) 43.42±0.52 (-3.79%) 46.16±1.04 (-22.00%) 45.45±0.78 (-23.19%) Table 7: Top-5 test error rates of WRN-16-8 on Tiny-ImageNet. Subscripts denote standard de- viations, and bracketed numbers denote relative gains with respect to MP. Unpruned models have 25.77% error rate. Top-1 test error rates are presented in Table 12. 12.22% 8.85% 6.41% 4.65% 3.37% 2.45% 1.77% 1.29% MP (baseline) 25.27±0.73 26.79±0.87 28.84±1.04 31.91±0.80 37.01±1.42 42.89±2.43 51.10±2.59 59.73±2.85 LAP LAP-forward 24.99±0.85 (-1.12%) 26.30±0.88 (+4.08%) 26.55±1.45 (-0.87%) 28.52±2.13 (+6.48%) 28.68±1.17 (-0.58%) 30.98±1.39 (+7.42%) 32.22±2.51 (+0.98%) 34.72±1.82 (+8.83%) 35.82±2.06 (-3.22%) 38.41±2.48 (+3.79%) 41.37±3.07 (-3.55%) 42.02±2.46 (-2.02%) 45.43±4.48 (-11.10%) 45.10±1.80 (-11.74%) 51.83±1.91 (-13.22%) 51.92±1.94 (-13.07%) # 4 CONCLUSION In this work, we interpret magnitude-based pruning as a solution to the minimization of the Frobe- nius distortion of a single layer operation incurred by pruning. Based on this framework, we consider the minimization of the Frobenius distortion of multi-layer operation, and propose a novel lookahead pruning (LAP) scheme as a computationally efficient algorithm to solve the optimization. Although LAP was motivated from linear networks, it extends to nonlinear networks which indeed minimizes the root mean square lookahead distortion assuming i.i.d. activations. We empirically show its effec- tiveness on networks with nonlinear activation functions, and test the algorithm on various network architectures including VGG, ResNet and WRN, where LAP consistently performs better than MP. 8 Published as a conference paper at ICLR 2020 Acknowledgments. We thank Seunghyun Lee for providing helpful feedbacks and suggestions in preparing the early version of the manuscript. JL also gratefully acknowledges Jungseul Ok and Phillip M. Long for enlightening discussions about theoretical natures of neural network prun- ing. This research was supported by the Engineering Research Center Program through the Na- tional Research Foundation of Korea (NRF), funded by the Korean Government MSIT (NRF- 2018R1A5A1059921). # REFERENCES S. Arora, N. Cohen, and E. Hazan. On the optimization of deep networks: Implicit acceleration by overparametrization. In Proceedings of the International Conference on Machine Learning, 2018. G. Bellec, D. Kappel, W. Maass, and R. Legenstein. Deep rewiring: training very sparse deep networks. In International Conference on Learning Representations, 2018. B. Dai, C. Zhu, B. Guo, and D. Wipf. Compressing neural networks using the variational information bottleneck. In Proceedings of the International Conference on Machine Learning, 2018. F. Dangel, F. Kunstner, and P. Hennig. BackPACK: Packing more into backprop. In International Conference on Learning Representations, 2020. X. Ding, G. Ding, Y. Guo, and J. Han. Centripetal SGD for pruning very deep convolutional net- works with complicated structure. In IEEE Conference on Computer Vision and Pattern Recog- nition, 2019. Xin Dong, Shangyu Chen, and Sinno Pan. Learning to prune deep neural networks via layer-wise optimal brain surgeon. 
In Advances in Neural Information Processing Systems, 2017. S. S. Du, W. Hu, and J. D. Lee. Algorithmic regularization in learning deep homogeneous models: Layers are automatically balanced. In Advances in Neural Information Processing Systems, 2018. J. Frankle and M. Carbin. The lottery ticket hypothesis: Finding sparse, trainable neural networks. In International Conference on Learning Representations, 2019. T. Gale, E. Elsen, and S. Hooker. The state of sparsity in deep neural networks. arXiv preprint 1902.09574, 2019. X. Glorot and Y. Bengio. Understanding the difficulty of training deep feedforward neural networks. In Proceedings of the International Conference on Artificial Intelligence and Statistics, 2010. I. Goodfellow, Y. Bengio, and A. Courville. Deep Learning. MIT Press, 2016. http://www. deeplearningbook.org. S. Han, J. Pool, J. Tran, and W. J. Dally. Learning both weights and connections for efficient neural networks. In Advances in Neural Information Processing Systems, 2015. B. Hassibi and D. G. Stork. Second order derivatives for network pruning: Optimal brain surgeon. In Advances in Neural Information Processing Systems, 1993. K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In IEEE Conference on Computer Vision and Pattern Recognition, 2016. Y. He, J. Lin, Z. Liu, H. Wang, L. Li, and S. Han. AMC: AutoML for model compression and acceleration on mobile devices. In European Conference on Computer Vision, 2018. S. Ioffe and C. Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In Proceedings of the International Conference on Machine Learning, 2015. D. P. Kingma and J. Ba. Adam: A method for stochastic optimization. In International Conference on Learning Representations, 2015. A. Krizhevsky and G. Hinton. Learning multiple layers of features from tiny images. Technical report, University of Toronto, 2009. 9 Published as a conference paper at ICLR 2020 Y. LeCun, J. S. Denker, and S. A. Solla. Optimal brain damage. In Advances in Neural Information Processing Systems, 1989. Y. Lecun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recog- nition. In Proceedings of the IEEE, 1998. J. Lin, Y. Rao, J. Lu, and J. Zhou. Runtime neural pruning. In Advances in Neural Information Processing Systems, 2017. Z. Liu, J. Li, Z. Shen, G. Huang, S. Yan, and C. Zhang. Learning efficient convolutional networks through network slimming. In IEEE International Conference on Computer Vision, 2017. Z. Liu, M. Sun, T. Zhou, G. Huang, and T. Darrell. Rethinking the value of network pruning. In International Conference on Learning Representations, 2019. C. Louizos, K. Ullrich, and M. Welling. Bayesian compression for deep learning. In Advances in Neural Information Processing Systems, 2017. C. Louizos, M. Welling, and D. P. Kingma. Learning sparse neural networks through l0 regulariza- tion. In International Conference on Learning Representations, 2018. D. Molchanov, A. Ashukha, and D. Vetrov. Variational dropout sparsified deep neural networks. In Proceedings of the International Conference on Machine Learning, 2017. K. G. Murty and S. N. Kabadi. Some NP-complete problems in quadratic and nonlinear program- ming. Mathematical programming, 39(2):117–129, 1987. R. M. Neal. Bayesian learning for neural networks. Springer, 1996. M. Sandler, A. Howard, M. Zhu, A. Zhmoginov, and L. Chen. MobileNetV2: Inverted residuals and linear bottlenecks. 
In IEEE Conference on Computer Vision and Pattern Recognition, 2018. H. Sedghi, V. Gupta, and P. M. Long. The singular values of convolutional layers. In International Conference on Learning Representations, 2019. K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recogni- tion. In International Conference on Learning Representations, 2015. W. Wen, C. Wu, Y. Wang, Y. Chen, and H. Li. Learning structured sparsity in deep networks. In Advances in Neural Information Processing Systems, 2016. J. Ye, X. Lu, Z. Lin, and J. Z. Wang. Rethinking smaller-norm-less-informative assumption in chan- nel pruning of convolutional layers. In International Conference on Learning Representations, 2018. S. Zagoruyko and N. Komodakis. Wide residual networks. In Proceedings of British Machine Vision Conference, 2016. D. Zhang, H. Wang, M. Figueiredo, and L. Balzano. Learning to share: simultaneous parameter ty- ing and sparsification in deep learning. In International Conference on Learning Representations, 2018. M. Zhu and S. Gupta. To prune, or not to prune: exploring the efficacy of pruning for model compression. arXiv preprint arXiv:1710.01878, 2017. 10 Published as a conference paper at ICLR 2020 # A EXPERIMENTAL SETUP Models and datasets. We consider four neural network architectures: (1) The fully-connected network (FCN) under consideration is composed of four hidden layers, each with 500 hidden neu- rons. (2) The convolutional network (Conv-6) consists of six convolutional layers, followed by a fully-connected classifier with two hidden layers with 256 hidden neurons each; this model is iden- tical to that appearing in the work of Frankle & Carbin (2019) suggested as a scaled-down variant of VGG.5 (3) VGGs of depths {11, 16, 19} were used, with an addition of batch normalization layers after each convolutional layers, and a reduced number of fully-connected layers from three to one.6 (4) ResNets with depth {18, 50} are used. (5) Wide ResNets with depth 16 and widening factor 8 is used. All networks are initialized via the method of Glorot & Bengio (2010), except for ResNets and WRN. We use the ReLU activation function except for the experiments in Section 3.1. We fo- cus on image classification tasks. FCN is trained with MNIST dataset (Lecun et al., 1998), Conv-6, VGG-{11, 16, 19} and ResNet-18 are trained with CIFAR-10 dataset (Krizhevsky & Hinton, 2009), and VGG-19, ResNet-50, WRN-16-8 ware trained with Tiny-ImageNet dataset. Optimizers and hyperparameters. We use Adam optimizer (Kingma & Ba, 2015) with batch size 60. We use a learning rate of 1.2 · 10−3 for FCN and 3 · 10−4 for all other models. For FCN, we use [50k, 50k] for the initial training phase and retraining phase. For Conv-6, we use [30k, 20k] steps. For VGG-11 and ResNet-18, we use [35k, 25k] steps. For VGG-16, we use [50k, 35k]. For VGG-19, ResNet-50, and WRN-16-8 we use [60k, 40k]. We do not use any weight decay, learning rate scheduling, or regularization. Sparsity levels. To determine the layerwise pruning ratio, we largely follow the the guidelines of Han et al. (2015); Frankle & Carbin (2019): For integer values of τ , we keep pτ fraction of weights in all convolutional layers and qτ fraction in all fully-connected layers, except for the last layer where we use (1 + q)/2 instead. For FCN, we use (p, q) = (0, 0.5). For Conv-6, VGGs ResNets, and WRN, we use (0.85, 0.8). For ResNet-{18, 50}, we do not prune the first convolutional layer. 
The range of sparsity for reported figures in all tables is decided as follows: we start from τ where test error rate starts falling below that of an unpruned model and report the results at τ, τ + 1, τ + 2, . . . for FCN and Conv-6, τ, τ + 2, τ + 4, . . . for VGGs, ResNet-50, and WRN, and τ, τ + 3, τ + 6, . . . for ResNet-18. # B ADDITIONAL VGG EXPERIMENTS Table 8: Test error rates of VGG-11 on CIFAR-10. Subscripts denote standard deviations, and brack- eted numbers denote relative gains with respect to MP. Unpruned models have 11.51% error rate. 16.74% 12.10% 8.74% 6.32% 4.56% 3.30% 2.38% 1.72% MP (baseline) 11.41±0.24 12.38±0.14 13.54±0.35 16.08±1.13 19.76±1.67 28.12±3.45 45.38±11.69 55.97±15.99 LAP LAP-forward 11.19±0.15 (-1.96%) 11.47±0.30 (+0.56%) 11.79±0.44 (-4.78%) 12.33±0.12 (-0.44%) 12.95±0.14 (-4.39%) 13.15±0.22 (-2.87%) 13.95±0.17 (-13.25%) 13.96±0.25 (-13.18%) 15.59±0.35 (-21.13%) 15.42±0.21 (-21.97%) 20.96±6.02 (-25.47%) 18.22±0.69 (-35.20%) 22.00±1.09 (-51.52%) 21.74±1.59 (-52.10%) 28.96±3.30 (-48.25%) 25.85±1.40 (-53.82%) Table 9: Test error rates of VGG-16 on CIFAR-10. Subscripts denote standard deviations, and bracketed numbers denote relative gains with respect to MP. Unpruned models have 9.33% error rate. 10.28% 7.43% 5.37% 3.88% 2.80% 2.03% 1.46% 1.06% 9.55±0.11 10.78±0.45 13.42±2.19 17.83±3.08 26.61±4.91 48.87±5.85 69.39±11.85 83.47±5.60 9.35±0.18 (-2.05%) 9.45±0.17 (-1.03%) 10.07±0.19 (-6.59%) 10.40±0.20 (-3.49%) 11.52±0.26 (-14.21%) 11.33±0.15 (-15.60%) 12.57±0.34 (-29.50%) 13.09±0.21 (-26.56%) 14.23±0.27 (-46.52%) 14.61±0.25 (-45.08%) 17.01±1.46 (-65.19%) 17.10±0.19 (-65.02%) 25.03±2.08 (-63.92%) 22.39±0.74 (-67.74%) 32.45±12.20 (-61.12%) 24.99±0.49 (-70.06%) 5Convolutional layers are organized as [64, 64] − MaxPool − [128, 128] − MaxPool − [256, 256]. 6This is a popular configuration of VGG for CIFAR-10 (Liu et al., 2019; Frankle & Carbin, 2019) 11 Published as a conference paper at ICLR 2020 # C TOP-1 ERROR RATES FOR TINY-IMAGENET EXPERIMENTS Table 10: Top-1 test error rates of VGG-19 on Tiny-ImageNet. Subscripts denote standard devi- ations, and bracketed numbers denote relative gains with respect to MP. Unpruned models have 64.55% error rate. 12.16% 10.34% 8.80% 7.48% 6.36% 5.41% 4.61% MP (baseline) 63.35±1.44 64.43±1.05 65.44±1.31 67.09±1.04 69.40±1.40 72.36±2.09 75.35±1.75 LAP LAP-forward 63.15±1.52 (-0.31%) 64.22±1.11 (+1.38%) 63.91±1.38 (-0.80%) 64.77±0.96 (+0.53%) 65.56±1.42 (+0.18%) 65.63±1.21 (+0.28%) 66.56±0.93 (-0.80%) 67.03±1.23 (-0.09%) 68.40±1.08 (-1.44%) 68.52±1.39 (-1.26%) 70.45±0.67 (-2.63%) 70.55±1.21 (-2.50%) 72.16±1.62 (-4.24%) 73.13±0.97 (-2.95%) 3.92% 79.98±3.28 75.05±0.29 (-6.17%) 75.71±1.33 (-5.34%) Table 11: Top-1 test error rates of ResNet-50 on Tiny-ImageNet. Subscripts denote standard de- viations, and bracketed numbers denote relative gains with respect to MP. Unpruned models have 47.50% error rate. 6.52% 4.74% 3.45% 2.51% 1.83% 1.34% 0.98% MP (baseline) 48.18±0.39 49.85±0.30 52.28±0.24 55.46±0.57 60.51±0.39 66.60±0.42 70.75±0.33 LAP LAP-forward 48.27±0.13 (+0.20%) 48.69±0.52 (+1.05%) 49.96±0.26 (+0.22%) 50.25±0.26 (+0.79%) 51.92±0.21 (-0.69%) 53.55±0.42 (+2.42%) 54.91±0.45 (-0.99%) 57.59±0.61 (+3.84%) 60.31±0.18 (-0.34%) 62.74±0.87 (+3.69%) 65.46±0.27 (-1.71%) 66.59±0.89 (-0.02%) 69.13±0.91 (-2.29%) 69.55±0.25 (-1.69%) 0.72% 80.02±8.94 71.81±0.84 (-10.26%) 71.49±0.57 (-10.67%) Table 12: Top-1 test error rates of WRN-16-8 on Tiny-ImageNet. 
Subscripts denote standard de- viations, and bracketed numbers denote relative gains with respect to MP. Unpruned models have 51.85% error rate. 12.22% 8.85% 6.41% 4.65% 3.37% 2.45% 1.77% MP (baseline) 50.38±1.00 52.64±0.84 55.23±1.13 58.79±0.81 64.11±1.23 69.22±2.03 75.90±2.03 LAP LAP-forward 49.85±1.19 (-1.04%) 51.86±1.14 (+2.95%) 52.33±1.69 (-0.60%) 54.77±2.37 (+4.05%) 54.96±1.26 (-0.49%) 57.65±1.75 (+4.38%) 59.06±2.40 (+0.46%) 61.84±1.39 (+5.18%) 62.68±1.57 (-2.23%) 65.30±2.16 (+1.85%) 67.82±2.39 (-2.02%) 69.03±2.46 (-0.27%) 71.30±3.65 (-6.06%) 71.75±1.66 (-5.46%) 1.29% 81.83±2.17 76.51±1.54 (-6.50%) 77.00±1.23 (-5.91%) 12 Published as a conference paper at ICLR 2020 # D NP-HARDNESS OF EQ. (3) In this section, we show that the optimization in Eq. (3) is NP-hard by showing the reduction from the following binary quadratic programming which is NP-hard (Murty & Kabadi, 1987): min x∈{0,1}n xT Ax (9) for some symmetric matrix A ∈ Rn×n. Without loss of generality, we assume that the minimum eigenvalue of A (denoted with λ) is negative; if not, Eq. (9) admits a trivial solution x = (0, . . . , 0). Assuming λ < 0, Eq. (9) can be reformulated as: min x∈{0,1}n xT Hx + λ i xi (10) where H = A − λI. Here, one can easily observe that the above optimization can be solved by solving the below optimization for s = 1, . . . , n min a’ He (11) re {0,1}":32, wi=s Finally, we introduce the below equality # «! Hx =a'UAU's = ||VAUT al|; = ||VAUT xl|z = ||VAu'1- «! Hx =a'UAU's (12) = ||VAUT al|; (13) √ = ||VAUT xl|z (14) √ √ = ||VAu'1- VAU' (1-2) ©1)||7 (15) where 1 denotes a vector of ones, U is a matrix consisting of the eigenvectors of H as its column vectors, and A is a diagonal matrix with corresponding (positive) eigenvalues of H as its diagonal elements. The above equality shows that Eq. (11) is a special case of Eq. (3) by choosing W, = VAUT, We = 1,W3 = land M = 1 — z. This completes the reduction from Eq. (9) to Eq. (3). # E DERIVATION OF EQ. (5) In this section, we provide a derivation of Eq. (5) for the fully-connected layers. The convolutional layers can be handled similarly by substituting the multiplications in Eqs. (16) and (17) by the convolutions. The Jacobian matrix of the linear operator correponding to a fully-connected layer is the weight matrix itself, i.e. J (Wi) = Wi. From this, lookahead distortion can be reformulated as Lilw) = | WiiWiW;1 — WisrWijw-oWVi-) (16) Now, we decompose the matrix product W;,W;W;_, in terms of entries of W; as below: Wi+1WiWi−1 = Wi[k, j]Wi+1[:, k]Wi−1[j, :] j,k (17) where Wi[k, j], Wi+1[:, k], and Wi−1[j, :] denote (j, k)-th element of Wi, k-th column of Wi+1, and j-th row of Wi−1, respectively. The contribution of a single entry w := Wi[k, j] to the prod- uct Wi+1WiWi−1 is equivalent to w · Wi+1[:, k]Wi−1[j, :]. Therefore, in terms of the Frobenius distortion, we conclude that Li(w) = ||w »Wisils, k|Wi-aly, Mlle =|w]- ||Wi-al, Ile which completes the derivation of Eq. (5) for fully-connected layers. # : ||Wisil:, All,» 13 (12) Published as a conference paper at ICLR 2020 # F LAP-ACT: IMPROVING LAP USING TRAINING DATA Recall two observations made from the example of two-layer fully connected network with ReLU activation appearing in Section 2.1: LAP is designed to reflect the lack of knowledge about the training data at the pruning phase; once the activation probability of each neuron can be estimated, it is possible to refine LAP to account for this information. In this section, we continue our discussion on the second observation. 
In particular, we study an extension of LAP called lookahead pruning with activation (LAP-act) which prunes the weight with smallest value of L;(w) := |@| - Mab. il. : ant, i (18) Here, W; is a scaled version of W; and @ is the corresponding scaled value of w, defined by Wild Wild = (D> vee) Wiles a9) kelij where Ii,j denotes the set of ReLU indices in the j-th output neuron/channel of i-th layer. For example, Ii,j = {j} for fully connected layers and Ii,j is a set of ReLU indices in the j-th channel for convolutional layers. Also, pk denotes the k-th ReLU’s probability of activation, which can be estimated by passing the training data. We derive LAP-act (Eq. (18)) in Appendix F.1 and perform preliminary empirical validations in Appendix F.2 with using optimal brain damage (OBD) as a baseline. We also evaluate a variant of LAP using Hessian scores of OBD instead of magnitude scores. It turns out that in the small networks (FCN, Conv-6), LAP-act outperforms OBD. F.1 DERIVATION OF LAP-ACT Consider a case where one aims to prune a connection of a network with ReLU, i.e., ars I(Wi)o(F(Wi-1)---o(T(Wi)x) +++), (20) where σ(x) = max{0, x} is applied entrywise. Under the over-parametrized scenario, zeroing out a single weight may alter the activation pattern of connected neurons with only negligible probability, which allows one to decouple the probability of activation of each neuron from the act of pruning each connection. From this observation, we first construct the below random distortion, following the philosophy of the linear lookahead distortion Eq. (4) Li(w) := \|F(Wisa)(F(Wi) — T(Wilw—0))F (Wis) lle (21) where .7(W;) denotes a random matrix where J(W;)[k,:] = gi{k] - 7(W;)[h,:] and g;[k] is a 0-1 random variable corresponding to the activation, i.e., g;[k] = 1 if and only if the k-th output, ie., ReLU, of the i-th layer is activated. However, directly computing the expected distortion with re- spect to the real activation distribution might be computationally expensive. To resolve this issue, we approximate the root mean square lookahead distortion by applying the mean-field approximation to the activation probability of neurons, i.e., all activations are assumed to be independent, as VEomnian Li(w)?] ~ VEnn, (gil) Li (w)?] =: L£;(w) (22) where g = [gi]i, p(g) denotes the empirical activation distribution of all neurons and [T; , p(gi[k]) denotes the mean-field approximation of p(g). Indeed, the lookahead distortion with ReLU non- linearity (Eq. (22)) or three-layer blocks consisting only of the fully-connected layers and the con- volutional layers can be easily computed by using the rescaled weight matrix W;: Veloilkl j=( > Veloilkl = 1) I=0): Wilds] (23) kelij where Ii,j denotes the set of ReLU indices in the j-th output neuron/channel of i-th layer. For example, Ii,j = {j} for fully connected layers and Ii,j is a set of ReLU indices in the j-th channel 14 Published as a conference paper at ICLR 2020 for convolutional layers. Finally, for an edge w connected to the j-th input neuron/channel and the k-th output neuron/channel of the i-th layer, Eq. (22) reduces to L,(w) = |@|- Ww, i, | Wisk, ijl. (24) where t# denotes the rescaled value of w. This completes the derivation of Eq. (18). 
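As a concrete illustration of the data-dependent ingredient of LAP-act, the sketch below (an assumed implementation, not the authors' code) estimates the per-neuron activation probabilities p_k by streaming training batches through the network with forward hooks; these estimates would then be used to form the rescaled weights of Eq. (19).

import torch
import torch.nn as nn

@torch.no_grad()
def estimate_activation_probs(model, loader, device="cpu", max_batches=50):
    # Returns {relu name: tensor of per-neuron/channel activation frequencies p_k}.
    model = model.eval().to(device)
    sums, counts, hooks = {}, {}, []

    def make_hook(name):
        def hook(module, inputs, output):
            fired = (output > 0).float()
            # average over the batch dimension and any spatial dimensions
            dims = [0] + list(range(2, fired.dim()))
            sums[name] = sums.get(name, 0) + fired.mean(dim=dims)
            counts[name] = counts.get(name, 0) + 1
        return hook

    for name, module in model.named_modules():
        if isinstance(module, nn.ReLU):
            hooks.append(module.register_forward_hook(make_hook(name)))

    for i, (inputs, _targets) in enumerate(loader):
        if i >= max_batches:
            break
        model(inputs.to(device))

    for h in hooks:
        h.remove()
    return {name: sums[name] / counts[name] for name in sums}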
F.2 EXPERIMENTS WITH LAP-ACT We compare the performance of three algorithms utilizing training data at the pruning phase: opti- mal brain damage (OBD) which approximates the loss via second order Taylor seris approximation with the Hessian diagonal (LeCun et al., 1989), LAP using OBD instead of weight magnitudes (OBD+LAP), and LAP-act as described in this section. We compare the performances of three al- gorithms under the same experimental setup as in Section 3.2. To compute the Hessian diagonal for OBD and OBD+LAP, we use a recently introduced software package called “BackPACK,” (Dangel et al., 2020), which is the only open-source package supporting an efficient of Hessians, up to our knowledge. Note that the algorithms evaluated in this section are also evaluated for global pruning experiments in Appendix I. The experimental results for FCN and Conv-6 are presented in Tables 13 and 14. Comparing to algorithms relying solely on the model parameters for pruning (MP/LAP in Tables 1 and 2), we observe that OBD performs better in general, especially in the high sparsity regime. This observation is coherent to the findings of LeCun et al. (1989). Intriguingly, however, we observe that applying lookahead critertion to OBD (OBD+LAP) significantly enhances to OBD significantly enhances the performance in the high sparsity regime. We hypothesize that LAP helps capturing a correlation among scores (magnitude or Hessian-based) of adjacent layers. Also, we observe that LAP-act consistently exhibits a better performance compared to OBD. This result is somewhat surprising, in the sense that LAP-act only utilizes (easier-to-estimate) information about activation probabilities of each neuron to correct lookahead distortion. The average running time of OBD, OBD+LAP, and LAP-act is summarized in Table 15. We use Xeon E5-2630v4 2.20GHz for pruning edges, and additionally used a single NVidia GeForce GTX- 1080 for the computation of Hessian diagonals (used for OBD, OBD+LAP) and activation prob- abiility (for LAP-act). We observe that LAP-act runs in a significantly less running time than OBD/OBD+LAP, and the gap widens as the number of parameters and the dimensionality of the dataset increases (from MNIST to CIFAR-10). Table 13: Test error rates of FCN on MNIST. Subscripts denote standard deviations, and bracketed numbers denote relative gains with respect to OBD. Unpruned models achieve 1.98% error rate. 6.36% 3.21% 1.63% 0.84% 0.43% 0.23% 0.12% OBD (baseline) 1.87±0.05 2.07±0.13 2.51±0.10 3.07±0.12 4.08±0.14 5.66±0.39 11.01±1.71 OBD+LAP LAP-act 1.81±0.05 (-3.42%) 1.78±0.07 (-4.60%) 2.18±0.13 (+5.31%) 1.85±0.09 (-10.63%) 2.52±0.14 (+0.48%) 2.21±0.13 (-12.11%) 3.48±0.14 (+13.35%) 2.73±0.04 (-11.13%) 4.16±0.35 (+1.91%) 3.50±0.35 (-14.31%) 5.88±0.51 (+3.81%) 4.74±0.21 (-16.21%) 8.65±0.56 (-21.41%) 7.99±0.19 (-27.48%) Table 14: Test error rates of Conv-6 on CIFAR-10. Subscripts denote standard deviations, and bracketed numbers denote relative gains with respect to OBD. Unpruned models achieve 11.97% error rate. 
10.62% 8.86% 7.39% 6.18% 5.17% 4.32% 3.62% OBD (baseline) 12.10±0.21 12.81±0.61 13.18±0.26 14.28±0.55 15.54±0.40 16.83±0.27 19.14±0.32 OBD+LAP LAP-act 12.51±0.21 (+3.41%) 12.11±0.12 (+0.12%) 13.22±0.48 (+3.20%) 12.72±0.11 (-0.69%) 13.68±0.57 (+2.23%) 12.92±0.48 (-3.47%) 14.31±0.36 (+0.18%) 13.45±0.25 (-5.87%) 15.09±0.36 (-2.90%) 14.86±0.13 (-4.40%) 16.31±0.51 (-3.13%) 16.47±0.36 (-2.13%) 17.29±0.47 (-9.65%) 18.48±0.33 (-3.46%) 15 Published as a conference paper at ICLR 2020 Table 15: Computation time of OBD, OBD+LAP and LAP-act (averaged over 100 trials). FCN Conv-6 OBD (baseline) 11.38 (s) 167.87 (s) OBD+LAP LAP-act 11.61 (s) 6.28 (s) 168.03 (s) 8.95 (s) # weight parameters 1.15M 2.26M # G COMPUTATIONAL COST OF LOOKING AHEAD In this section, we briefly describe how a computation of lookahead distortion Eq. (5) can be done efficiently, and provide experimental comparisons of average computation times for MP and LAP. It turns out that most of the computational load for LAP comes from the sorting procedure, and tensor operations introduce only a minimal overhead. MP comprises of three steps: (1) computing the absolute value of the tensor, (2) sorting the absolute values, and (3) selecting the cut-off threshold and zero-ing out the weights under the threshold. Steps (2) and (3) remain the same in LAP, and typically takes O(n log n) steps (n denotes the number of parameters in a layer). On the other hand, Step (1) is replaced by computing the lookahead distortion Li(w) = [wl > Wiad, Ile Waris elle for each parameter w. Fortunately, this need not be computed separately for each parameter. Indeed, one can perform tensor operations to compute the squared lookahead distortion, which has the same ordering with lookahead distortion. For fully-connected layers with 2-dimensional Jacobians, the squared lookahead distortion for Wj, € R&+2*%,W; € R&*4-1, Wi_, € R&-1*4-2 js Li) = din WS)! © (WP?) 0 (W241), (25) where 1; denotes all-one matrix of size d;_2 x d;; multiplying 1; denotes summing operation along an axis and duplicating summed results into the axis, and ©? denotes the element-wise square oper- ation. The case of convolutional layers can be handled similarly. We note that an implementation of Eq. (25) is very simple. Indeed, the following PyTorch code segment calculates a lookahead score matrix: def lookahead_score(W,W_prev,W_next): W_prev_sq = (W_prev ** 2).sum(dim=1) W_prev_mat = W_prev_sq.view(1,-1).repeat(W.size(0),1) W_next_sq = (W_next ** 2).sum(dim=0) W_next_mat = W_next_sq.view(-1,1).repeat(1,W.size(1)) return (W**2)*W_prev_mat*W_next_mat Combined with modern tensor computation frameworks, computing Eq. (25) does not introduce heavy overhead. To show this, we compare the computation time of MP and LAP for six neural networks in Table 16, where we fixed the layerwise pruning rate to be uniformly 90%. The codes are implemented with PyTorch, and the computations have taken place on 40 CPUs of Intel Xeon E5-2630v4 @ 2.20GHz. All figures are averaged over 100 trials. We make two observations from Table 16. First, the time required for LAP did not exceed 150% of the time required for MP, confirming our claim on the computational benefits of LAP. Second, most of the added computation comes from considering the factors from batch normalization, without which the added computation load is ≈5%. 16 Published as a conference paper at ICLR 2020 Table 16: Computation time of MP and LAP on FCN, Conv-6, VGG-{11,16,19}, ResNet-18. All figures are averaged over 100 independent trials. 
Bracketed numbers denote relative increments. Number of weight parameters denote the number of parameters that are the target of pruning. FCN Conv-6 VGG-11 VGG-16 VGG-19 ResNet-18 MP (baseline) 46.23 (ms) 108.92 (ms) 542.95 (ms) 865.91 (ms) 1188.29 (ms) 641.59 (ms) LAP (w/o batchnorm) LAP 47.73 (ms) (+3.14%) - - 116.74 (ms) (+7.18%) - - 560.60 (ms) (+3.25%) 805.98 (ms) (+48.44%) 912.47 (ms) (+5.28%) 1213.24 (ms) (+40.11%) 1241.55 (ms) (+4.48%) 1653.02 (ms) (+39.19%) 671.61 (ms) (+4.68%) 943.86 (ms) (+47.11%) # weight parameters 1.15M 2.26M 9.23M 14.72M 20.03M 10.99M # H LOOKAHEAD FOR CHANNEL PRUNING In the main text, LAP is compared to MP in the context of unstructured pruning, where we do not impose any structural constraints on the set of connections to be pruned together. On the other hand, the magnitude-based pruning methods are also being used popularly as a baseline for channel pruning (Ye et al., 2018), which falls under the category of structured pruning. MP in channel pruning is typically done by removing channels with smallest aggregated weight magnitudes; this aggregation can be done by either taking ¢;-norm or ¢2-norm of magnitudes. Simi- larly, we can consider channel pruning scheme based on an ¢; or £2 aggregation of LAP distortions, which we will call LAP-¢; and LAP-f2 (as opposed to MP-¢; and MP-f2). We compare the performances of LAP-based channel pruning methods to MP-based channel pruning methods, along with another baseline of random channel pruning (denoted with RP). We test with Conv-6 (Table 17) and VGG-19 (Table 18) networks on CIFAR-10 dataset. All reported figures are averaged over five trials, experimental settings are identical to the unstructure pruning experiments unless noted otherwise. Similar to the case of unstructured pruning, we observe that LAP-based methods consistently out- perform MP-based methods. Comparing ¢; with £2 aggregation, we note that LAP-¢2 performs better than LAP-¢; in both experiments, by a small margin. Among MP-based methods, we do not observe any similar dominance. Table 17: Test error rates of Conv-6 on CIFAR-10 for channel pruning. Subscripts denote standard deviations, and bracketed numbers denote relative gains with respect to the best of MP-¢; and MP- fy. Unpruned models achieve 11.97% error rate. 34.40% 24.01% 16.81% 11.77% 8.24% 5.76% 4.04% 2.82% MP-¢; 12.11+038 12.55+044 13.624044 16.8541.14 20.05+061 23.98+4092 27.75+089 37.56+2.16 MP-¢5 11.97+039 12.6640.24 14.174053 16.69+108 20.09+096 24.61+4194 28.304147 35.18+1.80 RP 12.944041 14.82+0.27 17.57+065 20.19+054 22.50+069 25.86+0.72 30.64+087 38.26+2.78 LAP-€; —12.08+4028 12.5740.26 13.374029 15.46+0.71 18.30+053 21.40+066 24.8841.10 30.43+1.07 (40.87%) (40.16%) (-1.85%) = (-7.42%) (-8.76%) (-10.75%) (-10.37%) (-13.50%) LAP-(2 11.70+4037 12.3140.23 13.704051 15.424062 17.944091 21.3841.24 24.364155 30.5543.04 (-2.21%) (-1.90%) (40.62%) —(-7.62%) (-10.55%) (-10.84%) (-12.23%) —(-13.16%) Table 18: Test error rates of VGG-19 on CIFAR-10 for channel pruning. Subscripts denote standard deviations, and bracketed numbers denote relative gains with respect to the best of MP-¢; and MP- fy. Unpruned models achieve 9.02% error rate. 
34.30% 28.70% 24.01% 20.09% 16.81% 14.06% 11.76% 9.84% 9.25±0.23 9.40±0.23 10.58±0.61 9.81±0.36 9.73±0.52 11.72±1.26 10.12±0.15 10.27±0.18 12.86±0.89 10.77±0.73 10.61±0.74 19.49±12.70 14.28±1.57 12.26±1.79 20.19±2.45 14.53±1.48 13.74±1.96 24.99±6.33 18.84±3.53 17.70±3.46 46.18±18.08 23.71±4.94 33.27±15.72 54.52±16.61 9.05±0.23 (-2.23%) 9.06±0.20 (-2.10%) 9.46±0.25 (-2.75%) 9.42±0.36 (-3.21%) 10.07±0.46 (-0.47%) 9.74±0.37 (-3.77%) 10.53±0.27 (-0.81%) 10.53±0.40 (-0.79%) 10.95±0.19 (-10.73%) 10.74±0.22 (-12.39%) 12.37±0.74 (-9.99%) 11.87±0.33 (-13.61%) 15.50±0.81 (-12.43%) 13.51±0.27 (-23.66%) 16.65±3.28 (-29.77%) 15.67±2.78 (-33.92%) 17 Published as a conference paper at ICLR 2020 # I LOOKAHEAD FOR GLOBAL PRUNING In this section, we present global pruning results for MP, LAP, OBD, OBD+LAP and LAP-act in Table 19 and Table 20. In this methods, we prune a fraction of weights with smallest scores (e.g. weight magnitude, lookahead distortion, Hessian-based scores) among all weights in the whole net- work. The suffix “-normalize” in the tables denotes that the score is normalized by the Frobenius norm of the corresponding layer’s score. For MP, LAP, OBD+LAP and LAP-act, we only report the results for global pruning with normalization, as the normalized versions outperform the unnormal- ized ones. In the case of OBD, whose score is already globally designed, we report the results for both unnormalized and normalized versions. As demonstrated in Section 3.2 for fixed layerwise pruning rates, we observe that LAP and its vari- ants perform better than their global pruning baselines, i.e. MP-normalize and OBD. We also note that LAP-normalize performs better than MP with pre-specified layerwise pruning rates (appeared in Section 3.2), with a larger gap for higher levels of sparsity. Table 19: Test error rates of FCN on MNIST for global pruning. Subscripts denote standard devia- tions, and bracketed numbers denote relative gains with respect to MP-normalize (for data-agnostic algorithms) and OBD-normalize (for data-dependent algorithms), respectively. Unpruned models achieve 1.98% error rate. 6.36% 3.21% 1.63% 0.84% 0.43% 0.23% 0.12% MP-normalize (baseline) LAP-normalize 1.82±0.08 1.71±0.09 (-6.16%) 2.16±0.06 2.07±0.10 (-4.26%) 2.72±0.17 2.69±0.09 (-1.03%) 3.54±0.09 3.42±0.22 (-3.33%) 6.54±0.35 4.15±0.07 (-36.57%) 59.59±16.23 6.68±0.55 (-88.79%) 88.65±0.00 19.18±3.81 (-78.36%) OBD (baseline) OBD-normalize OBD+LAP-normalize LAP-act-normalize 1.71±0.13 1.71±0.09 (-0.12%) 1.84±0.13 (+7.48%) 1.68±0.13 (-1.87%) 1.93±0.13 1.92±0.10 (-0.52%) 2.00±0.13 (+3.73%) 1.80±0.09 (-6.84%) 2.12±0.12 2.22±0.08 (+4.62%) 2.22±0.16 (+4.91%) 2.06±0.10 (-3.02%) 2.82±0.17 2.77±0.25 (-1.84%) 2.93±0.34 (+3.97%) 2.80±0.19 (-0.78%) 3.59±0.31 3.55±0.19 (-1.11%) 3.55±0.27 (-1.22%) 3.50±0.12 (-2.56%) 5.12±0.22 4.99±0.26 (-2.54%) 5.04±0.76 (-1.52%) 4.82±0.27 (-5.90%) 10.52±1.14 11.08±2.73 (+5.36%) 8.33±2.51 (-20.79%) 8.50±1.16 (-19.21%) Table 20: Test error rates of Conv-6 on CIFAR-10 for global pruning. Subscripts denote standard deviations, and bracketed numbers denote relative gains with respect to MP-normalize (for data- agnostic algorithms) and OBD-normalize (for data-dependent algorithms), respectively. Unpruned models achieve 11.97% error rate. 
10.62% 8.86% 7.39% 6.18% 5.17% 4.32% 3.62% MP-normalize (baseline) LAP-normalize 12.42±0.17 11.81±0.32 (-4.91%) 13.14±0.35 12.23±0.25 (-6.87%) 14.17±0.40 12.44±0.22 (-12.19%) 15.39±0.40 13.02±0.12 (-15.42%) 17.57±0.46 13.73±0.16 (-21.86%) 21.04±0.42 14.81±0.34 (-29.61%) 24.40±1.57 15.97±0.30 (-34.54%) OBD (baseline) OBD-normalize OBD+LAP-normalize LAP-act-normalize 12.03±0.64 11.69±0.34 (-2.86%) 12.11±0.32 (+0.68%) 11.92±0.23 (-0.90%) 12.30±0.53 11.93±0.21 (-2.99%) 12.66±0.46 (+2.96%) 12.24±0.05 (-0.49%) 12.64±0.15 12.58±0.08 (-0.47%) 13.36±0.47 (+5.66%) 12.51±0.45 (-1.08%) 13.16±0.23 12.87±0.22 (-2.26%) 13.60±0.33 (+3.30%) 12.89±0.36 (-2.05%) 13.75±0.45 13.62±0.28 (-0.89%) 14.05±0.34 (+2.24%) 13.53±0.41 (-1.54%) 14.70±0.53 14.60±0.24 (-0.67%) 14.98±0.33 (+1.89%) 14.21±0.40 (-3.31%) 16.11±0.50 15.82±0.44 (-1.75%) 15.82±0.39 (-1.80%) 15.42±0.16 (-4.26%) 18 Published as a conference paper at ICLR 2020 # J LAP-ALL: LOOKING AHEAD THE WHOLE NETWORK We also report some experimental results on a variant of lookahead pruning, coined LAP-all, which treats (a linearized version of) the whole network as an operator block. More specifically, one attempts to minimize the Frobenius distortion of the operator block min || Jaiga J (Wi) Sita — Tazis1 J (Mi O Wi) Fi-14\|p; Mi:||Millo=si where Ji+j:i := J (Wi+j)J (Wi+j−1) · · · J (Wi). We test LAP-all on FCN under the same setup as in Section 3.2, and report the results in Table 21. All figures are averaged over five trials. We observe that LAP-all achieves a similar level of performance to LAP, while LAP-all underper- forms under a high-sparsity regime. We suspect that such shortfall originates from the accumulation of error terms incurred by ignoring the effect of activation functions, by which the benefits of look- ing further fades. An in-depth theoretical analysis for the determination of an optimal “sight range” of LAP would be an interesting future direction. Table 21: Test error rates of FCN on MNIST, with LAP-all variant. Subscripts denote standard deviations. Unpruned models achieve 1.98% error rate. 6.36% 3.21% 1.63% 0.84% 0.43% 0.23% 0.12% 1.75± 0.11 2.36± 0.13 2.11± 0.14 2.72± 0.16 2.53± 0.09 3.64± 0.17 3.32± 0.27 17.54± 7.07 4.77± 0.22 82.48± 4.03 19.85± 8.67 88.65± 0.00 67.62± 9.91 88.65± 0.00 1.67± 0.11 1.64± 0.05 1.89± 0.12 2.06± 0.17 2.48± 0.13 2.53± 0.15 3.29± 0.06 3.23± 0.13 3.93± 0.26 4.01± 0.10 6.72± 0.44 6.78± 0.44 16.45± 5.61 25.64± 5.42 # K COMPARISON WITH SMALLER NETWORKS As a sanity check, we compare the performance of large neural networks pruned via MP and LAP to the performance of a small network. In particular, we prune VGG-16, VGG-19, and ResNet- 18 trained on CIFAR-10 dataset, to have a similar number of parameters to MobileNetV2 (Sandler et al., 2018). For training and pruning VGGs and ResNet, we follows the prior setup in Appendix A while we use the same setup for training MobileNetV2 (Adam optimizer with learning rate of 3 · 10−4 with batch size 60, and trained 60k steps). We observe that models pruned via LAP (and MP) exhibit better performance compared to MobileNetV2, even when pruned to have a smaller number of parameters. Table 22: Test error rates of various networks on CIFAR-10. Subscripts denote standard deviations, and bracketed numbers denote relative gains with respect to the unpruned MobileNetV2. 
VGG-16 VGG-19 ResNet-18 MobileNetV2 Unpruned 9.33±0.15 9.02±0.36 8.68±0.21 9.81±0.30 MP LAP 8.92±0.18 (-9.07%) 8.77±0.20 (-10.60%) 9.46±0.25 (-3.57%) 9.30±0.25 (-5.20%) 7.70±0.23 (-21.51%) 7.73±0.29 (-21.20%) - - # weight parameters 2.09M/14.72M 2.06M/20.03M 2.17M/10.99M (19.17%) (14.23%) (10.28%) 2.20M 19 Published as a conference paper at ICLR 2020 # L WHERE IS THE PERFORMANCE GAIN OF LAP COMING FROM? In this section, we briefly discuss where the benefits of the sub-network discovered by LAP comes from; does LAP subnetwork have a better generalizability or expressibility? For this purpose, we look into the generalization gap, i.e., the gap between the training and test accuracies, of the hypoth- esis learned via LAP procedure. Below we present a plot of test accuracies (Fig. 4a) and a plot of generalization gap (Fig. 4b) for FCN trained with MNIST dataset. The plot hints us that the network structure learned by LAP may not necessarily have a smaller generalizability. Remarkably, the gen- eralization gap of the MP-pruned models and the LAP-pruned models are very similar to each other; the benefits of LAP subnetwork compared to MP would be that it can express a better-performing architecture with a network of similar sparsity and generalizability. (a) Test accuracy (b) Generalization gap Figure 4: Test accuracy and generalization gap of FCN trained on MNIST. # M CONNECTIONS TO IMPLICIT BIAS OF SGD Another theoretical justification of using the lookahead distortion (Eq. (5)) for neural networks with nonlinear activation functions comes from recent discoveries regarding the implicit bias imposed by training procedures using stochastic gradient descent. More specifically, Du et al. (2018) proves the following result, generalizing the findings of Arora et al. (2018): For any two neighboring layers of fully-connected neural network using positive homogeneous activation functions, the quantity Wisrl, alla — Wels, alle (26) remains constant for any hidden neuron j over training via gradient flow. In other words, the total outward flow of weights is tied to the inward flow of weights for each neuron. This observation hints at the possibility of a relative undergrowth of weight magnitude of an ‘important’ connection, in the case where the connection shares the same input/output neuron with other ‘important’ connections. From this viewpoint, the multiplicative factors in Eq. (5) take into account the abstract notion of neuronal importance score, assigning significance to connections to the neuron through which more gradient signals have flowed through. Without considering such factors, LAP reduces to the ordinary magnitude-based pruning. 20
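A small numerical check of the conserved quantity in Eq. (26) can be written as follows. This is a sketch under stated assumptions (a bias-free two-layer ReLU network, full-batch MSE loss, and small-step gradient descent standing in for gradient flow), so the quantity is only approximately conserved; all variable names are illustrative.

import torch

torch.manual_seed(0)
W1 = torch.randn(64, 20, requires_grad=True)   # hidden x input, no biases
W2 = torch.randn(1, 64, requires_grad=True)    # output x hidden
x, y = torch.randn(256, 20), torch.randn(256, 1)

def per_neuron_gap(W1, W2):
    # ||W_{i+1}[:, j]||^2 - ||W_i[j, :]||^2 for every hidden neuron j, as in Eq. (26)
    return (W2 ** 2).sum(dim=0) - (W1 ** 2).sum(dim=1)

gap_before = per_neuron_gap(W1, W2).detach().clone()
optimizer = torch.optim.SGD([W1, W2], lr=1e-4)   # small steps approximate gradient flow
for _ in range(1000):
    optimizer.zero_grad()
    loss = ((torch.relu(x @ W1.t()) @ W2.t() - y) ** 2).mean()
    loss.backward()
    optimizer.step()

drift = (per_neuron_gap(W1, W2) - gap_before).abs().max().item()
print("max drift of the Eq. (26) quantity:", drift)   # expected to stay close to zero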
{ "id": "1710.01878" }
2002.05202
GLU Variants Improve Transformer
Gated Linear Units (arXiv:1612.08083) consist of the component-wise product of two linear projections, one of which is first passed through a sigmoid function. Variations on GLU are possible, using different nonlinear (or even linear) functions in place of sigmoid. We test these variants in the feed-forward sublayers of the Transformer (arXiv:1706.03762) sequence-to-sequence model, and find that some of them yield quality improvements over the typically-used ReLU or GELU activations.
http://arxiv.org/pdf/2002.05202
Noam Shazeer
cs.LG, cs.NE, stat.ML
null
null
cs.LG
20200212
20200212
0 2 0 2 b e F 2 1 ] G L . s c [ 1 v 2 0 2 5 0 . 2 0 0 2 : v i X r a # GLU Variants Improve Transformer # Noam Shazeer Google [email protected] December 30, 2021 # Abstract Gated Linear Units [Dauphin et al., 2016] consist of the component-wise product of two linear pro- jections, one of which is first passed through a sigmoid function. Variations on GLU are possible, using different nonlinear (or even linear) functions in place of sigmoid. We test these variants in the feed- forward sublayers of the Transformer [Vaswani et al., 2017] sequence-to-sequence model, and find that some of them yield quality improvements over the typically-used ReLU or GELU activations. 1 # 1 Introduction The Transformer [Vaswani et al., 2017] sequence-to-sequence model alternates between multi-head attention, and what it calls "position-wise feed-forward networks" (FFN). The FFN takes a vector x (the hidden repre- sentation at a particular position in the sequence) and passes it through two learned linear transformations, (represented by the matrices W1 and W2 and bias vectors b1 and b2). A rectified-linear (ReLU) [Glorot et al., 2011] activation function applied between the two linear transformations. FFN(x, W1, W2, b1, b2) = max(0, xW1 + b1)W2 + b2 (1) Following the T5 codebase [Raffel et al., 2019] 1, we use a version with no bias: FFNReLU(x, W1, W2) = max(xW1, 0)W2 (2) Subsequent work has proposed replacing the ReLU with other nonlinear activation functions such as Gaussian Error Linear Units, GELU(x) = xΦ(x) [Hendrycks and Gimpel, 2016], and Swishβ(x) = xσ(βx) [Ramachandran et al., 2017]. FFNGELU(x, W1, W2) = GELU(xW1)W2 FFNSwish(x, W1, W2) = Swish1(xW1)W2 (3) # 2 Gated Linear Units (GLU) and Variants [Dauphin et al., 2016] introduced Gated Linear Units (GLU), a neural network layer defined as the component- wise product of two linear transformations of the input, one of which is sigmoid-activated. They also suggest omitting the activation, which they call a "bilinear" layer and attribute to [Mnih and Hinton, 2007]. GLU(x, W, V, b, c) = σ(xW + b) ⊗ (xV + c) Bilinear(x, W, V, b, c) = (xW + b) ⊗ (xV + c) (4) We can also define GLU variants using other activation functions: # 1Also in the interest of ML fairness. 1 ReGLU(x, W, V, b, c) = max(0, xW + b) ⊗ (xV + c) GEGLU(x, W, V, b, c) = GELU(xW + b) ⊗ (xV + c) SwiGLU(x, W, V, b, c, β) = Swishβ(xW + b) ⊗ (xV + c) (5) In this paper, we propose additional variations on the Transformer FFN layer which use GLU or one of its variants in place of the first linear transformation and the activation function. Again, we omit the bias terms. FFNGLU(x, W, V, W2) = (σ(xW ) ⊗ xV )W2 FFNBilinear(x, W, V, W2) = (xW ⊗ xV )W2 FFNReGLU(x, W, V, W2) = (max(0, xW ) ⊗ xV )W2 FFNGEGLU(x, W, V, W2) = (GELU(xW ) ⊗ xV )W2 FFNSwiGLU(x, W, V, W2) = (Swish1(xW ) ⊗ xV )W2 (6) All of these layers have three weight matrices, as opposed to two for the original FFN. To keep the number of parameters and the amount of computation constant, we reduce the number of hidden units df f (the second dimension of W and V and the first dimension of W2) by a factor of 2 3 when comparing these layers to the original two-matrix version. # 3 Experiments on Text-to-Text Transfer Transformer (T5) We test the FFN variants we have described on the transfer-learning setup from [Raffel et al., 2019]. An encoder-decoder transformer model [Vaswani et al., 2017] is trained on a denoising objective of predicting missing text segments, and subsequently fine-tuned on various language understanding tasks. 
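To make the FFN variants of equations (5) and (6) concrete, the sketch below is one possible PyTorch rendering; it is not the T5/Mesh-TensorFlow code used in the experiments. The module name is an assumption, the 2/3 hidden-size scaling follows the description above, and F.silu is used for Swish with β = 1.

import torch
import torch.nn as nn
import torch.nn.functional as F

class GLUVariantFFN(nn.Module):
    # Bias-free FFN with three weight matrices W, V, W2, as in Eq. (6).
    def __init__(self, d_model=768, d_ff=3072, variant="swiglu"):
        super().__init__()
        d_hidden = int(2 * d_ff / 3)              # 3072 -> 2048 to keep the parameter count fixed
        self.W = nn.Linear(d_model, d_hidden, bias=False)
        self.V = nn.Linear(d_model, d_hidden, bias=False)
        self.W2 = nn.Linear(d_hidden, d_model, bias=False)
        self.act = {
            "glu": torch.sigmoid,
            "bilinear": lambda t: t,
            "reglu": F.relu,
            "geglu": F.gelu,
            "swiglu": F.silu,                     # Swish with beta = 1
        }[variant]

    def forward(self, x):
        return self.W2(self.act(self.W(x)) * self.V(x))

# usage: y = GLUVariantFFN(variant="geglu")(torch.randn(2, 16, 768))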
# 3.1 Model Architecture We use the same code base, model architecture, and training task as the base model from [Raffel et al., 2019]. The encoder and decoder each consist of 12 layers, with dmodel = 768. For the attention layers, h = 12 and dk = dv = 64. The FFN layers have hidden size df f = 3072. As we describe above, for the GLU-variant-based FFN layers, which have thee weight matrices instead of two, we reduce the hidden layer to df f = 2048, so as to maintain the same parameter and operation counts as the base model. Table 1: Heldout-set log-perplexity for Transformer models on the segment-filling task from [Raffel et al., 2019]. All models are matched for parameters and computation. Training Steps FFNReLU(baseline) FFNGELU FFNSwish FFNGLU FFNBilinear FFNGEGLU FFNSwiGLU FFNReGLU 65,536 1.997 (0.005) 1.983 (0.005) 1.994 (0.003) 1.982 (0.006) 1.960 (0.005) 1.942 (0.004) 1.944 (0.010) 1.953 (0.003) 524,288 1.677 1.679 1.683 1.663 1.648 1.633 1.636 1.645 2 # 3.2 Pre-Training and Perplexity Results Identically to [Raffel et al., 2019], we pre-train for 524,288 steps on the span-filling objective on the C4 dataset. Each training batch consists of 128 examples, each of which has an input of 512 tokens and an output of 114 tokens, the output containing multiple spans of tokens which were deleted from the input2. Similarly to [Raffel et al., 2019], we use the Adafactor optimizer [Shazeer and Stern, 2018] and an inverse- square-root learning-rate schedule. We also decay the learning rate linearly for the final 10 percent of the training steps. Our main departure from [Raffel et al., 2019] is that we use no dropout during pre-training. We find this to produce superior results. We compute the log-perplexity on the training objective on a heldout shard of C4, which we believe to be a good indicator of model quality. For each model architecture, we also trained four models for a shorter period (65,536 steps) to measure inter-run variability. The results are listed in table 1. The GEGLU and SwiGLU variants produce the best perplexities. # 3.3 Fine-Tuning We then fine-tune each fully-trained model once on an examples-proportional mixture of the Stanford Question-Answering Dataset (SQuAD) [Rajpurkar et al., 2016] and all the language understanding tasks in the GLUE [Wang et al., 2018] and SuperGlue [Wang et al., 2019] benchmarks.3 Fine-tuning consists of 131072 steps with a learning rate of 10−3. As in training, the input sequences for each step have a combined length of approximately 65,536 tokens. Following [Raffel et al., 2019], we use a dropout rate of 0.1 on the layer outputs, feed-forward hidden-layers and attention weights. The embedding matrices are fixed during fine-tuning. Tables 2, 3 and 4 show results on the development sets. For each task, we report the best score of any of the checkpoints recorded during fine-tuning. While the results are noisy, the new GLU-variants perform best on most of the tasks. For comparison, at the bottom of each of the tables we list the reuslts from [Raffel et al., 2019]. The model is identical to our FFNReLU model. Their results are notably worse, which we believe was caused by their use of dropout during pre-training. Also listed are the inter-run standard deviations measured by [Raffel et al., 2019]. Table 2: GLUE Language-Understanding Benchmark [Wang et al., 2018] (dev). 
QQP MNLIm MNLImm QNLI Acc Acc 92.81 91.75 92.39 91.62 92.33 91.67 92.92 91.62 92.81 91.85 92.92 91.69 92.93 91.87 92.68 91.72 90.48 91.56 0.361 0.070 CoLA SST-2 MRPC MRPC STSB STSB QQP Acc 90.20 90.20 89.46 89.46 89.71 89.46 88.97 89.22 88.92 1.019 Score F1 89.01 88.63 88.84 88.79 89.11 88.95 89.14 88.86 88.67 0.108 SCC 89.42 89.49 88.98 89.35 90.13 89.84 90.13 89.85 87.94 0.418 PCC 89.64 89.69 89.20 89.46 90.26 90.06 90.32 89.97 88.02 0.374 F1 93.08 92.81 92.31 92.39 92.68 92.28 92.23 92.06 92.07 0.729 Acc 94.04 94.04 93.69 94.27 93.92 94.38 93.92 94.38 Acc 85.83 85.89 85.22 86.36 86.15 86.90 86.45 86.20 84.24 0.291 Average MCC 51.32 53.48 49.79 49.16 53.65 51.02 51.59 56.16 Acc 86.42 86.13 85.02 86.18 86.17 87.08 86.47 86.40 84.57 0.231 83.80 83.86 83.60 84.20 84.12 83.79 84.36 84.67 FFNReLU FFNGELU FFNSwish FFNGLU FFNGEGLU FFNBilinear FFNSwiGLU FFNReGLU [Raffel et al., 2019] ibid. stddev. 53.84 1.111 92.68 0.569 83.28 0.235 RTE Acc 80.14 80.51 81.23 84.12 79.42 81.95 83.39 81.59 76.28 1.393 # 4 Conclusions We have extended the GLU family of layers and proposed their use in Transformer. In a transfer-learning setup, the new variants seem to produce better perplexities for the de-noising objective used in pre-training, as well as better results on many downstream language-understanding tasks. These architectures are simple to implement, and have no apparent computational drawbacks. We offer no explanation as to why these architectures seem to work; we attribute their success, as all else, to divine benevolence. 2Each training step took approximately 0.15 seconds on a 32-core TPUv2 cluster. 3This departs from [Raffel et al., 2019], who fine-tuned separately on the different tasks. We chose one fine-tuning run for simplicity. 3 Table 3: SuperGLUE Language-Understanding Benchmark [Wang et al., 2019] (dev). Score Average 72.76 72.98 72.40 73.95 73.96 73.81 74.56 73.66 71.36 0.416 CB F1 83.37 86.24 77.75 77.26 82.09 82.49 82.39 86.37 BoolQ Acc 80.15 80.64 80.43 80.95 81.19 81.53 81.19 80.89 76.62 0.365 CB Acc 89.29 91.07 83.93 83.93 87.50 89.29 89.29 91.07 CoPA MultiRC MultiRC ReCoRD ReCoRD RTE WiC WSC Acc Acc 77.88 70.00 75.96 74.00 81.73 67.00 87.50 73.00 83.65 72.00 76.00 78.85 86.54 73.00 79.81 67.00 78.56 66.20 2.029 2.741 Acc 83.39 81.59 81.95 84.12 83.39 82.67 85.20 84.48 75.34 1.228 F1 76.93 75.93 76.34 76.07 77.43 76.04 75.56 75.32 66.13 0.716 EM 72.91 72.03 72.36 73.50 74.60 74.10 74.55 74.18 68.16 0.379 EM 39.14 38.61 39.14 39.03 41.03 40.92 38.72 40.50 25.78 1.011 Acc 67.71 68.34 68.18 67.71 67.08 69.28 67.24 67.40 68.04 0.850 F1 73.73 72.96 73.34 74.22 75.28 74.97 75.35 75.07 69.05 0.370 FFNReLU FFNGELU FFNSwish FFNGLU FFNGEGLU FFNBilinear FFNSwiGLU FFNReGLU [Raffel et al., 2019] ibid. stddev. 91.22 3.237 91.96 2.560 Table 4: SQuAD [Rajpurkar et al., 2016] v1.1 (dev). F1 90.87 90.79 90.76 90.69 91.12 91.06 91.03 91.18 EM 83.18 83.09 83.25 82.88 83.55 83.82 83.42 83.53 80.88 0.343 FFNReLU FFNGELU FFNSwish FFNGLU FFNGEGLU FFNBilinear FFNSwiGLU FFNReGLU [Raffel et al., 2019] ibid. Standard Deviation 88.81 0.226 # References Yann N. Dauphin, Angela Fan, Michael Auli, and David Grangier. Language modeling with gated convolu- tional networks. CoRR, abs/1612.08083, 2016. URL http://arxiv.org/abs/1612.08083. 1 Xavier Glorot, Antoine Bordes, and Yoshua Bengio. Deep sparse rectifier neural networks. In Proceedings of the fourteenth international conference on artificial intelligence and statistics, pages 315–323, 2011. 1 Dan Hendrycks and Kevin Gimpel. 
Bridging nonlinearities and stochastic regularizers with gaussian error linear units. CoRR, abs/1606.08415, 2016. URL http://arxiv.org/abs/1606.08415. 1 Andriy Mnih and Geoffrey Hinton. Three new graphical models for statistical language modelling. Proceedings of the 24th international conference on Machine learning, pages 641–648, 2007. 1 In Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter Liu. Exploring the limits of transfer learning with a unified text-to-text transformer. arXiv e-prints, 2019. 1, 2, 3, 4 Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. Squad: 100,000+ questions for machine comprehension of text. arXiv preprint arXiv:1606.05250, 2016. 3, 4 Prajit Ramachandran, Barret Zoph, and Quoc V Le. Searching for activation functions. arXiv preprint arXiv:1710.05941, 2017. 1 Noam Shazeer and Mitchell Stern. Adafactor: Adaptive learning rates with sublinear memory cost. arXiv preprint arXiv:1804.04235, 2018. 3 Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. In NIPS, 2017. 1, 2 4 Alex Wang, Amapreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R. Bowman. GLUE: arXiv preprint A multi-task benchmark and analysis platform for natural language understanding. arXiv:1804.07461, 2018. 3 Alex Wang, Yada Pruksachatkun, Nikita Nangia, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R. Bowman. Superglue: A stickier benchmark for general-purpose language understanding systems. arXiv preprint arXiv:1905.00537, 2019. 3, 4 5
{ "id": "1804.04235" }
2002.05512
Real or Not Real, that is the Question
While generative adversarial networks (GAN) have been widely adopted in various topics, in this paper we generalize the standard GAN to a new perspective by treating realness as a random variable that can be estimated from multiple angles. In this generalized framework, referred to as RealnessGAN, the discriminator outputs a distribution as the measure of realness. While RealnessGAN shares similar theoretical guarantees with the standard GAN, it provides more insights on adversarial learning. Compared to multiple baselines, RealnessGAN provides stronger guidance for the generator, achieving improvements on both synthetic and real-world datasets. Moreover, it enables the basic DCGAN architecture to generate realistic images at 1024*1024 resolution when trained from scratch.
http://arxiv.org/pdf/2002.05512
Yuanbo Xiangli, Yubin Deng, Bo Dai, Chen Change Loy, Dahua Lin
cs.LG, cs.CV, eess.IV, stat.ML
ICLR2020 spotlight. 1) train GAN by maximizing kl-divergence. 2) train non-progressive GAN (DCGAN) architecture at 1024*1024 resolution
null
cs.LG
20200212
20200212
0 2 0 2 b e F 2 1 ] G L . s c [ 1 v 2 1 5 5 0 . 2 0 0 2 : v i X r a Published as a conference paper at ICLR 2020 # REAL OR NOT REAL, THAT IS THE QUESTION Yuanbo Xiangli1∗, Yubin Deng1∗, Bo Dai1∗, Chen Change Loy2, Dahua Lin1 The Chinese University of Hong Kong {xy019,dy015,bdai,dhlin}@ie.cuhk.edu.hk # ABSTRACT While generative adversarial networks (GAN) have been widely adopted in var- ious topics, in this paper we generalize the standard GAN to a new perspective by treating realness as a random variable that can be estimated from multiple In this generalized framework, referred to as RealnessGAN1, the dis- angles. criminator outputs a distribution as the measure of realness. While RealnessGAN shares similar theoretical guarantees with the standard GAN, it provides more in- sights on adversarial learning. Compared to multiple baselines, RealnessGAN provides stronger guidance for the generator, achieving improvements on both synthetic and real-world datasets. Moreover, it enables the basic DCGAN (Rad- ford et al., 2015) architecture to generate realistic images at 1024*1024 resolution when trained from scratch. # INTRODUCTION The development of generative adversarial network (GAN) (Goodfellow et al., 2014; Radford et al., 2015; Arjovsky et al., 2017) is one of the most important topics in machine learning since its first appearance in (Goodfellow et al., 2014). It learns a discriminator along with the target generator in an adversarial manner, where the discriminator distinguishes generated samples from real ones. Due to its flexibility when dealing with high dimensional data, GAN has obtained remarkable progresses on realistic image generation (Brock et al., 2019). In the standard formulation (Goodfellow et al., 2014), the realness of an input sample is estimated by the discriminator using a single scalar. However, for high dimensional data such as images, we naturally perceive them from more than one angles and deduce whether it is life-like based on multiple criteria. As shown in Fig.1, when a portrait is given, one might focus on its facial structure, skin tint, hair texture and even details like iris and teeth if allowed, each of which indicates a different aspect of realness. Based on this observation, the single scalar could be viewed as an abstract or a summarization of multiple measures, which together reflect the overall realness of an image. Such a concise measurement may convey insufficient information to guide the generator, potentially leading to well-known issues such as mode-collapse and gradient vanishing. In this paper, we propose to generalize the standard framework (Goodfellow et al., 2014) by treating realness as a random variable, represented as a distribution rather than a single scalar. We refer to ∗Equal contribution. 1Code will be available at https://github.com/kam1107/RealnessGAN ; . i| ' 1 i ' 1 a i = : ' (b) (a) Figure 1: The perception of realness depends on various aspects. (a) Human-perceived flawless. (b) Potentially reduced realness due to: inharmonious facial structure/components, unnatural back- ground, abnormal style combination and texture distortion. 1 Published as a conference paper at ICLR 2020 such a generalization as RealnessGAN. The learning process of RealnessGAN abide by the stan- dard setting, but in a distributional form. While the standard GAN can be viewed as a special case of RealnessGAN, RealnessGAN and the standard GAN share similar theoretical guarantees. i.e. 
Re- alnessGAN converges to a Nash-equilibrium where the generator and the discriminator reach their optimalities. Moreover, by expanding the scalar realness score into a distributional one, the dis- criminator D naturally provides stronger guidance to the generator G where G needs to match not only the overall realness (as in the standard GAN), but the underlying realness distribution as well. Consequently, RealnessGAN facilitates G to better approximate the data manifold while generat- ing decent samples. As shown in the experiments, based on a rather simple DCGAN architecture, RealnessGAN could successfully learn from scratch to generate realistic images at 1024*1024 res- olution. # 2 REALNESSGAN 2.1 GENERATIVE ADVERSARIAL NETWORKS Generative adversarial network jointly learns a generator G and a discriminator D, where G attempts to generate samples that are indistinguishable from the real ones, and D classifies generated and real samples. In the original work of (Goodfellow et al., 2014), the learning process of D and G follows a minimax game with value function V (G, D): min G max D V (G, D) = Ex∼pdata[log D(x)] + Ez∼pz [log(1 − D(G(z)))], (1) = Ex∼pdata[log(D(x) − 0)] + Ex∼pg [log(1 − D(x))], where the approximated data distribution pg is defined by a prior pz on input latent variables and G. As proved by Goodfellow et al. (2014), under such a learning objective, the optimal D satisfies D∗ pdata(x)+pg(x) for a fixed G. Fixing D at its optimal, the optimal G satisfies pg = pdata. The theoretical guarantees provide strong supports for GAN’s success in many applications (Rad- ford et al., 2015; Yu et al., 2017; Zhu et al., 2017; Dai et al., 2017), and inspired multiple variants (Arjovsky et al., 2017; Mao et al., 2017; Zhao et al., 2017; Berthelot et al., 2017) to improve the original design. Nevertheless, a single scalar is constantly adopted as the measure of realness, while the concept of realness is essentially a random variable covering multiple factors, e.g. texture and overall configuration in the case of images. In this work, we intend to follow this observation, encouraging the discriminator D to learn a realness distribution. 2.2 A DISTRIBUTIONAL VIEW ON REALNESS We start by substituting the scalar output of a discriminator D with a distribution prealness, so that for an input sample x, D(x) = {prealness(x, u); u ∈ Ω}, where Ω is the set of outcomes of prealness. Each outcome u can be viewed as a potential realness measure, estimated via some criteria. While 0 and 1 in equation 2 are used as two virtual ground-truth scalars that respectively represent the realness of real and fake images, we also need two virtual ground-truth distributions to stand for the realness distributions of real and fake images. We refer to these two distributions as A1 (real) and A0 (fake), which are also defined on Ω. As in the standard GAN where 0 and 1 can be replaced with other scalars such as −1 and 1, there are various choices for A1 and A0. Factors lead to a good pair of A1 and A0 will be discussed later. Accordingly, the difference between two scalars is replaced with the Kullback-Leibler (KL) divergence. 
The minimax game between a generator G and a distributional discriminator D thus becomes max min V(G, D) = Exrpies[Pxi(A1||D(x))] + Ee~p, [Pxi(Ao||D(«))]- (3) An immediate observation is that if we let prealness be a discrete distribution with two outcomes {u0, u1}, and set A0(u0) = A1(u1) = 1 and A0(u1) = A1(u0) = 0, the updated objective in equa- tion 3 can be explicitly converted to the original objective in equation 2, suggesting RealnessGAN is a generalized version of the original GAN. Following this observation, we then extend the theoretical analysis in Goodfellow et al. (2014) to the case of RealnessGAN. Similar to Goodfellow et al. (2014), our analysis concerns the space of 2 (2) Published as a conference paper at ICLR 2020 probability density functions, where D and G are assumed to have infinite capacities. We start from finding the optimal realness discriminator D for any given generator G. Theorem 1. When G is fixed, for any outcome u and input sample x, the optimal discriminator D satisfies A1(u)Daata(@) + Ao(u)pg (x) ; Dal@u) = Paata(®) + Pg (#) ° Proof. Given a fixed G, the objective of D is: min VG, D) = Ea~pay(Pxi(Ail|D(#))] + Ea~p,[Pr(Aal|D(2))]. (5) # min D x pdata(x) u A1(u) log A1(u) D(x, u) du + pg(x) u A0(u) log A0(u) D(x, u) du dx, (6) = == f (ale) Ar) + py(@)h( An) de — | [ (vival) Antu) + pg(x)Ao(u)) log D(x, u)dudz, (7) where h(A1) and h(A0) are their entropies. Marking the first term in equation 7 as C1 since it is irrelevant to D, the objective thus is equivalent to: min D V (G, D) = − x (pdata(x) + pg(x)) u pdata(x)A1(u) + pg(x)A0(u) pdata(x) + pg(x) log D(x, u)dudx + C1, (8) where px(u) = pdata(x)A1(u)+pg(x)A0(u) we then have pdata(x)+pg(x) is a distribution defined on Ω. Let C2 = pdata(x) + pg(x), min V(G,D)=C +f Cy (- | De(u) log D(a, u)du + h(pz) — hive) dx, (9) =Ci +f CaP er(pe| D(a))de + | Cah(ve)te: (10) Observing equation|10| one can see that for any valid 2, when Dx, (p_||D(a)) achieves its mini- mum, D obtains its optimal D*, leading to D*(a) = pz, which concludes the proof. Next, we move on to the conditions for G to reach its optimal when D = Do. Theorem 2. When D = Dj, and there exists an outcome u € Q such that maximum of V(G, Dg) is achieved if and only if Pg = Paaa- # A,(u) # Ao(u), the Proof. When pg = Paata, DE (#,u) = Ar) FAo(u) | we have: * 1) 2A (u) V*(G, Dé) = [ Avo) toe Alu) + Alu) ; 2Ao(u) log Ai(u) + Ao(u) + Ag(w) du. (11) Paata, DE (#,u) = Ar) FAo(u) | we 2A (u) = [ Avo) toe Alu) + Alu) D@) from V(G, D@) gives: Dé) — V*(G, Dé) # Subtracting V*(G, D@) V'(G, Dé) = V(G, Dé) Dé) = V(G, Dé) — V*(G, Dé) _ (Daata(®) + Pg(#))(Ar(u) + Ao(w)) = [ [ (Pos) Ar(0) + P92) Aol) 05 en ne (12) _9 Paata(#) Ai (u) + pg (a) Ao(u) 1 banal lA (OY) toll) dud. dn 2 °8 (pia) Fs (@) Aw FAW) uaz, (13) PaataA1 + pg Ao | (Paata + Pg)(Ai + Ao) 2 4 ~2D «1 ( ). (14) 3 Published as a conference paper at ICLR 2020 Since V*(G, DZ) is a constant with respect to G, maximizing V(G, DZ.) is equivalent to maximiz- ing V’(G, Dt). The optimal V’(G, D¢,) is achieved if and only if the KL divergence reaches its minimum, where: pdataA1 + pgA0 2 (pdata − pg)(A1 − A0) = 0, = (pdata + pg)(A1 + A0) 4 , (15) (16) for any valid 2 and u. Hence, as long as there exists a valid u that A;(u) 4 Ao(u), we have Pdata = Pg for any valid a. 2.3 DISCUSSION The theoretical analysis gives us more insights on RealnessGAN. 
Number of outcomes: according to equation 16, each u ∈ Ω with A0(u) ≠ A1(u) may work as a constraint, pushing pg towards pdata. In the case of discrete distributions, as the number of outcomes increases, the constraints imposed on G accordingly become more rigorous and can cost G more effort to learn. This is because having more outcomes implies a more fine-grained shape of the realness distribution for G to match. In Sec.4 we verified that it is beneficial to update G an increasing number of times before D's update as the number of outcomes grows.

Effectiveness of anchors: viewing equation 16 as a cost function to minimize, when pdata ≠ pg, the larger the difference between A1(u) and A0(u) is for some u ∈ Ω, the stronger the constraint on G becomes. Intuitively, RealnessGAN can be more efficiently trained if we choose A0 and A1 to be adequately different.

Objective of G: according to equation 3, the best way to fool D is to increase the KL divergence between D(x) and the anchor distribution A0 of fake samples, rather than to decrease the KL divergence between D(x) and the anchor distribution A1 of real samples. It is worth noting that these two objectives are equivalent in the original work (Goodfellow et al., 2014). An intuitive explanation is that, in the distributional view of realness, the realness distributions of real samples are not necessarily identical; each of them may correspond to a distinct one. Since A1 only serves as an anchor, it is ineffective to drag all generated samples towards the same target.

Flexibility of RealnessGAN: as a generalization of the standard framework, it is straightforward to integrate RealnessGAN with different GAN architectures, such as progressive GANs (Karras et al., 2018; 2019) and conditional GANs (Zhu et al., 2017; Ledig et al., 2017). Moreover, one may also combine the perspective of RealnessGAN with other reformulations of the standard GAN, such as replacing the KL divergence in equation 3 with the Earth Mover's Distance.

2.4 IMPLEMENTATION

In our implementation, the realness distribution prealness is characterized as a discrete distribution over N outcomes Ω = {u0, u1, ..., uN−1}. Given an input sample x, the discriminator D returns N probabilities on these outcomes, following:

prealness(x, ui) = e^{ψi(x)} / Σj e^{ψj(x)}, (17)

where ψ = (ψ0, ψ1, ..., ψN−1) are the logits produced by D. Similarly, A1 and A0 are discrete distributions defined on Ω. As shown in the theoretical analysis, the ideal objective for G is maximizing the KL divergence between D(x) of generated samples and A0:

(Gobjective1) min_G −Ez∼pz [DKL(A0||D(G(z)))]. (18)

However, as the discriminator D is not always at its optimal, especially in the early stage, directly applying this objective in practice could only lead to a generator with limited generative power. Consequently, a regularizer is needed to improve G. There are several choices for the regularizer, such as the relativistic term introduced in (Jolicoeur-Martineau, 2019) that minimizes the KL divergence between D(x) of generated samples and that of random real samples, or the term that minimizes the KL divergence between A1 and D(x) of generated samples, each of which leads to a different objective:

(Gobjective2) min_G Ex∼pdata, z∼pz [DKL(D(x)||D(G(z)))] − Ez∼pz [DKL(A0||D(G(z)))], (19)

(Gobjective3) min_G Ez∼pz [DKL(A1||D(G(z)))] − Ez∼pz [DKL(A0||D(G(z)))]. (20)

In Sec.4, these objectives are compared; a minimal implementation sketch of equations 17-20 is given below.
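The sketch below is one possible rendering of equations (17)-(20) with assumed function names; it is not the released code. The discriminator emits N logits ψ(x), the realness distribution is their softmax, A0 and A1 are fixed anchor distributions over the same outcomes, and the generator loss pairs each fake sample with a real sample from the same batch for the relativistic term of equation (19).

import torch
import torch.nn.functional as F

def kl(p, log_q):
    # D_KL(p || q) per sample; p is a fixed target distribution, log_q are log-probabilities
    return (p * (p.clamp_min(1e-8).log() - log_q)).sum(dim=-1)

def realness_log_probs(logits):
    return F.log_softmax(logits, dim=-1)          # Eq. (17) in log space

def d_loss(logits_real, logits_fake, A1, A0):
    # Eq. (3): D minimizes KL(A1 || D(x_real)) + KL(A0 || D(x_fake))
    return (kl(A1, realness_log_probs(logits_real)) +
            kl(A0, realness_log_probs(logits_fake))).mean()

def g_loss_objective2(logits_real, logits_fake, A0):
    # Eq. (19): pull D(G(z)) towards D(x) of the paired real samples
    # while pushing it away from the fake anchor A0
    p_real = realness_log_probs(logits_real).exp().detach()
    log_fake = realness_log_probs(logits_fake)
    return (kl(p_real, log_fake) - kl(A0, log_fake)).mean()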
And the objective in equation 19 is adopted as the default choice. Feature resampling. In practice, especially in the context of images, we are learning from a limited number of discrete samples coming from a continuous data manifold. We may encounter issues caused by insufficient data coverage during the training process. Inspired by conditioning augmen- tation mentioned in [2016), we introduce a resampling technique performed on the realness output to augment data variance. Given a mini-batch {a,...,@,¢~1} of size M, a Gaus- sian distribution N (j1;, 04) is fitted on {%; (ao), #i(a1), ..., Wi(@ar—1)}, which are logits computed by D on i-th outcome. We then resample M new logits {¢/(ao),..., W}(a@m_1); Wi ~ N (wi, 1) } for i-th outcome and use them succeedingly. The randomness introduced by resampling benefits the training of RealnessGAN in two aspects. First of all, it augments data by probing instances around the limited training samples, leading to more robust models. Secondly, the resampling approach implicitly demands instances of ψi(x) to be homologous throughout the mini-batch, such that each outcome reflects realness consistently across samples. We empirically found the learning curve of RealnessGAN is more stable if feature resampling is utilized, especially in the latter stage, where models are prone to overfit. # 3 RELATED WORK Generative adversarial network (GAN) was first proposed in (Goodfellow et al., 2014), which jointly learns a discriminator D and a generator G in an adversarial manner. Due to its outstanding learning ability, GANs have been adopted in various generative tasks (Radford et al., 2015; Yu et al., 2017; Zhu et al., 2017), among which Deep Convolutional GAN (DCGAN) (Radford et al., 2015) has shown promising results in image generation. Although remarkable progress has been made. GAN is known to suffer from gradient diminish- ing and mode collapse. Variants of GAN have been proposed targeting these issues. Specifically, Wasserstein GAN (WGAN) Arjovsky et al. (2017) replaces JS-divergence with Earth-Mover’s Dis- tance, and Least-Square GAN (LSGAN) (Mao et al., 2017) transforms the objective of G to Pearson divergence. Energy-based GAN (EBGAN) (Zhao et al., 2017) and Boundary Equilibrium GAN (BE- GAN) (Berthelot et al., 2017) employ a pre-trained auto-encoder as the discriminator, learning to distinguish between real and generated samples via reconstruction. Besides adjusting the objective of GAN, alternative approaches include more sophisticated architectures and training paradigms. Generally, ProgressiveGAN (Karras et al., 2018) and StyleGAN (Karras et al., 2019) propose a pro- gressive paradigm, which starts from a shallow model focusing on a low resolution, and gradually grows into a deeper model to incorporate more details as resolution grows. On the other hand, COCO-GAN (Lin et al., 2019) tackles high resolution image generation in a divide-and-conquer strategy. It learns to produce decent patches at corresponding sub-regions, and splices the patches to produce a higher resolution image. It’s worth noting that many works on generative adversarial networks have discussed ‘distributions’ (Goodfellow et al., 2014; Radford et al., 2015; Arjovsky et al., 2017), which usually refers to the underlying distribution of samples. Some of the existing works aim to improve the original objec- tive using different metrics to measure the divergence between the learned distribution pg and the real distribution pdata. 
Nevertheless, a single scalar is constantly adopted to represent the concept of realness. In this paper, we propose a complementary modification that models realness as a random variable following the distribution p_realness. In future work, we may study the combination of the realness discriminator and other GAN variants to enhance the effectiveness and stability of adversarial learning.

Figure 2: Left: real data sampled from the mixture of 9 Gaussian distributions. Right: samples generated by Std-GAN, WGAN-GP, LSGAN, HingeGAN and RealnessGAN.

# 4 EXPERIMENTS

In this section we study RealnessGAN from multiple aspects. Specifically, 1) we first focus on RealnessGAN's mode coverage ability on a synthetic dataset. 2) Then we evaluate RealnessGAN on the CIFAR10 (32*32) (Krizhevsky, 2009) and CelebA (256*256) (Liu et al., 2015) datasets qualitatively and quantitatively. 3) Finally we explore RealnessGAN on the high-resolution image generation task, which is known to be challenging for unconditional non-progressive architectures. Surprisingly, on the FFHQ dataset (Karras et al., 2019), RealnessGAN managed to generate images at the 1024*1024 resolution based on a non-progressive architecture. We compare RealnessGAN to other popular objectives in generative adversarial learning, including the standard GAN (Std-GAN) (Radford et al., 2015), WGAN-GP (Arjovsky et al., 2017), HingeGAN (Zhao et al., 2017) and LSGAN (Mao et al., 2017).

For experiments on the synthetic dataset, we use a generator with four fully-connected hidden layers, each of which has 400 units, followed by batch normalization and ReLU activation. The discriminator has three fully-connected hidden layers, with 200 units each. LinearMaxout with 5 maxout pieces is adopted and no batch normalization is used in the discriminator. The latent input z is a 32-dimensional vector sampled from a Gaussian distribution N(0, I). All models are trained using Adam (Kingma & Ba, 2015) for 500 iterations.

On real-world datasets, the network architecture is identical to the DCGAN architecture in Radford et al. (2015), with the prior p_z(z) a 128-dimensional Gaussian distribution N(0, I). Models are trained using Adam (Kingma & Ba, 2015) for 520k iterations. To guarantee training stability, we adopt settings that are proved to be effective for the baseline methods. Batch normalization (Ioffe & Szegedy, 2015) is used in G, and spectral normalization (Miyato et al., 2018) is used in D. For WGAN-GP we use lr = 1e−4, β1 = 0.5, β2 = 0.9, updating D 5 times per G update (Gulrajani et al., 2017); for the remaining models, we use lr = 2e−4, β1 = 0.5, β2 = 0.999, updating D once per G update (Radford et al., 2015). Fréchet Inception Distance (FID) (Heusel et al., 2017) and Sliced Wasserstein Distance (SWD) (Karras et al., 2018) are reported as the evaluation metrics. Unless otherwise stated, A_1 and A_0 are chosen to resemble the shapes of two normal distributions with a positive skewness and a negative skewness, respectively. In particular, the number of outcomes is empirically set to 51 for the CelebA and FFHQ datasets, and 3 for the CIFAR10 dataset.

# 4.1 SYNTHETIC DATASET

Since p_data is usually intractable on real datasets, we use a toy dataset to compare the learned distribution p_g and the data distribution p_data.
The toy dataset consists of 100,000 2D points sampled from a mixture of 9 isotropic Gaussian distributions whose means are arranged in a 3 by 3 grid, with variances equal to 0.05. As shown in Fig.2, the data distribution p_data contains 9 well-separated modes, making it a difficult task despite its low-dimensional nature. To evaluate p_g, we draw 10,000 samples and measure their quality and diversity. As suggested in (Dumoulin et al., 2016), we regard a sample as of high quality if it is within 4σ of the µ of its nearest Gaussian. When a Gaussian is assigned more than 100 high quality samples, we consider this mode of p_data recovered in p_g.

Fig.2 visualizes the sampled points of different methods, where LSGAN and HingeGAN suffer from significant mode collapse, recovering only a single mode. Points sampled by WGAN-GP are overly disperse, and only 0.03% of them are of high quality. While Std-GAN recovers 4 modes in p_data with 32.4% high quality samples, 8 modes are recovered by RealnessGAN with 60.2% high quality samples. The average σs of these high quality samples in Std-GAN and RealnessGAN are respectively 0.083 and 0.043. The results suggest that treating realness as a random variable rather than a single scalar leads to a more strict discriminator that criticizes generated samples from various aspects, which provides more informative guidance. Consequently, p_g learned by RealnessGAN is more diverse and compact.

Figure 3: First row: the results of RealnessGAN when fixing kG = kD = 1 and increasing the number of outcomes. Second row: the results of RealnessGAN when kG is properly increased. Bottom curves: under the settings of the second row, the ratio of high quality samples and the number of recovered modes.

We further study the effect of adjusting the number of outcomes in the realness distribution p_realness on this dataset. To start with, we fix kG and kD to be 1, which are the numbers of updates for G and D in one iteration, and adjust the number of outcomes of p_realness, A_0 and A_1. As shown in the first row of Fig.3, it can be observed that in general G recovers fewer modes as the number of outcomes grows, which is a direct result of D becoming increasingly rigorous and imposing more constraints on G. An intuitive solution is to increase kG such that G is able to catch up with the current D. The second row of Fig.3 demonstrates the converged cases achieved with suitable kG, suggesting RealnessGAN is effective when sufficient learning capacity is granted to G. The ratio of high quality samples rHQ and the number of recovered modes nmode in these cases are plotted in Fig.3.
The two curves imply that besides kG, rHQ and nmode are all positively related to the number of outcomes, validating that measuring realness from more aspects leads to a better generator.

4.2 REAL-WORLD DATASETS

As GANs have shown promising results when modeling complex data such as natural images, we evaluate RealnessGAN on real-world datasets, namely CIFAR10, CelebA and FFHQ, which respectively contain images at 32*32, 256*256 and 1024*1024 resolutions. The training curves of the baseline methods and RealnessGAN on CelebA and CIFAR10 are shown in Fig.4. The quantitative results measured in FID and SWD are listed in Tab.1. We report the minimum, the maximum, the mean and the standard deviation computed along the training process. On both datasets, compared to the baselines, RealnessGAN obtains better scores in both metrics. Meanwhile, the learning process of RealnessGAN is smoother and steadier (see SD in Tab.1 and curves in Fig.4). Samples of generated images on both datasets are included in Fig.8.

On FFHQ, we push the resolution of generated images to 1024*1024, which is known to be challenging especially for a non-progressive architecture. As shown in Fig.8, despite building on a relatively simple DCGAN architecture, RealnessGAN is able to produce realistic samples from scratch at such a high resolution. Quantitatively, RealnessGAN obtains an FID score of 17.18. For reference, our re-implemented StyleGAN (Karras et al., 2019) trained under a similar setting receives an FID score of 16.12. These results strongly support the effectiveness of RealnessGAN, as StyleGAN is one of the most advanced GAN architectures so far.

(a) FID on CelebA (b) SWD on CelebA (c) FID on CIFAR10 (d) SWD on CIFAR10
Figure 4: Training curves of different methods in terms of FID and SWD on both CelebA and CIFAR10, where the rise of curves in the later stage indicates mode collapse. Best viewed in color.

Table 1: Minimum (min), maximum (max), mean and standard deviation (SD) of FID and SWD on CelebA and CIFAR10, calculated at 20k, 30k, ... iterations. The best indicators among the baseline methods are underlined.

CelebA
Method      | FID Min / Max / Mean / SD      | SWD (×10^3) Min / Max / Mean / SD
Std-GAN     | 27.02 / 70.43 / 34.85 / 9.40   | 14.81 / 68.06 / 30.58 / 15.39
WGAN-GP     | 70.28 / 104.60 / 81.15 / 8.27  | 17.85 / 30.56 / 22.09 / 2.93
LSGAN       | 30.76 / 57.97 / 34.99 / 5.15   | 16.72 / 23.99 / 20.39 / 2.25
HingeGAN    | 25.57 / 75.03 / 33.89 / 10.61  | 14.91 / 54.30 / 28.86 / 10.34
RealnessGAN | 23.51 / 81.3 / 30.82 / 7.61    | 12.72 / 31.39 / 17.11 / 3.59

CIFAR10
Method      | FID Min / Max / Mean / SD      | SWD (×10^3) Min / Max / Mean / SD
Std-GAN     | 38.56 / 88.68 / 47.46 / 15.96  | 28.76 / 57.71 / 37.55 / 7.02
WGAN-GP     | 41.86 / 79.25 / 46.96 / 5.57   | 28.17 / 36.04 / 30.98 / 1.78
LSGAN       | 42.01 / 75.06 / 48.41 / 7.72   | 31.99 / 40.46 / 34.75 / 2.34
HingeGAN    | 42.40 / 117.49 / 57.30 / 20.69 | 32.18 / 61.74 / 41.85 / 7.31
RealnessGAN | 34.59 / 102.98 / 42.30 / 11.84 | 22.80 / 53.38 / 26.98 / 5.47

4.3 ABLATION STUDY

The implementation of RealnessGAN offers several choices that are also worth digging into. On the synthetic dataset, we explored the relationship between the number of outcomes and G's update frequency. On the real-world datasets, apart from evaluating RealnessGAN as a whole, we also studied the effect of feature resampling, different settings of A_0 and A_1, and choices of G's objective.

Table 2: Minimum (min), maximum (max), mean and standard deviation (SD) of FID on CelebA using different anchor distributions, calculated at 20k, 30k, ... iterations.

D_KL(A_1‖A_0) | Min   | Max   | Mean  | SD
1.66          | 31.01 | 96.11 | 40.75 | 11.83
5.11          | 26.22 | 87.98 | 36.11 | 9.83
7.81          | 25.98 | 85.51 | 36.30 | 10.04
11.05         | 23.51 | 81.30 | 30.82 | 7.61
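Since the ablations that follow revisit the feature resampling step of Sec. 2.4, here is a minimal PyTorch-style sketch of that procedure (our own illustration; the per-outcome Gaussian fit follows the description in Sec. 2.4):

```python
import torch

def resample_realness_logits(logits):
    # logits: [M, N] per-outcome logits psi_i(x) produced by D on a mini-batch of size M.
    # Fit a Gaussian N(mu_i, sigma_i) to each outcome i across the batch, then redraw
    # M new logits per outcome from the fitted Gaussians (feature resampling, Sec. 2.4).
    mu = logits.mean(dim=0, keepdim=True)         # [1, N]
    sigma = logits.std(dim=0, keepdim=True)       # [1, N]
    return mu + sigma * torch.randn_like(logits)  # [M, N] resampled logits
```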
Figure 5: Training FID curves of RealnessGAN with and without feature re-sampling.

Figure 6: Samples generated by RealnessGAN trained with the ideal objective (equation 18).

Table 3: FID scores of G on CIFAR10, trained with different objectives.

G Objective              | FID
Objective1 (equation 18) | 36.73
Objective2 (equation 19) | 34.59
Objective3 (equation 20) | 36.21
DCGAN                    | 38.56
WGAN-GP                  | 41.86
LSGAN                    | 42.01
HingeGAN                 | 42.40

Figure 7: Training curves of RealnessGAN on CelebA using objective2 (equation 19) and objective3 (equation 20).

Feature Resampling. Fig.5 shows the training curves of RealnessGAN with and without feature resampling. It can be noticed that, although the results are similar, feature resampling stabilizes the training process especially in the latter stage.

Effectiveness of Anchors. Tab.2 reports the results of varying the KL divergence between the anchor distributions A_0 and A_1. The FID scores indicate that, as the KL divergence between A_0 and A_1 increases, RealnessGAN tends to perform better, which verifies our discussion in Sec.2.3 that a larger difference between anchor distributions imposes stronger constraints on G. To further testify, two different pairs of anchors with similar KL divergences (11.95 and 11.67) are exploited and they yield comparable FID scores (23.98 and 24.22).

Objective of G. As mentioned in Sec.2.3, theoretically, the objective of G is max_G E_{z∼p_z}[D_KL(A_0 ‖ D(G(z)))]. However, in practice, since D is not always optimal, we need either a pair of A_0 and A_1 that are drastically different, or an additional constraint to aid this objective. Fig.6 shows that, with the ideal objective alone, even when the KL divergence between A_0 and A_1 is sufficiently large, on CelebA we could only obtain a generator with limited generative power. On the other hand, by applying the constraints as discussed in Sec.2.4, G can learn to produce more realistic samples, as demonstrated in Fig.8. Similar results are observed on CIFAR10, where RealnessGAN obtains comparable FID scores with and without constraints, as shown in Tab.3. Fig.7 also provides the training curves of RealnessGAN on CelebA using these two alternative objectives.

# 5 CONCLUSION

In this paper, we extend the view of realness in generative adversarial networks under a distributional perspective. In our proposed extension, RealnessGAN, we represent the concept of realness as a realness distribution rather than a single scalar, so that the corresponding discriminator estimates realness from multiple angles, providing more informative guidance to the generator. We prove that RealnessGAN has theoretical guarantees on the optimality of the generator and the discriminator. On both synthetic and real-world datasets, RealnessGAN also demonstrates the ability to effectively and steadily capture the underlying data distribution.

Figure 8: Images sampled from RealnessGAN, respectively trained on CIFAR10 (top), CelebA (middle) and FFHQ (bottom).

Acknowledgement We thank Zhizhong Li for helpful discussion on the theoretical analysis. This work is partially supported by the Collaborative Research Grant of "Large-scale Multi-modality Analytics" from SenseTime (CUHK Agreement No. TS1712093), the General Research Funds (GRF) of Hong Kong (No. 14209217 and No. 14205719), Singapore MOE AcRF Tier 1, NTU SUG, and NTU NAP.
10 Published as a conference paper at ICLR 2020 # REFERENCES Martin Arjovsky, Soumith Chintala, and L´eon Bottou. Wasserstein gan. arXiv preprint arXiv:1701.07875, 2017. David Berthelot, Thomas Schumm, and Luke Metz. Began: Boundary equilibrium generative ad- versarial networks. arXiv preprint arXiv:1703.10717, 2017. Andrew Brock, Jeff Donahue, and Karen Simonyan. Large scale gan training for high fidelity natural image synthesis. In ICLR, 2019. Bo Dai, Sanja Fidler, Raquel Urtasun, and Dahua Lin. Towards diverse and natural image descrip- tions via a conditional gan. In Proceedings of the IEEE International Conference on Computer Vision, pp. 2970–2979, 2017. Vincent Dumoulin, Ishmael Belghazi, Ben Poole, Alex Lamb, Mart´ın Arjovsky, Olivier Mastropi- etro, and Aaron C. Courville. Adversarially learned inference. ArXiv, abs/1606.00704, 2016. Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, In NIPS, pp. 2672–2680, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. 2014. Ishaan Gulrajani, Faruk Ahmed, Martin Arjovsky, Vincent Dumoulin, and Aaron Courville. Im- In Proceedings of the 31st International Conference on proved training of wasserstein gans. Neural Information Processing Systems, NIPS’17, pp. 5769–5779, USA, 2017. Curran Asso- ciates Inc. ISBN 978-1-5108-6096-4. URL http://dl.acm.org/citation.cfm?id= 3295222.3295327. Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, and Sepp Hochreiter. Gans trained by a two time-scale update rule converge to a local nash equilibrium. In Advances in Neural Information Processing Systems, pp. 6626–6637, 2017. Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In Proceedings of the 32Nd International Conference on Inter- national Conference on Machine Learning - Volume 37, ICML’15, pp. 448–456. JMLR.org, 2015. URL http://dl.acm.org/citation.cfm?id=3045118.3045167. Alexia Jolicoeur-Martineau. The relativistic discriminator: a key element missing from standard gan. In ICLR, 2019. Tero Karras, Timo Aila, Samuli Laine, and Jaakko Lehtinen. Progressive growing of GANs for im- proved quality, stability, and variation. In International Conference on Learning Representations, 2018. URL https://openreview.net/forum?id=Hk99zCeAb. Tero Karras, Samuli Laine, and Timo Aila. A style-based generator architecture for generative In The IEEE Conference on Computer Vision and Pattern Recognition adversarial networks. (CVPR), June 2019. In Yoshua Bengio and Yann LeCun (eds.), 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings, 2015. URL http: //arxiv.org/abs/1412.6980. Alex Krizhevsky. Learning multiple layers of features from tiny images. Technical report, 2009. Christian Ledig, Lucas Theis, Ferenc Husz´ar, Jose Caballero, Andrew Cunningham, Alejandro Acosta, Andrew Aitken, Alykhan Tejani, Johannes Totz, Zehan Wang, and Wenzhe Shi. Photo- In 2017 IEEE realistic single image super-resolution using a generative adversarial network. Conference on Computer Vision and Pattern Recognition (CVPR), pp. 105–114, July 2017. doi: 10.1109/CVPR.2017.19. Chieh Hubert Lin, Chia-Che Chang, Yu-Sheng Chen, Da-Cheng Juan, Wei Wei, and Hwann-Tzong Chen. COCO-GAN: Conditional coordinate generative adversarial network, 2019. URL https: //openreview.net/forum?id=r14Aas09Y7. 
11 Published as a conference paper at ICLR 2020 Ziwei Liu, Ping Luo, Xiaogang Wang, and Xiaoou Tang. Deep learning face attributes in the wild. In Proceedings of International Conference on Computer Vision (ICCV), December 2015. Xudong Mao, Qing Li, Haoran Xie, Raymond YK Lau, Zhen Wang, and Stephen Paul Smolley. Least squares generative adversarial networks. In ICCV, 2017. Takeru Miyato, Toshiki Kataoka, Masanori Koyama, and Yuichi Yoshida. Spectral normalization for generative adversarial networks. In International Conference on Learning Representations, 2018. URL https://openreview.net/forum?id=B1QRgziT-. Alec Radford, Luke Metz, and Soumith Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434, 2015. Lantao Yu, Weinan Zhang, Jun Wang, and Yong Yu. Seqgan: Sequence generative adversarial nets with policy gradient. In AAAI, 2017. Han Zhang, Tao Xu, Hongsheng Li, Shaoting Zhang, Xiaolei Huang, Xiaogang Wang, and Dim- itris N. Metaxas. Stackgan: Text to photo-realistic image synthesis with stacked generative ad- versarial networks. 2017 IEEE International Conference on Computer Vision (ICCV), pp. 5908– 5916, 2016. Junbo Zhao, Michael Mathieu, and Yann LeCun. Energy-based generative adversarial network. In ICLR, 2017. Jun-Yan Zhu, Taesung Park, Phillip Isola, and Alexei A Efros. Unpaired image-to-image translation using cycle-consistent adversarial networks. In ICCV, 2017. 12
{ "id": "1511.06434" }
2002.03932
Pre-training Tasks for Embedding-based Large-scale Retrieval
We consider the large-scale query-document retrieval problem: given a query (e.g., a question), return the set of relevant documents (e.g., paragraphs containing the answer) from a large document corpus. This problem is often solved in two steps. The retrieval phase first reduces the solution space, returning a subset of candidate documents. The scoring phase then re-ranks the documents. Critically, the retrieval algorithm not only desires high recall but also requires to be highly efficient, returning candidates in time sublinear to the number of documents. Unlike the scoring phase witnessing significant advances recently due to the BERT-style pre-training tasks on cross-attention models, the retrieval phase remains less well studied. Most previous works rely on classic Information Retrieval (IR) methods such as BM-25 (token matching + TF-IDF weights). These models only accept sparse handcrafted features and can not be optimized for different downstream tasks of interest. In this paper, we conduct a comprehensive study on the embedding-based retrieval models. We show that the key ingredient of learning a strong embedding-based Transformer model is the set of pre-training tasks. With adequately designed paragraph-level pre-training tasks, the Transformer models can remarkably improve over the widely-used BM-25 as well as embedding models without Transformers. The paragraph-level pre-training tasks we studied are Inverse Cloze Task (ICT), Body First Selection (BFS), Wiki Link Prediction (WLP), and the combination of all three.
http://arxiv.org/pdf/2002.03932
Wei-Cheng Chang, Felix X. Yu, Yin-Wen Chang, Yiming Yang, Sanjiv Kumar
cs.LG, cs.CL, cs.IR, stat.ML
Accepted by ICLR 2020
null
cs.LG
20200210
20200210
Published as a conference paper at ICLR 2020

PRE-TRAINING TASKS FOR EMBEDDING-BASED LARGE-SCALE RETRIEVAL

Wei-Cheng Chang∗, Felix X. Yu, Yin-Wen Chang, Yiming Yang, Sanjiv Kumar
Carnegie Mellon University & Google
{wchang2,yiming}@cs.cmu.edu, {felixyu,yinwen,sanjivk}@google.com

# ABSTRACT

We consider the large-scale query-document retrieval problem: given a query (e.g., a question), return the set of relevant documents (e.g., paragraphs containing the answer) from a large document corpus. This problem is often solved in two steps. The retrieval phase first reduces the solution space, returning a subset of candidate documents. The scoring phase then re-ranks the documents. Critically, the retrieval algorithm not only needs high recall but also has to be highly efficient, returning candidates in time sublinear to the number of documents. Unlike the scoring phase, which has recently witnessed significant advances due to BERT-style pre-training tasks on cross-attention models, the retrieval phase remains less well studied. Most previous works rely on classic Information Retrieval (IR) methods such as BM-25 (token matching + TF-IDF weights). These models only accept sparse handcrafted features and cannot be optimized for different downstream tasks of interest. In this paper, we conduct a comprehensive study on embedding-based retrieval models. We show that the key ingredient of learning a strong embedding-based Transformer model is the set of pre-training tasks. With adequately designed paragraph-level pre-training tasks, the Transformer models can remarkably improve over the widely-used BM-25 as well as embedding models without Transformers. The paragraph-level pre-training tasks we studied are Inverse Cloze Task (ICT), Body First Selection (BFS), Wiki Link Prediction (WLP), and the combination of all three.

# 1 INTRODUCTION

We consider the large-scale retrieval problem: given a query, return the most relevant documents from a large corpus, where the size of the corpus can be hundreds of thousands or more. One can view this problem as learning a scoring function f : X × Y → R that maps a pair of a query and a document (q, d) ∈ X × Y to a score f(q, d). The function should be designed such that the relevant (q, d) pairs have high scores, whereas the irrelevant ones have low scores. Many real-world applications besides query-document retrieval can be cast into this form. For example, in recommendation systems, q represents a user query and d represents a candidate item to recommend (Krichene et al., 2019). In extreme multi-label classification, q represents a web-page document and d represents the categories or hashtags of interest (Jain et al., 2019; Chang et al., 2019). In open-domain question answering, q represents a question and d represents an evidence passage containing the answer (Chen et al., 2017; Hu et al., 2019; Lee et al., 2019).

Central to the above is designing the scoring function f. Recently, BERT (Devlin et al., 2019), along with its many successors such as XLNet (Yang et al., 2019b) and RoBERTa (Liu et al., 2019), has led to significant improvements on many NLP tasks such as sentence-pair classification and question-answering. In BERT, the scoring function f is a pre-trained deep bidirectional Transformer model.
While BERT-style cross-attention models are very successful, they cannot be directly applied to large-scale retrieval problems because computing f(q, d) for every possible document can be prohibitively expensive. Thus, one typically first uses a less powerful but more efficient algorithm (another scoring function f) to reduce the solution space (the "retrieval phase"), and then uses the BERT-style model to re-rank the retrieved documents (the "scoring phase").

∗work performed when interning at Google.

The retrieval phase is critical. Ideally speaking, the algorithm should have a high recall; otherwise, many relevant documents won't even be considered in the scoring phase. The algorithm also needs to be highly efficient: it should return a small subset of relevant documents in time sublinear to the number of all documents. Although significant developments are advancing the scoring algorithms, the retrieval algorithms remain less studied, and this is the focus of this paper.

Retrieval algorithms can be put into two categories. The first type is classic information retrieval (IR) algorithms relying on token-based matching. One example is BM-25 (Robertson et al., 2009), which remains the most commonly-used (Nguyen et al., 2016; Yang et al., 2017; 2019a) and hard-to-beat (Chapelle & Chang, 2011; Lee et al., 2019) algorithm. Here the scoring function f is based on token-matching between two high-dimensional sparse vectors with TF-IDF token weights, and retrieval can be done in sublinear time using the inverted index. Despite the wide usage, these algorithms are handcrafted and therefore cannot be optimized for a specific task.

The second option is an embedding-based model that jointly embeds queries and documents in the same embedding space and uses an inner product or cosine distance to measure the similarity between queries and documents. Let the query embedding model be φ(·) and the document embedding model be ψ(·). The scoring function is f(q, d) = ⟨φ(q), ψ(d)⟩. In the inference stage, retrieving relevant documents then becomes finding the nearest neighbors of a query in the embedding space. Since the embeddings of all candidate documents can be pre-computed and indexed, the inference can be done efficiently with approximate nearest neighbor search algorithms in the embedding space (Shrivastava & Li, 2014; Guo et al., 2016).

In this paper, we refer to the above embedding-based model as the two-tower retrieval model, because the query and document embeddings come from two separate "towers" of neural networks. In the literature, it is also known as the Siamese network (Das et al., 2016; Triantafillou et al., 2017) or dual-encoder model (Cer et al., 2018; Mazaré et al., 2018). Compared to the sparse token-based models, the two-tower models can capture deeper semantic relationships within queries and documents, and the models can be optimized specifically for the task being considered.

At the heart of two-tower models are the embedding functions φ(·) and ψ(·). A modern choice is using Transformers to model the attention within queries and within documents, rather than the cross-attention between them as in the BERT model. The token-level masked-LM (MLM) pre-training task is crucial to the success of BERT-style cross-attention models. Nevertheless, what pre-training tasks are useful for improving two-tower Transformer models in large-scale retrieval remains a crucial yet unsolved research problem.
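To make the two-tower inference path concrete, here is a small NumPy sketch (our own illustration) of scoring against pre-computed document embeddings; a production system would replace the brute-force top-k with an approximate MIPS index such as those cited above.

```python
import numpy as np

def build_index(doc_embeddings):
    # doc_embeddings: [num_docs, k] pre-computed psi(d) vectors for the whole corpus.
    return np.asarray(doc_embeddings, dtype=np.float32)

def retrieve(query_embedding, index, top_k=100):
    # Score every document with the inner product f(q, d) = <phi(q), psi(d)> and
    # return the ids of the top_k highest-scoring documents (exact, not approximate).
    scores = index @ np.asarray(query_embedding, dtype=np.float32)  # [num_docs]
    top_ids = np.argpartition(-scores, top_k)[:top_k]               # unordered top_k
    return top_ids[np.argsort(-scores[top_ids])]                    # sorted by score
```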
In this paper, we aim to answer this question by studying different pre-training tasks for the two-tower Transformer models. We contribute the following insights:

• The two-tower Transformer models with proper pre-training can significantly outperform the widely used BM-25 algorithm;
• Paragraph-level pre-training tasks such as Inverse Cloze Task (ICT), Body First Selection (BFS), and Wiki Link Prediction (WLP) hugely improve the retrieval quality, whereas the most widely used pre-training task (the token-level masked-LM) gives only marginal gains.
• The two-tower models with deep Transformer encoders benefit more from paragraph-level pre-training compared to their shallow bag-of-word counterpart (BoW-MLP).

To the best of our knowledge, this is the first comprehensive study on pre-training tasks for efficient large-scale retrieval algorithms. The rest of the paper is organized as follows. We start by introducing the two-tower retrieval model in Section 2. The pre-training tasks are presented in Section 3, and the experiments and analysis are presented in Section 4. Finally, we conclude this work in Section 5.

# 2 THE TWO-TOWER RETRIEVAL MODEL

Given a query q ∈ X and a document d ∈ Y, we consider two-tower retrieval models that consist of two encoder functions, φ : X → R^k and ψ : Y → R^k, which map a sequence of tokens in X and Y to their associated embeddings φ(q) and ψ(d), respectively. The scoring function f : R^k × R^k → R is then defined to be the inner product1 of the embeddings

f(q, d) = ⟨φ(q), ψ(d)⟩.   (1)

Figure 1: Difference between two-tower models and cross-attention models. Following previous works, we consider the [CLS] embedding and average pooling as the aggregator's output for the two-tower Transformer model and the two-tower MLP model, respectively.

In this paper, we are interested in parameterizing the encoders φ, ψ as deep Transformer models (Vaswani et al., 2017) due to their expressive power in modeling natural language. In the rest of this section, we illustrate the advantage of two-tower models in the inference phase; discuss the pros and cons of two-tower models in comparison with BERT-like cross-attention models; present the learning procedure of estimating model parameters under the maximum likelihood principle; and review the related works.

Inference The difference between two-tower models and cross-attention models is shown in Figure 1. The advantage of two-tower models is their efficiency at inference time. First, all the document embeddings can be pre-computed. Then, given an unseen query q, we only need to rank the documents based on their inner products with the query embedding. This is far more efficient than running inference on a cross-attention BERT-style model (often used in the scoring stage). To see this, the scoring function of a BERT-style model is of the form

f_{θ,w}(q, d) = ψ_θ(q ⊕ d)^T w,   (2)

where ⊕ denotes the concatenation of the query and the document sequence and w ∈ R^k is an additional model parameter. In BERT, for each query, one has to make the above expensive inference on all documents.
For example, with a 128-dimensional embedding space, computing inner products between 1000 query embeddings and 1 million document embeddings only takes hundreds of milliseconds on CPUs, while computing the same scores with cross-attention models takes hours, if not more, even on GPUs. Furthermore, retrieving the closest documents in the embedding space can be performed in sublinear time with the well-studied maximum inner product search (MIPS) algorithms with almost no loss in recall (Shrivastava & Li, 2014; Guo et al., 2016).

Learning One unique advantage of the two-tower retrieval model in comparison with classic IR algorithms is the ability to train it for specific tasks. In this paper, we assume that the training data is presented as relevant "positive" query-document pairs T = {(q_i, d_i)}_{i=1}^{|T|}. Let θ be the model parameters. We estimate the model parameters by maximizing the log likelihood

max_θ Σ_{(q,d)∈T} log p_θ(d|q),

where the conditional probability is defined by the Softmax:

p_θ(d|q) = exp(f_θ(q, d)) / Σ_{d′∈D} exp(f_θ(q, d′)),   (3)

and D is the set of all possible documents. The Softmax involves computing the expensive denominator of Equation (3), a.k.a. the partition function, which scales linearly with the number of documents. In practice, we use the Sampled Softmax, an approximation of the full Softmax where we replace D by a small subset of documents in the current batch, with a proper correcting term to ensure the unbiasedness of the partition function (Bengio & Senécal, 2008). Sampled Softmax has been widely used in language modeling (Chen et al., 2016; Grave et al., 2017), recommendation systems (Yu et al., 2017; Krichene et al., 2019) and extreme classification (Blanc & Rendle, 2018; Reddi et al., 2019).

Since we often have a limited amount of supervised data from the downstream task, it is important to first train the retrieval model with positive pairs T from a set of pre-training tasks. We then fine-tune it with positive pairs T from the downstream task. We will present the set of pre-training tasks we study in Section 3.

1This also includes cosine similarity scoring functions when the embeddings φ(q), ψ(d) are normalized.

Related Works Cer et al. (2018) study the two-tower Transformer model as a universal sentence encoder. The model is learned with multiple tasks including the unsupervised Skip-Thought task (Kiros et al., 2015), the supervised conversation input-response task (Henderson et al., 2017), and the supervised sentence classification SNLI task (Bowman et al., 2015). Humeau et al. (2019) propose the Poly-encoders architecture to balance the computation/expressiveness tradeoff between two-tower models and cross-attention models. Reimers & Gurevych (2019) fine-tune deep two-tower models on two supervised datasets, SNLI and MNLI (Williams et al., 2018), then apply them to other downstream tasks. Unlike all the above works, which consider training the two-tower Transformer models on a limited amount of supervised corpus for sentence classification tasks, we study different pre-training tasks and their contributions in the large-scale retrieval setting.

Another closely related topic is open-domain question answering. Previous works consider using BM25 or other lexical matching methods to retrieve the top-k relevant passages efficiently and then deploy the more expensive cross-attention scoring function to find the answer (Chen et al., 2017; Yang et al., 2017; 2019a). Das et al.
(2019) encode query and document separately with LSTM encoders. They employ a training procedure different from ours and do not consider pre-training. Very recently, Lee et al. (2019) propose to pre-train two-tower Transformer models with the Inverse Cloze Task (ICT) to replace BM25 in the passage retrieval phase. The advantage is that the retriever can be trained jointly with the reader/scorer. Nevertheless, their pre-trained two-tower models do not outperform BM25 on the SQuAD dataset, potentially because the fine-tuning is only performed on the query-tower. Model distillation (Hinton et al., 2015) can be used to compress expensive BERT-like cross-attention models into efficient two-tower Transformer models for large-scale retrieval problems. For example, Tang et al. (2019) demonstrate initial success in distilling the BERT model into a two-tower model with BiLSTM as encoders. The pre-training tasks we study in this paper can be used as additional supervision in the distillation process, and therefore complementary to model distillation. # 3 PRE-TRAINING TASKS OF DIFFERENT SEMANTIC GRANULARITIES As mentioned in Section 2, due to the limited amount of supervised data from downstream tasks, a crucial step of learning deep retrieval models is to pre-train the model with a set of pre-training tasks (we will verify this in Section 4). Sentence-level pre-training tasks have been studied before. One example is reconstructing the surface form of surrounding sentences given the encoded sentence (Le & Mikolov, 2014; Kiros et al., 2015), and another one is discriminating the next sentence from random candidates (Jernite et al., 2017; Logeswaran & Lee, 2018). In this paper, we assume that the pre-training data is defined as positive query-document (q, d) pairs. A good pre-training task should have the following two properties. 1) It should be relevant to the downstream task. For example, when solving the question-answering retrieval problem, the model should capture different granularities of semantics between the query and document. The semantics 4 Published as a conference paper at ICLR 2020 Geoffrey Everest Hinton cc FAs FASC'""! (born 6 December 1947) is an English Canadian cognitive psychologis lachine learning (ML) is the scientific study of algorithms and statistical models that computer systems use to p scientist, most noted for his work on artificial neural networks. Since 2013 he divides his time working for Google e and the University of Toronto.('2'9) d Ronald ms. Hinton was cx a highly cited pape ished in. 1986 ithm for training multi-layer neural networks,!"! although they were not the first to prope used in a wide variety of applications, such as email filtering and computer vision, where it is difficult or infeasible conventional algorithm for effectively performing the task. d by some as a leading fi « (16][17E18](19]/20] The in the deep learning community and is referred to by s @-TeCOd on milestone o 8 AiexNe esione father een Learning e.dramatic image-recoan nileston heA desioned Machine learning is closely related to computational statistics, which focuses on making predictions using comput Alex Krizhevsky!2") for the ImageNet challenge 2012! 1 helped to revolutionize the field of computer vision.!2) Hit of mathematical optimization delivers methods, theory and application domains to the field of machine leaming. 
D awarded the 2018 Turing Prize alongside Yoshua Bengio and Yann LeCun for their work on deep learning.!24) field of study within machine learning, and focuses on exploratory data analysis through unsupervised learning.) application across business problems, machine learning is also referred to as predictive analytics, Contents (show) Contents [show] Education {cat} : Hinton was educated at King's College, Cambridge graduating in 1970, with a Bachelor of Arts in experimental ps Overview [ecit) continued his study at the University of Edinburgh where he was awarded a PhD in artificial intelligence in 1978 fc The name machine learning was coined in 1959 by Arthur Samuel.!§) Tom M. Mitchell provided a widely quoted, m supervised by Christopher Longuet-Higgins.!51l25] definition of the algorithms studied in the machine leaming field: "A computer program is said to learn from experir respect to some class of tasks T and performance measure P if its performance at tasks in T, as measured by P, Career and research {<i} experience E."! This definition of the tasks in which machine learning is concerned offers a fundamentally opera! cs ace pe ree ge eee ee ~ rather than defining the field in cognitive terms. This follows Alan Turing's proposal in his paper "Computing Mach pte his PhD he worked at the University of Sussex, and (after difficulty finding funding in Britain)!®! the Universi! San Diego, and Carnegie Mellon University.!"] He was the founding director of the Gatsby Charitable Foundation Intelligence", in which the question "Can machines think?" is replaced with the question "Can machines do what v entities) can do?".”! In Turing’s proposal the various characteristics that could be possessed by a thinking machir various implications in constructing one are exposed. Neuroscience Unit at University College London, !"! and is currently2”) a professor in the computer science depal qd University of Toronto. He holds a Canada Research Chair in Machine Learning, and is currently an advisor for thy Machines & Brains program at the Canadian Institute for Advanced Research. Hinton taught a free online course Machine learning tasks {ecit} Networks on the education platform Coursera in 2012.28] Hinton joined Google in March 2013 when his compan’ J!"c., was acquired. He is planning to “divide his time between his university research and his work at Google’.(*°) Machine learning tasks are classified into several broad categories. In supervised learning, the algorithm builds a from a set of data that contains both the inputs and the desired outputs. For example, if the task were determininc hell research investigates ways of using neural networks for machine learning, memory, perception and oy contained a certain object, the training data for a supervised learning algorithm would include images with and wit I has authored or co-authored over 200 peer reviewed publications 2590) eee - input), and each image would have a label (the output) designating whether it contained the object. In special cas Figure 2: An illustrative example of the three pre-training tasks where each query q is highlighted in different colors. All queries are paired with the same text block d. Concretely, (q1,d) of ICT is defined locally within a paragraph; (q2,d) of BFS is defined globally within an article; (q3,d) of WLP is defined distantly across two related articles hyper-linked by the Wikipedia entity. 
can be the local context within a paragraph, global consistency within a document, and even semantic relation between two documents. 2) It should be cost-efficient to collect the pre-training data, ideally not requiring additional human supervision. In light of the above requirements, we present three pre-training tasks that emphasize different as- pects of semantics between queries and documents: Inverse Cloze Task (ICT), Body First Selection (BFS), and Wiki Link Prediction (WLP). In specific, BFS and WLP are newly proposed in this paper. The training data for all these tasks can be freely obtained based from Wikipedia without an additional manual labeling process. Figure 2 provides illustrative examples of these tasks. Inverse Cloze Task (ICT) Given a passage p consisting of n sentences, p = {s1, . . . , sn}, the query q is a sentence randomly drawn from the passage, q = si, i ∼ [1, n], and the document d is the rest of sentences, d = {s1, . . . , si−1, si+1, . . . , sn}. See (q1,d) in Figure 2 as an example. This task captures the semantic context of a sentence and was originally proposed by Lee et al. (2019). Body First Selection (BFS) We propose BFS to capture semantic relationship outside of the local paragraph. Here, the query q2 is a random sentence in the first section of a Wikipedia page, and the document d is a random passage from the same page (Figure 2). Since the first section of a Wikipedia article is often the description or summary of the whole page, we expect it to contain information central to the topic. Wiki Link Prediction (WLP) We propose WLP to capture inter-page semantic relation. The query q3 is a random sentence in the first section of a Wikipedia page, and the document d is a passage from another page where there is a hyperlink link to the page of q3 (Figure 2). Intuitively, a hyperlink link indicates relationship between the two Wikipedia pages. Again, we take a sentence from the first section because it is often the description or summary of the topic. Masked LM (MLM) In addition to the above tasks, we also consider the classic masked language model (MLM) pre-training task as a baseline: predict the randomly masked tokens in a sentence. MLM is the primary pre-training task used in BERT (Devlin et al., 2019). 5 Published as a conference paper at ICLR 2020 Pre-training tasks #tokens #pairs avg. #query tokens #doc tokens ICT BFS WLP 11.2B 50.2M 3.3B 17.5M 2.7B 24.9M 30.41 28.02 29.42 193.89 160.46 82.14 Table 1: Data statistics of three pre-training tasks. #query tokens represent average number of tokens per query, and #doc tokens represent average number of tokens per passage. # 4 EXPERIMENTS 4.1 EXPERIMENTAL SETTING The two-tower retrieval model Each tower of the retrieval model follows the architecture and hyper-parameters of the 12 layers BERT-base model. For both towers, the final embedding is gen- erated by applying a linear layer on the hidden state of the [CLS] token. The embedding dimension is 512. The sequence length for the query encoder and document encoder are set to be 64 and 288, respectively. We pre-train the model on 32 TPU v3 chips for 100K steps with an Adam optimizer and batch size of 8192. This process takes about 2.5 days. We use the Adam optimizer with an initial learning rate 1 × 10−4 with the warm-up ratio 0.1, followed by a linear learning rate decay. For fine-tuning, the learning rate of Adam is set to 5 × 10−5 with 2000 training steps and batch size 512. 
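For concreteness, the three paragraph-level tasks can be sketched as simple pair generators over a Wikipedia-like corpus. This is our own illustration, not the authors' code: the dictionary layout of `article` (keys "first_section_sentences", "passages", "linked_from") and the helper names are assumptions, and the exact sentence/passage sampling details of the paper may differ.

```python
import random

def ict_pair(passage_sentences):
    # Inverse Cloze Task: the query is one random sentence, the document is the rest
    # of the same passage.
    i = random.randrange(len(passage_sentences))
    query = passage_sentences[i]
    doc = " ".join(passage_sentences[:i] + passage_sentences[i + 1:])
    return query, doc

def bfs_pair(article):
    # Body First Selection: query from the first section of a page, document is a random
    # passage from the same page.
    query = random.choice(article["first_section_sentences"])
    doc = random.choice(article["passages"])
    return query, doc

def wlp_pair(article, corpus):
    # Wiki Link Prediction: query from the first section of a page, document is a passage
    # from another page that hyperlinks to it ("linked_from" holds the ids of such pages).
    query = random.choice(article["first_section_sentences"])
    linking_article = corpus[random.choice(article["linked_from"])]
    doc = random.choice(linking_article["passages"])
    return query, doc
```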
Pre-training tasks We compare the token-level pre-training task MLM with the three paragraph- level pre-training tasks, ICT, BFS and WLP. The data of ICT, BFS and WLP are generated from the Wikipedia corpus. The data statistics are reported in Table 1. Note that #tokens represents the number of sub-words tokenized by WordPiece (Wu et al., 2016). The pre-training tasks define the positive (q, d) pair for learning the two-tower Transformer models. For ICT, the d is a pair of article title and passage separated by [SEP] symbol as input to the doc-tower. We propose to pre-train the two-tower Transformer models jointly with all three paragraph-level pre- training tasks, hence the name ICT+BFS+WLP. Here the model is pre-trained on one combined set of (q, d) pairs, where each pair is uniformly sampled from the three pre-training tasks in Table 1. See Section 4.2 and 4.3 for its outstanding performance over other baselines. Downstream tasks We consider the Retrieval Question-Answering (ReQA) benchmark, proposed by Ahmad et al. (2019).2 The two QA datasets we consider are SQuAD and Natural Questions. Note that each entry of QA datasets is a tuple (q, a, p), where q is the question, a is the answer span, and p is the evidence passage containing a. Following Ahmad et al. (2019), we split a passage into sentences, p = s1s2 . . . sn and transform the original entry (q, a, p) to a new tuple (q, si, p) where si is the sentence contains the answer span a. The retrieval problem is that given a question q, retrieve the correct sentence and evidence passage pair (s, p) from all candidates. For each passage p, we create a set of candidate pairs (si, p) where i = 1 . . . n, and the retrieval candidate set is built by combining such pairs for all passages. This problem is more challenging than retrieving the evidence passage only since the larger number of candidates to be retrieved. The data statistics of the downstream ReQA benchmark are shown in Table 2. Note that, similar to Ahmad et al. (2019), the ReQA benchmark is not entirely open- domain QA retrieval as the candidates (s, p) only cover the training set of QA dataset instead of entire Wikipedia articles. For the open-domain retrieval experiment, see details in Section 4.4. Evaluation For each dataset, we consider different training/test split of the data (1%/99%, 5%/95% and, 80%/20%) in the fine-tuning stage and the 10% of training set is held out as the validation set for hyper-parameter tuning. The split is created assuming a cold-start retrieval sce- nario where the queries in the test (query, document) pairs are not seen in training. 2Different from (Ahmad et al., 2019), whose goal is to use other large-scale weakly-supervised query- answer pair datasets (e.g. reddit data) to improve the model, the goal of this paper is to study different un- supervised pre-training tasks not identical to the downstream task. Therefore our approaches are not directly comparable to the results presented in their paper. 6 Published as a conference paper at ICLR 2020 ReQA Dataset #query #candidate #tuples #query tokens #doc tokens SQuAD Natural Questions 97,888 74,097 101,951 239,008 99,024 74,097 11.55 9.29 291.35 352.67 Table 2: Data statistics of ReQA benchmark. candidate represents all (sentence, passage) pairs. 
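The recall@k metric used throughout the evaluation can be computed directly from the ranked candidate lists returned by the retriever; a minimal sketch (our own illustration, assuming one gold (sentence, passage) candidate id per query):

```python
def recall_at_k(ranked_candidate_ids, gold_ids, k):
    # ranked_candidate_ids: one ranked list of candidate ids per query.
    # gold_ids: the correct (sentence, passage) candidate id for each query.
    hits = sum(gold in ranked[:k] for ranked, gold in zip(ranked_candidate_ids, gold_ids))
    return hits / len(gold_ids)
```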
Encoder Pre-training task R@1 R@5 R@10 R@50 R@100 1%/99% BM-25 BoW-MLP BoW-MLP Transformer Transformer Transformer 41.86 No Pretraining No Pretraining 0.14 ICT+BFS+WLP 22.55 No Pretraining 0.02 MLM 0.18 ICT+BFS+WLP 37.43 58.00 0.35 41.03 0.06 0.51 61.48 63.64 0.49 49.93 0.08 0.82 70.18 74.15 1.13 69.70 0.31 2.46 85.37 77.91 1.72 77.01 0.54 3.93 89.85 5%/95% BM-25 BoW-MLP BoW-MLP Transformer Transformer Transformer No Pretraining 41.87 No Pretraining 1.13 ICT+BFS+WLP 26.23 No Pretraining 0.17 MLM 1.19 ICT+BFS+WLP 45.90 57.98 2.68 46.49 0.36 3.59 70.89 63.63 3.62 55.68 0.54 5.40 78.47 74.17 7.16 75.28 1.43 12.52 90.49 77.91 9.55 81.89 2.17 17.41 93.64 80%/20% BM-25 BoW-MLP BoW-MLP Transformer Transformer Transformer No Pretraining 41.77 No Pretraining 19.65 ICT+BFS+WLP 32.24 No Pretraining 12.32 MLM 27.34 ICT+BFS+WLP 58.35 57.95 36.31 55.26 26.88 49.59 82.76 63.55 44.19 65.49 34.46 58.17 88.44 73.94 62.40 83.37 53.74 74.89 95.87 77.49 69.19 88.50 61.53 80.33 97.49 Table 3: Recall@k on SQuAD. Numbers are in percentage (%). For the evaluation metric, we focus on recall@k3 because the goal of the retrieval phase is to capture the positives in the top-k results. The retrieval performance can be understood independently of the scoring model used by measuring recall at different k. In fact, in the extreme cases when the scoring model is either oracle or random, the final precision metric is proportional to recall@k. 4.2 MAIN RESULTS Table 3 and Table 4 compare the proposed combination of pre-training tasks, ICT+BFS+WLP, to various baselines on SQuAD and Natural Questions, respectively. In both benchmarks, ICT+BFS+WLP notably outperforms all other methods. This suggests that one should use a two- tower Transformer model with properly designed pre-training tasks in the retrieval stage to replace the widely used BM-25 algorithm. We present some of the detailed findings below. The BM-25 baseline In retrieval, BM-25 is a simple but tough-to-beat unsupervised baseline using token-matching with TF-IDF weights as the scoring function. BM-25 performs especially well for the SQuAD benchmark, as the data collection process and human annotations of this dataset are biased towards question-answer pairs with overlapping tokens (Rajpurkar et al., 2016; Kwiatkowski et al., 2019). For instance, in the limited fine-tuning data scenario (e.g., 1% and 5%), BM-25 outperforms the two-tower transformer models with no pre-training (No Pretraining) or with less- effective pre-training tasks (MLM). This result verifies that BM-25 is a robust retrieval model and therefore widely used in recent works (Chen et al., 2017; Yang et al., 2017; Lee et al., 2019)4. 3The correctness is based on when the system retrieves the gold sentence and evidence paragraph pair , not just any paragraph containing the answer text. 4Our BM-25 results are consistent with Ahmad et al. (2019). Their numbers are slightly higher because they consider passage-level retrieval, which has smaller candidate set compared to our sentence-level retrieval. 7 Published as a conference paper at ICLR 2020 Encoder architecture We justify the use of Transformer as encoders by comparing it with a shallow bag-of-word MLP model (BoW-MLP). Specifically, BoW-MLP looks up uni-grams from the embedding table5, aggregates the embeddings with average pooling, and passes them through a shallow two-layer MLP network with tanh activation to generate the final 512-dimensional query/document embeddings. 
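For reference, the BM-25 baseline discussed above scores a query against a document by token matching with TF-IDF-style weights. Below is a minimal sketch of the standard Okapi BM-25 formulation (our own illustration; k1 and b are the usual free parameters, and the paper does not specify which exact BM-25 variant it uses):

```python
import math
from collections import Counter

def bm25_score(query_tokens, doc_tokens, doc_freq, num_docs, avg_doc_len, k1=1.2, b=0.75):
    # doc_freq: dict mapping a token to the number of documents in the corpus containing it.
    tf = Counter(doc_tokens)
    score = 0.0
    for term in query_tokens:
        if term not in tf:
            continue
        df = doc_freq.get(term, 0)
        idf = math.log(1.0 + (num_docs - df + 0.5) / (df + 0.5))
        norm = tf[term] * (k1 + 1.0) / (
            tf[term] + k1 * (1.0 - b + b * len(doc_tokens) / avg_doc_len))
        score += idf * norm
    return score
```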
For fair comparison, the BoW-MLP encoder has a comparable model size to the Transformer encoder (i.e., 128M v.s. 110M parameters, slightly favorable to BoW-MLP encoder). With a properly designed pre-training task (e.g., ICT+BFS+WLP), the Transformer encoder con- siderably outperforms its shallow counterpart (BoW-MLP), suggesting that the former benefits more from the unsupervised pre-training tasks. On the other hand, without any pre-training, the perfor- mance of the Transformer encoder is worse than BoW-MLP encoder, possibly because the former is over-fitting on the limited amount of labeled fine-tuning data. Pre-training tasks When pre-training the two-tower Transformer model, we compare the pre- training tasks to two baselines: No Pretraining and MLM. No Pretraining represents random ini- tializing the model, and MLM is the token-level masked-LM task introduced in Section 3. On both datasets, the token-level pre-training task MLM only marginally improves over the no- pretraining baseline (No Pretraining). In contrast, combining the paragraph-level pre-training tasks ICT+BFS+WLP provides a huge boost on the performance. This verifies our assumption that the design of task-related pre-training tasks is crucial. The performance of adding individual pre-training tasks is presented in the next section. train/test ratio Encoder Pre-training task R@1 R@5 R@10 R@50 R@100 1%/99% BM-25 BoW-MLP BoW-MLP Transformer Transformer Transformer 4.99 0.28 9.22 0.07 0.18 ICT+BFS+WLP 17.31 No Pretraining No Pretraining ICT+BFS+WLP No Pretraining MLM 11.91 0.80 24.98 0.19 0.56 43.62 15.41 1.08 33.36 0.28 0.81 55.00 24.00 2.02 53.67 0.56 1.95 76.59 27.97 2.66 61.30 0.85 2.98 82.84 5%/95% BM-25 BoW-MLP BoW-MLP Transformer Transformer Transformer No Pretraining 5.03 No Pretraining 1.36 ICT+BFS+WLP 11.40 No Pretraining 0.37 MLM 1.10 ICT+BFS+WLP 21.46 11.96 3.77 30.64 1.07 3.42 51.03 15.47 4.98 40.63 1.40 4.89 62.99 24.04 8.56 62.95 2.73 10.49 83.04 28.00 10.77 70.85 3.82 14.37 88.05 80%/20% BM-25 BoW-MLP BoW-MLP Transformer Transformer Transformer No Pretraining 4.93 No Pretraining 9.78 ICT+BFS+WLP 13.58 No Pretraining 7.49 MLM 16.74 ICT+BFS+WLP 30.27 11.52 26.76 37.78 20.11 40.48 63.97 14.96 34.16 50.40 25.40 49.53 75.85 23.64 50.34 76.11 38.26 67.91 91.84 27.77 56.44 82.98 43.75 73.91 94.60 Table 4: Recall@k on Natural Questions. Numbers are in percentage (%). 4.3 ABLATION STUDY We conduct a more thorough ablation study on Natural Questions involving (1) the number of layers in Transformer; (2) different pre-training tasks; and (3) dimension of the embedding space. The result is presented in Table 5. Index 1, 2, and 3 show the individual performance of three pre-training tasks. All of these tasks are much more effective than MLM. Among them, ICT has the best performance, followed by BFS, and then WLP. This suggests that the (query, document) pairs defined by local context within passage are suitable for the ReQA task. 5We empirically found that adding bi-grams does not further improve the performance on these tasks possi- bly due to over-fitting. 
8 Published as a conference paper at ICLR 2020 Index #layer Ablation Configuration Pre-training task R@100 on different train/test ratio emb-dim 1% 5% 10% 80% 1 2 3 4 4 4 ICT BFS WLP 128 128 128 77.13 72.99 56.94 82.03 78.34 68.08 84.22 80.47 72.51 91.88 89.82 86.15 4 5 6 7 12 12 12 12 No Pretraining MLM ICT ICT+BFS+WLP 128 128 128 128 0.72 2.99 79.80 81.31 3.88 12.21 85.97 87.08 6.94 22.97 88.13 89.06 38.94 71.12 93.91 94.37 8 9 12 12 ICT+BFS+WLP ICT+BFS+WLP 256 512 81.48 82.84 87.74 88.05 89.54 90.03 94.73 94.60 Table 5: Ablation study on Natural Questions based on Recall@100. Index 9 represents the pro- posed method in Table 4. Also note from Index 6 and 7, ICT+BFS+WLP pre-training is better than ICT with 1.5% absolute improvement over ICT in the low-data regime. This reflects that, when theres no sufficient down- stream training data, more globally pre-training tasks is beneficial as it encodes multi-hop reasoning priors such as different passages within the same article (BFS) or even going beyond to different articles linked by the same entities (WLP). Finally, The advantage of increasing number of layers is manifest by comparing Index 1 and Index 6, while Index 7, 8 and 9 show the benefit of increasing the dimension of the embedding space. 4.4 EVALUATION OF OPEN-DOMAIN RETRIEVAL We consider the open-domain retrieval setting by augmenting the candidate set of the ReQA bench- mark with large-scale (sentence, evidence passage) pairs extracted from general Wikipedia articles. In particular, we preprocess/sub-sample the open-domain Wikipedia retrieval set of the DrQA pa- per (Chen et al., 2017) into one million (sentence, evidence passage) pairs, and add this external 1M candidate pairs into the existing retrieval candidate set of the ReQA benchmark. train/test ratio Pre-training task R@1 R@5 R@10 R@50 R@100 1%/99% 3.70 14.18 ICT+BFS+WLP 13.19 BM-25 ICT 9.58 37.36 37.61 12.69 48.08 48.77 20.27 69.23 70.43 23.83 76.01 77.20 5%/95% 3.21 17.94 ICT+BFS+WLP 17.62 BM-25 ICT 8.62 45.65 45.92 11.50 57.11 57.75 18.59 76.87 78.14 21.78 82.60 83.78 80%/20% 3.12 24.89 ICT+BFS+WLP 25.41 BM-25 ICT 8.45 57.89 59.36 11.18 69.86 71.12 18.05 87.67 88.25 21.30 91.29 91.71 Table 6: Open-domain retrieval results of Natural Questions dataset, where existing candidates are augmented with additional 1M retrieval candidates (i.e., 1M of (s, p) candidate pairs) extracted from open-domain Wikipedia articles. The results of open-domain retrieval on Natural Questions are presented in Table 6. Firstly, we see that the two-tower Transformer models pretrained with ICT+BFS+WLP and ICT substantially out- perform the BM-25 baseline. Secondly, ICT+BFS+WLP pre-training method consistently improves the ICT pre-training method in most cases. Interestingly, the improvements are more noticeable at R@50 and R@100, possibly due to that the distant multi-hop per-training supervision induces better retrieval quality at the latter part of the rank list. Finally, we conclude that the evaluation results of the 1M open-domain retrieval are consistent with our previous empirical evaluation on the ReQA benchmark with smaller retrieval candidate sets (Section 4.2). 9 Published as a conference paper at ICLR 2020 # 5 CONCLUSION We conducted a comprehensive study on how various pre-training tasks help in the large-scale re- trieval problem such as evidence retrieval for question-answering. 
We showed that the two-tower Transformer models with random initialization (No Pretraining) or the unsuitable token-level pre-training task (MLM) are no better than the robust IR baseline BM-25 in most cases. With properly designed paragraph-level pre-training tasks, including ICT, BFS and WLP, the two-tower Transformer models can considerably improve over the widely used BM-25 algorithm.

For future work, we plan to study how the pre-training tasks apply to other types of encoder architectures, how the pre-training data can be generated from corpora other than Wikipedia, and how pre-training compares with different types of regularization.

# REFERENCES

Amin Ahmad, Noah Constant, Yinfei Yang, and Daniel Cer. ReQA: An evaluation for end-to-end answer retrieval models. In Proceedings of the 2nd Workshop on Machine Reading for Question Answering, pp. 137–146, Hong Kong, China, November 2019. Association for Computational Linguistics. doi: 10.18653/v1/D19-5819. URL https://www.aclweb.org/anthology/D19-5819.

Yoshua Bengio and Jean-Sébastien Senécal. Adaptive importance sampling to accelerate training of a neural probabilistic language model. IEEE Transactions on Neural Networks, 19(4):713–722, 2008.

Guy Blanc and Steffen Rendle. Adaptive sampled softmax with kernel based sampling. In Proceedings of the 35th International Conference on Machine Learning (ICML), pp. 590–599, 2018.

Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. A large annotated corpus for learning natural language inference. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 632–642, 2015.

Daniel Cer, Yinfei Yang, Sheng-yi Kong, Nan Hua, Nicole Limtiaco, Rhomni St John, Noah Constant, Mario Guajardo-Cespedes, Steve Yuan, Chris Tar, et al. Universal sentence encoder. In ACL, 2018.

Wei-Cheng Chang, Hsiang-Fu Yu, Kai Zhong, Yiming Yang, and Inderjit Dhillon. X-BERT: eXtreme multi-label text classification with BERT. arXiv preprint arXiv:1905.02331, 2019.

Olivier Chapelle and Yi Chang. Yahoo! learning to rank challenge overview. In Proceedings of the learning to rank challenge, pp. 1–24, 2011.

Danqi Chen, Adam Fisch, Jason Weston, and Antoine Bordes. Reading Wikipedia to answer open-domain questions. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (ACL) (Volume 1: Long Papers), pp. 1870–1879, 2017.

Welin Chen, David Grangier, and Michael Auli. Strategies for training large vocabulary neural language models. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (ACL), pp. 1975–1985, 2016.

Arpita Das, Harish Yenala, Manoj Chinnakotla, and Manish Shrivastava. Together we stand: Siamese networks for similar question retrieval. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (ACL) (Volume 1: Long Papers), pp. 378–387, 2016.

Rajarshi Das, Shehzaad Dhuliawala, Manzil Zaheer, and Andrew McCallum. Multi-step retriever-reader interaction for scalable open-domain question answering. In Proceedings of the International Conference on Learning Representations (ICLR), 2019.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (NAACL), 2019.
10 Published as a conference paper at ICLR 2020 Edouard Grave, Armand Joulin, Moustapha Ciss´e, Herv´e J´egou, et al. Efficient softmax approxima- tion for gpus. In Proceedings of the 34th International Conference on Machine Learning (ICML), pp. 1302–1310. JMLR. org, 2017. Ruiqi Guo, Sanjiv Kumar, Krzysztof Choromanski, and David Simcha. Quantization based fast inner product search. In Proceedings of the 19th International Conference on Artificial Intelligence and Statistics (AISTATS), pp. 482–490, 2016. Matthew Henderson, Rami Al-Rfou, Brian Strope, Yun-hsuan Sung, L´aszl´o Luk´acs, Ruiqi Guo, Sanjiv Kumar, Balint Miklos, and Ray Kurzweil. Efficient natural language response suggestion for smart reply. arXiv preprint arXiv:1705.00652, 2017. Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531, 2015. Minghao Hu, Yuxing Peng, Zhen Huang, and Dongsheng Li. Retrieve, read, rerank: Towards end-to- end multi-document reading comprehension. In Proceedings of InProceedings of the 57th Annual Meeting of the Association for Computa-tional Linguistics (ACL), 2019. Samuel Humeau, Kurt Shuster, Marie-Anne Lachaux, and Jason Weston. Poly-encoders: Trans- former architectures and pre-training strategies for fast and accurate multi-sentence scoring. arXiv preprint arXiv:1905.01969, 2019. Himanshu Jain, Venkatesh Balasubramanian, Bhanu Chunduri, and Manik Varma. Slice: Scalable linear extreme classifiers trained on 100 million labels for related searches. In Proceedings of the Twelfth ACM International Conference on Web Search and Data Mining, pp. 528–536. ACM, 2019. Yacine Jernite, Samuel R Bowman, and David Sontag. Discourse-based objectives for fast unsuper- vised sentence representation learning. arXiv preprint arXiv:1705.00557, 2017. Ryan Kiros, Yukun Zhu, Ruslan R Salakhutdinov, Richard Zemel, Raquel Urtasun, Antonio Tor- In Advances in Neural Information Processing ralba, and Sanja Fidler. Skip-thought vectors. Systems (NIPS), pp. 3294–3302, 2015. Walid Krichene, Nicolas Mayoraz, Steffen Rendle, Li Zhang, Xinyang Yi, Lichan Hong, Ed Chi, and John Anderson. Efficient training on very large corpora via gramian estimation. In Proceedings of the International Conference on Learning Representations (ICLR), 2019. Tom Kwiatkowski, Jennimaria Palomaki, Olivia Redfield, Michael Collins, Ankur Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Jacob Devlin, Kenton Lee, et al. Natural questions: a benchmark for question answering research. Transactions of the Association for Computational Linguistics (TACL), 7:453–466, 2019. Quoc Le and Tomas Mikolov. Distributed representations of sentences and documents. In Interna- tional conference on machine learning (ICML), pp. 1188–1196, 2014. Kenton Lee, Ming-Wei Chang, and Kristina Toutanova. Latent retrieval for weakly supervised open domain question answering. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics (ACL), July 2019. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. RoBERTa: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692, 2019. Lajanugen Logeswaran and Honglak Lee. An efficient framework for learning sentence represen- In Proceedings of the International Conference on Learning Representations (ICLR), tations. 2018. Pierre-Emmanuel Mazar´e, Samuel Humeau, Martin Raison, and Antoine Bordes. 
Training millions of personalized dialogue agents. In EMNLP, 2018. Tri Nguyen, Mir Rosenberg, Xia Song, Jianfeng Gao, Saurabh Tiwary, Rangan Majumder, and Li Deng. MS MARCO: A human-generated machine reading comprehension dataset. 2016. 11 Published as a conference paper at ICLR 2020 Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. Squad: 100,000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 2383–2392, 2016. Sashank J Reddi, Satyen Kale, Felix Yu, Dan Holtmann-Rice, Jiecao Chen, and Sanjiv Kumar. In Proceedings of the 22nd Stochastic negative mining for learning with large output spaces. International Conference on Artificial Intelligence and Statistics (AISTATS), 2019. Nils Reimers and Iryna Gurevych. Sentence-BERT: Sentence embeddings using siamese BERT- networks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing (EMNLP), 2019. Stephen Robertson, Hugo Zaragoza, et al. The probabilistic relevance framework: BM25 and be- yond. Foundations and Trends®) in Information Retrieval, 3(4):333-389, 2009. Anshumali Shrivastava and Ping Li. Asymmetric LSH (ALSH) for sublinear time maximum inner product search (mips). In Advances in Neural Information Processing Systems (NIPS), pp. 2321– 2329, 2014. Raphael Tang, Yao Lu, Linqing Liu, Lili Mou, Olga Vechtomova, and Jimmy Lin. Distilling task- specific knowledge from BERT into simple neural networks. arXiv preprint arXiv:1903.12136, 2019. Eleni Triantafillou, Richard Zemel, and Raquel Urtasun. Few-shot learning through an information retrieval lens. In Advances in Neural Information Processing Systems, pp. 2255–2265, 2017. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. In NIPS, 2017. Adina Williams, Nikita Nangia, and Samuel Bowman. A broad-coverage challenge corpus for sen- tence understanding through inference. In Proceedings of the 2018 Conference of the North Amer- ican Chapter of the Association for Computational Linguistics : Human Language Technologies (NAACL-HLT 2018), Volume 1 (Long Papers), June 2018. Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. Google’s neural machine trans- arXiv preprint lation system: Bridging the gap between human and machine translation. arXiv:1609.08144, 2016. Peilin Yang, Hui Fang, and Jimmy Lin. Anserini: Enabling the use of lucene for information retrieval In Proceedings of the 40th International ACM SIGIR Conference on Research and research. Development in Information Retrieval, pp. 1253–1256. ACM, 2017. Wei Yang, Yuqing Xie, Aileen Lin, Xingyu Li, Luchen Tan, Kun Xiong, Ming Li, and Jimmy Lin. In InProceedings of the 2019 End-to-end open-domain question answering with BERTserini. Conference of the North American Chapter of the Association for Computational Linguistics : Human Language Technologies (NAACL-HLT 2019): Demonstrations, 2019a. Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, and Quoc V Le. XLNet: Generalized autoregressive pretraining for language understanding. In NIPS, 2019b. Hsiang-Fu Yu, Mikhail Bilenko, and Chih-Jen Lin. Selection of negative samples for one-class matrix factorization. In Proceedings of the 2017 SIAM International Conference on Data Mining, pp. 363–371. SIAM, 2017. 12
{ "id": "1705.00557" }
2002.04013
Towards Crowdsourced Training of Large Neural Networks using Decentralized Mixture-of-Experts
Many recent breakthroughs in deep learning were achieved by training increasingly larger models on massive datasets. However, training such models can be prohibitively expensive. For instance, the cluster used to train GPT-3 costs over \$250 million. As a result, most researchers cannot afford to train state of the art models and contribute to their development. Hypothetically, a researcher could crowdsource the training of large neural networks with thousands of regular PCs provided by volunteers. The raw computing power of a hundred thousand \$2500 desktops dwarfs that of a \$250M server pod, but one cannot utilize that power efficiently with conventional distributed training methods. In this work, we propose Learning@home: a novel neural network training paradigm designed to handle large amounts of poorly connected participants. We analyze the performance, reliability, and architectural constraints of this paradigm and compare it against existing distributed training techniques.
http://arxiv.org/pdf/2002.04013
Max Ryabinin, Anton Gusev
cs.DC, cs.LG, stat.ML
Advances in Neural Information Processing Systems, 2020. Code URL: https://github.com/mryab/learning-at-home. 16 pages, 6 figures
Advances in Neural Information Processing Systems 33 (2020) 3659-3672
cs.DC
20200210
20201021
# Towards Crowdsourced Training of Large Neural Networks using Decentralized Mixture-of-Experts

Max Ryabinin∗ Yandex, National Research University Higher School of Economics [email protected] Anton Gusev Independent [email protected]

# Abstract

Many recent breakthroughs in deep learning were achieved by training increasingly larger models on massive datasets. However, training such models can be prohibitively expensive. For instance, the cluster used to train GPT-3 costs over $250 million2. As a result, most researchers cannot afford to train state of the art models and contribute to their development. Hypothetically, a researcher could crowdsource the training of large neural networks with thousands of regular PCs provided by volunteers. The raw computing power of a hundred thousand $2500 desktops dwarfs that of a $250M server pod, but one cannot utilize that power efficiently with conventional distributed training methods. In this work, we propose Learning@home: a novel neural network training paradigm designed to handle large amounts of poorly connected participants. We analyze the performance, reliability, and architectural constraints of this paradigm and compare it against existing distributed training techniques.

# Introduction

Our investigation begins with a thought experiment. Imagine a deep neural network with capacity 1000 times greater than today’s most powerful architectures: for example, a language model trained on all digitally available texts or a generative model for all images ever uploaded to the Internet. How can we train such a model?

Viewed from a historical perspective, the 1000-fold increase in capacity is not unrealistic. Over the past decade, the deep learning community has made remarkable progress by training large models on abundant data, and the scale of those models keeps growing. Since the advent of the ImageNet challenge [1] with 1.3M labeled images, the typical size of convolutional neural networks increased from a few megabytes to hundreds of megabytes [2, 3, 4]. Recent studies report even larger models for datasets with hundreds of millions of images [5, 6].

Another trend from natural language processing is to train large Transformer-like language models [7, 8, 9]. The data for this task is nearly unlimited, allowing researchers to train models with tens or even hundreds of gigabytes of parameters [10, 11, 12, 13]. While we may not need the 1000-fold increase at the moment, planning for it will prepare us for the next big leap in model capacity.

To be specific, let us focus on training large Transformer networks for the language modeling task. At the time of writing, the largest conventional model for that task is GPT-3 with 175 billion parameters. Scaling it up 1000 times gives us 175 trillion; depending on whether you use single or half-precision, this requires 300–600 terabytes of memory just to store the model. No modern mass-produced hardware accelerator is up to such a task. Even high-end servers with 16x V100 accelerators can store only 0.15% of that model in combined GPU memory, let alone train it.

∗Corresponding author. 2A conservative estimate based on https://blogs.microsoft.com/ai/openai-azure-supercomputer

34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada.

The dominant way of growing neural network size has so far been to scale up: deploy more powerful computational accelerators in specialized tightly interconnected clusters.
However, this approach will only work up to a point. Models such as T-NLG [13] and Megatron-LM [11] were already trained on DGX-SuperPOD — a supercomputer with hundreds of Tesla V100 GPUs spread over tens of servers. As for GPT-3 [10], a single training run was estimated to cost 4.6–12 million dollars [14, 15].

Even today, the need for costly hardware weighs heavily on the research community. Most researchers cannot contribute to the development of large neural networks because conducting the necessary experiments would be too expensive for them. If we continue to increase the model size by scaling up, eventually the only labs that can conduct competitive research will be those with massive budgets.

However, there is another solution: to scale out. Instead of using a supercomputer, researchers could crowdsource the computation from volunteers with regular PCs. This paradigm is known as volunteer computing and was successfully applied to solve problems in biology [16], high energy physics [17] and other subject areas. While a single volunteer PC may be slow and unreliable, the combined floating-point performance of such projects is on par with the largest supercomputers [18].

The main challenge of volunteer computing is how to utilize this performance. Unlike server pods, consumer-grade PCs communicate over the Internet, which is significantly slower, especially in terms of latency. They are also more prone to failures as they lack many reliability features of their server-grade counterparts. Therefore, volunteer computing was traditionally used for tasks that have a high computation-to-communication ratio and can recover from individual node failures. Unfortunately, existing paradigms of distributed training require nodes to continuously transfer large amounts of intermediate data [19, 20], making them unsuitable for volunteer computing.

In this work, we take a different approach. Instead of adopting the existing distributed training strategies, we identify the advantages of volunteer computing and design a new strategy that capitalizes on them. We summarize the contributions of our paper as follows:

• We propose Decentralized Mixture of Experts (DMoE) — a layer designed for training with vast amounts of unreliable consumer-grade hardware;
• We describe a framework for training large neural networks composed of DMoE layers;
• We confirm the efficiency and reliability of this approach using formal guarantees and experiments;
• The PyTorch source code that can be used to reproduce our results is available online3.

Figure 1: High-level scheme of Decentralized Mixture of Experts. See Section 3 for details.

# 2 Related work

# 2.1 Volunteer computing

Using volunteer hardware has long been a viable alternative to high-performance computing. Since the development of BOINC [21], research organizations with sufficient public outreach have been able to run massive scientific computations on devices provided by volunteers. Successful projects such as Folding@home can have over 10^5 active participants, rivaling the floating-point performance of the world’s fastest supercomputers4. In fact, Folding@home was the first “supercomputer” to reach both 1 and 10 petaflops milestones [22].

However, unlike traditional HPC, the volunteer nature of these projects imposes some additional limitations. First, the majority of volunteers are only available part-time.
For instance, a participant can provide an office workstation that only contributes compute outside of business hours. Second, volunteer hardware is heterogeneous: different nodes may have different performance, memory limits, and even operating systems. Finally, participants usually communicate over the Internet, which is 2–3 orders of magnitude slower than typical HPC connections. As a result, both compute nodes and communication channels are not nearly as reliable as in traditional supercomputers. 3https://github.com/mryab/learning-at-home 4In January 2019, Folding@home reported 146,091 teraflops; in November 2019, the top-1 supercomputer “Summit” reported 148,600 teraflops; see top500.org/lists/2019/11 . 2 Due to the limitations mentioned above, volunteer computing works best for tasks that can be split into many independent chunks. A single Folding@home task is to run a physical simulation of a protein for a specified number of frames. Together, volunteers can perform hundreds of thousands of concurrent tasks and only need to communicate with the server to submit their results. Other projects like SETI@home and Einstein@home follow a similar pattern. Based on the existing volunteer computing projects, we formulate the following usage scenario: • Large pool of weak computers: the infrastructure consists of 103 ∼ 106 heterogeneous PCs5; • Communication: nodes communicate with speed and reliability of a home internet connection6; • Frequent node failures: a compute node may fail to process a task for a variety of reasons. We expect 5–20% of computers to have at least one failure a day under normal operating conditions. # 2.2 Distributed training To analyze the existing distributed training approaches from the perspective of volunteer computing, we broadly divide them into several categories. Synchronous data parallel training [25]. Each worker stores a copy of model parameters, comput- ing gradients for a fraction of the training batch. The gradients are then averaged across workers and applied to the model, making up the same update on all machines. Due to its simplicity and scalability, this method has been widely used to reduce the training time of large neural networks to the order of minutes [26, 27]. However, with low-end or midrange hardware it is not always possible to store the entire model on each worker. In addition, gradient communication, even when overlapped with computation, requires a high-speed connection between all participants, often faster than hundreds of megabytes per second, which is unrealistic when considering typical household Internet connections. Asynchronous training [28, 29] usually involves a single parameter server and multiple compute nodes fetching the latest parameters, processing batches, and submitting updates back to the server. This technique improves worker throughput, but this improvement comes at a cost. If several workers submit simultaneous updates, they might get applied in an arbitrary order, which leads to the issue of stale gradients [30] and possibly hinders model convergence. Model parallel training. Each node stores a fraction of model layers, each training batch is processed by all nodes in a sequential order determined by the layer distribution scheme. The training batch can be divided into several micro-batches and processed in a pipeline fashion, significantly increasing hardware utilization [4, 31, 32, 33]. Unlike the two previous paradigms, this method allows training models that exceed the memory limit of any individual worker. 
Notable examples of successful model parallel training for large neural networks are [4] and [11], yet these systems also have a high-speed network between workers. On top of that, model parallelism is highly vulnerable to node and network failures: if a single worker in a chain turns off or stops sending outputs, the training stops entirely. It is possible to combine data and model parallelism to mitigate the outlined issues to some degree, but the requirement for fast worker interconnect holds even in that case.

In light of this, the method we design has to maintain high throughput even in the presence of slow and unreliable network connections, possibly sacrificing the latency (time to process a given batch) as a necessary tradeoff. This constraint may be justified by the following observation: the wall-clock training time of a neural network (with model and optimizer fixed) mostly depends on how many batches it processes per second. As we show in Section 4.2, the effect of stale gradients can be mitigated with the right architecture. We summarize the desired properties in Table 1.

Federated learning. The problem of utilizing large quantities of consumer devices for training a single model has also been discussed within the context of data-private learning. Federated learning [34] attempts to mitigate the issue by keeping the data on devices, training a local version of the model, and sending only the parameter updates. These updates are encrypted so that the server can only decrypt their average across several devices.

5Typical specifications: 2–8 CPU cores, 4–16GB RAM, and a single customer-grade GPU with 2–12GB of memory and 4–14 float32 TFLOPS (based on https://pcpartpicker.com and https://techpowerup.com)
6We assume 20–250ms latency and 100Mbps symmetric bandwidth, 0.33% packet loss based on [23, 24]

Table 1: Comparison of distributed training schemes in the volunteer computing context. “Desired” denotes the algorithm with properties that would be beneficial for this setting. “Only workers” means that the system has central components that are not fault-tolerant.

Scheme | Model size limit | Training throughput | Scalability | Fault tolerance | Worker hot-join | Network bandwidth | Network latency
Data parallel | Worker | High | Medium | Full | Yes | High | Low
Asynchronous | Worker | High | High | Only workers | Yes | Medium | Any
Model parallel | System | Medium | Low | No | No | High | Low
Federated | Worker | Low | High | Only workers | Yes | Low | Any
Desired | System | High | High | Full | Yes | Low | Any

Unsurprisingly, federated learning sacrifices performance for privacy. Secure aggregation procedures [35] require multiple workers to communicate and scale quadratically with their number. These properties hardly align with the scenario from Section 2.1, making federated learning a poor fit for jointly training large models.

Deep learning with volunteer computing. To the best of our knowledge, there are three projects that use volunteer computing for training neural networks. The first work [36] leverages volunteer resources for evaluation of CNN architectures generated by evolution algorithms; each model is trained on a single device. The second study [37] relies on standard asynchronous training and is therefore inapplicable to models that do not fit into a single consumer-grade GPU. Moreover, the architecture described in that study is only partially decentralized, relying on a centralized parameter server that communicates with all nodes.
Lastly, the project known as Leela Chess Zero [38] relies on volunteer hardware to play massive amounts of chess games for generating self-play data used in reinforcement learning. However, the model itself is trained on a single central server.

Our primary insight from this section is that existing methods for training general large neural networks do not fit well into the volunteer computing scenario. However, there is a subclass of deep learning architectures which is much better suited for this task.

# 2.3 Mixture-of-Experts

Mixture-of-Experts (MoE) was first proposed almost three decades ago as a method to train multiple neural networks (“experts”) for a common task [39]. The intent is for each expert to specialize in making predictions for a small subset of data. Presented with an input, MoE first determines which experts are best suited to process that input using a separate gating function. Then it applies the chosen experts and aggregates their outputs into the final prediction. This work has sparked many follow-ups that reveal different MoE structures [40, 41, 42, 43] and individual expert types [44, 45].

A subsequent study [46] demonstrates that Mixture-of-Experts can be used as a layer within larger neural networks and trained jointly by backpropagation. Depending on the task, individual experts can utilize convolutional, recurrent, or other specialized layers. Such MoE can have a large number of experts, but it only needs to compute a few of them to process any given input.

Shazeer et al. [47] (and later [48]) brought that idea to the extreme by training “outrageously” large mixtures with thousands of experts. The drastic increase in capacity allows the authors to achieve superior performance in large-scale machine translation and language modeling. The paper also addresses problems that arise with increased mixture size. When trained naïvely, the gating function learns to use a small fraction of available experts for all inputs, not taking full advantage of the available capacity. The authors alleviate this issue by adding a regularization term that promotes “load-balancing” across all experts.

However, scaling this approach from thousands to millions of experts reveals additional problems in the design of a gating function. In order to choose the most appropriate experts for the task, MoE predicts a “priority” value for each expert and selects the ones with the highest priority. As the number of experts approaches millions, such a gating function itself becomes computationally intractable, especially in our decentralized setting.

A popular solution to this problem is to structure the set of experts in a search-friendly way. For instance, Hierarchical Mixture-of-Experts [40] organizes experts in a tree-like structure. Selecting the best experts is then reduced to a beam search over this tree, which scales logarithmically in the number of experts. A more recent study by Lample et al. [49] explores this idea at scale by organizing over a million keys in a factorized 1024-by-1024 grid. For this grid, the gating function only needs to predict two vectors of size 1024. This work also demonstrates that such layers can benefit Transformer models in the masked language modeling task.

However, these works require a centralized infrastructure for training. When the gating function picks appropriate experts for the input at hand, it must somehow find these experts across all nodes.
In our scenario, even maintaining the dynamic “address book” of all active experts would be infeasible for any single participant.

# 2.4 Distributed Hash Tables

Fortunately, there is a way to implement bookkeeping in a decentralized system — the distributed hash table (DHT). This is a family of distributed data structures that store key-value pairs across multiple computers in a network. A single computer within such a structure only needs to “know” O(log N) out of N computers; at the same time it can look up any key with at most O(log N) requests to its peers. There are several DHT variants, but they all have common properties:

• Decentralization: nodes form and maintain the DHT without any central coordination;
• Scalability: the DHT can scale to millions of active nodes that are continually joining and leaving;
• Fault tolerance: a failure in one or a few nodes does not affect DHT integrity and availability.

A DHT-like protocol was first proposed in 1998 by [51] and popularized in the early 2000s by four protocols: CAN [52], Chord [53], Pastry [54] and Tapestry [55]. By far, the most popular DHT variation is Kademlia [56] with numerous applications such as BitTorrent, I2P, and Ethereum. A more recent work [57] further improves theoretical performance for either lookup time or the number of connections; however, this version is less widespread due to being significantly harder to implement.

# 3 Learning@home

Our main idea is to use the existing properties of mixture-of-experts and distributed hash tables to work around the limitations of volunteer computing. We begin with a method for distributed training of MoE layers, then extend it to provide fault tolerance and decentralized bookkeeping.

# 3.1 Decentralized Mixture-of-Experts

The fundamental building block of our approach is Decentralized Mixture-of-Experts (DMoE) — a layer that contains multiple independent “expert” sub-networks distributed over a pool of workers. In addition to experts, each worker has a gating function: a lightweight sub-network that selects experts depending on the input. Similarly to regular mixture-of-experts, DMoE is a general-purpose layer that can process any input type by using the appropriate experts (e.g., convolutional or attentive).

Workers within the DMoE layer interact using the Kademlia DHT protocol (Section 2.4). This DHT stores metadata, such as expert weights and worker status. Figure 2 explains DMoE inference.

Figure 2: Forward and backward passes for Decentralized Mixture of Experts. (The figure depicts the trainer process choosing experts with the gating function and locating their workers using the DHT, sending inputs for a forward pass through the responding experts, aggregating the outputs of the responding experts, and, on the backward pass, sending inputs and gradients so that the responding experts update their parameters; available, selected and failed experts are marked.)

This procedure takes at most O(k log N) DHT queries to locate the chosen experts and k direct interactions with these experts to do the actual processing. As long as k < N, we can increase the total number of experts without compromising the inference speed. Furthermore, we argue that DMoE layers automatically solve most of the issues that arise in the volunteer computing scenario.

Fault tolerance. If some of the k chosen experts fail to respond due to a hardware or network error, DMoE can exclude those experts from averaging.
The effect of such exclusion is similar to using Dropout [58] with regular mixture-of-experts. As a side effect, training DMoE on a faulty infrastructure will automatically adapt the mixture to the failure points of that infrastructure.

Volunteer hardware. Compute nodes can serve different numbers of experts based on their hardware capabilities. If one node leaves the network, another can take its place by retrieving the latest expert checkpoints from the DHT.

Load balancing. Mixture-of-experts layers can be regularized to balance the rate at which they select each expert in the mixture [47, 49]. Originally designed to improve MoE quality, this regularization has a side-effect of improving resource utilization by balancing the computation load between workers.

Asynchronous training. Due to communication latency in distributed systems, a single input can take a long time to process. The traditional solution is to train asynchronously [37]. Instead of waiting for the results on one training batch, a worker can start processing the next batch right away. This approach can significantly improve hardware utilization at the cost of stale gradients.

Fortunately, Mixture-of-Experts accumulates staleness at a slower pace than regular neural networks. Only a small subset of all experts processes a single input; therefore, two individual inputs are likely to affect completely different experts. In that case, updating expert weights for the first input will not introduce staleness for the second one. We elaborate on this claim in Section 4.2.

# 3.2 Structured Gating Function

Since DMoE can use up to millions of experts, the gating function can no longer iterate over each expert in the mixture. Furthermore, the nodes in such a system are continually joining and leaving. Consequently, the expert selection procedure cannot rely on the availability of any individual node.

With this in mind, we propose a gating function inspired by product key layers [49]. First, we organize experts into a d-dimensional grid. Each expert f is associated with a unique tuple of integers: uid(f) = (u_0, u_1, ..., u_{d-1}), with u_i ∈ [0, M). The grid dimensions d, M should be chosen to accommodate all experts with some level of redundancy. Having extra grid space allows DMoE to allocate additional experts midway through training if more volunteers join.

The gating function itself consists of d linear layers g_0, ..., g_{d-1} and computes expert priority in an additive manner:

g(x, f) = \sum_{i=0}^{d-1} g_i(x)[u_i].

Such a function only needs to predict d vectors of size M, which makes it significantly easier to compute and send over the network. Furthermore, this gating function can choose the top-k highest-scoring experts in logarithmic time (see Appendix B, C).

After choosing the appropriate experts, a worker should find their respective servers (in O(k log N) time using the DHT) and pass the input vector for processing (see Figure 1). Once all the experts have finished processing, the worker aggregates expert outputs by weighted averaging:

DMoE(x) = \sum_{f \in \mathrm{TopK}(x)} f(x) \frac{\exp(g(x, f))}{\sum_{f' \in \mathrm{TopK}(x)} \exp(g(x, f'))},  where TopK(x) are the k best experts w.r.t. g.   (1)

If some of the chosen experts have crashed or taken too long to perform the computation, we can exclude them from averaging and renormalize the weights so that they still add up to 1. Trained with this exclusion policy, DMoE will learn experts with overlapping specializations that are more resistant to individual node failure.
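To make the gating and averaging above concrete, here is a minimal, self-contained sketch of the product-key scoring and the fault-tolerant weighted average of equation (1). It is our own illustration rather than the released Learning@home code: the class and function names are made up, the grid is two-dimensional and tiny, the top-k is taken exhaustively over the whole grid instead of the logarithmic-time beam search of Appendix B, and missing entries in the expert dictionary stand in for workers that failed to respond.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ProductKeyGate(nn.Module):
    """Additive gating over a d-dimensional expert grid (here d = 2).
    Expert uid = (u_0, ..., u_{d-1}) with u_i in [0, M_i); its priority is
    g(x, f) = sum_i g_i(x)[u_i], where each g_i is a linear layer of size M_i."""
    def __init__(self, in_dim, grid_size=(4, 4)):
        super().__init__()
        self.grid_size = grid_size
        self.layers = nn.ModuleList([nn.Linear(in_dim, m) for m in grid_size])

    def scores(self, x):
        return [layer(x) for layer in self.layers]  # one score vector per grid axis

def dmoe_forward(x, gate, experts, k=4):
    """Eq. (1): softmax-weighted average of the top-k experts, renormalized over
    the experts that actually responded. `experts` maps uid tuples to modules."""
    s0, s1 = gate.scores(x)
    full = s0.unsqueeze(-1) + s1.unsqueeze(-2)        # [batch, M_0, M_1] additive scores
    top_scores, top_idx = full.flatten(1).topk(k, dim=-1)

    outputs = torch.zeros_like(x)
    for b in range(x.shape[0]):
        uids = [(int(i) // gate.grid_size[1], int(i) % gate.grid_size[1]) for i in top_idx[b]]
        alive = [j for j, uid in enumerate(uids) if uid in experts]  # drop failed experts
        if not alive:
            continue
        weights = F.softmax(top_scores[b, alive], dim=-1)            # renormalize to sum to 1
        for w, j in zip(weights, alive):
            outputs[b] = outputs[b] + w * experts[uids[j]](x[b])
    return outputs

# Toy usage: a 4x4 grid of linear "experts" with two emulated worker failures.
gate = ProductKeyGate(in_dim=16, grid_size=(4, 4))
experts = {(i, j): nn.Linear(16, 16) for i in range(4) for j in range(4)}
del experts[(0, 0)], experts[(3, 2)]
y = dmoe_forward(torch.randn(8, 16), gate, experts, k=4)
```

In the full system, the expert dictionary would be resolved through the DHT at request time, and each expert call would be a network round trip to the worker hosting that expert.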
# 3.3 Training infrastructure Finally, we describe Learning@home — a deep learning infrastructure that performs distributed training of large models on hardware provided by volunteers. Each worker runs three components: • Trainer — forming batches and training; • Runtime — inference and expert updates; • DHT Node — bookkeeping and routing; # Figure 3: Learning@home components and their interaction. 6 Trainer generates batches and propagates them through the model. After forming a batch and converting it into an input vector, the trainer iterates over a sequence of DMoE layers and organizes forward and backward passes, as described in Sections 3.1 and 3.2. Learning@home fully embraces the asynchronous training paradigm, where a trainer can process hundreds of concurrent batches. Runtime is responsible for expert inference and training. This is the only process that has access to participant’s GPU device(s). Once all the experts are initialized, runtime listens to the incoming connections from trainers and handles two types of requests: Forward: given inputs, compute and return expert outputs on these inputs (no side-effects); • Backward: given inputs and gradients of loss function w.r.t. outputs, return gradients w.r.t. inputs and update expert parameters by gradient descent. Since trainers can operate under latency, the runtime is not required to process all requests right away. Instead, it aggregates requests into batches for better GPU utilization. The runtime process relies on gradient checkpointing to avoid storing intermediate expert activations [59, 60]. This choice means that the expert fi(x) is called both during the forward and the backward passes. We elaborate on the role of gradient checkpointing in Appendix D. DHT Node. The final component of Learning@home infrastructure is a DHT for bookkeeping. For simplicity, we use unmodified Kademlia protocol7, leaving further investigation to future work. Each runtime periodically announces its experts to the DHT, associating their identifiers with the address of that runtime and the current timestamp (details in Appendix C). Trainers can then use those entries to find the workers responsible for the chosen experts. In addition to timestamps, a runtime also regularly saves latest expert weights into the same DHT for persistence. The resulting infrastructure becomes elastic and fault-tolerant as long as it has enough active participants. # 4 Experiments The design of Learning@home was driven by two key assumptions: first, that MoE-based archi- tectures can maintain high throughput under latency and second, that they can converge despite the presence of stale gradients. In this section we run several benchmarks in order to verify these assumptions. We intentionally focus on small-scale experiments to make them easier to reproduce and analyze. While solving practical vision and NLP problems is certainly our end goal, choosing a particular task would make it much harder to understand the general properties of our approach. # 4.1 Model throughput Our first benchmark evaluates the performance of asynchronous training schemes under latency. We quantify this with training throughput, i.e., the number of training batches processed per second. To emulate the distributed training environment, we create a model from a large number of identical blocks distributed evenly across 4 NVIDIA GTX 1080 GPUs. We simulate network latency by adding an artificial delay after computation of each block. 
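As a rough illustration of this benchmark setup (our own sketch with invented names, not the authors' released benchmark code), one way to inject such a per-block delay is to wrap each block so that it sleeps for a sampled amount of time after its forward pass; the distribution of that delay is described next.

```python
import random
import time
import torch.nn as nn

class DelayedBlock(nn.Module):
    """Wraps a block and sleeps after its forward pass to emulate a slow network link.
    `delay_fn` returns the delay (in seconds) for a single call."""
    def __init__(self, block, delay_fn):
        super().__init__()
        self.block = block
        self.delay_fn = delay_fn

    def forward(self, x):
        out = self.block(x)
        time.sleep(self.delay_fn())  # artificial communication delay
        return out

# e.g., a feed-forward "expert" with an exponentially distributed delay, ~100ms on average
block = nn.Sequential(nn.Linear(1024, 4096), nn.ReLU(), nn.Linear(4096, 1024))
delayed = DelayedBlock(block, delay_fn=lambda: random.expovariate(1 / 0.1))
```

A synchronous sleep like this would stall a single pipeline; the point of the asynchronous trainers of Section 3.3, measured below, is to keep many batches in flight so that such delays overlap with useful computation.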
The delay time is sampled from the exponential distribution, which was shown to model latency well [61]. Since our model size exceeds the memory limits of a single consumer GPU, the only mainstream paradigm that can compete with Learning@home is model parallel training. We also report the “upper bound” on training throughput by running the same computations with no network delays in a model parallel regime with pipelining similar to [4]. For Learning@home, we use 64 trainer processes to send requests to the runtime processes8.

To measure the effect on blocks with different computation-to-communication ratios, we evaluate two popular block architectures. The first architecture is composed of 224 feed-forward blocks, each having hidden dimensions of 1024 → 4096 → 4096 → 1024 with layer normalization and ReLU activations in between. These blocks are treated as separate “experts” and process batches of size 2048. The second architecture consists of 224 BERT-like Transformer blocks [7] with hidden dimension 1024 and GELU activations [62] applied to sequences of length 512 with batch size 4.

With this setup in mind, we measure the throughput of the entire model as the total number of processed examples divided by the time it takes to process 10 batches. These experiments were repeated 5 times for all methods to measure the mean and standard deviation of throughput.

7In particular, the publicly available Kademlia implementation from github.com/bmuller/kademlia
8See the full setup: https://github.com/mryab/learning-at-home#running-the-experiments

Figure 4 demonstrates that even with delay times approaching 200ms the asynchronous scheduler we have implemented as part of Learning@home maintains nearly the same throughput. In turn, model-parallel training throughput quickly degrades under latency, which is not surprising as it was not designed with slow communication in mind.

To verify the validity of our conclusions, we have conducted similar experiments on cloud GPU instances in different regions. This allows us to measure performance in a non-simulated scenario closer to the desired area of application. In particular, we rented 3 instances with Tesla K80 hosted in West US, East US, and West Europe with an average network latency of 92.49 ± 32.42 ms. The throughput values in Table 2 are similar to the results for simulated latencies (Figure 4).

Finally, we tested the scalability of our infrastructure by deploying DHT nodes in the same cloud regions and measuring the latency of beam search (batch size 64, see Appendix C). Finding the top-4 experts took 317 ± 58ms for 100 nodes, 528 ± 127ms for 1,000 nodes and 764 ± 106ms for 10,000 DHT nodes.

Approach | Feed-forward | Transformer encoder
Model parallel | 7.23 ± 0.06 | 0.01 ± 0.001
Learning@home | 300.8 ± 15.9 | 0.68 ± 0.01

Table 2: Throughput (samples/s) for 3 cloud K80 in East US, West US and West Europe.

Figure 4: Throughput with simulated latency.

# 4.2 Convergence

Our second experiment aims to verify the robustness of DMoE to delayed updates. For this goal, we choose one of the simpler tasks in deep learning, namely the MNIST digit recognition dataset [63], and compare convergence rates under varying network latency. All modern architectures can reliably solve this task, making it easier for us to isolate the effect of gradient staleness.

We evaluate four models: a traditional feed-forward model and three DMoE variations with different numbers of experts. The feed-forward network (FFN) consists of 4 stacked feed-forward blocks.
Each block architecture is the same as described in Section 4.1, but with half as many hidden units. In turn, its DMoE counterparts have four DMoE layers, each composed of blocks with 1/4 of the FFN size. Both DMoE-based models use only 4 experts at a time regardless of their total number, hence being computationally equivalent to the FFN baseline.

We train all models asynchronously in high-latency and low-latency scenarios, using the same distribution for the delay. In the high-latency scenario, each of the 64 workers is delayed for 1 second on average while processing a batch. This corresponds to 125ms for each forward and backward pass through DMoE. For low-latency emulation, we use 16 workers and a 100ms average delay. The third experiment simulates node failure: each expert does not respond to a request with probability 0.1.

The results are presented in Figure 5; as expected, the plots demonstrate that the higher latency scenario is more difficult for all models. However, the degree to which it affects the performance of DMoE architectures is much lower, especially for the largest of mixtures.

Figure 5: Convergence plots for feedforward models with different network latencies and failure rates (three panels: 100 ms average latency, 1000 ms average latency, and 1000 ms latency + 10% failures; validation accuracy vs. training batches for a large FFN and DMoE with 64, 256 and 4096 experts). Pale areas depict unbiased standard deviations over 5 runs.

# 4.3 Language models

The third and final benchmark is neural language modeling. Specifically, we train Transformer-XL [64] on the WikiText-2 [65] dataset. Both baseline and DMoE models use the official recommended parameters with additional regularization proposed in [66].

The base model contains 16 Transformer layers with a hidden size of 400 and 900 units in the feedforward layer. We also train a small baseline model with 200 hidden and 450 feedforward units. Our DMoE Transformer uses 256 experts split evenly between 16 layers. Each expert is a Transformer layer with the same dimensions as the layers of the small baseline model. The DMoE layers route to the top-4 experts, making our model roughly equivalent to the base model in terms of FLOPs per sample. Similarly to Section 4.2, we train DMoE with 32 trainers (batch size 1 each), 1000ms average latency, and a 10% failure rate.

Figure 6: Convergence plots for Transformer language models on the WikiText-2 dataset (Transformer-base, Transformer-small and DMoE with 256 experts, plotted against the number of training samples processed). Pale areas depict unbiased standard deviations over 5 runs.

The results depicted in Figure 6 demonstrate a similar pattern to what was previously observed on feedforward networks. Curiously enough, we found that in this specific scenario the 10% failure rate has a positive effect on the DMoE performance. We attribute this effect to a form of dropout regularization that prevents our model from overfitting the limited training data.

# 5 Conclusion

The main purpose of this study is to convey the idea that one can train large neural networks on unreliable hardware. We propose a specialized layer and training infrastructure designed to meet the requirements of volunteer computing over the Internet.
The preliminary experiments demonstrate that Learning@home can scale to thousands of nodes and successfully train popular model archetypes despite network latency and node failures. We believe that decentralized deep learning will change the way we think about training neural networks. Instead of running isolated experiments, researchers and practitioners will be able to join forces and solve the biggest problems together. Instead of being confined to a single supercomputer, our models will naturally grow in capacity as more people and organizations around the world join in. We expand on the ramifications of deep learning decentralization in the broader impact statement. However, reaching the full potential of this idea requires expertise not only in deep learning, but also information security, distributed systems, crowdsourcing and many other areas. We believe that this monumental task is best solved through scientific collaboration. To that end, we will continue to develop Learning@home as a public open-source project9. # Acknowledgements and funding We would like to thank Artem Babenko and Vladimir Aliev for their invaluable assistance in both brainstorming and proofreading the final paper. We are also grateful to anonymous reviewers for their helpful suggestions on improving the presentation of the paper. Max Ryabinin was supported by Yandex and National Research University Higher School of Economics. # 9https://learning-at-home.github.io 9 # Broader Impact The approach proposed in this work is only a prototype with limited direct consequences, but the long-term goal of training huge models with volunteer computing can have a lasting effect on both the research community and the general public. Funding bias vs crowdsourcing bias The main positive outcome we pursue is to let researchers harness volunteer computing and train models on the scale currently available only to large corporations. Ideally, a deep learning researcher with a promising idea will be able to amass the computation needed to realize this idea by involving volunteers. However, the project’s appeal for volunteers depends on many factors such as subject area, current societal trends, and even researcher’s personality. For example, a project about teaching agents to play games [38] or fighting global pandemics [67] is likely to attract more resources than deep learning applied to soil science. In essence, volunteer computing is biased towards exciting or socially relevant research the same way as traditional HPC is biased towards the interests of those who fund it. Alternative use and misuse The proposed technology can be used with different economic models. If a deep learning system is immediately useful (e.g. for machine translation, information retrieval, etc), the participants could use it for their needs based on their contributions to training. This can take many forms: several labs combining their hardware and training larger models; a web-service that lets people contribute their compute instead of using ads/subscriptions; or simply a framework that someone can use to run distributed training across two or more datacenters. Unfortunately, this also allows several opportunities for malicious use. If a machine is hacked, the attacker can use its compute unnoticed by the machine owner — much the same way that botnets are currently used to mine cryptocurrencies. Furthermore, due to decentalized nature even legitimate Learning@home projects can be hijacked by hackers. 
Security Using crowdsourced hardware makes Learning@home susceptible to attacks from malicious partici- pants. There are multiple attack vectors already known in P2P community: denial of service attacks, Sybil attacks, Eclipse attacks and more [68, 69, 70, 71]. Fortunately, there are variations of the DHT protocol that make it resistant to said attacks: if a reader wishes to learn more about DHT security, we recommend starting with [68]. Another source of vulnerability stems from the sequential nature of neural networks. If a single expert were to return incorrect (e.g. NaN) outputs or gradients, it could compromise the outputs of the entire network and even poison adjacent nodes through backpropagation. Recent studies expose similar attack patterns on federated learning systems [72, 73]. The redundant nature of mixture-of-experts layers provides some degree of resistance against those attacks. A single malicious expert will only affect a small fraction of inputs that pass through this specific expert. Furthermore, a trainer with access to predictions from multiple experts could provide a higher degree of robustness by using statistical techniques (e.g., by ignoring outlier gradients). However, such techniques need to be carefully designed so as not to introduce harmful side effects. The burden on the network Finally, we would like to point out the potential harm that our approach can do to network infrastruc- ture. The experiments we ran in Section 4.1 saturate with the bandwidth of 100 − 200Mbps, most of which is tensors passed between experts and trainers. This coincides with the typical home internet speed available in major cities of developed countries. However, not all ISPs design their infrastructure for users who always use up all their bandwidth. If too many Learning@home participants are located in one LAN or MAN, it can cause congestion or even failures in the network infrastructure. Similar situations frequently took place in late 2000s due to growing popularity of BitTorrent for file sharing. Fortunately, the network infrastructure is continually improving, which leads us to believe that this problem will eventually be solved. Until then, we describe several ways to reduce network load of Learning@home in Appendix E. 10 # References [1] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. ImageNet: A Large-Scale Hierarchical Image Database. In CVPR09, 2009. [2] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. In F. Pereira, C. J. C. Burges, L. Bottou, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 25, pages 1097–1105. Curran Associates, Inc., 2012. [3] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 770–778, 2015. [4] Yanping Huang, Youlong Cheng, Ankur Bapna, Orhan Firat, Dehao Chen, Mia Chen, Hy- oukJoong Lee, Jiquan Ngiam, Quoc V Le, Yonghui Wu, et al. Gpipe: Efficient training of giant neural networks using pipeline parallelism. In Advances in Neural Information Processing Systems, pages 103–112, 2019. [5] Alexander Kolesnikov, Lucas Beyer, Xiaohua Zhai, Joan Puigcerver, Jessica Yung, Sylvain Gelly, and Neil Houlsby. Large scale learning of general visual representations for transfer. CoRR, abs/1912.11370, 2019. [6] Baoyuan Wu, Weidong Chen, Yanbo Fan, Yong Zhang, Jinlong Hou, Jie Liu, and Tong Zhang. 
Tencent ml-images: A large-scale multi-label image database for visual representation learning. IEEE Access, 7:172683–172693, 2019. [7] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. In NAACL-HLT, 2019. [8] Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. Roberta: A robustly optimized bert pretraining approach. ArXiv, abs/1907.11692, 2019. [9] Jared Kaplan, Sam McCandlish, Tom Henighan, Tom B. Brown, Benjamin Chess, Rewon Child, Scott Gray, Alec Radford, Jeffrey Wu, and Dario Amodei. Scaling laws for neural language models, 2020. [10] Tom B Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. arXiv preprint arXiv:2005.14165, 2020. [11] Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper, and Bryan Catanzaro. Megatron-lm: Training multi-billion parameter language models using gpu model parallelism. arXiv preprint arXiv:1909.08053, 2019. [12] Rowan Zellers, Ari Holtzman, Hannah Rashkin, Yonatan Bisk, Ali Farhadi, Franziska Roesner, and Yejin Choi. Defending against neural fake news. In Advances in Neural Information Processing Systems, pages 9051–9062, 2019. language model by microsoft. https://www.microsoft.com/en-us/research/blog/turing-nlg-a-17-billion-parameter-language- model-by-microsoft/. [14] Chuan Li. Demystifying gpt-3 language model: A technical overview. "https:// lambdalabs.com/blog/demystifying-gpt-3". [15] Elliot Turner. Estimate of GPT-3 training cost based on public cloud GPU/TPU cost models, from Elliot Turner’s personal page (accessed on May 29, 2020). [16] Stefan Larson, Christopher Snow, Michael Shirts, and Vijay Pande. Folding@home and genome@home: Using distributed computing to tackle previously intractable problems in computational biology. arXiv, 02 2009. [17] C Adam-Bourdarios, D Cameron, A Filipˇciˇc, E Lancon, Wenjing Wu, et al. Atlas@ home: harnessing volunteer computing for hep. In Journal of Physics: Conference Series, volume 664, page 022009. IOP Publishing, 2015. [18] Michael Gross. Folding research recruits unconventional help. In Current Biology. 22 (2): R35–R38, 2012. 11 [19] Tim Dettmers. 8-bit approximations for parallelism in deep learning. ICLR, 2015. [20] Peng Sun, Wansen Feng, Ruobing Han, Shengen Yan, and Yonggang Wen. Optimizing network performance for distributed dnn training on gpu clusters: Imagenet/alexnet training in 1.5 minutes. ArXiv, abs/1902.06855, 2019. [21] David P Anderson. Boinc: A system for public-resource computing and storage. In Fifth IEEE/ACM international workshop on grid computing, pages 4–10. IEEE, 2004. [22] Folding@home timeline. timeline(accessed on May 30, 2020). project https://foldingathome.org/project- # [22] Folding@home [23] Speedtest global index for fixed broadband. https://www.speedtest.net/global-index (accessed on 11.08.2020, bandwidth for top countries and general trend). [24] Fuliang Li, Xingwei Wang, Tian Pan, and Jiahai Yang. A case study of ipv6 network per- formance: Packet delay, loss, and reordering. Mathematical Problems in Engineering, 2017, 2017. [25] Leslie G Valiant. A bridging model for parallel computation. Communications of the ACM, 33(8):103–111, 1990. 
[26] Priya Goyal, Piotr Dollár, Ross Girshick, Pieter Noordhuis, Lukasz Wesolowski, Aapo Kyrola, Andrew Tulloch, Yangqing Jia, and Kaiming He. Accurate, large minibatch sgd: Training imagenet in 1 hour, 2017. [27] Yang You, Jing Li, Sashank Reddi, Jonathan Hseu, Sanjiv Kumar, Srinadh Bhojanapalli, Xiaodan Song, James Demmel, Kurt Keutzer, and Cho-Jui Hsieh. Large batch optimization for deep learning: Training bert in 76 minutes. In International Conference on Learning Representations, 2020. [28] Benjamin Recht, Christopher Re, Stephen Wright, and Feng Niu. Hogwild: A lock-free In Advances in neural information approach to parallelizing stochastic gradient descent. processing systems, pages 693–701, 2011. [29] Wei Zhang, Suyog Gupta, Xiangru Lian, and Ji Liu. Staleness-aware async-sgd for distributed deep learning. arXiv preprint arXiv:1511.05950, 2015. [30] Sanghamitra Dutta, Gauri Joshi, Soumyadip Ghosh, Parijat Dube, and Priya Nagpurkar. Slow and stale gradients can win the race: Error-runtime trade-offs in distributed sgd. 03 2018. [31] Samyam Rajbhandari, Jeff Rasley, Olatunji Ruwase, and Yuxiong He. Zero: Memory optimiza- tion towards training a trillion parameter models. 10 2019. [32] Bowen Yang, Jian Zhang, Jonathan Li, Christopher Ré, Christopher R. Aberger, and Christo- pher De Sa. Pipemare: Asynchronous pipeline parallel dnn training. ArXiv, abs/1910.05124, 2019. [33] Deepak Narayanan, Aaron Harlap, Amar Phanishayee, Vivek Seshadri, Nikhil R. Devanur, Gregory R. Ganger, Phillip B. Gibbons, and Matei Zaharia. Pipedream: Generalized pipeline parallelism for dnn training. In Proceedings of the 27th ACM Symposium on Operating Systems Principles, SOSP ’19, page 1–15, New York, NY, USA, 2019. Association for Computing Machinery. [34] Brendan McMahan, Eider Moore, Daniel Ramage, Seth Hampson, and Blaise Aguera y Arcas. In Artificial Communication-efficient learning of deep networks from decentralized data. Intelligence and Statistics, pages 1273–1282, 2017. [35] Keith Bonawitz, Vladimir Ivanov, Ben Kreuter, Antonio Marcedone, H Brendan McMahan, Sarvar Patel, Daniel Ramage, Aaron Segal, and Karn Seth. Practical secure aggregation for privacy-preserving machine learning. In Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security, pages 1175–1191, 2017. [36] T. Desell. Developing a volunteer computing project to evolve convolutional neural networks and their hyperparameters. In 2017 IEEE 13th International Conference on e-Science (e-Science), pages 19–28, 2017. [37] Ekasit Kijsipongse, Apivadee Piyatumrong, and Suriya U-ruekolan. A hybrid gpu cluster and volunteer computing platform for scalable deep learning. The Journal of Supercomputing, 04 2018. [38] Pascutto, Gian-Carlo and Linscott, Gary. Leela chess zero. 2019. 12 [39] Robert A. Jacobs, Michael I. Jordan, Steven J. Nowlan, and Geoffrey E. Hinton. Adaptive mixtures of local experts. Neural Computation, 3(1):79–87, March 1991. [40] Michael I Jordan and Robert A Jacobs. Hierarchical mixtures of experts and the em algorithm. Neural computation, 6(2):181–214, 1994. [41] Bangpeng Yao, Dirk Walther, Diane Beck, and Li Fei-Fei. Hierarchical mixture of classification In Advances in Neural Information experts uncovers interactions between brain regions. Processing Systems, pages 2178–2186, 2009. [42] Rahaf Aljundi, Punarjay Chakravarty, and Tinne Tuytelaars. Expert gate: Lifelong learning with a network of experts. pages 7120–7129, 07 2017. [43] Carl E Rasmussen and Zoubin Ghahramani. 
Infinite mixtures of gaussian process experts. In Advances in neural information processing systems, pages 881–888, 2002. [44] Ronan Collobert, Samy Bengio, and Yoshua Bengio. A parallel mixture of svms for very large scale problems. In Advances in Neural Information Processing Systems, pages 633–640, 2002. [45] Babak Shahbaba and Radford Neal. Nonlinear models using dirichlet process mixtures. Journal of Machine Learning Research, 10(Aug):1829–1850, 2009. [46] David Eigen, Marc’Aurelio Ranzato, and Ilya Sutskever. Learning factored representations in a deep mixture of experts. arXiv preprint arXiv:1312.4314, 2013. [47] Noam Shazeer, Azalia Mirhoseini, Krzysztof Maziarz, Andy Davis, Quoc Le, Geoffrey Hinton, and Jeff Dean. Outrageously large neural networks: The sparsely-gated mixture-of-experts layer. arXiv preprint arXiv:1701.06538, 2017. [48] Dmitry Lepikhin, H. Lee, Yuanzhong Xu, Dehao Chen, Orhan Firat, Y. Huang, M. Krikun, Noam Shazeer, and Z. Chen. Gshard: Scaling giant models with conditional computation and automatic sharding. ArXiv, abs/2006.16668, 2020. [49] Guillaume Lample, Alexandre Sablayrolles, Marc´ Aurelio Ranzato, Ludovic Denoyer, and Herve Jegou. Large memory layers with product keys. In H. Wallach, H. Larochelle, A. Beygelz- imer, F. dÁlché-Buc, E. Fox, and R. Garnett, editors, Advances in Neural Information Processing Systems 32, pages 8546–8557. Curran Associates, Inc., 2019. [50] Joan Puigcerver, Carlos Riquelme, Basil Mustafa, Cedric Renggli, André Susano Pinto, Sylvain Gelly, Daniel Keysers, and Neil Houlsby. Scalable transfer learning with expert models. arXiv preprint arXiv:2009.13239, 2020. [51] Renu Tewari, Michael Dahlin, Harrick Vin, and John Kay. Beyond hierarchies: Design considerations for distributed caching on the internet. Technical report, Citeseer. [52] Sylvia Ratnasamy, Paul Francis, Mark Handley, Richard Karp, and Scott Shenker. A scal- able content-addressable network. In Proceedings of the 2001 conference on Applications, technologies, architectures, and protocols for computer communications, pages 161–172, 2001. [53] Hari Balakrishnan, M Frans Kaashoek, David Karger, Robert Morris, and Ion Stoica. Looking up data in p2p systems. Communications of the ACM, 46(2):43–48, 2003. [54] Antony Rowstron and Peter Druschel. Pastry: Scalable, decentralized object location, and rout- ing for large-scale peer-to-peer systems. In IFIP/ACM International Conference on Distributed Systems Platforms and Open Distributed Processing, pages 329–350. Springer, 2001. [55] Ben Zhao, Ling Huang, Jeremy Stribling, Sean Rhea, Anthony Joseph, and John Kubiatowicz. Tapestry: A resilient global-scale overlay for service deployment. IEEE Journal on Selected Areas in Communications, 22, 07 2003. [56] Petar Maymounkov and David Mazieres. Kademlia: A peer-to-peer information system based on the xor metric. In International Workshop on Peer-to-Peer Systems, pages 53–65. Springer, 2002. [57] M Frans Kaashoek and David R Karger. Koorde: A simple degree-optimal distributed hash table. In International Workshop on Peer-to-Peer Systems, pages 98–107. Springer, 2003. [58] Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: a simple way to prevent neural networks from overfitting. The journal of machine learning research, 15(1):1929–1958, 2014. 13 [59] Andreas Griewank and Andrea Walther. Algorithm 799: revolve: an implementation of check- pointing for the reverse or adjoint mode of computational differentiation. 
ACM Transactions on Mathematical Software (TOMS), 26(1):19–45, 2000. [60] Tianqi Chen, Bing Xu, Chiyuan Zhang, and Carlos Guestrin. Training deep nets with sublinear memory cost. arXiv preprint arXiv:1604.06174, 2016. [61] Andrei M Sukhov, MA Astrakhantseva, AK Pervitsky, SS Boldyrev, and AA Bukatov. Generat- ing a function for network delay. Journal of High Speed Networks, 22(4):321–333, 2016. [62] Dan Hendrycks and Kevin Gimpel. Gaussian error linear units (gelus), 2016. [63] Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998. [64] Zihang Dai, Zhilin Yang, Yiming Yang, Jaime G Carbonell, Quoc Le, and Ruslan Salakhutdinov. Transformer-xl: Attentive language models beyond a fixed-length context. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2978–2988, 2019. [65] 2016 Stephen Merity et al. Wikitext-2. [66] Tim Dettmers. https://github.com/TimDettmers/transformer-xl/tree/wikitext2. [67] https://foldingathome.org/covid19/(accessed on June 4, 2020). [68] Guido Urdaneta, Guillaume Pierre, and Maarten Van Steen. A survey of dht security techniques. ACM Computing Surveys (CSUR), 43(2):1–49, 2011. [69] Liang Wang and Jussi Kangasharju. Real-world sybil attacks in bittorrent mainline dht. In 2012 IEEE Global Communications Conference (GLOBECOM), pages 826–832. IEEE, 2012. [70] Baruch Awerbuch and Christian Scheideler. A denial-of-service resistant dht. In International Symposium on Distributed Computing, pages 33–47. Springer, 2007. [71] Zied Trifa and Maher Khemakhem. Sybil nodes as a mitigation strategy against sybil attack. Procedia Computer Science, 32:1135–1140, 2014. [72] Eugene Bagdasaryan, Andreas Veit, Yiqing Hua, Deborah Estrin, and Vitaly Shmatikov. How to backdoor federated learning. arXiv preprint arXiv:1807.00459, 2018. [73] Arjun Nitin Bhagoji, Supriyo Chakraborty, Prateek Mittal, and Seraphin Calo. Analyzing federated learning through an adversarial lens. arXiv preprint arXiv:1811.12470, 2018. [74] Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, et al. Pytorch: An imperative style, high-performance deep learning library. In Advances in Neural Information Processing Systems, pages 8024–8035, 2019. [75] Chuan Li Stephen Balaban. Deep learning gpu benchmarks, lambda labs website, 2018/10/08. [76] Samuel Horvath, Chen-Yu Ho, Ludovit Horvath, Atal Narayan Sahu, Marco Canini, and Peter Richtárik. Natural compression for distributed deep learning. CoRR, abs/1905.10988, 2019. [77] Xiao Sun, Jungwook Choi, Chia-Yu Chen, Naigang Wang, Swagath Venkataramani, Vijay- alakshmi (Viji) Srinivasan, Xiaodong Cui, Wei Zhang, and Kailash Gopalakrishnan. Hybrid 8-bit floating point (hfp8) training and inference for deep neural networks. In H. Wallach, H. Larochelle, A. Beygelzimer, F. dÁlché-Buc, E. Fox, and R. Garnett, editors, Advances in Neural Information Processing Systems 32, pages 4901–4910. Curran Associates, Inc., 2019. [78] Wan-Duo Kurt Ma, J. P. Lewis, and W. Bastiaan Kleijn. The hsic bottleneck: Deep learning without back-propagation, 2019. [79] Max Jaderberg, Wojciech Marian Czarnecki, Simon Osindero, Oriol Vinyals, Alex Graves, David Silver, and Koray Kavukcuoglu. Decoupled neural interfaces using synthetic gradients. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pages 1627–1635. JMLR. 
org, 2017.

[80] Esteban Real, Sherry Moore, Andrew Selle, Saurabh Saxena, Yutaka Leon Suematsu, Jie Tan, Quoc V Le, and Alexey Kurakin. Large-scale evolution of image classifiers. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pages 2902–2911. JMLR. org, 2017.

# A Cost and performance estimate of $2500 desktop PCs

According to several PC building websites (https://pcpartpicker.com, https://newegg.com), the most popular $2250–2750 desktops are equipped with an RTX 2080/2080Ti or a GTX 1080Ti GPU. These GPUs are 50–80% as fast as a Tesla V100 for deep learning [75]. As a rough estimate, the combined throughput of 10,000 such desktops is 8–15 times that of a server pod with 512 V100 GPUs.

# B A primer on Distributed Hash Tables

At a high level, a DHT is a dictionary that can be accessed by every participant. Each key-value pair is stored on a small subset of peers determined by the hash function of the key.

• Each participant has a unique identifier (ID) that is sampled uniformly from the space of possible outputs of the hash function.
• When storing a (key, value) pair, one should search for the k peers whose IDs are closest to hash(key), then request each of these k peers to store the (key, value) pair.
• When retrieving the value for a key, one should compute hash(key), search for peers with IDs similar to that hash value, and request the value from those peers.

Specific DHT variants such as Chord [53] or Kademlia [56] employ different hash types and different algorithms for finding nearest peers. For instance, Kademlia DHT selects nearest peers based on the XOR distance function: d(x, y) = int(x ⊕ y).

Each participant is directly aware of only a small subset of DHT peers. When storing or retrieving a key, the participant requests additional peers from its neighbors in a semi-greedy search, minimizing the XOR distance until it finds the k nearest peers. In Kademlia, nodes form a special navigable graph structure that lets them find nearest peers in at most O(k + log2 N) requests to other DHT peers, where N is the total number of participants.

# C Finding best experts across the DHT

Recall that the gating function is defined as

g(x, f) = Σ_{i=0}^{d−1} g_i(x) [u_i],

where g_0, . . . , g_{d−1} are linear layers, u_i is the i-th component of the expert unique identifier uid(f), and [k] takes the k-th component of a vector. Our objective is to find the k experts with the largest g(x, ·). In a centralized setting, one can find the k largest scores from each linear layer g_i using the algorithm described in [49]. Unfortunately, in our case not all combinations of indices correspond to valid experts. Therefore, we developed a specialized beam search algorithm similar to the one used in machine translation. The core idea is to start with the top-k indices along the first grid dimension and add one dimension at a time.

In order for this algorithm to work, participants maintain the following information on the DHT:

• For every expert UID, store its server address and the timestamp;
• For every prefix of an expert UID, store all suffixes corresponding to active experts and the timestamp.

For instance, if there are 6 experts: "ffn.1.3", "ffn.2.1", "ffn.2.2", "ffn.2.6", "ffn.3.2" and "ffn.3.5", the DHT will contain the following information:

ffn.1.* → [3], t1
ffn.2.* → [1, 2, 6], t2
ffn.3.* → [2, 5], t3
ffn.1.3, ffn.2.1, ffn.2.2, ffn.2.6, ffn.3.2, ffn.3.5 → [Address of the server that hosts the given expert]

Figure 7: DHT keys and values for the 6 experts defined above; t corresponds to the last update timestamp.
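To make the key-value layout in Figure 7 concrete, here is a minimal plain-Python sketch of the same records; the dictionary layout, helper names and server address are illustrative, not the actual Learning@home implementation.

```python
import time

# Prefix keys map to (active suffixes, last update timestamp);
# full expert UIDs map to the address of the hosting server.
dht = {
    "ffn.1.*": ([3], time.time()),
    "ffn.2.*": ([1, 2, 6], time.time()),
    "ffn.3.*": ([2, 5], time.time()),
    "ffn.2.6": "203.0.113.7:8080",   # hypothetical server address for this expert
    # ... one address entry per remaining expert UID
}

def active_suffixes(prefix, max_age=300.0):
    """Return suffixes under `prefix` whose DHT record is fresh enough."""
    suffixes, timestamp = dht.get(prefix + ".*", ([], 0.0))
    return suffixes if time.time() - timestamp < max_age else []

print(active_suffixes("ffn.2"))  # -> [1, 2, 6]
```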
For higher grid dimensions, we store similar information for every grid prefix. For instance, an expert with UID "transformer.10.20.30" will affect 3 keys: "transformer.10.*", "transformer.10.20.*" and "transformer.10.20.30". Each prefix key stores at most as many values as there are indices in the next grid dimension, typically 100 or 256.

With this data structure, DMoE can use beam search to select the best experts. Algorithm 1 starts from the leftmost dimension of the grid and processes one dimension at each step. The worst case complexity of this algorithm is O(dk log N) from O(dk) lookups to the DHT.

# Algorithm 1 SelectExperts

beam := ActiveSuffixes("")                    // all 1-prefixes
scores := [g_0(x, j) for j in beam]           // initial scores
beam, scores := TopK(beam, scores, k)         // select k best starting points
for i ∈ [1, . . . , d−1] do
    // expand all candidates in beam
    new_beam, new_scores := [ ], [ ]
    for prefix, score ∈ (beam, scores) do
        for j ∈ ActiveSuffixes(prefix) do
            new_beam.add(prefix ⊕ [j])        // concat
            new_scores.add(score + g_i(x, j))
        end for
    end for
    // select at most k best prefixes
    beam, scores := TopK(new_beam, new_scores, k)
end for
Return beam

The TopK function simply sorts the inputs by score and returns the k inputs with the highest scores. In turn, the ActiveSuffixes function queries the DHT for a given prefix and returns the set of all active suffixes as described above. Assuming that servers re-publish their experts every t seconds, the function can simply check whether the timestamp for a given prefix is less than t seconds old.

# D On gradient checkpointing in Learning@home

In general, gradient checkpointing increases computation per training batch by approximately 1/3, but allows training larger models with the same GPU memory. More importantly, in our scenario checkpointing also removes the need to store intermediate activations. In our experiments, this has led to both significantly higher training throughput and a smaller memory footprint.

Without gradient checkpointing, we would have to store intermediate activations in memory. Since the GPU can only fit a few batches at a time, it quickly runs out of memory and is forced to wait for the backward pass. For Transformer layers (see Figure 4, top), this results in approximately 9 times less throughput at 100ms latency.

# E Reducing the network load

One way to reduce the communication load is to convert tensors to a lower precision before transfer. Prior work in this area suggests that distributed training works even when communicating with 8-bit precision tensors [19, 76]. Many popular architectures, including Transformers, can train entirely in that precision mode [77]. Consequently, low-precision communication appears to be a logical way of reducing communication requirements.

In addition, the deep learning architectures discussed in this work rely on backpropagation for training. With the advancement of optimization methods allowing nearly independent layer-wise training [78, 79, 80], it might be even more suitable to use these techniques for asynchronous training with fewer restrictions on the architectures being used.

Another solution is to use experts that have a higher capacity-to-input-size ratio. The architectures used in Section 4.1 are already somewhat biased in that direction, but they are far from optimal.
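As one concrete illustration of the low-precision transfer idea in this appendix, here is a minimal 8-bit linear quantization round trip in PyTorch; uniform min-max quantization is an assumption chosen for simplicity, not a specific compression scheme prescribed by the paper.

```python
import torch

def quantize_uint8(tensor):
    """Map a float tensor to uint8 plus the (scale, offset) needed to restore it."""
    lo, hi = tensor.min(), tensor.max()
    scale = (hi - lo).clamp(min=1e-8) / 255.0
    q = ((tensor - lo) / scale).round().clamp(0, 255).to(torch.uint8)
    return q, scale, lo                          # roughly 4x smaller payload than float32

def dequantize(q, scale, lo):
    return q.to(torch.float32) * scale + lo

activations = torch.randn(64, 1024)              # tensor a trainer would send to an expert
payload = quantize_uint8(activations)            # what actually goes over the network
restored = dequantize(*payload)                  # reconstruction on the receiving peer
print((activations - restored).abs().max())      # small quantization error
```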
{ "id": "1811.12470" }
2002.08909
REALM: Retrieval-Augmented Language Model Pre-Training
Language model pre-training has been shown to capture a surprising amount of world knowledge, crucial for NLP tasks such as question answering. However, this knowledge is stored implicitly in the parameters of a neural network, requiring ever-larger networks to cover more facts. To capture knowledge in a more modular and interpretable way, we augment language model pre-training with a latent knowledge retriever, which allows the model to retrieve and attend over documents from a large corpus such as Wikipedia, used during pre-training, fine-tuning and inference. For the first time, we show how to pre-train such a knowledge retriever in an unsupervised manner, using masked language modeling as the learning signal and backpropagating through a retrieval step that considers millions of documents. We demonstrate the effectiveness of Retrieval-Augmented Language Model pre-training (REALM) by fine-tuning on the challenging task of Open-domain Question Answering (Open-QA). We compare against state-of-the-art models for both explicit and implicit knowledge storage on three popular Open-QA benchmarks, and find that we outperform all previous methods by a significant margin (4-16% absolute accuracy), while also providing qualitative benefits such as interpretability and modularity.
http://arxiv.org/pdf/2002.08909
Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat, Ming-Wei Chang
cs.CL, cs.LG
null
null
cs.CL
20200210
20200210
0 2 0 2 b e F 0 1 ] L C . s c [ 1 v 9 0 9 8 0 . 2 0 0 2 : v i X r a # REALM: Retrieval-Augmented Language Model Pre-Training # Kelvin Guu * 1 Kenton Lee * 1 Zora Tung 1 Panupong Pasupat 1 Ming-Wei Chang 1 Abstract Language model pre-training has been shown to capture a surprising amount of world knowledge, crucial for NLP tasks such as question answer- ing. However, this knowledge is stored implic- itly in the parameters of a neural network, requir- ing ever-larger networks to cover more facts. To capture knowledge in a more modular and inter- pretable way, we augment language model pre- training with a latent knowledge retriever, which allows the model to retrieve and attend over doc- uments from a large corpus such as Wikipedia, used during pre-training, fine-tuning and infer- ence. For the first time, we show how to pre- train such a knowledge retriever in an unsuper- vised manner, using masked language model- ing as the learning signal and backpropagating through a retrieval step that considers millions of documents. We demonstrate the effective- ness of Retrieval-Augmented Language Model pre-training (REALM) by fine-tuning on the chal- lenging task of Open-domain Question Answer- ing (Open-QA). We compare against state-of-the- art models for both explicit and implicit knowl- edge storage on three popular Open-QA bench- marks, and find that we outperform all previous methods by a significant margin (4-16% absolute accuracy), while also providing qualitative bene- fits such as interpretability and modularity. ; Unlabeled text, from pre-training corpus (4) ' The [MASK] at the top of the pyramid (x retrieve Neural Knowledge Retriever ~ po(2le) ) ; Retrieved document” :-------------------5 i The pyramidion on top allows for less | ! material higher up the pyramid. (z) | = Query and document .-/---------------------7, ' [CLS] The [MASK] at the top of the pyramid | [SEP] The pyramidion on top allows for less material higher up the pyramid. (x, z) End-to-end backpropagation A Unlabeled text, from pre-training corpus (4) ; Retrieved document” = Query and document Figure1. REALM augments language model pre-training with a neural knowledge retriever that retrieves knowledge from a textual knowledge corpus, Z (e.g., all of Wikipedia). Signal from the language modeling objective backpropagates all the way through the retriever, which must consider millions of documents in Z—a significant computational challenge that we address. correctly predict the missing word in the following sen- tence: “The is the currency of the United Kingdom” (answer: “pound”). # 1. Introduction Recent advances in language model pre-training have shown that models such as BERT (Devlin et al., 2018), RoBERTa (Liu et al., 2019) and T5 (Raffel et al., 2019) store a surprising amount of world knowledge, ac- quired from the massive text corpora they are trained on (Petroni et al., 2019). For example, BERT is able to In these language models, the learned world knowledge is stored implicitly in the parameters of the underlying neural network. This makes it difficult to determine what knowl- edge is stored in the network and where. Furthermore, stor- age space is limited by the size of the network—to cap- ture more world knowledge, one must train ever-larger net- works, which can be prohibitively slow or expensive. Correspondence to: Kelvin Guu <[email protected]>, Kenton Lee <ken- [email protected]>, Zora Tung <[email protected]>, Panupong Pasupat <[email protected]>, Ming-Wei Chang <[email protected]>. 
To capture knowledge in a more interpretable and modular way, we propose a novel framework, Retrieval-Augmented Language Model (REALM) pre-training, which augments language model pre-training algorithms with a learned tex- tual knowledge retriever. In contrast to models that store knowledge in their parameters, this approach explicitly ex- poses the role of world knowledge by asking the model to REALM: Retrieval-Augmented Language Model Pre-Training decide what knowledge to retrieve and use during inference. Before making each prediction, the language model uses the retriever to retrieve documents1 from a large corpus such as Wikipedia, and then attends over those documents to help inform its prediction. Learning this model end-to- end requires backpropagating through a retrieval step that considers an entire corpus of textual knowledge, as shown in Figure 1. The key intuition of REALM is to train the retriever us- ing a performance-based signal from unsupervised text: a retrieval that improves the language model’s perplex- ity is helpful and should be rewarded, while an un- informative retrieval should be penalized. For exam- ple, if the model needs to fill the blank the re- in “the triever should be rewarded for selecting a document con- taining “The pyramidion on top allows for less material higher up the pyramid”. We achieve this behavior by modeling our retrieve-then-predict approach as a latent variable language model and optimizing the marginal likelihood. CURATEDTREC) and compare to state-of-the-art Open-QA models, including both extremely large models that store knowledge implicitly (such as T5) as well as previous ap- proaches that also use a knowledge retriever to access ex- ternal knowledge, but implement retrieval in a more heuris- tic fashion (Lee et al., 2019; Min et al., 2019a; Asai et al., 2019). REALM achieves new state-of-the-art results on all three benchmarks, significantly outperforming all previous systems by 4-16% absolute accuracy. We also demonstrate qualitative benefits of REALM, including interpretability and modularity. # 2. Background Language model pre-training The goal of language model pre-training is to learn useful representations of lan- guage, usually from unlabeled text corpora. The resulting pre-trained model can then be further trained (fine-tuned) for a downstream task of primary interest (in our case, Open-QA), often leading to better generalization than train- ing from scratch (Dai & Le, 2015; Radford et al., 2019). Incorporating a large-scale neural retrieval module during pre-training constitutes a significant computational chal- lenge, since the retriever must consider millions of candi- date documents for each pre-training step, and we must backpropagate through its decisions. To address this, we structure the retriever such that the computation performed for each document can be cached and asynchronously up- dated, and selection of the best documents can be formu- lated as Maximum Inner Product Search (MIPS). Numerous prior works have demonstrated the bene- fit of adding a discrete retrieval step to neural net- works (Miller et al., 2016; Chen et al., 2017), but did not apply the framework to language model pre-training and employed non-learned retrievers to handle large-scale doc- ument collections. In the language modeling literature, the k-Nearest Neighbor Language Model (Khandelwal et al., 2019) (kNN-LM) retrieves similar LM examples to im- prove memorization. 
However, kNN-LM was not fine- tuned for downstream tasks, perhaps because it is unclear how to adapt the retrieval mechanism: a kNN can only use examples labeled for the target task—during fine-tuning, this precludes LM examples, which contain the desired world knowledge. In contrast, REALM’s retriever is de- signed to transfer to other tasks, and the retrieval is just text, not a labeled example. We focus on the masked language model2 (MLM) variant of pre-training popularized by BERT (Devlin et al., 2018). In its basic form, an MLM is trained to predict the miss- ing tokens in an input text passage. Given an unlabeled pre-training corpus X (e.g., Wikipedia text), a training ex- ample (x, y) can be generated by randomly masking to- kens in a sampled piece of text (e.g., x = “The [MASK] is the currency [MASK] the UK”; y = (“pound”, “of”)). The model uses its representation of the masked input x to predict the token that should go in each mask. A good MLM must learn to encode syntactic and semantic information (e.g., to predict “of”) as well as some world knowledge (e.g., to predict “pound”). Open-domain question answering (Open-QA) To mea- sure a model’s ability to incorporate world knowledge, we need a downstream task where world knowledge is criti- cal. Perhaps one of the most knowledge-intensive tasks in natural language processing is open-domain question an- swering (Open-QA): given a question x such as “What is the currency of the UK?”, a model must output the correct answer string y, “pound”. The “open” part of Open- QA refers to the fact that the model does not receive a pre- identified document that is known to contain the answer, unlike traditional reading comprehension (RC) tasks such as SQuAD (Rajpurkar et al., 2016; 2018). While RC mod- We evaluate our approach by fine-tuning the mod- els pre-trained with REALM on the task of Open- domain Question Answering (Open-QA), one of the most knowledge-intensive tasks in natural language process- ing. We evaluate on three popular Open-QA bench- marks (NATURALQUESTIONS-OPEN, WEBQUESTIONS, and 1We use the term “document” loosely to refer to a passage from the knowledge corpus, not necessarily a whole article. 2Strictly speaking, MLM is not a standard language model, since it does not define a distribution over the entire sequence of tokens. In the paper we sometimes abuse the term “language model” slightly to make the phrase shorter. REALM: Retrieval-Augmented Language Model Pre-Training els comprehend a single document, Open-QA models must retain knowledge from millions of documents, since a ques- tion could be about any of them. We focus on Open-QA systems that utilize a textual knowl- edge corpus Z as the knowledge source. Many of these systems employ a retrieval-based approach: given a ques- tion x, retrieve potentially relevant documents z from the corpus Z, and then extract an answer y from the documents (Brill et al., 2002; Chen et al., 2017; Lee et al., 2019). is inspired by this paradigm and extends it to language model pre-training. Alternatively, some recent work has proposed generation- based systems that apply a sequence-to-sequence model on x to directly generate y token-by-token (Lewis et al., 2019; Raffel et al., 2019). We will compare against state-of-the- art systems from both paradigms in our experiments. # 3. Approach # 3.2. Model architecture We now describe the two key components: the neural knowledge retriever, which models p(z | x), and the knowledge-augmented encoder, which models p(y | z, x). 
Knowledge Retriever The retriever is defined using a dense inner product model: exp f (x, z) z′ exp f (x, z′) f (x, z) = Embedinput(x)⊤Embeddoc(z), where Embedinput and Embeddoc are embedding functions that map x and z respectively to d-dimensional vectors. The relevance score f (x, z) between x and z is defined as the inner product of the vector embeddings. The retrieval distribution is the softmax over all relevance scores. We start by formalizing REALM’s pre-training and fine- tuning tasks as a retrieve-then-predict generative process in Section 3.1. Then in Section 3.2, we describe the model architectures for each component of that process. In Sec- tion 3.3, we show how to implement REALM pre-training and fine-tuning by maximizing the likelihood of REALM’s generative process. En route, we address important compu- tational challenges, explain why training works, and also discuss strategies for injecting useful inductive biases. The overall framework is illustrated in Figure 2. We implement the embedding functions using BERT-style Transformers (Devlin et al., 2018). Following standard practices, we join spans of text by applying wordpiece tok- enization, separating them with [SEP] tokens, prefixing a [CLS] token, and appending a final [SEP] token. joinBERT(x) = [CLS]x[SEP] joinBERT(x1, x2) = [CLS]x1[SEP]x2[SEP] # 3.1. REALM’s generative process For both pre-training and fine-tuning, REALM takes some input x and learns a distribution p(y | x) over possible out- puts y. For pre-training, the task is masked language mod- eling: x is a sentence from a pre-training corpus X with some tokens masked out, and the model must predict the value of those missing tokens, y. For fine-tuning, the task is Open-QA: x is a question, and y is the answer. As in Devlin et al. (2018), we pass this into a Transformer, which produces one vector for each token, including the vector corresponding to [CLS] which is used as a “pooled” representation of the sequence (denoted BERTCLS). Finally, we perform a linear projection to reduce the dimensionality of the vector, denoted as a projection matrix W: Embedinput(x) = WinputBERTCLS(joinBERT(x)) Embeddoc(z) = WdocBERTCLS(joinBERT(ztitle, zbody)) REALM decomposes p(y | x) into two steps: retrieve, then predict. Given an input x, we first retrieve possibly helpful documents z from a knowledge corpus Z. We model this as a sample from the distribution p(z | x). Then, we condition on both the retrieved z and the original input x to generate the output y—modeled as p(y | z, x). To obtain the overall likelihood of generating y, we treat z as a latent variable and marginalize over all possible documents z, yielding where ztitle is the document’s title and zbody is its body. We let θ denote all parameters associated with the retriever, which include the Transformer and projection matrices. Knowledge-Augmented Encoder Given an input x and a retrieved document z, the knowledge-augmented encoder defines p(y | z, x). We join x and z into a single sequence that we feed into a Transformer (distinct from the one used in the retriever). This allows us to perform rich cross- attention between x and z before predicting y. See Figure 1 for a concrete example. p(y | x) = p(y | z, x) p(z | x). (1) # z∈Z X At this stage, the architectures for pre-training and fine- tuning differ slightly. For the masked language model pre- training task, we must predict the original value of each [MASK] token in x. 
To do so, we use the same masked REALM: Retrieval-Augmented Language Model Pre-Training ~ Unlabeled text --------------47 - Input query ' ‘ what’s the angle of an , triangle? (x); Na Knowledge-Augmented Encoder (#) | - Answer | 60 degrees (y) : Unlabeled text - Input query Answer Figure2. The overall framework of REALM. Left: Unsupervised pre-training. The knowledge retriever and knowledge-augmented encoder are jointly pre-trained on the unsupervised language modeling task. Right: Supervised fine-tuning. After the parameters of the retriever (θ) and encoder (φ) have been pre-trained, they are then fine-tuned on a task of primary interest, using supervised examples. language modeling (MLM) loss as in Devlin et al. (2018): Jx p(y | z, x) = p(yj | z, x) j=1 Y p(yj | z, x) ∝ exp w⊤ j BERTMASK(j)(joinBERT(x, zbody)) The key computational challenge is that the marginal prob- z∈Z p(y | x, z) p(z | x) involves a sum- ability p(y | x) = mation over all documents z in the knowledge corpus Z. We approximate this by instead summing over the top k documents with highest probability under p(z | x)—this is reasonable if most documents have near zero probability. where BERTyasx(;) denotes the Transformer output vector corresponding to the j’” masked token, J,, is the total num- ber of [MASK] tokens in 2, and w; is a learned word em- bedding for token y;. For Open-QA fine-tuning, we wish to produce the answer string y. Following previous reading comprehension work (Rajpurkar et al., 2016; Seo et al., 2016; Lee et al., 2016; Clark & Gardner, 2017), we will assume that the answer y can be found as a contiguous sequence of tokens in some document z. Let S(z, y) be the set of spans matching y in z. Then we can define p(y | z, x) as: Ss exp (MLP ([hsrart(s)3 hennis)| )) s€S(z,y) Asrat(s) = BERTsrapr(s)(JOinggar(@; Zpody))s henp(s) = BERT gnp(s)(jOiDgepr (a, Zoay)); P(y| 2,2) x where BERTSTART(s) and BERTEND(s) denote the Transformer output vectors corresponding to the start and end tokens of span s, respectively, while MLP denotes a feed-forward neu- ral network. We will let φ denote all parameters associated with the knowledge-augmented encoder. # 3.3. Training Even with this approximation, we still need an efficient way to find the top k documents. Note that the ordering of doc- uments under p(z | x) is the same as under the relevance score f (x, z) = Embedinput(x)⊤Embeddoc(z), which is an inner product. Thus, we can employ Maximum Inner Prod- uct Search (MIPS) algorithms to find the approximate top k documents, using running time and storage space that scale sub-linearly with the number of documents (Ram & Gray, 2012; Shrivastava & Li, 2014; Shen et al., 2015). To employ MIPS, we must pre-compute Embeddoc(z) for every z ∈ Z and construct an efficient search index over these embeddings. However, this data structure will no longer be consistent with p(z | x) if the parameters θ of Embeddoc are later updated. Hence, the search index goes “stale” after every gradient update on θ. Our solution is to “refresh” the index by asynchronously re-embedding and re-indexing all documents every several hundred training steps. The MIPS index is slightly stale be- tween refreshes, but note that it is only used to select the top k documents. We recompute p(z | x) and its gradient, using the fresh θ, for these top k documents after retriev- ing them. In Section 4.5, we empirically demonstrate that this procedure results in stable optimization, provided that refreshes happen at a sufficiently frequent rate. 
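A rough sketch of the retrieval step just described, where the document embeddings come from the (possibly slightly stale) pre-computed index and only the selected top-k documents are re-scored with fresh parameters; brute-force NumPy search stands in here for a real sub-linear MIPS index, so treat it as an illustration of the procedure rather than the actual system.

```python
import numpy as np

def retrieve_top_k(query_emb, stale_doc_embs, k=8):
    """Select the top-k documents using the stale index (MIPS stand-in)."""
    scores = stale_doc_embs @ query_emb           # f(x, z) under the stale embeddings
    return np.argpartition(-scores, k)[:k]        # indices of the k highest scores

def fresh_topk_distribution(query_emb, fresh_doc_embs, top_ids):
    """Recompute relevance with fresh parameters for the selected documents only."""
    scores = fresh_doc_embs[top_ids] @ query_emb
    scores -= scores.max()
    probs = np.exp(scores)
    return probs / probs.sum()                    # p(z | x) restricted to the top-k

# The marginal p(y | x) is then approximated by summing p(y | z, x) p(z | x)
# over these k documents only, as in the top-k approximation described above.
```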
For both pre-training and fine-tuning, we train by maxi- mizing the log-likelihood log p(y | x) of the correct out- put y. Since both the knowledge retriever and knowledge- augmented encoder are differentiable neural networks, we can compute the gradient of log p(y | x) (defined in Equa- tion 1) with respect to the model parameters θ and φ, and optimize using stochastic gradient descent. Implementing asynchronous MIPS refreshes We asyn- chronously refresh the MIPS index by running two jobs in parallel: a primary trainer job, which performs gradient updates on the parameters, and a secondary index builder job, which embeds and indexes the documents. As shown REALM: Retrieval-Augmented Language Model Pre-Training MIPS index of Z MLM trainer Index builder (stale 6’) - (fresh @) Updates 6’ + 0 # 3.4. Injecting inductive biases into pre-training In the process of developing REALM, we discovered sev- eral additional strategies that further guide the model to- wards meaningful retrievals, described below. Figure3. REALM pre-training with asynchronous MIPS re- freshes. below, the trainer sends the index builder a snapshot of its parameters, θ′. The trainer then continues to train while the index builder uses θ′ to construct a new index in the back- ground. As soon as the index builder is done, it sends the new index back to the trainer, and the process repeats. While asynchronous refreshes can be used for both pre- training and fine-tuning, in our experiments we only use it for pre-training. For fine-tuning, we just build the MIPS in- dex once (using the pre-trained θ) for simplicity and do not update Embeddoc.3 Note that we still fine-tune Embedinput, so the retrieval function is still updated from the query side. What does the retriever learn? Since the knowledge re- trieval of REALM is latent, it is not obvious how the train- ing objective encourages meaningful retrievals. Here, we show how it rewards retrievals that improve prediction ac- curacy. For a given query x and document z, recall that f (x, z) is the “relevance score” that the knowledge retriever assigns to document z. We can see how a single step of gradient descent during REALM pre-training alters this score by an- alyzing the gradient with respect to the parameters of the knowledge retriever, θ: Salient span masking During REALM pre-training, we want to focus on examples x that require world knowledge to predict the masked tokens. As explained in Section 2, some MLM spans only require local context. To focus on problems that require world knowledge, we mask salient spans such as “United Kingdom” or “July 1969”. We use a BERT-based tagger trained on CoNLL-2003 data (Sang & De Meulder, 2003) to identify named entities, and a regular expression to identify dates. We select and mask one of these salient spans within a sentence for the masked language modeling task. We show that this significantly outperforms other masking strategies in Section 4.5. Null document Even with salient span masking, not all masked tokens require world knowledge to predict. We model this by adding an empty null document ∅ to the top k retrieved documents, allowing appropriate credit to be as- signed to a consistent sink when no retrieval is necessary. 
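A rough sketch of the salient span masking described above, using a simple date regex and a placeholder entity finder; the real system uses a BERT-based CoNLL-2003 named-entity tagger, so `find_named_entities` below is a hypothetical stand-in.

```python
import random
import re

DATE_RE = re.compile(
    r"\b(?:\d{1,2}\s+)?(?:January|February|March|April|May|June|July|"
    r"August|September|October|November|December)\s+\d{4}\b")

def find_named_entities(text):
    # Hypothetical stand-in for the BERT-based CoNLL-2003 NER tagger.
    return [m.span() for m in re.finditer(r"\b(?:[A-Z][a-z]+ )+[A-Z][a-z]+\b", text)]

def mask_salient_span(text, mask_token="[MASK]"):
    spans = [m.span() for m in DATE_RE.finditer(text)] + find_named_entities(text)
    if not spans:                      # no salient span found: leave the sentence unmasked
        return text
    start, end = random.choice(spans)  # mask one salient span per sentence
    return text[:start] + mask_token + text[end:]

print(mask_salient_span("Apollo 11 landed on the Moon in July 1969."))
```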
Prohibiting trivial retrievals If the pre-training corpus X and the knowledge corpus Z are the same, there exists a trivial retrieval candidate z that is too informative: if the masked sentence x comes from document z, the knowledge augmented encoder can trivially predict y by looking at the unmasked version of x in z. This results in a large positive gradient for p(z | x). If this occurs too often, the knowledge retriever ends up learning to look for exact string matches between x and z, which does not capture other forms of relevance. For this reason, we exclude this trivial candidate during pre-training. ∇ log p(y | x) = r(z)∇f (x, z) r(z) = z∈Z X p(y | z, x) p(y | x) − 1 p(z | x). For each document z, the gradient encourages the retriever to change the score f (x, z) by r(z) — increasing if r(z) is positive, and decreasing if negative. The multiplier r(z) is positive if and only if p(y | z, x) > p(y | x). The term p(y | z, x) is the probability of predicting the correct output y when using document z. The term p(y | x) is the expected value of p(y | x, z) when randomly sampling a document from p(z | x). Hence, document z receives a positive up- date whenever it performs better than expected. 3This works because pre-training already yields a good Embeddoc function. However, it is possible that refreshing the in- dex would further improve performance. Initialization At the beginning of training, if the retriever does not have good embeddings for Embedinput(x) and Embeddoc(z), the retrieved documents z will likely be unre- lated to x. This causes the knowledge augmented encoder to learn to ignore the retrieved documents. Once this oc- curs, the knowledge retriever does not receive a meaning- ful gradient and cannot improve, creating a vicious cycle. To avoid this cold-start problem, we warm-start Embedinput and Embeddoc using a simple training objective known as the Inverse Cloze Task (ICT) where, given a sentence, the model is trained to retrieve the document where that sen- tence came from. We defer to Lee et al. (2019) for de- tails. For the knowledge-augmented encoder, we warm- start it with BERT pre-training—specifically, the uncased BERT-base model (12 layers, 768 hidden units, 12 atten- tion heads). REALM: Retrieval-Augmented Language Model Pre-Training # 4. Experiments We now evaluate our approach on the Open-QA task. In this section, we describe in detail the benchmarks used and the different approaches to which we compare empirically. evant documents (e.g., 20). These documents are typically then re-ranked using a learned model, but coverage may be limited by the initial heuristic retrieval step. Approaches such as DrQA (Chen et al., 2017), HardEM (Min et al., 2019a), GraphRetriever (Min et al., 2019b), and PathRe- triever (Asai et al., 2019) in Table 1 are in this category. # 4.1. Open-QA Benchmarks A number of benchmarks have been proposed for Open- QA. In this work, we focus on datasets where the ques- tion writers did not already know the answer. This yields questions that reflect more realistic information-seeking needs, and also avoids artifacts that can arise if the ques- tion is formulated with a particular answer in mind. A In all deeper justification is given in Lee et al. (2019). cases, the predicted answer is evaluated via exact match with any reference answer, following previous Open-QA work (Chen et al., 2017). Some recent approaches have proposed to implement learn- able retrieval using a MIPS index. 
ORQA (Lee et al., 2019) formulates Open-QA using a similar latent variable model as REALM, and also trains by maximizing the marginal likelihood. However, REALM adds a novel language model pre-training step, and backpropagates into the MIPS index, rather than using a fixed index. In Table 1, we di- It is also important to note that rectly compare the two. the retrievers for both REALM pretraining and ORQA are initialized using the Inverse Cloze Task, described in Sec- tion 3.4. NaturalQuestions-Open The NaturalQuestions dataset (Kwiatkowski et al., 2019) consists of naturally occurring Google queries and their answers. Each answer also comes with an “answer type”: following Lee et al. (2019), we only keep questions that are categorized as “short answer type” with at most five tokens. The dataset also provides a sug- gested Wikipedia document to retrieve; like all models we compare against, we do not provide this to our model. WebQuestions The WebQuestions dataset (Berant et al., 2013) was collected from the Google Suggest API, using one seed question and expanding the set to related ques- tions. We follow the setting defined by Chen et al. (2017). Generation-based Open-QA An emerging alternative approach to Open-QA is to model it as a sequence pre- diction task: simply encode the question, and then decode the answer token-by-token based on the encoding. While it was initially unclear how large amounts of knowledge could be injected into the model, GPT-2 (Radford et al., 2019) hinted at the possibility of directly generating an- swers without using any given context via sequence-to- sequence. However, their performance was not competi- tive possibly due to the lack of fine-tuning. Orthogonally, T5 (Raffel et al., 2019) showed that directly generating an- swers without explicit extraction from the given context is viable approach, but they only experimented on the read- ing comprehension task, where a context document is pro- vided. CuratedTrec The CuratedTrec dataset is a collection of question-answer pairs drawn from real user queries issued on sites such as MSNSearch and AskJeeves. To account for multiple correct answers or different spelling variations, the answers in this dataset are defined as regular expressions that match all correct answers. It is unclear how to train generation-based models with this type of supervision, so we do not evaluate them on this dataset. For the most competitive and comparable generation-based baseline, we compare to concurrent work which fine-tunes T5 for Open-QA (Roberts et al., 2020).4 We compare against the Base, Large, and even larger 11-billion parame- ter model to measure the effect of model size. # 4.3. Implementation Details # 4.2. Approaches compared Retrieval-based Open-QA Most existing Open-QA sys- tems answer the input question by first retrieving poten- tially relevant documents from a knowledge corpus, and then using a reading comprehension system to extract an answer from the documents. In this paradigm, the knowl- edge is stored explicitly in the corpus. We wish to compare different methods for implementing retrieval. Many approaches use non-learned heuristic retrieval such as sparse bag-of-words matching (Robertson et al., 2009) or entity linking on the question to select a small set of rel- Fine-tuning We from reuse Lee et al. (2019), to enable direct comparison. Our knowledge corpus is derived from the December 20, 2018 snapshot of English Wikipedia. 
Documents are greedily split into chunks of up to 288 BERT wordpieces, resulting in just over 13 million retrieval candidates. During fine- tuning inference, we consider the top-5 candidates, and the 4We initially conducted our own T5 experiments using the code from https://tinyurl.com/t5-openqa-colab (Raffel et al., 2019). We now report results from the concurrent work of Roberts et al. (2020), which has an improved fine-tuning proce- dure. REALM: Retrieval-Augmented Language Model Pre-Training Table1. Test results on Open-QA benchmarks. The number of train/test examples are shown in paretheses below each benchmark. Predictions are evaluated with exact match against any reference answer. Sparse retrieval denotes methods that use sparse features such as TF-IDF and BM25. Our model, REALM, outperforms all existing systems. Name Architectures Pre-training NQ (79k/4k) WQ (3k/2k) CT (1k /1k) # params BERT-Baseline (Lee et al., 2019) Sparse Retr.+Transformer BERT 26.5 17.7 21.3 110m T5 (base) (Roberts et al., 2020) T5 (large) (Roberts et al., 2020) T5 (11b) (Roberts et al., 2020) Transformer Seq2Seq Transformer Seq2Seq Transformer Seq2Seq T5 (Multitask) T5 (Multitask) T5 (Multitask) 27.0 29.8 34.5 29.1 32.2 37.4 - - - 223m 738m 11318m DrQA (Chen et al., 2017) HardEM (Min et al., 2019a) GraphRetriever (Min et al., 2019b) PathRetriever (Asai et al., 2019) ORQA (Lee et al., 2019) Sparse Retr.+DocReader Sparse Retr.+Transformer GraphRetriever+Transformer PathRetriever+Transformer Dense Retr.+Transformer N/A BERT BERT MLM ICT+BERT - 28.1 31.8 32.6 33.3 20.7 - 31.6 - 36.4 25.7 - - - 30.1 34m 110m 110m 110m 330m Ours (X = Wikipedia, Z = Wikipedia) Dense Retr.+Transformer Dense Retr.+Transformer Ours (X = CC-News, Z = Wikipedia) REALM REALM 39.2 40.4 40.2 40.7 46.8 42.9 330m 330m Table2. Ablation experiments on NQ’s development set. Ablation Exact Match Zero-shot Retrieval Recall@5 REALM 38.2 38.5 REALM retriever+Baseline encoder Baseline retriever+REALM encoder Baseline (ORQA) 37.4 35.3 31.3 38.5 13.9 13.9 REALM with random uniform masks REALM with random span masks 32.3 35.3 24.2 26.1 30× stale MIPS 28.7 15.1 As reported in the concurrent work of Roberts et al. (2020), the generative Open-QA systems based on T5 are surpris- ingly powerful, with the largest T5-11B model outperform- ing the previous best Open-QA system. Increasing the size of T5 yields consistent improvement, but comes at signif- icant computational cost (from Base to 11B, the model is 50 times larger, and gains roughly 5 points in accuracy). In contrast, REALM outperforms the largest T5-11B model while being 30 times smaller. It is also important to note that T5 accesses additional reading comprehension data from SQuAD during its pre-training (100,000+ examples). Access to such data could also benefit REALM, but was not used in our experiments. entire model can be run on a single machine with a 12GB GPU. Pre-training We pre-train for 200k steps on 64 Google Cloud TPUs, with a batch size of 512 and a learning rate of 3e-5, using BERT’s default optimizer. The document embedding step for the MIPS index is parallelized over 16 TPUs. For each example, we retrieve and marginalize over 8 candidate documents, including the null document ∅. We experiment with two choices of the pre-training corpus X : (1) Wikipedia, which is identical to the knowledge cor- pus Z, and (2) CC-News, our reproduction of the corpus of English news proposed by Liu et al. (2019). 
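For reference, the implementation details above can be collected in one place; the field names in this configuration sketch are illustrative rather than taken from the released code.

```python
# Hyperparameters as reported in Section 4.3 (names are illustrative).
REALM_CONFIG = {
    "pretrain_steps": 200_000,
    "pretrain_batch_size": 512,
    "learning_rate": 3e-5,                   # with BERT's default optimizer
    "pretrain_candidates": 8,                # documents marginalized over, incl. the null document
    "finetune_candidates": 5,                # top-5 candidates at fine-tuning inference
    "chunk_length_wordpieces": 288,          # greedy document chunking
    "num_retrieval_candidates": 13_000_000,  # ~13M chunks of English Wikipedia
    "pretraining_corpora": ["Wikipedia", "CC-News"],
}
```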
Among all systems, the most direct comparison with REALM is ORQA (Lee et al., 2019), where the fine-tuning setup, hyperparameters and training data are identical. The improvement of REALM over ORQA is purely due to bet- ter pre-training methods. The results also indicate that our method of pre-training can be applied both on (1) the single- corpus setting (X = Wikipedia, Z = Wikipedia), or (2) the separate-corpus setting (X = CC-News, Z = Wikipedia). Compared to other retrieval-based systems (Asai et al., 2019; Min et al., 2019a;b) which often retrieve from 20 to 80 documents, our system gets the overall best performance while only retrieving 5 documents. # 4.5. Analysis # 4.4. Main results Table 1 shows the accuracy of different approaches on the three Open-QA datasets. REALM outperform all previous approaches by a significant margin. Table 1 also shows the number of parameters for each model. In Table 2 we present results for NaturalQuestions-Open after ablating critical components of REALM. In addition to the end-to-end results, we also report how often the gold answer appears in the top-5 retrievals before applying any fine-tuning. The latter metric more significantly isolates the contribution of improving the retriever during pre-training. REALM: Retrieval-Augmented Language Model Pre-Training Table3. An example where REALM utilizes retrieved documents to better predict masked tokens. It assigns much higher probability (0.129) to the correct term, “Fermat”, compared to BERT. (Note that the blank corresponds to 3 BERT wordpieces.) x: An equilateral triangle is easily constructed using a straightedge and compass, because 3 is a # a____ prime. (a) BERT p(y = “Fermat” | x) = 1.1 × 10 −14 (No retrieval.) (b) REALM p(y = “Fermat” | x, z) = 1.0 (c) REALM p(y = “Fermat” | x) = 0.129 (Conditional probability with document z =“257 is . . . a Fermat prime. Thus a regular polygon with 257 sides is constructible with compass . . . ”) (Marginal probability, marginalizing over top 8 retrieved documents.) Encoder or Retriever We first aim to determine whether REALM pre-training improves the retriever or the encoder, or both. To do so, we can reset the parameters of either the retriever or the encoder to their baseline state before REALM pre-training, and feed that into fine-tuning. Reset- ting both the retriever and encoder reduces the system to our main baseline, ORQA. We find that both the encoder and retriever benefit from REALM training separately, but the best result requires both components acting in unison. Masking scheme We compare our salient span masking scheme (Section 3.4) with (1) random token masking in- troduced in BERT (Devlin et al., 2018) and (2) random span masking proposed by SpanBERT (Joshi et al., 2019). While such salient span masking has not been shown to be impactful in previous work with standard BERT train- ing (Joshi et al., 2019), it is crucial for REALM. Intuitively, the latent variable learning relies heavily on the utility of re- trieval and is therefore more sensitive to a consistent learn- ing signal. MIPS index refresh rate During pre-training, we run a parallel process to re-embed corpus documents and rebuild the MIPS index. This results in one index refresh per ap- proximately 500 training steps. To demonstrate the impor- tance of frequent index refreshes, we compare against using a slower refresh rate. The results in Table 2 suggests that a stale index can hurt model training, and further reducing this staleness could offer better optimization. 
Examples of retrieved documents Table 3 shows an example of the REALM masked language model predic- tion. In this example, “Fermat” is the correct word, and REALM (row (c)) gives the word a much high probability compared to the BERT model (row (a)). Since REALM manages to retrieve some documents with a related fact (row (b)), the marginalized probability of the correct an- swer dramatically increases. This shows that REALM is able to retrieve document to fill in the masked word even though it is trained with unsupervised text only. # 5. Discussion and Related Work We previously discussed related methods for Open-QA. Here we present several alternate ways of viewing REALM that connect it to a broader set of ideas beyond Open-QA: Language modeling with corpus as context Language representation models have been incorporating contexts of increasingly large scope when making predictions. Ex- amples of this progression include models that condi- tion on surrounding words (Mikolov et al., 2013a;b), sen- tences (Kiros et al., 2015; Peters et al., 2018), and para- graphs (Radford et al., 2018; Devlin et al., 2018). We can view REALM as a generalization of the above work to the next level of scope: the entire text corpus. Retrieve-and-edit with learned retrieval In order to better explain the variance in the input text and en- able controllable generation, Guu et al. (2018) proposed a language model with the retrieve-and-edit frame- work (Hashimoto et al., 2018) that conditions on text with high lexical overlap. REALM has a similar approach, ex- cept that the model learns for itself which texts are most useful for reducing perplexity. By jointly learning the re- triever, REALM has the capacity to depend on information beyond lexical overlap. Scalable grounded neural memory The document in- dex can be viewed as a memory where the keys are the document embeddings. From this view, our work share motivations with works such as product key mem- ory (Lample et al., 2019), which enables sub-linear mem- ory access in a memory network (Weston et al., 2014; Graves et al., 2014; Sukhbaatar et al., 2015), allowing these scalable memory layers to be integrated into large language models. One main difference is that our memo- ries are grounded—each memory is associated with a docu- ment rather than unnamed value vectors. This level of inter- pretability is crucial for applications like Open-QA, where users would require provenance for a predicted answer to be trustworthy. Unsupervised Corpus Alignment sequence-to- sequence models with attention (Bahdanau et al., 2014), REALM: Retrieval-Augmented Language Model Pre-Training text is generated with latent selection of relevant tokens. This results in a set of model-centric unsupervised align- ments between target and source tokens. Analogously, REALM also generates text with latent selection of relevant documents. A by-product of our method is that we offer a set of model-centric unsupervised alignments between text in the pre-training corpus X and knowledge corpus Z. Devlin, J., Chang, M.-W., Lee, K., and Toutanova, K. Bert: Pre-training of deep bidirectional transformers for lan- guage understanding. arXiv preprint arXiv:1810.04805, 2018. Graves, A., Wayne, G., and Danihelka, I. Neural turing machines. ArXiv, abs/1410.5401, 2014. # 6. Future Work Guu, K., Hashimoto, T. B., Oren, Y., and Liang, P. Gen- erating sentences by editing prototypes. Transactions of the Association for Computational Linguistics, 6:437– 450, 2018. 
The work presented here is the minimal instantiation of a family of REALM-like approaches where a representation is pre-trained to perform reasoning over a large corpus of knowledge on-the-fly during inference. We are particularly optimistic about generalizations of this work to (1) struc- tured knowledge, which would result in a generalization of Peters et al. (2019) where we would also learn the decision of which entities are informative, (2) the multi-lingual set- ting, e.g., retrieving knowledge in a high-resource language to better represent text in a low-resource language, and (3) the multi-modal setting, e.g., retrieving images or videos that can provide knowledge rarely observed in text. # References Hashimoto, T. B., Guu, K., Oren, Y., and Liang, P. S. A retrieve-and-edit framework for predicting structured outputs. In Advances in Neural Information Processing Systems, pp. 10052–10062, 2018. Joshi, M., Chen, D., Liu, Y., Weld, D. S., Zettlemoyer, L., and Levy, O. SpanBERT: Improving pre-training by representing and predicting spans. arXiv preprint arXiv:1907.10529, 2019. Khandelwal, U., Levy, O., Jurafsky, D., Zettlemoyer, L., and Lewis, M. Generalization through memo- rization: Nearest neighbor language models. ArXiv, abs/1911.00172, 2019. Asai, A., Hashimoto, K., Hajishirzi, H., Socher, R., and Xiong, C. Learning to retrieve reasoning paths over wikipedia graph for question answering. arXiv preprint arXiv:1911.10470, 2019. Kiros, R., Zhu, Y., Salakhutdinov, R. R., Zemel, R., Urta- sun, R., Torralba, A., and Fidler, S. Skip-thought vectors. In Advances in neural information processing systems, pp. 3294–3302, 2015. Bahdanau, D., Cho, K., and Bengio, Y. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473, 2014. Berant, J., Chou, A., Frostig, R., and Liang, P. Semantic parsing on freebase from question-answer pairs. In Pro- ceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pp. 1533–1544, 2013. Brill, E., Dumais, S., and Banko, M. An analysis of the askmsr question-answering system. In Empirical Meth- ods in Natural Language Processing, 2002. Chen, D., Fisch, A., Weston, J., and Bordes, A. Read- ing wikipedia to answer open-domain questions. In Pro- ceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pp. 1870–1879, 2017. Clark, C. and Gardner, M. Simple and effective multi- paragraph reading comprehension. In Annual Meeting of the Association for Computational Linguistics, 2017. Dai, A. M. and Le, Q. V. Semi-supervised sequence learn- ing. In Advances in neural information processing sys- tems, pp. 3079–3087, 2015. Kwiatkowski, T., Palomaki, J., Rhinehart, O., Collins, M., Parikh, A., Alberti, C., Epstein, D., Polosukhin, I., Kel- cey, M., Devlin, J., et al. Natural questions: a benchmark for question answering research. Transactions of the As- sociation for Computational Linguistics, 2019. Lample, G., Sablayrolles, A., Ranzato, M., Denoyer, L., and J´egou, H. Large memory layers with product keys. In Advances in Neural Information Processing Systems, pp. 8546–8557, 2019. Lee, K., Salant, S., Kwiatkowski, T., Parikh, A., Das, D., and Berant, J. Learning recurrent span representa- tions for extractive question answering. arXiv preprint arXiv:1611.01436, 2016. Lee, K., Chang, M.-W., and Toutanova, K. Latent re- trieval for weakly supervised open domain question an- swering. 
In Proceedings of the Conference of Associa- tion for Computational Linguistics, 2019. Lewis, M., Liu, Y., Goyal, N., Ghazvininejad, M., Mo- hamed, A., Levy, O., Stoyanov, V., and Zettlemoyer, L. Bart: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehen- sion. ArXiv, abs/1910.13461, 2019. REALM: Retrieval-Augmented Language Model Pre-Training Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., and Stoyanov, V. Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692, 2019. Rajpurkar, P., Jia, R., and Liang, P. Know what you don’t know: Unanswerable questions for squad. arXiv preprint arXiv:1806.03822, 2018. Mikolov, T., Chen, K., Corrado, G., and Dean, J. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781, 2013a. Ram, P. and Gray, A. G. Maximum inner-product search us- ing cone trees. In Proceedings of the 18th ACM SIGKDD international conference on Knowledge discovery and data mining, pp. 931–939, 2012. Mikolov, T., Sutskever, I., Chen, K., Corrado, G. S., and Dean, J. Distributed representations of words and In Advances in phrases and their compositionality. neural information processing systems, pp. 3111–3119, 2013b. Miller, A., Fisch, A., Dodge, J., Karimi, A.-H., Bordes, A., and Weston, J. Key-value memory networks for directly reading documents. arXiv preprint arXiv:1606.03126, 2016. Min, S., Chen, D., Hajishirzi, H., and Zettlemoyer, L. A dis- crete hard em approach for weakly supervised question answering. arXiv preprint arXiv:1909.04849, 2019a. Roberts, A., Raffel, C., and Shazeer, N. How much knowl- edge can you pack into the parameters of a language model? arXiv preprint arXiv:TBD, 2020. Robertson, S., Zaragoza, H., et al. The probabilistic rele- vance framework: Bm25 and beyond. Foundations and Trends in Information Retrieval, 3(4):333–389, 2009. Sang, E. T. K. and De Meulder, F. Introduction to the conll- 2003 shared task: Language-independent named entity recognition. In Proceedings of the Seventh Conference on Natural Language Learning at HLT-NAACL 2003, pp. 142–147, 2003. Min, S., Chen, D., Zettlemoyer, L., and Hajishirzi, H. Knowledge guided text retrieval and reading for open domain question answering. arXiv preprint arXiv:1911.03868, 2019b. Seo, M., Kembhavi, A., Farhadi, A., and Hajishirzi, H. Bidirectional attention flow for machine comprehension. In International Conference on Learning Representa- tions, 2016. Peters, M. E., Neumann, M., Iyyer, M., Gardner, M., Clark, C., Lee, K., and Zettlemoyer, L. Deep contextualized word representations. In Proc. of NAACL, 2018. Shen, F., Liu, W., Zhang, S., Yang, Y., and Tao Shen, H. Learning binary codes for maximum inner product search. In Proceedings of the IEEE International Con- ference on Computer Vision, pp. 4148–4156, 2015. Peters, M. E., Neumann, M., IV, R. L. L., Schwartz, R., Joshi, V., Singh, S., and Smith, N. A. Knowledge en- hanced contextual word representations, 2019. Petroni, F., Rockt¨aschel, T., Lewis, P., Bakhtin, A., Wu, Y., Miller, A. H., and Riedel, S. Language models as knowl- edge bases? arXiv preprint arXiv:1909.01066, 2019. Radford, A., Narasimhan, K., Salimans, T., and Sutskever, I. Improving language understanding with unsupervised learning. Technical report, OpenAI, 2018. Shrivastava, A. and Li, P. Asymmetric lsh (alsh) for sub- linear time maximum inner product search (mips). In Advances in Neural Information Processing Systems, pp. 
2321–2329, 2014.

Sukhbaatar, S., Weston, J., Fergus, R., et al. End-to-end memory networks. In Advances in Neural Information Processing Systems, 2015.

Weston, J., Chopra, S., and Bordes, A. Memory networks. arXiv preprint arXiv:1410.3916, 2014.

Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., and Sutskever, I. Language models are unsupervised multitask learners. OpenAI Blog, 2019.

Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., and Liu, P. J. Exploring the limits of transfer learning with a unified text-to-text transformer. arXiv preprint arXiv:1910.10683, 2019.

Rajpurkar, P., Zhang, J., Lopyrev, K., and Liang, P. SQuAD: 100,000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pp. 2383–2392, 2016.

# A. Derivation of the gradient with respect to the knowledge retriever

We compute the gradient of the REALM pre-training objective (a log-likelihood) with respect to the parameters of the knowledge retriever, θ:

∇ log p(y | x) = p(y | x)^{-1} ∇ p(y | x)
              = p(y | x)^{-1} Σ_z p(y | z, x) ∇ p(z | x)
              = p(y | x)^{-1} Σ_z p(y | z, x) p(z | x) ∇ log p(z | x)
              = Σ_z p(z | y, x) ∇ log p(z | x),

where the last line follows from applying conditional Bayes' rule. We can then expand ∇ log p(z | x) as:

∇ log p(z | x) = ∇ log [ exp f(x, z) / Σ_{z'} exp f(x, z') ]
              = ∇ [ f(x, z) − log Σ_{z'} exp f(x, z') ]
              = ∇ f(x, z) − Σ_{z'} p(z' | x) ∇ f(x, z').

Plugging this back into the first set of equations yields:

∇ log p(y | x) = Σ_z p(z | y, x) [ ∇ f(x, z) − Σ_{z'} p(z' | x) ∇ f(x, z') ]
              = Σ_z p(z | y, x) ∇ f(x, z) − Σ_{z'} p(z' | x) ∇ f(x, z')
              = Σ_z [ p(z | y, x) − p(z | x) ] ∇ f(x, z)
              = Σ_z [ p(y | z, x) p(z | x) / p(y | x) − p(z | x) ] ∇ f(x, z)
              = Σ_z [ p(y | z, x) / p(y | x) − 1 ] p(z | x) ∇ f(x, z).

In the second line, we used the fact that the overall expression is an expectation with respect to p(z | y, x), and the terms which depend on z' but not z can be moved out of that expectation.

# B. Connection between REALM and supervised learning

From the equations in Appendix A, we saw that

∇ log p(y | x) = Σ_z [ p(z | y, x) − p(z | x) ] ∇ f(x, z).

Suppose that there exists one document z* which causes the model to achieve perfect prediction accuracy (i.e., p(y | z*, x) = 1), while all other documents z' result in zero accuracy (i.e., p(y | z', x) = 0). Under this setting, p(z* | y, x) = 1 (provided that p(z* | x) is non-zero), which causes the gradient to become

∇ log p(y | x) = ∇ f(x, z*) − Σ_z p(z | x) ∇ f(x, z)
              = ∇ log p(z* | x).

From this, we see that gradient descent on the REALM objective is equivalent to gradient descent on log p(z* | x). This is none other than the typical maximum likelihood training objective used in supervised learning, where z* is the "gold" document.
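A quick way to sanity-check the identity above is to compare it against finite differences on a tiny synthetic problem, treating the retrieval scores f(x, z) themselves as the free parameters (the ∇f(x, z) factor then carries the chain rule to the retriever's actual parameters θ). The sketch below is purely illustrative; random numbers stand in for the retriever scores and the encoder likelihoods.

```python
import numpy as np

rng = np.random.default_rng(0)
num_docs = 8

f = rng.normal(size=num_docs)               # retrieval scores f(x, z) for one input x
p_y_given_zx = rng.uniform(size=num_docs)   # encoder likelihoods p(y | z, x)

def log_p_y_given_x(scores):
    p_z = np.exp(scores - scores.max())
    p_z /= p_z.sum()                        # p(z | x) = softmax over f(x, z)
    return np.log(np.dot(p_y_given_zx, p_z))  # log sum_z p(y | z, x) p(z | x)

# Analytic gradient from Appendix A: d log p(y|x) / d f(x,z) = p(z|y,x) - p(z|x).
p_z = np.exp(f - f.max()); p_z /= p_z.sum()
p_y = np.dot(p_y_given_zx, p_z)
p_z_given_y = p_y_given_zx * p_z / p_y
analytic = p_z_given_y - p_z

# Numerical gradient via central finite differences.
eps = 1e-6
numeric = np.array([
    (log_p_y_given_x(f + eps * np.eye(num_docs)[i])
     - log_p_y_given_x(f - eps * np.eye(num_docs)[i])) / (2 * eps)
    for i in range(num_docs)
])

assert np.allclose(analytic, numeric, atol=1e-6)
```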
# C. Adapting to new knowledge

An explicit retrieval system allows us to adapt to new world knowledge simply by modifying the corpus documents. To demonstrate this ability, we replace the knowledge corpus with a more recent version of the Wikipedia corpus after pre-training is done. When the input query is about a fact where the two corpora disagree, REALM can change the prediction to reflect the updated information, as exemplified in Table 4. However, even with an explicit retrieval mechanism, the knowledge-augmented encoder will still end up remembering some world knowledge, making the prediction of some input sentences not updated with the new corpus. (For instance, the model predicts "Thatcher" for "___ is the prime minister of United Kingdom." on both corpora, perhaps due to the frequent mention of her name in Wikipedia articles.)

# D. Retrieval Utility

The null document ∅ described in Section 3.4 provides a way to measure the importance of a retrieved document z: we define the retrieval utility (RU) of z for the masked input x as the difference between the log-likelihood of the knowledge-augmented encoder when conditioning on z versus on ∅:

RU(z | x) = log p(y | z, x) − log p(y | ∅, x).   (2)

A negative RU shows that z is less useful for predicting y than the null document. This could mean that z is irrelevant to x, but could also mean that the masked tokens in x do not require world knowledge to predict, or that the world knowledge is sufficiently commonplace that it has been baked into the model's parameters. In practice, we find that RU increases steadily over the course of pre-training, and is more predictive of good performance on the downstream task of Open-QA than even the overall log-likelihood. An example of how RU behaves over time and across different settings is in Figure 4.

Table 4. An example where REALM adapts to the updated knowledge corpus. The Wikipedia page "Excellent Cadaver" was added in 2019, so the model was not able to recover the word when the knowledge corpus is outdated (2018). Interestingly, the same REALM model pre-trained on the 2018 corpus is able to retrieve the document in the updated corpus (2020) and generate the correct token, "Lawrence".

x: "Jennifer ___ formed the production company Excellent Cadaver."
BERT: also (0.13), then (0.08), later (0.05), ...
REALM (Z = 20 Dec 2018 corpus): smith (0.01), brown (0.01), jones (0.01)
REALM (Z = 20 Jan 2020 corpus): lawrence (0.13), brown (0.01), smith (0.01), ...

Figure 4. The Retrieval Utility (RU, described in Eq. 2) vs. the number of pre-training steps, shown for salient span masking, random span masking, and random uniform masking. RU roughly estimates the "usefulness" of retrieval. RU is impacted by the choice of masking and the number of pre-training steps.
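Once per-document likelihoods are available, Eq. (2) is cheap to compute and can be used to rank or filter retrieved documents. A minimal, hypothetical sketch (the log-likelihoods below are made-up stand-ins for the knowledge-augmented encoder's outputs):

```python
import math

def retrieval_utility(log_p_y_given_zx, log_p_y_given_null):
    # RU(z | x) = log p(y | z, x) - log p(y | null, x), as in Eq. (2).
    return log_p_y_given_zx - log_p_y_given_null

# Hypothetical encoder log-likelihoods for one masked input x.
log_p_null = math.log(0.02)            # conditioning on the null document
candidates = {
    "helpful_doc":    math.log(0.60),
    "irrelevant_doc": math.log(0.01),
}

scored = {doc: retrieval_utility(lp, log_p_null) for doc, lp in candidates.items()}
useful = [doc for doc, ru in sorted(scored.items(), key=lambda kv: -kv[1]) if ru > 0]
print(scored)   # irrelevant_doc gets a negative RU
print(useful)   # ['helpful_doc']
```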
{ "id": "1911.10470" }
2002.08910
How Much Knowledge Can You Pack Into the Parameters of a Language Model?
It has recently been observed that neural language models trained on unstructured text can implicitly store and retrieve knowledge using natural language queries. In this short paper, we measure the practical utility of this approach by fine-tuning pre-trained models to answer questions without access to any external context or knowledge. We show that this approach scales with model size and performs competitively with open-domain systems that explicitly retrieve answers from an external knowledge source when answering questions. To facilitate reproducibility and future work, we release our code and trained models at https://goo.gle/t5-cbqa.
http://arxiv.org/pdf/2002.08910
Adam Roberts, Colin Raffel, Noam Shazeer
cs.CL, cs.LG, stat.ML
Camera-ready version for EMNLP
null
cs.CL
20200210
20201005
0 2 0 2 t c O 5 ] L C . s c [ 4 v 0 1 9 8 0 . 2 0 0 2 : v i X r a # How Much Knowledge Can You Pack Into the Parameters of a Language Model? # Adam Roberts∗ Google [email protected] # Colin Raffel∗ Google [email protected] Noam Shazeer Google [email protected] # Abstract It has recently been observed that neural lan- guage models trained on unstructured text can implicitly store and retrieve knowledge using In this short pa- natural language queries. per, we measure the practical utility of this approach by fine-tuning pre-trained models to answer questions without access to any exter- nal context or knowledge. We show that this approach scales with model size and performs competitively with open-domain systems that explicitly retrieve answers from an external knowledge source when answering questions. To facilitate reproducibility and future work, we release our code and trained models.1 (President Franklin <M> born <M> January 1882. D. Roosevelt was <M> in believe her eyes <"> piece <HI> she had ever peaches are <> at our Lily couldn't <">. The waitress had brought the largest <> of chocolate cake <\> seen. Our <> hand-picked and sun-dried <i> orchard in Georgia. Pre-training President Franklin D. Roosevelt was born in January 1882. Fine-tuning When was Franklin D. Roosevelt born? Figure 1: T5 is pre-trained to fill in dropped-out spans of text (denoted by <M>) from documents in a large, unstructured text corpus. We fine-tune T5 to answer questions without inputting any additional information or context. This forces T5 to answer questions based on “knowledge” that it internalized during pre-training. # Introduction Big, deep neural language models that have been pre-trained on unlabeled text have proven to be extremely performant when fine-tuned on down- stream Natural Language Processing (NLP) tasks (Devlin et al., 2018; Yang et al., 2019; Liu et al., 2019; Lan et al., 2019; Raffel et al., 2019). In- terestingly, it has also recently been observed that these models can internalize a sort of implicit “knowledge base” after pre-training (Petroni et al., 2019; Jiang et al., 2019; Talmor et al., 2019). This behavior is potentially useful because 1) the knowledge is built up by pre-training on unstruc- tured and unlabeled text data, which is freely avail- able in huge quantities on the Internet (Raffel et al., 2019; Wenzek et al., 2019), and 2) it is pos- sible to retrieve information using informal natural language queries since these pre-trained language models excel when fine-tuned on natural language understanding tasks. ∗ Equal contribution. Noam suggested trying T5 on open-domain QA and coded and ran initial experiments on TriviaQA showing improved performance with model size. Adam wrote the code and ran most experiments. Colin set the research scope, wrote the paper, and ran a few experiments. Past work investigating “language models as knowledge bases” has typically tried to under- stand the scope of the information stored in the model using synthetic tasks that are similar to the pre-training objective (Petroni et al., 2019; Jiang et al., 2019) and/or measure reasoning capabili- ties (Talmor et al., 2019). In this work, we take a different approach by evaluating the capability of language models on the practical task of open- domain question answering – specifically, we fine- tune the model to answer questions without access to any external knowledge or context. To do so, the model must parse a natural language query and “look up information” stored in its parameters. 
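To make the setup concrete, closed-book fine-tuning only changes what goes into the input sequence: the question alone, with no retrieved or oracle context. Below is a minimal sketch of this formatting; the task prefixes and the example are illustrative, not the exact strings used for the released models.

```python
def to_closed_book_example(question, answer, task_prefix="nq question"):
    """Map a QA pair to the text-to-text format with *no* context."""
    return {
        "inputs": f"{task_prefix}: {question.strip()}",
        "targets": answer.strip(),
    }

def to_reading_comprehension_example(question, context, answer):
    """For contrast: the usual 'open-book' format also feeds the context."""
    return {
        "inputs": f"question: {question.strip()} context: {context.strip()}",
        "targets": answer.strip(),
    }

ex = to_closed_book_example(
    "When was Franklin D. Roosevelt born?", "January 30, 1882")
print(ex["inputs"])   # -> "nq question: When was Franklin D. Roosevelt born?"
print(ex["targets"])  # -> "January 30, 1882"
```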
Most past work on question answering either explicitly feeds pertinent information to the model alongside the question (for example, an article that contains the answer (Rajpurkar et al., 2016; Zhang et al., 2018; Khashabi et al., 2018; Clark et al., 2019)) or allows the model to retrieve informa- tion from an external knowledge source (Berant et al., 2013; Chen et al., 2017). By feeding the model the input question alone, we can determine how much knowledge it has stored in its param- # 1https://goo.gle/t5-cbqa eters while measuring its performance on a use- ful real-world problem. We refer to this task as “closed-book question answering”. A separate question we address in this work is whether models with more parameters end up storing more information. It has been shown that transfer learning performance on many down- stream tasks tends to improve as the model size and amount of unsupervised pre-training increases (Radford et al., 2019; Liu et al., 2019; Raffel et al., 2019). In this work, we leverage the pre-trained “T5” models released by Raffel et al. (2019), the largest of which has around 11 billion parame- ters. By measuring knowledge retrieval capabili- ties on models of various sizes, including models that have an order of magnitude more parameters than considered in past work, we can explore how well our approach scales. # 2 Background Question Answering The task of training a model to either select or output the correct answer to a given question is referred to as “question an- swering”. The most popular variant of this task feeds the model some “context” containing the an- swer (for example, a paragraph from an encyclo- pedia article) alongside the question (Rajpurkar et al., 2016; Zhang et al., 2018; Khashabi et al., 2018; Clark et al., 2019). Models can be trained either to indicate the span of the context that con- tains the answer or output the text of the answer itself. Since this format can be seen as reading some text and answering a question about it, it has been referred to as “reading comprehension”. A more difficult variant is “open-domain ques- tion answering” (Prager, 2006), where the model can be asked arbitrary context-independent ques- tions (e.g. well-known facts or historical details). It is typically assumed that the model can access an external collection of knowledge when answer- ing questions (e.g. a structured knowledge base or unstructured text corpus), but the model is not given any information about where in the collec- tion the answer appears. The reading comprehen- sion task can be considered a simplified version of open-domain question answering where the model is provided with the oracle context to answer a given question. As an analogy, the open-domain question answering system acts as if it is taking an open-book exam where it can find and use infor- # mation in an external source of knowledge.2 In this work, we consider open-domain ques- tion answering with the additional constraint that the model is not allowed to access any external knowledge whatsoever when answering questions. Instead, the model itself must be pre-trained to store knowledge in its parameters before being fine-tuned to answer questions. In one view, this can be seen as an alternative way to approach open-domain question answering where instead of learning to access external knowledge the model needs to have “memorized” it in order to answer questions; in another view, this constraint creates a third and potentially more ambitious variant of the question answering task. 
A model that answers questions in this way is metaphorically similar to a student taking a closed-book exam, where the student must study and memorize all pertinent in- formation before taking the test. Transfer Learning with Language Models In the past few years, it has become increasingly common to pre-train a language model using an unsupervised objective on a large, unstructured text corpus before fine-tuning it on a downstream task of interest (Dai and Le, 2015; Howard and Ruder, 2018; Radford et al., 2018). The pop- ularity of this form of “transfer learning” is at- tributable to its empirical success on many NLP tasks (Peters et al., 2018; Devlin et al., 2018; Yang et al., 2019; Lan et al., 2019; Raffel et al., 2019). Loosely speaking, the pre-training step may pro- vide the model with some generally-useful aware- ness of meaning, syntax, and “world knowledge”. In question answering in particular, most state-of- the-art systems use some form of transfer learning. Currently, the most popular model architectures used in transfer learning for NLP are Transformer- based (Vaswani et al., 2017) “encoder-only” mod- els like BERT (Devlin et al., 2018). These models can produce a single prediction for each input token and have been applied to reading comprehension-style question answering by pre- dicting which tokens of the context contain the an- swer. Encoder-only models are not applicable to closed-book question answering because no con- text is provided to extract the answer span from. An alternative to encoder-only models, recently advocated by Raffel et al. (2019), is to treat ev- 2While our definition of open-book is the same as in the OpenBookQA dataset introduced by Mihaylov et al. (2018), we do not directly address multi-hop inference in this work. ery NLP task as a text-to-text problem using an encoder-decoder Transformer. When this frame- work is applied to question answering, the model is trained to generate the literal text of the answer in a free-form fashion. Despite the potential dif- ficulty of generating rather than extracting the an- swer, Raffel et al. (2019) demonstrated state-of- the-art results on the SQuAD (Rajpurkar et al., 2016), MultiRC (Khashabi et al., 2018), BoolQ (Clark et al., 2019), and ReCoRD (Zhang et al., 2018) reading comprehension tasks. The text-to-text framework is directly applica- ble to closed-book question answering since the model can be trained to generate an answer with or without any additional information in its input. Crucially, fine-tuning a text-to-text model to an- swer questions without any context requires that the model retrieve information from its parame- ters that it learned during pre-training. Radford et al. (2019) considered a similar task to evalu- ate the zero-shot question answering capabilities of a language model. The concurrent “RELIC” and “EAE” models of Ling et al. (2020) and F´evry et al. (2020) learn representations for an explic- itly predefined set of entities and are evaluated on the same closed-book variant of TriviaQA that we consider. Relatedly, Petroni et al. (2019) show that it is possible to manually convert some ques- tions to a fill-in-the-blank format amenable to an encoder-only model (e.g. 
“Who developed the the- ory of relativity?” gets mapped to “The theory of relativity was developed by # 3 Experiments Datasets We consider the following open- domain question answering datasets: Natural Questions (Kwiatkowski et al., 2019), a dataset of questions from web queries, each accompanied by a Wikipedia article containing the answer; We- bQuestions (Berant et al., 2013), comprising ques- tions from web queries matched to correspond- ing entries in FreeBase (Bollacker et al., 2008); and TriviaQA (Joshi et al., 2017), a collection of questions from quiz league websites where each question is accompanied by pages from web and Wikipedia searches that may contain the answer. In this work, we only make use of the ques- tions from each dataset – we completely ignore the matching documents supplied for each question. For WebQuestions and TriviaQA we follow the standard evaluation procedures where each pre- dicted answer is compared to the ground-truth after both are lowercased and stripped of arti- cles, punctuation, and duplicate whitespace (Ra- jpurkar et al., 2016). For Natural Questions, we evaluate using both 1) the standard “open- domain” version as used e.g. by (Lee et al., 2019; Min et al., 2019b,a; Asai et al., 2019) where the model is only required to produce a single nor- malized answer and 2) the standard multi-answer variant used with reading comprehension systems (Kwiatkowski et al., 2019). We review the details of Natural Questions evaluation in appendix A. Note that Natural Questions and TriviaQA have private test sets, so standard practice on their open- domain variants is to report performance on the development sets. However, we also include our results on the official TriviaQA test set by fine- tuning on the unfiltered training set and submitting our test set predictions to the leaderboard for the Wikipedia domain. We urge future work to adopt this approach to help ensure the validity of results and avoid potentially overfitting to a public set. Training We leverage the pre-trained models provided by Raffel et al. (2019), referred to as the “Text-to-Text Transfer Transformer” (T5). The original T5 models were pre-trained on a multi- task mixture including an unsupervised “span cor- ruption” task on the C4 dataset as well as super- vised translation, summarization, classification, and reading comprehension tasks. Note that none of the reading comprehension datasets used for pre-training T5 overlap with the question answer- ing datasets that we consider in this paper. In order to measure how performance scales with model size, we perform experiments with the Base (220 million parameters), Large (770 million), 3B (3 billion), and 11B (11 billion) variants of T5. Given that the T5 models were pre-trained on a multitask mixture including question answering, we also re- port performance using the “T5.1.1” checkpoints, which were pre-trained on unlabeled data only.3 For fine-tuning the T5 checkpoints, we follow the procedure used in Raffel et al. (2019) with- out any additional hyperparameter tuning: We use the AdaFactor optimizer (Shazeer and Stern, 2018) with a constant learning rate of 0.001, 10% dropout rate, and a batch size of 196,608 tokens. We halve the batch and double the dropout rate for WebQuestions due to its small size. For the T5.1.1 checkpoints, we follow the same procedure # 3https://goo.gle/t5-checkpoints but with a dropout rate of 5% for all three datasets. For evaluation, we follow the procedure used in Lee et al. 
(2019): for each dataset, we hold out 10% of the training set as a validation split, fine- tune a model from the remaining 90% of exam- ples, and select the best-performing checkpoint for final evaluation on the test set. While we chose to train for 20,000 steps, our validation accuracy typ- ically plateaued after only a few hundred steps and showed no signs of overfitting. We decode the model’s predictions by choosing the most likely token at each timestep. To map question answering tasks to the text-to-text format, we simply feed the question with a task-specific prefix into the model as input and train it to predict the literal answer text as output. Salient Span Masking Recently, Guu et al. (2020) found that a “salient span masking” (SSM) pre-training objective produced substantially bet- ter results in open-domain question answering. This approach first uses BERT (Devlin et al., 2018) to mine sentences that contain salient spans (named entities and dates) from Wikipedia. The question answering model is then pre-trained to re- construct masked-out spans from these sentences, which Guu et al. (2020) hypothesize helps the model “focus on problems that require world knowledge”. We experimented with using the same SSM data and objective to continue pre- training the T5 checkpoints for 100,000 additional steps before fine-tuning for question answering. Results Our results on the open-domain Natural Questions, WebQuestions, and TriviaQA tasks are shown in table 1. Notably, performance on each dataset improves as the model size increases, with either T5-11B or the comparably-sized T5.1.1- XXL (pre-trained only on unlabeled data) per- forming best in every case. Further, we find that using Guu et al. (2020)’s SSM pre-training pro- duces a substantial boost in performance. T5.1.1- XXL with SSM ultimately achieves state-of-the- art on WebQuestions and our largest models beat most other methods on Natural Questions and TriviaQA. Importantly, all previous methods ex- cept Ling et al. (2020) and F´evry et al. (2020) operate in the “open-book” setting by explicitly retrieving and using information from an exter- nal knowledge source. While our largest models are computationally intensive, we note that most open-domain question answering systems must Table 1: Scores achieved by fine-tuning T5 on the open-domain Natural Questions (NQ), WebQuestions (WQ), and TriviaQA (TQA) tasks. NQ WQ TQA dev test Chen et al. (2017) Lee et al. (2019) Min et al. (2019a) Min et al. (2019b) Asai et al. (2019) Ling et al. (2020) Guu et al. (2020) F´evry et al. (2020) Karpukhin et al. (2020) – 33.3 28.1 31.8 32.6 – 40.4 – 41.5 20.7 36.4 – 31.6 – – 40.7 – 42.4 – 47.1 50.9 55.4 – 35.7 – 43.2 57.9 – – – – – – – 53.4 – T5-Base T5-Large T5-3B T5-11B 25.9 28.5 30.4 32.6 27.9 30.6 33.6 37.2 23.8 28.7 35.1 42.3 29.1 35.9 43.4 50.1 T5-11B + SSM 34.8 40.8 51.0 60.5 T5.1.1-Base T5.1.1-Large T5.1.1-XL T5.1.1-XXL 25.7 27.3 29.5 32.8 28.2 29.5 32.4 35.6 24.2 28.5 36.0 42.9 30.6 37.2 45.1 52.5 T5.1.1-XXL + SSM 35.2 42.8 51.9 61.6 first do an expensive lookup step over the entire knowledge corpus and then attend to a long doc- ument to extract an answer. Our approach omits both of these steps, which ultimately saves a large amount of computation and memory. Having established that our approach is com- petitive on open-domain question answering, we now evaluate it on the standard (and more diffi- cult) multi-answer variant of Natural Questions. 
Virtually all models used on this task are read- ing comprehension systems that select the correct answer from an oracle context. After fine-tuning, T5-11B + SSM achieves a recall of 36.2 on the validation set, which lags behind the state-of-the- art score of 51.9 from Pan et al. (2019)4 but out- performs the best baseline published alongside the dataset (recall of 33.2 (Kwiatkowski et al., 2019)). This shows that T5 can effectively answer ques- tions with multiple answers. We discuss additional experiments and negative results in appendix B. Human Evaluation The benchmarks we used and the “exact match” score assume that the model directly extracts answers from an external knowl- edge source. In contrast, our model generates answers in a free-form fashion. We hypothesize that this results in many false negatives when an- 4Validation set recall scores from Pan et al. (2019) were reported in private correspondence with the authors. Table 2: A breakdown of the 150 hand-evaluated examples from Natural Questions where the T5 predictions were labelled as incorrect by the automatic procedure. We found only 62% of these to be true positives. Example Category Percentage Question Target(s) T5 Prediction True Negative 62.0% what does the ghost of christmas little warmth, warmth confetti present sprinkle from his torch Phrasing Mismatch Incomplete Annotation 13.3% who plays red on orange is new black 13.3% where does the us launch space kate mulgrew florida katherine kiernan maria mulgrew kennedy lc39b shuttles from Unanswerable 11.3% who is the secretary of state for karen bradley james brokenshire northern ireland swers do not exactly match the ground-truth con- text intended for each question. We therefore man- ually inspected 150 examples from the Natural Questions validation set where our model’s pre- diction was counted as incorrect in hopes of iden- tifying “false negatives” according to the exact match metric. We found that false negatives fell into three broad categories: First, answers with meaning-preserving differences in phrasing (e.g. “April 15” vs. “April 15th”); second, questions that were missing all possible correct answers (e.g. “where does the us launch space shuttles from” was annotated with the single ground-truth an- swer “florida”, despite many possible correct an- swers such as “Kennedy Space Center”, “Merritt Island”, “Cape Canaveral”, etc.); and finally, some questions were unanswerable without knowing the exact time or article they referred to (e.g. “what is the latest version of microsoft office 2010” de- pends on when the question is being asked). We provide examples of each of these false negative types in table 2. We note that open-book ques- tion answering systems could also be impacted to a lesser extent by these issues (e.g. if they select a slightly different answer span from the annotated one or retrieve a non-golden document that con- tains a different correct answer). Of the 150 examples inspected, we found that 20 were marked as incorrect due to differences in phrasing, another 20 were not annotated with all correct answers, and 17 were unanswerable with- out appropriate context. Removing unanswerable questions from the validation set and recomputing our model’s accuracy based on this false-negative rate produces a score of 57.8. This suggests that the performance of closed-book question answer- ing systems (in terms of how often it correctly an- swers questions) is substantially underestimated by the evaluation procedure used in these bench- marks. 
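Several of the false negatives above stem from the exact-match metric itself, which compares normalized strings along the lines of the SQuAD-style procedure mentioned earlier (lowercasing and stripping articles, punctuation, and duplicate whitespace). A sketch of that normalization and the resulting exact-match check, not the exact evaluation script:

```python
import re
import string

def normalize_answer(s):
    """Lowercase, remove punctuation and articles, and collapse whitespace."""
    s = s.lower()
    s = "".join(ch for ch in s if ch not in set(string.punctuation))
    s = re.sub(r"\b(a|an|the)\b", " ", s)
    return " ".join(s.split())

def exact_match(prediction, ground_truths):
    """1 if the normalized prediction matches any normalized reference."""
    pred = normalize_answer(prediction)
    return int(any(pred == normalize_answer(gt) for gt in ground_truths))

# "April 15" vs "April 15th" still fails under this metric, illustrating the
# phrasing-mismatch false negatives discussed above.
print(exact_match("April 15th", ["April 15"]))                             # -> 0
print(exact_match("Kennedy Space Center", ["florida"]))                    # -> 0
print(exact_match("The Kennedy Space Center", ["kennedy space center"]))   # -> 1
```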
For full transparency, we publicly release the results of our human evaluation and include an appropriate reference when we determined that a predicted answer was missing from ground-truth.5 # 4 Conclusion In this short paper, we have shown that large lan- guage models pre-trained on unstructured text can attain competitive results on open-domain ques- tion answering benchmarks without any access to external knowledge. This suggests a funda- mentally different approach to designing question answering systems, motivating many threads for future work: First, we obtained state-of-the-art results only with the largest models which had around 11 billion parameters. This model size can be prohibitively expensive in resource-constrained settings, prompting future work on more efficient language models. Second, “open-book” models typically provide some indication of what infor- mation they accessed when answering a question. This can provide a useful form of interpretabil- ity. In contrast, our model distributes knowledge in its parameters in an inexplicable way and hal- lucinates realistic-looking answers when it is un- sure. Third, the maximum-likelihood objective used to train our model provides no guarantees as to whether a model will learn a fact or not. This makes it difficult to ensure that the model obtains specific knowledge over the course of pre-training and prevents us from explicitly updating or remov- ing knowledge from a pre-trained model. Finally, the tasks we used in this paper mainly measure “trivia”-style knowledge. We are therefore inter- ested in measuring performance on question an- swering tasks that require reasoning capabilities such as DROP (Dua et al., 2019). 5https://goo.gle/t5-cbqa-human-eval # Acknowledgments We thank Kelvin Guu, Kenton Lee, Ming-Wei Chang, Zora Tung, and Ice Pasupat for providing the open-domain question answering evaluation setup and access to their salient span-annotated data; Roy Frostig and Katherine Lee for comments and suggestions on this manuscript; Noah Con- stant for suggesting we try salience span masking; and Monica Dinculescu for building an interactive demonstration of our results.6 # References Akari Asai, Kazuma Hashimoto, Hannaneh Hajishirzi, Richard Socher, and Caiming Xiong. 2019. Learn- ing to retrieve reasoning paths over Wikipedia arXiv preprint graph for question answering. arXiv:1911.10470. Jonathan Berant, Andrew Chou, Roy Frostig, and Percy Liang. 2013. Semantic parsing on freebase from question-answer pairs. In Proceedings of the 2013 Conference on Empirical Methods in Natural Lan- guage Processing. Kurt Bollacker, Colin Evans, Praveen Paritosh, Tim Sturge, and Jamie Taylor. 2008. Freebase: a collab- oratively created graph database for structuring hu- man knowledge. In Proceedings of the 2008 ACM SIGMOD International Conference on Management of Data, pages 1247–1250. Danqi Chen, Adam Fisch, Jason Weston, and An- toine Bordes. 2017. Reading Wikipedia to an- arXiv preprint swer open-domain questions. arXiv:1704.00051. Christopher Clark, Kenton Lee, Ming-Wei Chang, Tom Kwiatkowski, Michael Collins, and Kristina Toutanova. 2019. BoolQ: Exploring the surprising difficulty of natural yes/no questions. arXiv preprint arXiv:1905.10044. Semi- supervised sequence learning. In Advances in Neu- ral Information Processing Systems. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-training of deep bidirectional transformers for language under- standing. arXiv preprint arXiv:1810.04805. 
Dheeru Dua, Yizhong Wang, Pradeep Dasigi, Gabriel Stanovsky, Sameer Singh, and Matt Gardner. 2019. Drop: A reading comprehension benchmark re- quiring discrete reasoning over paragraphs. arXiv preprint arXiv:1903.00161. 6http://t5-trivia.glitch.me/ Thibault F´evry, Livio Baldini Soares, Nicholas FitzGerald, Eunsol Choi, and Tom Kwiatkowski. Entities as experts: Sparse memory ac- 2020. arXiv preprint cess with entity supervision. arXiv:2004.07202. Kelvin Guu, Kenton Lee, Zora Tung, Pasupat Panupong, and Ming-Wei Chang. 2020. Realm: Retrieval-augmented language model pre-training. arXiv preprint arXiv:2002.08909. Jeremy Howard and Sebastian Ruder. 2018. Universal language model fine-tuning for text classification. arXiv preprint arXiv:1801.06146. Zhengbao Jiang, Frank F Xu, Jun Araki, and Graham Neubig. 2019. How can we know what language models know? arXiv preprint arXiv:1911.12543. Mandar Joshi, Eunsol Choi, Daniel S. Weld, and Luke Zettlemoyer. 2017. TriviaQA: A large scale dis- tantly supervised challenge dataset for reading com- prehension. arXiv preprint arXiv:1705.03551. Vladimir Karpukhin, Barlas Ouguz, Sewon Min, Ledell Yu Wu, Sergey Edunov, Danqi Chen, and Wen tau Yih. 2020. Dense passage retrieval for open-domain question answering. arXiv preprint arXiv:2004.04906. Daniel Khashabi, Snigdha Chaturvedi, Michael Roth, Shyam Upadhyay, and Dan Roth. 2018. Looking beyond the surface: A challenge set for reading com- prehension over multiple sentences. In Proceedings of North American Chapter of the Association for Computational Linguistics (NAACL). Tom Kwiatkowski, Jennimaria Palomaki, Olivia Red- field, Michael Collins, Ankur Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Jacob Devlin, Kenton Lee, et al. 2019. Natural questions: a bench- mark for question answering research. Transactions of the Association for Computational Linguistics, 7. Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. 2019. ALBERT: A lite BERT for self-supervised learning of language representations. arXiv preprint arXiv:1909.11942. Kenton Lee, Ming-Wei Chang, and Kristina Toutanova. 2019. for weakly supervised open domain question answering. arXiv preprint arXiv:1906.00300. Jeffrey Ling, Nicholas FitzGerald, Zifei Shan, Livio Baldini Soares, Thibault F´evry, David Weiss, Learning cross- and Tom Kwiatkowski. 2020. arXiv context entity representations from text. preprint arXiv:2001.03765. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Man- dar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692. Todor Mihaylov, Peter Clark, Tushar Khot, and Ashish Sabharwal. 2018. Can a suit of armor conduct elec- tricity? a new dataset for open book question an- swering. In EMNLP. Sewon Min, Danqi Chen, Hannaneh Hajishirzi, and Luke Zettlemoyer. 2019a. A discrete hard EM ap- proach for weakly supervised question answering. arXiv preprint arXiv:1909.04849. Sewon Min, Danqi Chen, Luke Zettlemoyer, and Han- naneh Hajishirzi. 2019b. Knowledge guided text retrieval and reading for open domain question an- swering. arXiv preprint arXiv:1911.03868. Lin Pan, Rishav Chakravarti, Anthony Ferritto, Michael Glass, Alfio Gliozzo, Salim Roukos, Radu Frustratingly Florian, and Avirup Sil. 2019. arXiv preprint easy natural question answering. arXiv:1909.05286. Matthew E. 
Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word rep- resentations. arXiv preprint arXiv:1802.05365. Fabio Petroni, Tim Rockt¨aschel, Patrick Lewis, Anton Bakhtin, Yuxiang Wu, Alexander H Miller, and Se- bastian Riedel. 2019. Language models as knowl- edge bases? arXiv preprint arXiv:1909.01066. John Prager. 2006. Open-domain question-answering. Foundations and Trends in Information Retrieval, 1(2). Alec Radford, Karthik Narasimhan, Tim Salimans, and Improving language under- Ilya Sutskever. 2018. standing by generative pre-training. Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2019. Exploring the limits of transfer learning with a unified text-to-text trans- former. arXiv preprint arXiv:1910.10683. Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Squad: 100,000+ questions Percy Liang. 2016. for machine comprehension of text. arXiv preprint arXiv:1606.05250. Noam Shazeer and Mitchell Stern. 2018. Adafactor: Adaptive learning rates with sublinear memory cost. arXiv preprint arXiv:1804.04235. Alon Talmor, Yanai Elazar, Yoav Goldberg, and Jonathan Berant. 2019. lan- guage model pre-training captures. arXiv preprint arXiv:1912.13283. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Pro- cessing Systems. Guillaume Wenzek, Marie-Anne Lachaux, Alexis Con- neau, Vishrav Chaudhary, Francisco Guzman, Ar- mand Joulin, and Edouard Grave. 2019. Ccnet: Ex- tracting high quality monolingual datasets from web crawl data. arXiv preprint arXiv:1911.00359. Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Car- bonell, Ruslan Salakhutdinov, and Quoc V. Le. 2019. XLNet: Generalized autoregressive pretrain- arXiv preprint ing for language understanding. arXiv:1906.08237. Sheng Zhang, Xiaodong Liu, Jingjing Liu, Jianfeng Gao, Kevin Duh, and Benjamin Van Durme. 2018. ReCoRD: Bridging the gap between human and ma- chine commonsense reading comprehension. arXiv preprint arXiv:1810.12885. # A Metrics for Natural Questions Compared to WebQuestions and TriviaQA, Nat- ural Questions is distributed with a much richer set of annotations: Each question can be annotated either as unanswerable (given the oracle context), with a short answer, or with a yes/no answer; ques- tions in the validation set can be annotated more than once; and some questions have multiple an- swers (e.g. “Who are the members of the Beat- les?” has four answers). We consider two vari- ants of Natural Questions. In both cases, we omit the “unanswerable” label and long answers, which are nearly impossible to predict without the oracle context. The first variant is the standard “open-domain” version as used e.g. 
by (Lee et al., 2019; Min et al., 2019b,a; Asai et al., 2019), where 1) the model is only ever trained to output a single answer; 2) if a question has multiple answers, it is only trained to predict the first answer; 3) any questions with answers longer than five tokens are ignored; 4) answers are normalized before being compared (in the same manner as is typically done for We- bQuestions and SQuAD); and 5) a predicted an- swer is considered correct if it matches any of the answers provided by any of the annotators (e.g. “Ringo Starr” would be considered a correct an- swer to “Who are the members of the Beatles?”). The second variant closely matches the official evaluation procedure used by the Natural Ques- tions leaderboard, where our model is trained to predict all ground-truth answers and is only con- sidered correct if it predicts all answers for any one of the annotators. As in the official evalua- tion, we consider questions with fewer than two non-null annotations unanswerable (given the con- text), but because we cannot predict unanswerabil- ity without the context, we only report the recall score. Further, because our model does not have access to the oracle context, we also normalize predicted and ground-truth answers when compar- ing them. The use of multiple possible answers also required minor modification of our text-to- text format. In this case, we trained the model to output each answer delimited by the text “an- swer:” (for example, “answer: John Lennon an- swer: Ringo Starr answer: George Harrison an- swer: Paul McCartney”). We then split out each answer from the model’s predictions as a postpro- cessing step before evaluating it against the set of answers provided by each annotation. # B Other Things We Tried In the course of undertaking this study, we tried various ideas that ultimately did not improve per- formance. We briefly discuss them here. Continued Pre-Training on Wikipedia The T5 checkpoints we used were primarily pre-trained on C4, a large and diverse dataset of unstructured web content. We were interested to see whether we could improve performance by doing further pre- training on data that was better tailored to the tasks we considered. Since both Natural Questions and TriviaQA source their answers from Wikipedia ar- ticles, we experimented with further pre-training on text data from English Wikipedia with the same unsupervised objective (“span corruption”) as was used by T5. We found that this additional “in- domain” pre-training had virtually no effect on performance. This may be because C4 already contains many articles from Wikipedia and the T5 checkpoints were pre-trained long enough to see plenty of this content. Pre-Training From Scratch On Wikipedia Since all of the answers to the questions in Nat- ural Questions appeared in Wikipedia, we carried out an additional experiment where we pre-trained T5 from scratch only on data from Wikipedia. We pre-trained on up to 1 trillion tokens (the same amount the T5 checkpoints were pre-trained on) with the span corruption objective and measured fine-tuned performance after various amounts of pre-training. Unfortunately, this resulted in dra- matically worse performance regardless of the amount of pre-training. We suspect that this is be- cause Wikipedia is too small and results in detri- mental overfitting. 
Span-Corruption Pre-Training on Wikipedia Sentences with Salient Spans As described previously, we observed significant performance gains with additional pre-training using “salient span masking” (SSM) on the Wikipedia sentence dataset from Guu et al. (2020) but not when using the standard “span corruption” (SC) from Raffel et al. (2019) on longer Wikipedia articles. While SC masks random spans of the input by dropping 15% of its tokens (sampled each epoch) and re- placing each consecutive span of dropped tokens with a unique sentinel, SSM specifically masks out one named entity or date in the input sentence. We were interested in determining whether the TriviaQA = B51 S50 — Salient Span Masking 2 — Span Corruption 3M =-* Baseline Natural Questions 36 Validation EM 33 Web Questions Validation EM ° 20 40 60 80 100 Additional Pre-training Steps (Thousands) Figure 2: Comparing additional pre-training using either salient span masking (SSM) or span corrup- tion (SC). We further pre-trained T5.1.1-XXL on the Wikipedia sentence dataset from Guu et al. (2020) with each objective, fine-tuning on a mixture of our three closed-book QA tasks every 10,000 steps. For each fine-tuning run, we report the maximum exact match score achieved on the validation set over 10,000 steps of fine-tuning. gains achieved were attributable to the use of a more task-specific dataset (pre-split into sentences that are known to contain at least one entity) or if the SSM objective itself was critical. As illustrated in fig. 2, the SSM objective is clearly an important ingredient in the improved performance; we saw no significant improvement versus the baseline T5 model when using SC. Fine-Tuning On All Question Answering Tasks The text-to-text framework used by T5 makes it simple to train multitask models simply by sup- plying a different task-specific prefix for each task and concatenating all of the constituent datasets. Since all of the question answering tasks we con- sider in this study follow the same basic struc- ture, we were hopeful that training on a multitask mixture of Natural Questions, WebQuestions, and TriviaQA would improve performance due to the additional supervised data. While multitask train- ing improved performance on the Natural Ques- tions by 0.5, it produced slightly worse results on the other tasks. Randomly Sampling Answers For Natural Questions In the open-domain variant of Natu- ral Questions, the model is only trained to gener- ate a single answer at a time. For the results pre- sented in the main text, when a question was anno- tated with multiple answers, we simply trained the model on the first annotated answer. We also ex- perimented with sampling a random answer from the set of possible answers for pre-training and found that it did not affect performance.
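The two objectives compared in Figure 2 differ only in which spans get masked. The sketch below contrasts them on a toy sentence. It is a simplification of both objectives: real span corruption drops 15% of tokens across multiple spans rather than a single span, and real salient span masking relies on a trained named-entity and date tagger rather than the hard-coded span used here.

```python
import random

SENTINELS = [f"<extra_id_{i}>" for i in range(100)]  # T5-style sentinel tokens

def span_corruption(tokens, corrupt_rate=0.15, seed=0):
    """Mask one random token span, replacing it with a sentinel."""
    rng = random.Random(seed)
    n_to_mask = max(1, int(round(corrupt_rate * len(tokens))))
    start = rng.randrange(0, len(tokens) - n_to_mask + 1)
    masked = tokens[:start] + [SENTINELS[0]] + tokens[start + n_to_mask:]
    target = [SENTINELS[0]] + tokens[start:start + n_to_mask] + [SENTINELS[1]]
    return " ".join(masked), " ".join(target)

def salient_span_masking(tokens, salient_span):
    """Mask exactly one pre-identified named entity or date span."""
    start, end = salient_span
    masked = tokens[:start] + [SENTINELS[0]] + tokens[end:]
    target = [SENTINELS[0]] + tokens[start:end] + [SENTINELS[1]]
    return " ".join(masked), " ".join(target)

sentence = "President Franklin D. Roosevelt was born in January 1882 .".split()

inputs, targets = span_corruption(sentence)
print(inputs)   # the sentence with one random ~15% span replaced by <extra_id_0>
print(targets)  # the dropped span, wrapped in sentinel tokens

inputs, targets = salient_span_masking(sentence, salient_span=(7, 9))
print(inputs)   # 'President Franklin D. Roosevelt was born in <extra_id_0> .'
print(targets)  # '<extra_id_0> January 1882 <extra_id_1>'
```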
{ "id": "1911.10470" }
2002.03206
Characterizing Structural Regularities of Labeled Data in Overparameterized Models
Humans are accustomed to environments that contain both regularities and exceptions. For example, at most gas stations, one pays prior to pumping, but the occasional rural station does not accept payment in advance. Likewise, deep neural networks can generalize across instances that share common patterns or structures, yet have the capacity to memorize rare or irregular forms. We analyze how individual instances are treated by a model via a consistency score. The score characterizes the expected accuracy for a held-out instance given training sets of varying size sampled from the data distribution. We obtain empirical estimates of this score for individual instances in multiple data sets, and we show that the score identifies out-of-distribution and mislabeled examples at one end of the continuum and strongly regular examples at the other end. We identify computationally inexpensive proxies to the consistency score using statistics collected during training. We show examples of potential applications to the analysis of deep-learning systems.
http://arxiv.org/pdf/2002.03206
Ziheng Jiang, Chiyuan Zhang, Kunal Talwar, Michael C. Mozer
cs.LG, stat.ML
17 pages, 20 figures, ICML 2021
null
cs.LG
20200208
20210615
1 2 0 2 n u J 5 1 ] G L . s c [ 3 v 6 0 2 3 0 . 2 0 0 2 : v i X r a # Characterizing Structural Regularities of Labeled Data in Overparameterized Models # Ziheng Jiang * 1 2 3 Chiyuan Zhang * 4 Kunal Talwar 4 5 Michael C. Mozer 4 6 Abstract Humans are accustomed to environments that con- tain both regularities and exceptions. For example, at most gas stations, one pays prior to pumping, but the occasional rural station does not accept payment in advance. Likewise, deep neural net- works can generalize across instances that share common patterns or structures, yet have the ca- pacity to memorize rare or irregular forms. We analyze how individual instances are treated by a model via a consistency score. The score char- acterizes the expected accuracy for a held-out in- stance given training sets of varying size sampled from the data distribution. We obtain empirical estimates of this score for individual instances in multiple data sets, and we show that the score identifies out-of-distribution and mislabeled ex- amples at one end of the continuum and strongly regular examples at the other end. We identify computationally inexpensive proxies to the con- sistency score using statistics collected during training. We show examples of potential applica- tions to the analysis of deep-learning systems. # 1. Introduction (KISS BREWED, etc.). Generalization to a novel word typically follows the “ed” rule, for example, BINK BINKED. Intermediate between the exception verbs and regular verbs are subregularities—a set of exception verbs that have consistent structure (e.g., the mapping of SING RANG). Note that rule- governed and exception cases can have very similar forms, which increases the difficulty of learning each. Consider one-syllable verbs containing ‘ee’, which include the regu- lar cases NEED NEEDED as well as exception cases like SOUGHT. Generalization from the rule-governed SEEK cases can hamper the learning of the exception cases and vice-versa. For instance, children in an environment where English is spoken over-regularize by mapping GO GOED early in the course of language learning. Neural nets show the same interesting pattern for verbs over the course of training (Rumelhart & McClelland, 1986). Intuitively, memorizing irregular examples is tantamount to building a look-up table with the individual facts accessi- ble for retrieval. Generalization requires the inference of statistical regularities in the training environment, and the application of procedures or rules for exploiting the regular- ities. In deep learning, memorization is often considered a failure of a network because memorization implies no gen- eralization. However, mastering a domain involves knowing when to generalize and when not to generalize, because the data manifolds are rarely unimodal. Human learning requires both inferring regular patterns that generalize across many distinct examples and mem- orizing irregular examples. The boundary between reg- ular and irregular examples can be fuzzy. For example, in learning the past tense form of English verbs, there are some verbs whose past tenses must simply be mem- orized (GO HIT) and there are many regular verbs that obey the rule of appending “ed” *Equal contribution 1Paul G. Allen School of Computer Sci- ence, University of Washington, Seattle, WA, USA. 2OctoML.ai, Seattle, WA, USA. 3Work done while interning at Google. 4Google Research, Brain Team, Mountain View, CA, USA. 5Presently at Apple Inc., Cupertino, CA, USA. 
6Department of Computer Sci- ence, University of Colorado Boulder, Boulder, CO, USA.. Corre- spondence to: Chiyuan Zhang <[email protected]>. Proceedings of the 38 th International Conference on Machine Learning, PMLR 139, 2021. Copyright 2021 by the author(s). Consider the two-class problem of chair vs non-chair with training examples illustrated in Figure 1a. The iron throne (lower left) forms a sparsely populated mode (sparse mode for short) as there may not exist many similar cases in the data environment. Generic chairs (lower right) lie in a re- gion with a consistent labeling (a densely populated mode, or dense mode) and thus seems to follow a strong regularity. But there are many other cases in the continuum of the two extreme. For example, the rocking chair (upper right) has a few supporting neighbors but it lies in a distinct neighbor- hood from the majority of same-label instances (the generic chairs). In this article, we formalize this continuum of the structural regularities of data sets in the context of training overparam- eterized deep networks. Let D n be an i.i.d. sample of # ∼ P Characterizing Structural Regularities of Labeled Data in Overparameterized Models Tennis Ball Black Swan nb & “ges @ ee, FB es ths | 2 St Fe regular example continuum of sub-regular ‘examples (b) irregular example Cs? n=0 n> OO Traffic Light Fountain Pen Siberian Husky Pug Figure 1. Regularities and exceptions in a binary chairs vs non-chairs problem. (b) illustration of consistency profiles. (c) Regularities (high C-scores) and exceptions (low C-scores) in ImageNet. size n from the underlying data distribution ; D) be a model trained on D. For an instance x with label y, we trace out the following consistency profile by increasing n: C P ,n(x, y) = E D n ∼P [P(f (x; D \{ (x, y) } ) = y], (1) n ∼P \{ # P Note by taking expectation over (x, y), this measures the generalization performance with respect to the underlying distribution . In contrast to the average behavior, we focus on the per-instance generalization here, as it helps to reveal the internal regularity structures of the data distribution. This article focuses on multi-class classification problems, but the definition can be easily extended to other problems by replacing the 0-1 classification loss with another suitable loss function. ,n(x, y) also encodes our high-level intuition about the C structural regularities of the training data during (human or machine) learning. In particular, we can characterize the multimodal structure of an underlying data distribution by grouping examples in terms of a model’s generalization profile for those examples. An (x, y) with high per-instance generalization lies in a region on the data manifold that is well supported by other regular instances. For n = 0, the model makes predictions entirely based on its prior belief. As n increases, the model collects more information about and makes better predictions. For an (x, y) instance belonging to a dense mode (e.g., the generic chairs in Figure 1a), the model prediction is accurate even for small n because even small samples have many class- consistent neighbors. The blue curve in the cartoon sketch of Figure 1b illustrates this profile. For instances belonging to sparse modes (e.g., the iron throne in Figure 1a), the prediction will be inaccurate for even large n, as the red curve illustrates. Most instances fill the continuum between these two extreme cases, as illustrated by the purple curves in Figure 1b. 
To obtain a total ordering for all examples, we pool the consistency profile into a scalar consistency score, or C-score by taking expectation over n. Figure 1c shows examples from the ImageNet data set ranked by estimated C-scores, using a methodology we shortly describe. The im- ages show that on many ImageNet classes, there exist dense modes of center-cropped, close-up shot of the representative examples; and at the other end of the C-score ranking, there exist sparse modes of highly ambiguous examples (in many cases, the object is barely seen or can only be inferred from the context in the picture). With strong ties to both theoretical notions of generalization and human intuition, the consistency profile is an important tool for understanding the regularity and subregularity struc- tures of training data sets and the learning dynamics of mod- els trained on those data. The C-score based ranking also has many potential uses, such as detecting out-of-distribution and mislabeled instances; balancing learning between dense and sparse modes to ensure fairness when learning with data from underrepresented groups; or even as a diagnostic used to determine training priority in a curriculum learning setting (Bengio et al., 2009; Saxena et al., 2019). In this article, we focus on formulating and analyzing consistency profiles, and apply the C-score to analyzing the structure of real world image data sets and the learning dynamics of different optimizers. We also study efficient proxies and further applications to outlier detection. Our key contributions are as follows: • We formulate and analyze a consistency score that takes inspiration from generalization theory and show that it matches our intuitions about statistical regularities in natural-image data sets. Characterizing Structural Regularities of Labeled Data in Overparameterized Models • We estimate the C-scores with a series of approximations and apply the measure to analyze the structural regularities of the MNIST, CIFAR-10, CIFAR-100, and ImageNet training sets. • We evaluate computationally efficient proxies for the C- score. We demonstrate that proxies based on distances between instances of the same class in latent space, while intuitively sensible, are in practice quite sensitive to the underlying distance metric. In contrast, learning-speed based proxies correlate very well with the C-score. This observation is non-trivial because learning speed is mea- sured on training examples and the C-score is defined for hold-out generalization. • We demonstrate potential application of the C-score as a tool for quantitative analysis of data sets, learning dynam- ics, and diagnosing and improving deep learning. • To facilitate future research, we have released the pre- computed C-scores at the project website. Model check- points, code, and extra visualizations are available too. # 2. Related Work Analyzing the structure of data sets has been a central topic for many fields like Statistics, Data Mining and Unsuper- vised Learning. In this paper, we focus on supervised learn- ing and the interplay between the regularity structure of data and overparameterized neural network learners. This dif- ferentiates our work from classical analyses based on input or (unsupervised) latent representations. The distinction is especially prominent in deep learning where a supervised learner jointly learns the classifier and the representation that captures the semantic information in the labels. 
Figure 2. Consistency profiles of training examples (one panel each for MNIST, CIFAR-10, and CIFAR-100; x-axis: subset ratio in %). Each curve in the figure corresponds to the average profile of a set of examples, partitioned according to the area under the profile curve of each example.

Feldman (2020) and Feldman & Zhang (2020) studied the positive effects of memorization on generalization by measuring the influence of a training example on a test example, and identifying pairs with strong influences. To quantify memorization, they defined a memorization score for each (x, y) in a training set as the drop in prediction accuracy on x when (x, y) is removed. A point evaluation of our consistency profile on a fixed data size n resembles the second term of their score. Our empirical C-score estimation is based on the estimator proposed in Feldman & Zhang (2020). A key difference is that we are interested in the profile with increasing n, i.e., the sample complexity required to correctly predict (x, y).

We evaluate various cheap-to-compute proxies for the C-score and found that the learning speed has a strong correlation with the C-score. Learning speed has been previously studied in contexts quite different from our focus on generalization of individual examples. Mangalam & Prabhu (2019) show that examples learned first are those that could be learned by shallower nets. Hardt et al. (2016) present theoretical results showing that the generalization gap is small if SGD training completes in relatively few steps. Toneva et al. (2019) study forgetting (the complement of learning speed) and informally relate forgetting to examples being outliers or mislabeled. There is a large literature of criteria with no explicit ties to generalization as the C-score has, but which provide a means of stratifying instances. For example, Wu et al. (2018) measure the difficulty of an example by the number of residual blocks in a ResNet needed for prediction.

In the context of deep supervised learning, Carlini et al. (2018) proposed measures for identifying prototypical examples which could serve as a proxy for the complete data set and still achieve good performance. These examples are not necessarily the center of a dense neighborhood, which is what our high C-score measures. Two prototype measures explored in Carlini et al. (2018), model confidence and learning speed, are also measures we examine. Their holdout retraining and ensemble agreement metrics are conceptually similar to our C-score estimation algorithm. However, their retraining is a two-stage procedure involving pre-training and fine-tuning; their ensemble agreement mixes architectures with heterogeneous capacities and ignores labels.

# 3. The Consistency Profile and the C-score

The consistency profile (Equation 1) encodes the structural consistency of an example with the underlying data distribution P via the expected performance of models trained with increasingly large data sets sampled from P. However, it is not possible to directly compute this profile because P is generally unknown for typical learning problems. In practice, we usually have a fixed data set ˆD consisting of N i.i.d. samples from P. So we can estimate the consistency profile with the following empirical consistency profile:

ˆC_{ˆD,n}(x, y) = ˆE_r[ P(f(x; D) = y) ] ,    (2)

where n = 0, 1, . . . , N − 1, D is a subset of size n uniformly sampled from ˆD excluding (x, y), and ˆE_r denotes empirical averaging with r i.i.d.
samples of such subsets. To obtain a reasonably accurate estimate (say, r = 1000), calculating the empirical consistency profile is still compu- tationally prohibitive. For example, with each of the 50,000 Characterizing Structural Regularities of Labeled Data in Overparameterized Models top ranked examples in CIFAR-10 bottom ranked examples with annotations mislabeled —= ambi atypical form — Figure 3. (a) Top ranked examples in CIFAR-10 and CIFAR-100. (b) Bottom ranked examples with annotations. training example in the CIFAR-10 training set, we need to train more than 2 trillion models. To obtain an estimate within the capability of current computation resources, we make two observations. First, model performance is gener- ally stable when the training set size varies within a small range. Therefore, we can sample across the range of n that we’re concerned with and obtain the full profile via smooth interpolation. Second, let D be a random subset of training ; D) can be reused in the es- data, then the single model f ( timation of all of the held-out examples (x, y) D. As a result, with clever grouping and reuse, the number of mod- els we need to train can be greatly reduced (See Algorithm 1 in the Appendix). ples and annotate them as (possibly) mislabeled, ambiguous (easily confused with another class or hard to identify the contents), and atypical form (e.g., burning “forest”, fallen “bottle”). As the subset ratio grows, regularities in the data distribution systematically pull the ambiguous instances in the wrong direction. This behavior is analogous to the phe- nomenon we mentioned earlier that children over-regularize GOED) as they gain more linguistic exposure. verbs (GO → To get a total ordering of the examples in a data set, we distill the consistency profiles into a scalar consistency score, or C-score, by taking the expectation over n: ˆC ˆ D (x, y) = En[ ˆC ˆ D ,n(x, y)] (3) In particular, we sample n dynamically according to the sub- set ratio s of the full available training } set. We sample 2,000 subsets for the empirical expectation of each n and visualize the estimated consistency profiles for clusters of similar examples in Figure 2. One interest- ing observation is that while CIFAR-100 is generally more difficult than CIFAR-10, the top ranked examples (magenta lines) in CIFAR-100 are more likely to be classified cor- rectly when the subset ratio is low. Figure 3a visualizes the top ranked examples from the two data sets. Note that in CIFAR-10, the dense modes from the truck and automobile classes are quite similar. In contrast, Figure 2 indicates that the bottom-ranked exam- ples (cyan lines) have persistently low probability of cor- rect classification—sometimes below chance—even with a 90% subset ratio. We visualize some bottom-ranked exam- For the case where n is sampled according to the subset ratio s, the expectation is taken over a uniform distribution over sampled subset sizes. # 4. The Structural Regularities of Common Image Data Sets We apply the C-score estimate to analyze several common image data sets: MNIST (LeCun et al., 1998), CIFAR- 10 / CIFAR-100 (Krizhevsky, 2009), and ImageNet (Rus- sakovsky et al., 2015). See the supplementary materials for details on architectures and hyperparameters. Figure 4a shows the distribution of ˆC ˆ ,n on CIFAR-10 D for the values of n corresponding to each subset ratio . For each s, 2000 models are trained and s } held-out examples are evaluated. 
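Concretely, the estimator behind these numbers (Algorithm 1 in the Appendix) trains k models on random subsets and scores each example by its average 0-1 accuracy over exactly those models whose training subset did not contain it. The sketch below mirrors that procedure; `train_and_eval` is an assumed stand-in for training a network on the selected subset and returning per-example 0-1 correctness on the full set, not the authors' released implementation.

```python
import numpy as np

def holdout_cscore_estimate(X, Y, n, k, train_and_eval, seed=None):
    """Estimate C-hat_{D,n}(x, y) for every example by reusing k trained models.

    mask[i, j]   = True if example j was in the training subset of model i.
    loss01[i, j] = 1 if model i classified example j correctly, else 0.
    """
    rng = np.random.default_rng(seed)
    N = len(X)
    mask = np.zeros((k, N), dtype=bool)
    loss01 = np.zeros((k, N), dtype=np.float32)
    for i in range(k):
        subset = rng.choice(N, size=n, replace=False)
        mask[i, subset] = True
        loss01[i] = train_and_eval(X[subset], Y[subset], X, Y)  # 0-1 correctness on all N examples
    heldout = ~mask
    # Average correctness of each example over the models that did NOT train on it.
    return (loss01 * heldout).sum(axis=0) / np.maximum(heldout.sum(axis=0), 1)
```

Averaging such per-example estimates over the sampled subset sizes (equivalently, subset ratios) then yields the scalar C-score of Equation 3.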
The Figure suggests that Characterizing Structural Regularities of Labeled Data in Overparameterized Models 90% 30% 10% 20% CIFAR-10 q 8% MNIST CIFAR-10 CIFAR-100 { 70% 8% 60% | 360% 20% ao 2 sows g 5% f 20% 40% om 30% 10% s 90 20% % 70 oid 10% 0.0 go °2 04 5° 0% 0% Cs (ry) © 08 10 et py 00 025 050 0.35 100° "0.60 025 050 075 0.00 025 050 0.75 1.00 Bn! 1.0 3 @ (b) Cola.y) Cplx.y) Cp(2x.y) Figure 4. (a) Histogram of ˆC ˆD,n for each subset ratio on CIFAR-10. (b) Histogram of the C-score ˆC ˆD averaged over all subset ratios on 3 different data sets. depending on s, instances may be concentrated near floor or ceiling, making them difficult to distinguish (as we elab- orate further shortly). By taking an expectation over s, the C-score is less susceptible to floor and ceiling effects. Fig- ure 4b shows the histogram of this integrated C-score on MNINT, CIFAR-10, and CIFAR-100. The histogram of CIFAR-10 in Figure 4b is distributed toward the high end, but is more uniformly spread than the histograms for specific subset ratios in Figure 4a. (a) (b) Visualization of examples ranked by the estimated score can be found in Figure 3. Detailed per-class rankings can be found in the supplementary material. Next we apply the C-score analysis to the ImageNet data set. Training a standard model on ImageNet costs one to two orders of magnitude more computing resources than training on CIFAR, preventing us from running the C-score estimation procedure described early. Instead, we investi- gated the feasibility of approximating the C-score with a point estimate, i.e., selection of the s that best represents the integral score. This is equivalent to taking expectation of s with respect to a point-mass distribution, as opposed to the uniform distribution over subset ratios. By ‘best represents,’ we mean that the ranking of instances by the score matches the ranking by the score for a particular s. Figure 5. (a) Rank correlation between integral C-score and the C-score for a particular subset ratio, s. The peak of each curve indicates the training set size that best reveals generalization of the model. (b) Joint distribution of C-score per-class means and standard deviations on ImageNet. Samples from representative classes (x’s) are shown in Figure 6. Based on these observations, we picked s = 70 for a point estimate on ImageNet. In particular, we train 2,000 ResNet- 50 models each with a random 70% subset of the ImageNet training set, and estimate the C-score based on those models. Figure 5a shows the rank correlation between the integral score and the score for a given s, as a function of s for our three smaller data sets, MNIST, CIFAR-10, and CIFAR-100. Examining the green CIFAR-10 curve, there is a peak at s = 30, indicating that s = 30 yields the best point-estimate approximation for the integral C-score. That the peak is at an intermediate s is consistent with the observation from Figure 2 that the C-score bunches together instances for low and high s. For MNIST (blue curve), a less challenging data set than CIFAR-10, the peak is lower, at s = 10; for CIFAR-100 (orange curve), a more challenging data set than CIFAR-10, the peak is higher, at s = 40 or s = 50. Thus, the peak appears to shift to larger s for more challenging data sets. This finding is not surprising: more challenging data sets require a greater diversity of training instances in order to observe generalization. The examples shown in Figure Ic are ranked according to this C-score estimate. 
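The selection of a representative subset ratio described above reduces to a small computation: for each s, rank the examples by the per-s score and by the integral C-score, and keep the s whose ranking agrees best. A sketch under the assumption that the per-ratio estimates are already available as arrays (the variable names below are illustrative):

```python
import numpy as np
from scipy.stats import spearmanr

def best_point_estimate_ratio(scores_by_ratio, integral_score):
    """scores_by_ratio: dict mapping subset ratio s (in %) -> per-example score array.
    integral_score: per-example integral C-score (average over all ratios).
    Returns the ratio whose ranking correlates best with the integral C-score."""
    correlations = {
        s: spearmanr(scores, integral_score)[0]   # Spearman's rho
        for s, scores in scores_by_ratio.items()
    }
    best_s = max(correlations, key=correlations.get)
    return best_s, correlations
```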
Because ImageNet has 1,000 classes, we cannot offer a simple overview over the entire data set as in MNIST and CIFAR. Instead, we focus on analyzing the behaviors of individual classes. Specifically, we compute the mean and standard deviation (SD) of the C-scores of all the examples in a particular class. The mean C-scores indicates the relative difficulty of classes, and the SD in- dicates the diversity of examples within each class. The two-dimensional histogram in Figure Sa depicts the joint distribution of mean and SD across all classes. We selected several classes with various combinations of mean and SD, indicated by the x’s in Figure 5a. We then selected sample images from the top 99%, 35% and 1% percentile ranked by the C-score within each class, and show them in Figure 6. Projectile and yellow lady’s slipper represent two extreme cases of diverse and unified classes, respectively. Most other classes lie in the high density region of the 2D histogram Characterizing Structural Regularities of Labeled Data in Overparameterized Models in Figure 5b, and share a common pattern of a densely populated mode of highly regular examples and a tail of rare, ambiguous examples. The tail becomes smaller from the class car wheel to upright and school bus. Table 1. Rank correlation between C-score and pairwise distance based proxies on inputs. Measured with Spearman’s ρ and Kendall’s τ rank correlations, respectively. # 5. C-score Proxies We are able to reduce the cost of estimating C-scores from infeasible to feasible, but the procedure is still very expen- sive. Ideally, we would like to have more efficient proxies that do not require training multiple models. We use the term proxy to refer to any quantity that is well correlated with the C-score but does not have a direct mathematical relation to it, as contrasted with approximations that are designed to mathematically approximate the C-score (e.g., approximating the expectation with empirical averaging). The possible candidate set for C-score proxies is very large, as any measure that reflects information about difficulty or regularity of examples could be considered. Our Related Work section mentions a few such possibilities. In this paper, we primarily study two variants: pairwise distance based proxies and learning speed based proxies. ˆC ˆC L ˆC ±L ˆC LOF ρ CIFAR-10 −0.064 −0.009 0.117 CIFAR-100 −0.098 0.083 0.105 0.103 0.151 τ CIFAR-10 −0.042 −0.006 0.078 CIFAR-100 −0.066 0.055 0.070 0.070 0.101 where K(x, x’) = exp(—||a — 2||?/h?) is an RBF kernel with the bandwidth h, and 1[-] is the indicator function. To evaluate the importance of explicit label information, we study two related scores: C that uses only same-class examples when estimating the local density, and C’ that uses all the neighbor examples by ignoring the labels. AT N C*(a,y) = Yn (5) a N C(x) = Yn ye K (ai, 2). (6) # 5.1. Pairwise Distance Based Proxies Pairwise distance matches our intuition about consistency very well. In fact, our motivating example in Figure 1a is illustrated in this way. Intuitively, an example is consistent with the data distribution if it lies near other examples hav- ing the same label. However, if the example lies far from instances in the same class or lies near instances of different classes, one might not expect it to generalize. 
Based on this intuition, we define a relative local-density score: C*(a,y) = vo 20 ly = i) — 8) Kan), @) We also study a proxy based on the local outlier factor (LOF) algorithm (Breunig et al., 2000), which measures the local deviation of each point with respect to its neighbours. Since large LOF scores indicate outliers, we use the negative LOF score as a C-score proxy, denoted by ˆC LOF(x). Table 1 shows the agreement between the proxy scores and the estimated C-score. Agreement is quantified by two rank correlation measures on three data sets. ˆC LOF performs slightly better than the other proxies, but none of the proxies has high enough correlation to be useful, because it is very hard to obtain semantically meaningful distance estimations from the raw pixels. mm projectile 500 | mam car wheel 200 0 t) 0.00 0.25 0.50 0.75 1.00 0.00 0.25 0.50 0.75 1.00 Cscore Histogram C-score Histogram mm upright 1000 jm school bus 1000 {mm yellow ady's slipper 500 ° 0.00 0.25 0.50 0.75 1.00 C-score Histogram o 0.00 0.25 0.50 0.75 1.00 C-score Histogram o 0.00 0.25 0.50 0.75 1.00 C-score Histogram Figure 6. Example images from ImageNet. The 5 classes are chosen to have representative per-class C-score mean–standard-deviation profiles, as shown in Figure 5a. For each class, the three columns show sampled images from the (C-score ranked) top 99%, 35%, and 1% percentiles, respectively. The bottom pane shows the histograms of the C-scores in each of the 5 classes. Characterizing Structural Regularities of Labeled Data in Overparameterized Models a) CIFAR-10 b) CIFAR-100 c) Figure 7. (a-b) Spearman rank correlation between C-score and distance based proxies using learned hidden representations. (c) Spearman rank correlation between C-score and learning speed based proxies on CIFAR-10. Figure 8. (Left pane) The 3 blocks show examples from CIFAR-10 “automobile” ranked by ˆC ±L, ˆC ±L and the C-score, respectively. h The three columns in each block shows the top, middle and mid- dle ranked examples, respectively. (Right pane) Examples from CIFAR-100 “bear” shown in the same layout. We further evaluate the proxies using the penultimate layer of the network as a representation of an image: ˆC ± h , ˆC L L h , ˆCh and ˆC LOF h , with the subscript h indicating distance in hidden space. In particular, we train neural network models with the same specification on the full training set. We plot the correlation between the C-score and the proxy based on the learned representation at each epoch as a function of training epoch in Figure 7a,b. For both data sets, the proxy score that correlates best with the C-score is ˆC ± L (grey), h followed by ˆC LOF h (pink) and ˆCh (blue). h (brown), then ˆC L Clearly, appropriate use of labels helps with the ranking. The results reveal interesting properties of the hidden rep- resentation. One might be concerned that as training pro- gresses, the representations will optimize toward the classi- fication loss and may discard inter-class relationships that could be potentially useful for other downstream tasks (Scott et al., 2018). However, our results suggest that ˆC ± does h not diminish as a predictor of the C-score, even long after training converges. Thus, at least some information concern- ing the relation between different examples is retained in the representation, even though intra- and inter-class similarity is not very relevant for a classification model. 
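For concreteness, the class-weighted local-density proxies and the LOF-based proxy above can be computed roughly as follows. This is a sketch only: it applies an RBF kernel to whatever feature vectors are supplied (raw pixels or penultimate-layer activations), and the bandwidth and neighborhood size are free parameters here rather than the exact settings of the paper's experiments (the appendix reports h set to half the mean pairwise distance and k = 3 for LOF). Following the paper's description, same-class neighbors are weighted +1 and different-class neighbors −1 (or 0 for the label-restricted variant).

```python
import numpy as np
from sklearn.neighbors import LocalOutlierFactor

def density_proxies(F, y, h=None):
    """F: (N, d) feature matrix; y: (N,) integer labels. Suitable for modest N,
    since it materializes the full pairwise distance matrix.
    Returns (C_pm, C_same, C_all): label-weighted, same-class-only, label-free densities."""
    d2 = ((F[:, None, :] - F[None, :, :]) ** 2).sum(-1)       # pairwise squared distances
    if h is None:
        h = 0.5 * np.sqrt(d2[np.triu_indices_from(d2, k=1)]).mean()
    K = np.exp(-d2 / h ** 2)
    np.fill_diagonal(K, 0.0)                                   # exclude the point itself
    same = (y[:, None] == y[None, :]).astype(K.dtype)
    N = len(y)
    C_pm = ((2 * same - 1) * K).sum(1) / N     # +1 same-class neighbors, -1 otherwise
    C_same = (same * K).sum(1) / N             # same-class neighbors only
    C_all = K.sum(1) / N                       # ignore labels entirely
    return C_pm, C_same, C_all

def lof_proxy(F, k=3):
    """Negative local outlier factor: larger values indicate more inlier-like points."""
    lof = LocalOutlierFactor(n_neighbors=k)
    lof.fit(F)
    return lof.negative_outlier_factor_
```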
To the extent that the hidden representation—crafted through a discrimi- native loss—preserves class structure, one might expect that the C-score could be predicted without label reweighting; however, the poor performance of ˆCh suggests otherwise. Figure 8 visualizes examples ranked by the class weighted local density scores in the input and learned hidden space, respectively, in comparison with examples ranked by the C-score. The ranking calculated in the input space relies heavily on low level features that can be derived directly from the pixels like strong silhouette. The rankings calcu- lated from the learned hidden space correlate better with C-score, though the visualizations show that they could not faithfully detect the dense cluster of highly uniform exam- ples with high C-scores. In summary, while pairwise distance based proxies are very intuitive to formulate, in practice, the rankings are very sensitive to the underlying distance metrics. # 5.2. Learning Speed Based Proxies Inspired by our observations in the previous section that the speed-of-learning tends to correlate with the C-score rank- ings, we instead focus on a class of learning-speed based proxies that have the added bonus of being trivial to com- pute. Intuitively, a training example that is consistent with many others should be learned quickly because the gradient steps for all consistent examples should be well aligned. One might therefore conjecture that strong regularities in a data set are not only better learned at asymptote—leading to better generalization performance—but are also learned sooner in the time course of training. This learning speed hypothesis is nontrivial, because the C-score is defined for a held-out instance following training, whereas learning speed is defined for a training instance during training. This hypothesis is qualitatively verified from Figure 10. In par- ticular, the cyan examples having the lowest C-scores are learned most slowly and the purple examples having the highest C-scores are learned most quickly. Indeed, learning speed is monotonically related to C-score bin. Figure 7c shows a quantitative evaluation, where we com- pute the Spearman’s rank correlation between the C-score of an instance and various proxy scores based on learning speed. In particular, we test accuracy (0-1 correctness), pL (softmax confidence on the correct class), pmax (max soft- max confidence across all classes) and entropy (negative en- tropy of softmax confidences). We use cumulative statistics which average from the beginning of training to the current epoch because the cumulative statistics yield a more sta- ble measure—and higher correlation—than statistics based on a single epoch. We also compare to a forgetting-event statistic (Toneva et al., 2019), which is simply a count of the number of transitions from “learned” to “forgotten” during training. All of our proxies show strong correlation with the 0.9 at the peak; pmax and entropy C-score: pL reaches ρ perform similarly, both slightly worse than pL. The forget- ting event statistic slightly underperforms our proxies and takes a larger number of training epochs to reach its peak correlation. We suspect this is because forgetting events hap- Characterizing Structural Regularities of Labeled Data in Overparameterized Models (a) (b) Figure 9. (a) Model performance on SVHN when certain number of examples are removed from the training set. (b) Detection rate of label-flipped outliers on CIFAR-10. (b) Adam (a) SGD Figure 10. 
Learning speed of CIFAR-10 examples grouped by C- score. The thick transparent curve shows the average accuracy over the entire training set. SGD achieves test accuracy 95.14%, Adam achieves 92.97%. pen only after an example is learned, so unlike the proxies studied here, forgetting statistics for hard examples cannot be obtained in the earlier stage of training. curves could be informative at detecting noisy examples. We also evaluated the forgetting-event statistic (Toneva et al., 2019) and the local outlier factor (LOF) (Breunig et al., 2000) algorithm based on distances in the hidden space, but neither is competitive. # 6. Application By characterizing the structural regularities in large scale datasets, the C-score provides powerful tools for analyzing data sets, learning dynamics, and to diagnose and poten- tially improve learning systems. In this section, we provide several illustrative applications along this line. In the first example, we demonstrate the effects of removing In Figure 9a, we show the irregular training examples. the performance of models trained on the SVHN (Netzer et al., 2011) training set as a function of the number of lowest C-score examples removed. For comparison, we show the performance with the same number of random examples removed. We found that the model performance improves as we remove the lowest ranked training examples, but it eventually deteriorates when too many (about 104) training examples are removed. This deterioration occurs because the C-score typically ranks mislabeled instances toward the bottom, followed by—at least in this data set— correctly labeled but rare instances. Although the mislabeled instances have no utility, the rare instances do, causing a drop in performance as more rare instances are removed. On data sets with fewer mislabelings, such as CIFAR-10, we did not observe an advantage of removing low-ranked examples versus removing random examples. To quantitatively evaluate the outlier identification rate, we construct a modified dataset by corrupting a random fraction γ = 25% of the CIFAR-10 training set with random label assignments, so that we have the ground-truth indicators for the outliers. We then identify the fraction γ with the lowest ranking by our two most promising learning-speed based C-score proxies—cumulative accuracy and pL. Figure 9b shows the detection rate—the fraction of the lowest ranked examples which are indeed outliers; the two C-score proxies successfully identify over 95% of outliers. This is consistent with previous work (Pleiss et al., 2020) showing the loss In the final example, we demonstrate using the C-score to study the behavior of different optimizers. For this study, we partition the CIFAR-10 training set into subsets by C-score. Then we record the learning curves—model accuracy over training epochs—for each set. Figure 10 plots the learn- ing curves for C-score-binned examples. The left panel shows SGD training with a stagewise constant learning rate, and the right panel shows the Adam optimizer (Kingma & Ba, 2015), which scales the learning rate adaptively. In both cases, the groups with high C-scores (magenta) gener- ally learn faster than the groups with low C-scores (cyan). Intuitively, the high C-score groups consist of mutually con- sistent examples that support one another during training, whereas the low C-score groups consist of irregular exam- ples forming sparse modes with fewer consistent peers. 
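The label-corruption experiment above is straightforward to reproduce in outline: flip the labels of a random fraction γ of the training set, track a cumulative learning-speed proxy for every training example, and check how many of the bottom-ranked γ fraction are indeed flipped. The sketch below assumes access to the model's per-example softmax outputs at each epoch; that interface and the variable names are assumptions for illustration, not part of the paper's code.

```python
import numpy as np

def corrupt_labels(y, num_classes, gamma=0.25, seed=None):
    """Flip a random fraction gamma of labels to a different, random class."""
    rng = np.random.default_rng(seed)
    flipped = rng.random(len(y)) < gamma
    y_noisy = y.copy()
    y_noisy[flipped] = (y[flipped] + rng.integers(1, num_classes, flipped.sum())) % num_classes
    return y_noisy, flipped

def detection_rate(y_noisy, flipped, probs_per_epoch):
    """probs_per_epoch: list over epochs of (N, num_classes) softmax outputs on the training set.
    Proxy = cumulative confidence p_L on the (possibly corrupted) training label."""
    pL = np.stack([p[np.arange(len(y_noisy)), y_noisy] for p in probs_per_epoch]).mean(0)
    k = int(flipped.sum())
    bottom = np.argsort(pL)[:k]          # lowest-ranked gamma fraction of the training set
    return flipped[bottom].mean()        # fraction of these that are truly label-flipped
```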
In the case of true outliers, the model needs to memorize the labels individually as they do not share structure with any other examples. The learning curves have wider dispersion in SGD than in Adam. Early in SGD training where the learning rate is large, the examples with the lowest C-scores barely learn. In comparison, Adam shows less spread among the groups and as a result, converges sooner. However, the superior convergence speed of adaptive optimizers like Adam does not always lead to better generalization (Wilson et al., 2017; Keskar & Socher, 2017; Luo et al., 2019). We observe this outcome as well: SGD with a stagewise learning rate achieves 95.14% test accuracy, compared to 92.97% for Adam. The visualization generated with the help of the C-score provides an interesting perspective on the differ- ence between the two cases with different generalization performances: SGD with stagewise learning rate effectively enforces a sort of curriculum in which the model focuses on learning the strongest regularities first. This curriculum could help the model building a more solid representation based on domain regularities, when compared to Adam that Characterizing Structural Regularities of Labeled Data in Overparameterized Models learns all examples at similar pace. # 7. Discussion We formulated a consistency profile for individual examples in a data set that reflects the probability of correct gener- alization to the example as a function of training set size. This profile has strong ties to generalization theory as it essentially measures the per-instance generalization. We distilled the profile into a scalar C-score, which provides a total ordering of the instances in a data set by essentially the sample complexity—the amount of training data required— to ensure correct generalization to the instance. By studying the estimated scores on real world datasets, we show that this formulation captures well the basic intuitions about data regularity in both human and machine learning. To leverage the C-score to analyze structural regularities in complex data sets, we derived a C-score estimation proce- dure and obtained C-scores for examples in MNIST, CIFAR- 10, CIFAR-100, and ImageNet. The C-score estimate helps to characterize the continuum between a densely populated mode consisting of aligned, centrally cropped examples with unified shape and color profiles, and sparsely popu- lated modes of just one or two instances. formance as their convolutional counterparts on standard image classification benchmarks. We leave it as future work to conduct extensive comparison on more diverse archi- tectures and emerging algorithms such as finetuning after self-supervised learning (Chen et al., 2020a;b; Grill et al., 2020; Caron et al., 2021), and so on. In the 1980s, neural nets were touted for learning rule- governed behavior without explicit rules (Rumelhart & Mc- Clelland, 1986). At the time, AI researchers were focused on constructing expert systems by extracting explicit rules from human domain experts. Expert systems ultimately failed because the diversity and nuance of statistical regular- ities in a domain was too great for any human to explicate. In the modern deep learning era, researchers have made much progress in automatically extracting regularities from data. Nonetheless, there is still much work to be done to understand these regularities, and how the consistency relationships among instances determine the outcome of learning. 
By defining and investigating a consistency score, we hope to have made some progress in this direction. We have released the precomputed C-scores on standard deep learning benchmark datasets to foster future research along this direction. We further studied two variants of computationally efficient proxies to the C-score. We found that the pairwise dis- tance based proxies are sensitive to the underlying distance metrics, while the learning speed based proxies generally provide better correlation with the C-score. We demonstrate examples of potential applications of the C-score as analytical tools to inspect large scale datasets and the learning systems trained on the data, which provides insights to the otherwise complicated and opaque systems. In particular, we show that the C-score could be used to identify outliers and provide detailed analysis of the learning dynamics when comparing different optimizers. One feature of our formulation is that the C-score depends on the neural network architecture, and more generally on the learning algorithm. Just like how a math major and a music major might have different opinions on the difficulty of courses, different neural networks could have different inductive biases a priori, and the C-score captures this fact. In practice, we found that the C-score estimations are con- sistent among commonly used convolutional networks, po- tentially because they are not that different from each other. In particular, we compared the Inception based estimation on CIFAR-10 with ResNet-18, VGG-11 and VGG-16, and found the Spearman’s ρ correlations are above 0.91. Re- cently some new convolution-free architectures based on attention mechanism (Dosovitskiy et al., 2021) or dense connections (Tolstikhin et al., 2021; Melas-Kyriazi, 2021; Touvron et al., 2021) emerged and achieved similar per- # Code and Pre-computed C-scores We provide code implementing our C-score estima- tion algorithms, and pre-computed C-scores and asso- ciated model checkpoints for CIFAR-10, CIFAR-100 and ImageNet (downloadable from https://pluskid. github.io/structural-regularity/). The ex- ported files are in Numpy’s data format saved via numpy.savez. For CIFAR-10 and CIFAR-100, the ex- ported file contains two arrays labels and scores. Both arrays are stored in the order of training examples as de- fined by the original data sets found at https://www.cs. toronto.edu/~kriz/cifar.html. The data load- ing tools provided in some deep learning library might not be following the original data example orders, so we pro- vided the labels array for easy sanity check of the data ordering. For ImageNet, since there is no well defined example or- dering, we order the exported scores arbitrarily, and include a script to reconstruct the data set with index information by using the filename of each example to help identify the example-score mapping. # Acknowledgements We thank Vitaly Feldman for guidance on simulation de- sign and framing of the research, Samy Bengio for general comments and feedback, and Yoram Singer for making the collaboration possible. Characterizing Structural Regularities of Labeled Data in Overparameterized Models # References Abadi, M., Agarwal, A., Barham, P., Brevdo, E., Chen, Z., Citro, C., Corrado, G. 
S., Davis, A., Dean, J., Devin, M., Ghemawat, S., Goodfellow, I., Harp, A., Irving, G., Isard, M., Jia, Y., Jozefowicz, R., Kaiser, L., Kudlur, M., Lev- enberg, J., Mané, D., Monga, R., Moore, S., Murray, D., Olah, C., Schuster, M., Shlens, J., Steiner, B., Sutskever, I., Talwar, K., Tucker, P., Vanhoucke, V., Vasudevan, V., Viégas, F., Vinyals, O., Warden, P., Wattenberg, M., Wicke, M., Yu, Y., and Zheng, X. TensorFlow: Large- scale machine learning on heterogeneous systems, 2015. URL https://www.tensorflow.org/. Software available from tensorflow.org. Bengio, Y., Louradour, J., Collobert, R., and Weston, J. Curriculum learning. In Proceedings of the 26th annual international conference on machine learning, pp. 41–48. ACM, 2009. Breunig, M. M., Kriegel, H.-P., Ng, R. T., and Sander, J. Lof: identifying density-based local outliers. In Proceedings of the 2000 ACM SIGMOD international conference on Management of data, pp. 93–104, 2000. Grill, J.-B., Strub, F., Altché, F., Tallec, C., Richemond, P. H., Buchatskaya, E., Doersch, C., Pires, B. A., Guo, Z. D., Azar, M. G., Piot, B., Kavukcuoglu, K., Munos, R., and Valko, M. Bootstrap your own latent: A new approach to Self-Supervised learning. In Advances in Neural Information Processing Systems, 2020. Hardt, M., Recht, B., and Singer, Y. Train faster, generalize better: Stability of stochastic gradient descent. In Interna- tional Conference on Machine Learning, pp. 1225–1234. PMLR, 2016. Jacot, A., Gabriel, F., and Hongler, C. Neural tangent kernel: Convergence and generalization in neural networks. In Advances in neural information processing systems, pp. 8571–8580, 2018. Keskar, N. S. and Socher, R. Improving generalization per- formance by switching from adam to sgd. arXiv preprint arXiv:1712.07628, 2017. Kingma, D. P. and Ba, J. Adam: A method for stochastic optimization. In International Conference on Learning Representations, 2015. Carlini, N., Erlingsson, U., and Papernot, N. Prototypical examples in deep learning: Metrics, characteristics, and utility. Technical report, OpenReview, 2018. Caron, M., Touvron, H., Misra, I., Jégou, H., Mairal, J., Bojanowski, P., and Joulin, A. Emerging properties in self-supervised vision transformers. arXiv preprint arXiv:2104.14294, 2021. Krizhevsky, A. Learning multiple layers of features from tiny images. Technical Report TR-2009, University of Toronto, 2009. LeCun, Y., Bottou, L., Bengio, Y., and Haffner, P. Gradient- based learning applied to document recognition. Proceed- ings of the IEEE, 86(11):2278–2324, 1998. Chen, T., Kornblith, S., Norouzi, M., and Hinton, G. A simple framework for contrastive learning of visual rep- In International conference on machine resentations. learning, pp. 1597–1607. PMLR, 2020a. Luo, L., Xiong, Y., Liu, Y., and Sun, X. Adaptive gradi- ent methods with dynamic bound of learning rate. In International Conference on Learning Representations, 2019. Chen, T., Kornblith, S., Swersky, K., Norouzi, M., and Hinton, G. Big Self-Supervised models are strong Semi- Supervised learners. In Advances in Neural Information Processing Systems, 2020b. Mangalam, K. and Prabhu, V. U. Do deep neural networks learn shallow learnable examples first? In ICML 2019 Workshop on Identifying and Understanding Deep Learn- ing Phenomena, 2019. Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., and Houlsby, N. An image is worth 16x16 words: Transformers for image recognition at scale. 
In International Conference on Learning Representations, 2021. Feldman, V. Does learning require memorization? A short tale about a long tail. In ACM Symposium on Theory of Computing (STOC), 2020. Melas-Kyriazi, L. Do you even need attention? a stack of feed-forward layers does surprisingly well on imagenet. arXiv preprint arXiv:2105.02723, 2021. Netzer, Y., Wang, T., Coates, A., Bissacco, A., Wu, B., and Ng, A. Y. Reading digits in natural images with unsu- pervised feature learning. In NIPS Workshop on Deep Learning and Unsupervised Feature Learning, 2011. Feldman, V. and Zhang, C. What neural networks mem- orize and why: Discovering the long tail via influence estimation. In Advances in neural information processing systems, 2020. Pleiss, G., Zhang, T., Elenberg, E. R., and Weinberger, K. Q. Detecting noisy training data with loss curves. In International Conference on Learning Representations, 2020. Characterizing Structural Regularities of Labeled Data in Overparameterized Models Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., and Sutskever, I. Language models are unsupervised multitask learners. OpenAI Blog, 1(8):9, 2019. Rumelhart, D. E. and McClelland, J. L. On Learning the Past Tenses of English Verbs, pp. 216–271. MIT Press, Cambridge, MA, USA, 1986. Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al. Imagenet large scale visual recognition chal- lenge. International journal of computer vision, 115(3): 211–252, 2015. # A. Experiment Details The details on model architectures, data set information and hyper-parameters used in the experiments for empirical estimation of the C-score can be found in Table 2. We imple- ment our experiment in Tensorflow (Abadi et al., 2015). The holdout subroutine used in the empirical C-score estimation is based on the estimator proposed in Feldman & Zhang (2020), and listed in Algorithm 1. Most of the training jobs for C-score estimation are run on single NVidia® Tesla P100 GPUs. The ImageNet training jobs are run with 8 P100 GPUs using single-node multi-GPU data parallelization. Saxena, S., Tuzel, O., and DeCoste, D. Data parameters: A new family of parameters for learning a differentiable curriculum. In Wallach, H., Larochelle, H., Beygelzimer, A., d'Alché-Buc, F., Fox, E., and Garnett, R. (eds.), Ad- vances in Neural Information Processing Systems 32, pp. 11093–11103. Curran Associates, Inc., 2019. The experiments on learning speed are conducted with ResNet-18 on CIFAR-10, trained for 200 epochs while batch size is 32. For optimizer, we use the SGD with the initial learning rate 0.1, momentum 0.9 (with Nesterov momen- tum) and weight decay is 5e-4. The stage-wise constant learning rate scheduler decrease the learning rate at the 60th, 90th, and 120th epoch with a decay factor of 0.2. Scott, T., Ridgeway, K., and Mozer, M. C. Adapted deep embeddings: A synthesis of methods for k-shot inductive transfer learning. In Bengio, S., Wallach, H., Larochelle, H., Grauman, K., Cesa-Bianchi, N., and Garnett, R. (eds.), Advances in Neural Information Processing Systems 31, pp. 76–85. Curran Associates, Inc., 2018. Tan, M. and Le, Q. V. Efficientnet: Rethinking model scaling for convolutional neural networks. arXiv preprint arXiv:1905.11946, 2019. Tolstikhin, I., Houlsby, N., Kolesnikov, A., Beyer, L., Zhai, X., Unterthiner, T., Yung, J., Keysers, D., Uszkoreit, J., Lucic, M., et al. Mlp-mixer: An all-mlp architecture for vision. arXiv preprint arXiv:2105.01601, 2021. 
Toneva, M., Sordoni, A., Combes, R. T. d., Trischler, A., Bengio, Y., and Gordon, G. J. An empirical study of example forgetting during deep neural network learning. In International Conference on Learning Representations, 2019. # Algorithm 1 Estimation of ˆC ˆ D # ,n Require: Data set ˆ = (X, Y ) with N examples D Require: n: number of instances used for training Require: k: number of subset samples RN : ( ˆC ˆ Ensure: ˆC D Initialize binary mask matrix M 0k Initialize 0-1 loss matrix L for i ,n(x, y))(x,y) ˆ ∈ D 0k ← N × ∈ N × ← (1, 2, . . . , k) do ∈ Sample n random indices I from M [i, I] Train ˆf from scratch with the subset X[I], Y [I] L[i, :] end for Initialize score estimation vector ˆC for j (1, 2, . . . , N ) do ∈ Q ← ¬ ˆC[j] ← end for 1, . . . , N { 1 ← 1[ ˆf (X) = Y ] ← 0N ← M [:, j] sum( L[:, Q])/sum(Q) ¬ } Touvron, H., Bojanowski, P., Caron, M., Cord, M., El- Nouby, A., Grave, E., Joulin, A., Synnaeve, G., Verbeek, J., and Jégou, H. Resmlp: Feedforward networks for image classification with data-efficient training. arXiv preprint arXiv:2105.03404, 2021. Wilson, A. C., Roelofs, R., Stern, M., Srebro, N., and Recht, B. The marginal value of adaptive gradient methods in machine learning. In Advances in Neural Information Processing Systems, pp. 4148–4158, 2017. Wu, Z., Nagarajan, T., Kumar, A., Rennie, S., Davis, L. S., Grauman, K., and Feris, R. Blockdrop: Dynamic in- ference paths in residual networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8817–8826, 2018. # B. Time and Space Complexity The time complexity of the holdout procedure for empirical estimation of the C-score is (S(kT + E)). Here S is the O number of subset ratios, k is number of holdout for each subset ratio, and T is the average training time for a neural network. E is the time for computing the score given the k-fold holdout training results, which involves elementwise computation on a matrix of size k N , and is negligible comparing to the time for training neural networks. The space complexity is the space for training a single neural network times the number of parallel training jobs. The space complexity for computing the scores is # O Characterizing Structural Regularities of Labeled Data in Overparameterized Models Table 2. Details for the experiments used in the empirical estimation of the C-score. MNIST CIFAR-10 CIFAR-100 ImageNet Architecture MLP(512,256,10) Inception* Inception? ResNet-50 (V2) Optimizer SGD SGD SGD SGD Momentum 0.9 0.9 0.9 0.9 Base Learning Rate 0.1 0.4 0.4 0.1x7 Learning Rate Scheduler A(15%)* A(15%)* A(15%)* — LinearRampupPiecewiseConstant** Batch Size 256 512 512 128x7 Epochs 20 80 160 100 Data Augmentation sss Random Padded Cropping® + Random Left-Right Flipping ------ Image Size 28x28 32x32 32x32 224x224 Training Set Size 60,000 50,000 50,000 1,281,167 Number of Classes 10 10 100 1000 † A simplified Inception model suitable for small image sizes, defined as follows: Inception :: Conv(3×3, 96) → Stage1 → Stage2 → Stage3 → GlobalMaxPool → Linear. Stage1 :: Block(32, 32) → Block(32, 48) → Conv(3×3, 160, Stride=2). Stage2 :: Block(112, 48) → Block(96, 64) → Block(80, 80) → Block (48, 96) → Conv(3×3, 240, Stride=2). Stage3 :: Block(176, 160) → Block(176, 160). Block(C1, C2) :: Concat(Conv(1×1, C1), Conv(3×3,C2)). Conv :: Convolution → BatchNormalization → ReLU. 
A(15%) learning rate scheduler linearly increase the learning rate from 0 to the base learning rate in the first 15% training steps, and then from there linear decrease to 0 in the remaining training steps. steps, and then from there linear decrease to 0 in the remaining training steps. LinearRampupPiecewiseConstant learning rate scheduler linearly increase the learning rate from 0 to the base learning rate in the first 15% training steps. Then the learning rate remains piecewise constant with a 10× decay at 30%, 60% and 90% of the training steps, respectively. # ke @® Random Padded Cropping pad 4 pixels of zeros to all the four sides of MNIST, CIFAR-10, CIFAR-100 images and (randomly) crop back to the original image size. For ImageNet, a padding of 32 pixels is used for all four sides of the images. For kernel density estimation based scores, the most expen- sive part is forming the pairwise distance matrix (and the (N 2d) kernel matrix), which requires time, where d is the dimension of the input or hidden repre- sentation spaces. Figure 16 for the behavior of LOF across a wide range of neighborhood sizes. # D.1. Pairwise Distance Estimation with Gradient Representations # C. More Visualizations of Images Ranked by C-score Examples with high, middle and low C-scores from all the 10 classes of MNIST and CIFAR-10 are shown in Figure 11 and Figure 12, respectively. The results from the first 60 out of the 100 classes on CIFAR-100 is depicted in Figure 13. Figure 14 and Figure 15 show visualizations from ImageNet. Please see the project website for more visualizations. Most modern neural networks are trained with first order gradient descent based algorithms and variants. In each iteration, the gradient of loss on a mini-batch of training examples evaluated at the current network weights is com- puted and used to update the current parameter. Let ) ∇t( · be the function that maps an input-label training pair (the case of mini-batch size one) to the corresponding gradient evaluated at the network weights of the t-th iteration. Then this defines a gradient based representation on which we can compute density based ranking scores. The intuition is that in a gradient based learning algorithm, an example is consistent with others if they all compute similar gradients. # D. C-Score Proxies based on Pairwise Distances In the experiments of pairwise distance based C-score prox- ies, we use an RBF kernel K (a, x’) = exp(—||a—2"||?/h?), where the bandwidth parameter h is adaptively chosen as 1/2 of the mean pairwise Euclidean distance across the data set. For the local outlier factor (LOF) algorithm (Breunig et al., 2000), we use the neighborhood size k = 3. See Comparing to the hidden representations defined the outputs of a neural network layer, the gradient based representations induce a more natural way of incorporating the label infor- mation. In the previous section, we reweight the neighbor examples belonging to a different class by 0 or -1. For gradi- ent based representations, no ad hoc reweighting is needed as the gradient is computed on the loss that has already takes the label into account. Similar inputs with different Characterizing Structural Regularities of Labeled Data in Overparameterized Models Figure 11. Examples from MNIST. Each block shows a single class; the left, middle, and right columns of a block depict instances with high, intermediate, and low C-scores, respectively. Figure 12. Examples from CIFAR-10. 
Each block shows a single class; the left, middle, and right columns of a block depict instances with high, intermediate, and low C-scores, respectively. labels automatically lead to dissimilar gradients. Moreover, this could seamlessly handle labels and losses with rich structures (e.g. image segmentation, machine translation) where an effective reweighting scheme is hard to find. The gradient based representation is closely related to recent developments on Neural Tagent Kernels (NTK) (Jacot et al., 2018). It is shown that when the network width goes to infin- ity, the neural network training dynamics can be effectively approximately via Taylor expansion at the initial network weights. In other words, the algorithm is effectively learning a linear model on the nonlinear representations defined by ). This feature map induces the NTK, and connects ∇0( · deep learning to the literature of kernel machines. # E. What Makes an Item Regular or Irregular? The notion of regularity is primarily coming from the statis- tical consistency of the example with the rest of the popula- tion, but less from the intrinsic structure of the example’s contents. To illustrate this, we refer back to Figure 4b in the main text, the distribution is uneven between high and low C-score values. As a result, the high C-score groups will have more examples than the low C-score groups. This agrees with the intuition that regularity arises from high probability masses. Although NTK enjoys nice theoretical properties, it is chal- lenging to perform density estimation on it. Even for the more practical case of finite width neural networks, the gra- dient representations are of extremely high dimensions as modern neural networks general have parameters ranging from millions to billions (e.g. Tan & Le, 2019; Radford et al., 2019). As a result, both computation and memory requirements are prohibitive if a naive density estimation is to be computed on the gradient representations. We leave as future work to explore efficient algorithms to practically compute this score. To test whether an example with top-ranking C-score is still highly regular after the density of its neighborhood is reduced, we group the training examples according equal sized bins on the value range of their C-score values. We then subsample each group to contain an equal number 400) of examples. Then we run training on this new data ( ∼ set and observe the learning speed in each (subsampled) group. The result is shown in Figure 19, which is to be compared with the results without group-size-equalizing in Figure 10 in the main text. The following observations can be made: 1. The learning curves for many of the groups start to overlap with each other. 2. The lower ranked groups now learns faster. For exam- ple, the lowest ranked group goes above 30% accuracy Characterizing Structural Regularities of Labeled Data in Overparameterized Models Figure 13. Examples from CIFAR-100. Each block shows a single class; the left, middle, and right columns of a block depict instances with high, intermediate, and low C-scores, respectively. The first 60 (out of the 100) classes are shown. 
Characterizing Structural Regularities of Labeled Data in Overparameterized Models 7 ee (DD) 1000 500 7 mm teapot lm barometer mm banana car mirror 500 500 = 1000 t) t) o ot 0) 0.00 0.25 0.50 0.75 1.00 0.00 0.25 0.50 0.75 1.00 0.00 0.25 0.50 0.75 1.00 0.00 0.25 0.50 0.75 1.00 0.00 0.25 0.50 0.75 1.00 C-score Histogram C-score Histogram C-score Histogram C-score Histogram C-score Histogram 7 ee (DD) Figure 14. Example images from ImageNet. For each class, the three columns show sampled images from the (C-score ranked) top 99%, 35%, and 1% percentiles, respectively. The bottom pane shows the histograms of the C-scores in each of the 5 classes. mm pitcher 500 mm jeep mm barn 200 500 500 0) 0) 0) 00 0.25 0.50 0.75 1.00 0.00 0.25 0.50 0.75 1.00 t) t) 0.00 0.25 0.50 0.75 1.00 0.00 0.25 0.50 0.75 1.00 0.00 0.25 0.50 0.75 1.00 0. C-score Histogram C-score Histogram C-score Histogram C-score Histogram C-score Histogram Figure 15. Example images from ImageNet. For each class, the three columns show sampled images from the (C-score ranked) top 99%, 35%, and 1% percentiles, respectively. The bottom pane shows the histograms of the C-scores in each of the 5 classes. Characterizing Structural Regularities of Labeled Data in Overparameterized Models (a) CIFAR-10 (b) CIFAR-100 Figure 16. The Spearman’s ρ correlation between the C-score and the score based on LOF with different neighborhood sizes. near epoch 50. In the run without subsampling (Fig- ure 10a in the main text), this groups is still below 20% accuracy at epoch 50. The model is now learning with a much smaller data set. Since the lower ranked examples are not highly consistent with the rest of the population, this means there are fewer “other examples” to compete with (i.e. those “other examples” will move the weights towards a direction that is less preferable for the lower ranked examples). As a result, the lower ranked groups can now learn faster. 3. On the other hand, the higher ranked groups now learn slower, which is clear from a direct comparison be- tween Figure 10a in the main text and Figure 19 here. This is because for highly regular examples, reducing the data set size means removing consistent examples — that is, there are now less “supporters” as oppose to less “competitors” in the case of lower ranked groups. As a result, the learn speed is now slower. 4. Even though the learning curves are now overlapping, the highest ranked group and the lowest ranked group are still clearly separated. The potential reason is that while the lower ranked examples can be outliers in many different ways, the highest ranked examples are probably regular in a single (or very few) visual clus- ters (see the top ranked examples in Figure 12). As a result, the within group diversities of the highest ranked groups are still much smaller than the lowest ranked groups. # F. Sensitivity of C-scores to the Number of Models We used 2,000 models per subset ratio to evaluate C-scores in our experiments to ensure that we get stable estimates. In this section, we study the sensitivity of C-scores with respect to the number of models and evaluate the possibility to use fewer models in practice. Let C0 2k be the C-scores estimated with the full 2,000 models per subset ratio. We split the 2,000 models for each subset ratio into two halves, 2k. and obtain two independent estimates C0 − Then for m , 1, 2, 4, 8, 16, 32, 64, 128, 256, 512, 1000 } we sample m random models from the first 1,000 split, and estimate C-scores (denoted by Cm) based on those models. 
We compute the Spearman’s ρ correlation between each Cm and C1k 2k. The results are plotted in Figure 20. The random sampling of m models is repeated 10 times for each m and the error bars show the standard deviations. The figure shows that a good correlation is found for as few as m = 64 models. However, the integral C-score requires training models for various subset ratios (9 different subset ratios in our simulations), so the total number of models 9. If we want to obtain a reliable needed is roughly 64 estimate of the C-score under a single fixed subset ratio, we find that we need 512 models in order to get a > .95 correlation with C1k 2k. So it appears that whether we are computing the integral C-score or the C-score for a particular subset ratio, we need to train on the order of 500-600 models. In the analysis above, we have used C1k 2k as the reference scores to compute correlation to ensure no overlapping be- tween the models used to compute different estimates. Note 2k itself is well correlated with the the full estimate C1k from 2,000 models, as demonstrated by the following corre- lations: ρ(C0 2k) = 0.9999, and ρ(C1k − − In summary, the regularity of an example arises from its con- sistency relation with the rest of the population. A regular example in isolation is no different to an outlier. Moreover, it is also not merely an intrinsic property of the data distri- bution, but is closely related to the model, loss function and learning algorithms. For example, while a picture with a red lake and a purple forest is likely be considered an out- lier in the usual sense, for a model that only uses grayscale information it could be highly regular. Characterizing Structural Regularities of Labeled Data in Overparameterized Models Figure 17. Examples from CIFAR-10 (left 5 blocks) and CIFAR-100 (right 5 blocks). Each block shows a single class; the left, middle, and right columns of a block depict instances with top, intermediate, and bottom ranking according to the relative local density score ˆC ±L in the input space, respectively. Figure 18. Examples from CIFAR-10 (left 5 blocks) and CIFAR-100 (right 5 blocks). Each block shows a single class; the left, middle, and right columns of a block depict instances with top, intermediate, and bottom ranking according to the relative local density score ˆC ±L h 1.0 4 Qa =] £ Pos S s 2 —— 0.05-0.10 —— 0,55-0.60 © o6 J —— 0,100.15 — 060-065 > 0. & — 0,150.20 — 0.65-0.70 5 — 0,200.25 — 0.70-0.75 — 0,250.30 — 0.75-0.80 3 o44 6 0.4 — 0,300.35 — 0.80-0.85 2 — 0.35-0.40 — 0.85-0.90 go — 0,400.45 — 0,90-0.95 50.24 Te 7 0.45-0.50 a — 0.50-0.55 1 + y + t t + i i 0 25 50 75 100 125 150 175 200 —. training epoch 1.0 S08 8 2 5 06 ° £ F $=20 04 = Eo. —b s=30 8 s=40 & $=50 02 5-60 . + s= —t~ integral C-score 2 Pay 25 7 2 number of models for each subset ratio s Figure 19. Learning speed of group of examples ranked by C- scores, with equal number (400) of examples in each group via subsampling. Figure 20. The correlation of C-scores estimated with varying num- bers of models (the x-axis) and C-scores estimated with 1,000 independent models. The simulations are run with CIFAR-10, and the error bars show standard deviation from 10 runs.
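The stability check in this appendix reduces to a short resampling loop: repeatedly draw m of the available models, recompute the holdout score from that subset, and correlate the result with scores computed from an independent split of models. A sketch under the assumption that the per-model membership mask and 0-1 correctness matrices from the holdout estimator are available (variable names are illustrative):

```python
import numpy as np
from scipy.stats import spearmanr

def score_from_models(mask, loss01, model_idx):
    """Holdout C-score estimate using only the models indexed by model_idx.
    mask[i, j]: model i trained on example j; loss01[i, j]: its 0-1 correctness."""
    held = ~mask[model_idx]
    return (loss01[model_idx] * held).sum(0) / np.maximum(held.sum(0), 1)

def sensitivity_curve(mask, loss01, reference_idx, candidate_idx, ms, repeats=10, seed=None):
    """Correlate scores from m sampled models against a reference split of models."""
    rng = np.random.default_rng(seed)
    reference = score_from_models(mask, loss01, reference_idx)   # e.g., the held-out half
    results = {}
    for m in ms:
        corrs = []
        for _ in range(repeats):
            sample = rng.choice(candidate_idx, size=m, replace=False)
            corrs.append(spearmanr(score_from_models(mask, loss01, sample), reference)[0])
        results[m] = (float(np.mean(corrs)), float(np.std(corrs)))
    return results
```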
{ "id": "2105.03404" }
2002.02878
I love your chain mail! Making knights smile in a fantasy game world: Open-domain goal-oriented dialogue agents
Dialogue research tends to distinguish between chit-chat and goal-oriented tasks. While the former is arguably more naturalistic and has a wider use of language, the latter has clearer metrics and a straightforward learning signal. Humans effortlessly combine the two, for example engaging in chit-chat with the goal of exchanging information or eliciting a specific response. Here, we bridge the divide between these two domains in the setting of a rich multi-player text-based fantasy environment where agents and humans engage in both actions and dialogue. Specifically, we train a goal-oriented model with reinforcement learning against an imitation-learned ``chit-chat'' model with two approaches: the policy either learns to pick a topic or learns to pick an utterance given the top-K utterances from the chit-chat model. We show that both models outperform an inverse model baseline and can converse naturally with their dialogue partner in order to achieve goals.
http://arxiv.org/pdf/2002.02878
Shrimai Prabhumoye, Margaret Li, Jack Urbanek, Emily Dinan, Douwe Kiela, Jason Weston, Arthur Szlam
cs.AI, cs.CL, stat.ML
null
null
cs.AI
20200207
20200210
0 2 0 2 b e F 0 1 ] I A . s c [ 2 v 8 7 8 2 0 . 2 0 0 2 : v i X r a # I love your chain mail! Making knights smile in a fantasy game world: Open-domain goal-oriented dialogue agents # Shrimai Prabhumoye∗ 1 2 Margaret Li∗ 2 Jack Urbanek 2 Emily Dinan 2 Douwe Kiela 2 Jason Weston 2 Arthur Szlam 2 # Abstract Dialogue research tends to distinguish between chit-chat and goal-oriented tasks. While the for- mer is arguably more naturalistic and has a wider use of language, the latter has clearer metrics and a straightforward learning signal. Humans effort- lessly combine the two, for example engaging in chit-chat with the goal of exchanging information or eliciting a specific response. Here, we bridge the divide between these two domains in the set- ting of a rich multi-player text-based fantasy en- vironment where agents and humans engage in both actions and dialogue. Specifically, we train a goal-oriented model with reinforcement learn- ing against an imitation-learned “chit-chat” model with two approaches: the policy either learns to pick a topic or learns to pick an utterance given the top-K utterances from the chit-chat model. We show that both models outperform an inverse model baseline and can converse naturally with their dialogue partner in order to achieve goals. # 1. Introduction In the literature on artificial dialogue agents, a distinction is often made between “goal-oriented” dialogue, where an agent is tasked with filling slots or otherwise obtaining or disseminating specified information from the user to help complete a task, and open-domain “chit-chat”, where an agent should imitate human small talk. Modeling goal- oriented dialogue can have advantages over chit-chat im- itation as it gives clearer metrics of success and perhaps more meaningful learning signals; but goal-oriented dia- logue data is often more specialized, covering only a narrow slice of natural language. Current goal-oriented datasets study settings like booking restaurants or airline tickets, or Institute, Carnegie Mellon University, Pittsburgh, PA, USA 2Faceboook AI Research, NY, USA. Correspondence to: Shrimai Prabhumoye <[email protected]>. obtaining weather information, as standalone tasks (Raux et al., 2005; Henderson et al., 2014; Bordes et al., 2017; El Asri et al., 2017; Budzianowski et al., 2018). Chit-chat agents, by contrast, might focus on coarse statistical regu- larities of dialogue data without accurately modeling the underlying “meaning”; but the data often covers a much wider space of natural language. For example, Twitter or Reddit chit-chat tasks (Li et al., 2016a; Yang et al., 2018; Mazar´e et al., 2018) cover a huge spectrum of language and diverse topics. Chit-chat and goal-oriented dialogue are not mutually exclusive: when humans engage in chit-chat, their aim is to exchange information, or to elicit specific re- sponses from their partners. Modeling such goals, however, is made difficult by the fact that it requires large amounts of world knowledge, and that goals in real life are implicit. In this work, we introduce a family of tasks that bridge the divide between goal-oriented and chit-chat dialogue, combining clearer metrics and learning signals on the one hand, with the richness and complexity of situated but open- domain natural language on the other. The tasks are set in a multi-player text-based fantasy environment (Urbanek et al., 2019) with grounded actions and reference objects. 
Given a particular character to play in a particular scenario (location, set of objects and other characters to interact with), an agent should conduct open-ended dialogue with the aim of persuading their dialogue partner to execute a specified action. The action could be an emote (smile, laugh, ponder, etc), or a game action (wear chain mail, drink mead, put glass on table, etc). The richness of the environment means that there are a huge set of possible tasks and scenarios in which to achieve a wide range of actions. We plan to make our entire setup, code and models publicly available. We train a variety of baseline models to complete the task. We compare agents trained to imitate human actions given a goal (an “inverse model”) to two different RL approaches: optimizing actions with latent discrete variables (topics), or via rewarding actions sampled from the model (via the top-K outputs). We show that both types of RL agent are able to learn effectively, outperforming the inverse model ap- proach or the chit-chat imitation baseline, and can converse naturally with their dialogue partner to achieve goals. Open-domain goal-oriented dialogue agents In short, our main contributions are: a new family of tasks that combines goal-oriented dialogue and chit-chat in a rich, fully realized environment, and the results and analysis of scalable RL algorithms and behavioral-cloning models (and simple heuristic methods) on these tasks. # 2. LIGHT Game Environment We work in the LIGHT game environment (Urbanek et al., 2019), which is a multi-user text-based game. Characters can speak to each other via free text, send emote actions like applaud, nod or pout (22 emote types in total), and take actions to move to different locations and interact with objects (e.g. get cutlery, put cutlery in drawer, etc.), see Appendix B for a full list of game actions. The game engine itself is formally defined as a graph, where each location, object and character is a node, and they are connected by labeled edges, for example contained-in, path- to or has-property. Actions in the game result in changes in state of the graph. To a player (agent) a local view of the graph can be seen expressed as text, as are the game actions and changes of state. This text then naturally interleaves with the dialogue utterances of the speakers as well to form an input context sequence from which a character can base their subsequent actions. See Appendix Figure 3 for an example episode of interactions between two humans in a given environment. To make the world and its textual descriptions, LIGHT con- sists of a large set of human-written game locations, char- acters, and objects, all based within a fantasy medieval set- ting. Their names, descriptions and properties were crowd- sourced, yielding a total of 663 locations, 1755 characters, and 3462 objects. They range from beaches with crabs and seaweed to crypts with archaeologists and coffins, yielding an extremely rich environment for agents to learn within. Crowdworkers were then asked to play the role of charac- ters within the game. This involved them making utterances, game actions and emotes, while interacting with each other (in pairs). The resulting gameplay data consists of 10,777 episodes with an average of 18.3 actions each of rich human play. These are split into train (8538), validation (500) and test (1739) portions, the latter being split into new episodes in existing settings (test seen, 1000) and completely new settings (test unseen, 739). 
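The graph formulation of the game engine described above can be made concrete with a small sketch. The code below is purely illustrative and is not the actual LIGHT engine: it assumes a simplified state in which nodes are connected by labeled edges such as contained-in, and a "get" action is only applied when constraints of the kind listed in Appendix B hold (actor and object in the same room, object gettable), after which the graph is rewired so the actor carries the object.

```python
# Minimal, hypothetical sketch of a graph-based game state in the spirit of the
# LIGHT engine described above; names and structure are illustrative only.
class GameGraph:
    def __init__(self):
        self.edges = set()          # (subject, label, object) triples
        self.properties = {}        # node -> set of property strings

    def add(self, subj, label, obj):
        self.edges.add((subj, label, obj))

    def location_of(self, node):
        for subj, label, obj in self.edges:
            if subj == node and label == "contained-in":
                return obj
        return None

    def try_get(self, actor, obj):
        """Apply 'get obj' if its constraints hold: same room, object gettable."""
        same_room = self.location_of(actor) == self.location_of(obj)
        gettable = "gettable" in self.properties.get(obj, set())
        if not (same_room and gettable):
            return False
        # Outcome: the actor is now carrying the object.
        self.edges.discard((obj, "contained-in", self.location_of(obj)))
        self.add(obj, "contained-in", actor)
        return True

world = GameGraph()
world.add("villager", "contained-in", "castle gates")
world.add("cutlery", "contained-in", "castle gates")
world.properties["cutlery"] = {"gettable"}
print(world.try_get("villager", "cutlery"))   # True: constraints satisfied
print(world.location_of("cutlery"))           # now carried by the villager
```

A local text view of such a graph, plus the dialogue history, is what forms the context an agent conditions on.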
Players were not given specific goals, but instead asked to play the role of the given character convincingly; during play some of them effectively defined their own goals during the interactions, see Appendix Figure 3. Existing work (Urbanek et al., 2019) does not consider using this data to learn goal-based tasks, but instead has only used this for chit-chat and action imitation learning.

# 3. Tasks

The tasks we introduce in this work involve achieving open-domain goals during interaction between two agents in a given LIGHT scenario. One of the agents, which we will call the "environment agent" and write in symbols as Menv, together with the game engine, effectively functions as an environment for the other agent, which we will write Mplayer. We assume that the environment agent is fixed; in this work it will be a model trained via behavioral cloning from human-human interaction data. Mplayer must conduct open-ended dialogue such that a given goal action is executed in the future by the environment agent.

More formally: the two agents Menv and Mplayer are given their views of the scenario (Denv and Dplayer respectively). These consist of the setting name, scenario description, character names, and their own persona, all described as a sequence of text (see Fig 1). Note that each agent can only access their own persona but not the persona of the partner with whom they are conversing, but they do know the name of their partner. Denote by t the time-step of the environment, by $U^{player}_t$ and $U^{env}_t$ the utterances of the agents Mplayer and Menv respectively, and denote by $A^{env}_t$ the environment actions taken by Menv. Hence the interaction sequence looks like

$S_t = [U^{player}_0, (U^{env}_0, A^{env}_0), U^{player}_1, (U^{env}_1, A^{env}_1), \ldots, U^{player}_n, (U^{env}_n, A^{env}_n)].$   (1)

The agent Mplayer is additionally given a persuasion goal g to achieve. That is, the objective of Mplayer is for Menv to take the action g. An episode ends when $A^{env}_t = g$ or when n becomes larger than a set number of turns.

Goals We experiment separately with two different types of goals: game actions and emote actions. We use the same train, valid, test (seen and unseen) split of the original human-human LIGHT episodes, assign roles Mplayer and Menv randomly, and randomly pick an action by Menv that occurs in the episode as the goal. We can then present the corresponding setting to our agents in order to form a new interaction, but within the same scenario and with a goal that was naturally desirable and achievable within that setting.

In our experiments, Mplayer only speaks, it does not perform game or emote actions. This was chosen in order to study grounded dialogue between agents; it guarantees that the player cannot force the goal to be reached by performing actions itself. It has to produce appropriate utterances $U^{player}$ such that Menv eventually takes the action g.

Observations The observation $O_t = (D^{player}, S_{t-1}, g)$ given to a model at time t consists of the agent's setting description ($D^{player}$), the utterance and action history up to that time step ($S_{t-1}$), and the agent's goal (g). Our models for Mplayer consume $O_t$ as a flattened sequence of tokens, and return a dialogue utterance $U^{player}_t$. Each structured component is represented in the flattened sequence separated by a special token denoting the types, e.g. names, settings, etc., see Fig. 1.

(Figure 1 shows the player side, with fields self_name: villager, partner_name: knight, self_persona, setting_name: Castle gates, outside castle, and the goal partner_act_goal: emote smile, producing the predicted utterance "Wow a real knight, thanks for keeping us all safe! I'd love to be..."; the environment side, with fields self_name: knight, partner_name: villager and its own persona, replies "It can be tough but I'm happy to do it. I will protect the realm", and the game engine scores candidates, predicts an action, and updates the game state.)

Figure 1. Example interaction in the described task setup (single turn). Here the RL agent Mplayer would receive a reward as the environment agent Menv took the desired action g.

# 3.1. Reinforcement learning formulation

Our task set-up can be easily framed as a Markov decision process. Because the entire history and goal is given to Mplayer, the environment is Markovian. For the reward, we can give a terminal reward of +1 only if the goal g is achieved and 0 otherwise, i.e., it is +1 if the environment agent takes the goal action g. The episode ends after n steps. In our experiments we consider n = 1 and n = 3. When we formulate our tasks as a reinforcement learning problem, we will also refer to Mplayer as the "RL agent".

# 4. Models

In this section we describe the models for Menv and Mplayer. In this work these are retrieval models, using the LIGHT dialogue training corpus as candidates (111k utterances). We leave generative models to future work.

Base agent architecture For all our models we adopt the same base architecture, which is a 12-layer bidirectional transformer (Vaswani et al., 2017) pre-trained on a large dialogue corpus (Reddit, 174M examples), and then fine-tuned on our task. To score retrieval candidates, we use a bi-encoder as in (Humeau et al., 2019; Urbanek et al., 2019). That is, two transformers are used, one to encode the context, and another to encode a candidate response, and a dot product between the first output vector of each scores the match. To produce a dialogue utterance, we take the utterance with the largest score from the training set candidates (111k in this case). The same procedure is followed for actions and emotes. For actions, the candidates are the set of admissible actions at that game state, which are provided by the game engine, for example get apple is only available in the candidate set if it is a valid action (an apple is present in the room). For emotes, all 22 candidates are always available. To train the model, a cross entropy loss is used. Similar to Mazaré et al. (2018), during training we consider the other elements of the batch as negatives.

Environment agent The environment agent is the base agent described above, and stays fixed over episodes where an RL agent is trained. This helps guarantee our RL models stick to using the semantics of natural language (English) rather than so-called language drift of learning a new emergent language on the same tokens (Lee et al., 2019).

RL agents We design two RL approaches for our tasks: learn to pick the right latent discrete variables (topics) that lead to goal-achieving utterances $U^{player}$; or learn to pick the correct $U^{player}$ from the top K candidates. These are described in more detail in Sections 4.2 and 4.3. We also discuss a baseline "inverse" model trained via behavioral cloning on the human-human data.
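To make the bi-encoder scoring and in-batch negatives described under "Base agent architecture" concrete, here is a minimal PyTorch sketch. The two toy encoders stand in for the 12-layer pre-trained transformers, and every class and variable name below is illustrative rather than taken from the released code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy stand-ins for the two transformer encoders of the bi-encoder; in the paper
# these are 12-layer pre-trained transformers whose first output vector is used.
class ToyEncoder(nn.Module):
    def __init__(self, vocab_size=1000, dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)

    def forward(self, token_ids):            # (batch, seq_len) -> (batch, dim)
        return self.embed(token_ids).mean(dim=1)

context_encoder = ToyEncoder()
candidate_encoder = ToyEncoder()

def in_batch_negative_loss(context_tokens, candidate_tokens):
    """Cross entropy where each context's positive is its own candidate and the
    other candidates in the batch act as negatives."""
    ctx = context_encoder(context_tokens)          # (B, dim)
    cand = candidate_encoder(candidate_tokens)     # (B, dim)
    scores = ctx @ cand.t()                        # (B, B) dot-product scores
    targets = torch.arange(scores.size(0))         # positives on the diagonal
    return F.cross_entropy(scores, targets)

# At inference, retrieval just picks the highest-scoring candidate utterance.
def retrieve(context_tokens, all_candidate_tokens):
    with torch.no_grad():
        ctx = context_encoder(context_tokens)                  # (1, dim)
        cands = candidate_encoder(all_candidate_tokens)        # (N, dim)
        return torch.argmax(cands @ ctx.squeeze(0)).item()

batch_ctx = torch.randint(0, 1000, (4, 12))
batch_cand = torch.randint(0, 1000, (4, 8))
print(in_batch_negative_loss(batch_ctx, batch_cand).item())
print(retrieve(batch_ctx[:1], batch_cand))
```

The same scoring scheme applies whether the candidates are the 111k training utterances, the admissible game actions, or the 22 emotes.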
# 4.1. Inverse model

We consider an inverse model, trained to imitate human actions given a goal, as both a baseline for comparing to RL models, and for producing weights from which we can fine-tune. The inverse model consists of a bi-encoder, as described above, which takes as input an observation $O_t$, and outputs an utterance. We train it by extracting from the human-human game logs training set (which does not have goals) every instance where a game action occurs at time t in $S_t$, that is where

$S_t = [(U^{player}_1, A^{player}_1), (U^{env}_1, A^{env}_1), \ldots, (U^{player}_t, A^{player}_t), (U^{env}_t, A^{env}_t)],$   (2)

and $A^{env}_t$ is not null (no action that turn); note, $A^{player}_i$ for $0 < i \leq t$ or $A^{env}_i$ for $0 < i < t$ might be null. We then construct a training example for the inverse model with observation $(D^{player}, g = A^{env}_t, S_{t-1})$, i.e. setting the goal g to be $A^{env}_t$, and with the desired action to be taken by the agent as $U^{player}_t$. Here we use the subscripts "player" and "env" just to mark the relative positions in the sequence, as all actions and utterances come from the human logs. Note also that unlike the RL agents we train, the human in the player agent "position" can take game actions.

We can thus train this model in a supervised manner using a cross entropy loss as described before. This model does not learn a policy interactively, and hence might not learn to plan or strategize optimally for goal completion. The data distribution it is trained on is different from the data distribution seen by the RL agents. However, it serves as a strong baseline. Further, when training our RL agents, we initialize their weights to the weights of this model, and then fine-tune from that point.

# 4.2. Latent Discrete Variable (Topic) Model

Optimizing all the parameters of a large transformer architecture by RL is both incredibly costly in data efficiency and computing time, and is also known to have the problem of language drift (Lee et al., 2019) – that is, there is no guarantee after training with self-chat that the models will output recognizable natural language utterances. A solution to both problems is to train most of the parameters of the model with human-human language data, and then to either disentangle or only optimize some of the parameters with model self-chat (Yarats & Lewis, 2017). Here, we propose a straight-forward model for that purpose.

We assume an RL agent that consists of two components. The first component $F_C(O) = P_C(T_s(O))$ maps from an observation to a discrete variable with C possible values. It consists of a chain of two functions: a transformer $T_s$ that takes in the observation, and outputs a state representation $\tilde{s}$, and a policy chooser $c = P(\tilde{s}) \in (1, \ldots, C)$ which takes in the state representation and outputs the value of the discrete latent variable.

The second component $T_u(O, c)$ is an additional transformer that takes as input the observation as well as the output of the first component, and outputs a dialogue utterance. The entire model is thus the chain $u = T_u(O, P_C(T_s(O)))$. We make this explicit decomposition so that we can train only part of the model with RL; note that the "action" trained via RL is choosing c, not outputting the final utterance.

Initial topics We first pre-train the transformer $T_s$ using the inverse model described in Section 4.1, which produces a vectorial representation of a given observation. We then run K-means over the vectorial representations of all observations from the training set to provide the mapping to one of C values, which represent dialogue topics, which we use as our initial function $P_C(\tilde{s})$. These two functions together give us our initialization of $F_C$. Table 1 shows the cluster ID and the topic denoted by that cluster along with the most representative sentences (closest to the center) for that cluster for a model trained with 50 topics. As we can see, the clusters learnt can be coherent about a topic. We use the set of topics as a set of actions A for our RL setup.

CID | Topic | Representative Sentences
19 | animal sounds | 'Meow! Purr!', 'Bah-Buk! Tasty!', 'Woof! Very!', 'Bock! Bock!'
12 | find the cost | 'I would love some fruit. What are your prices?', 'They are beautiful. How much do the cost?', 'It flows easily, how much are you selling it for?'
28 | prayer, God | 'Then your poor life is a sign from God for you to join us in the church and serve him!', 'If you say so priest. From now I will pray every night for wealth and good food!', 'Continue to love, worship, and serve Him.'
45 | ask favor | 'Yes but do you mind doing me a favor?', 'Since I have helped you, could you do me a favor?', 'If I offer to solve your problem, what will you personally do for me in return?'

Table 1. Clusters learnt from the dialogue utterances (Clusters = 50). 'CID' denotes the cluster ID.

From c to A Given our initial choice of $F_C$, we can also pre-train $T_u$. We simply take our initial human-human training data, and for each observation append the topic computed by $F_C$ to it. This allows our model to be able to generate an action (utterance) conditional on both an input and a topic. We can now train a policy by RL that optimizes the topic at any given point in the episode.

Policy training We keep the pre-trained portions of the model $T_u$ and $T_s$ fixed and during fine-tuning only optimize $P_C$. The cluster chooser $P_C$ is redefined (from the initial K-means) to be an MLP network consisting of 2 layers. A discrete action is sampled from a categorical probability distribution over the possible topics, given by $c_t \sim \text{Categorical}(h^2_t)$, where $h^2_t = \tanh(W_2 \tanh(W_1 s_t + b_1) + b_2)$. The state vector $s_t$ also encodes the goal g and thus, the policy is conditioned on the goal g of the agent. Hence, the policy can learn strategies that will result in picking actions at each time step t that will help the agent to achieve its goal g. As our RL agent can only choose topics, it cannot easily redefine the meaning of words to cause language drift. We use the Advantage Actor-Critic implementation A2C (Kostrikov, 2018) to train the policy and the value function in both this and the subsequently described Top-K model.
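A minimal sketch of the two small trainable heads the RL agents optimize: the two-layer cluster chooser of the Topic model above, which samples a discrete topic c_t from a categorical distribution, and the candidate scorer t_i = (A s̃ + b)^T v_i of the Top-K model described in Section 4.3 below. The frozen transformers and candidate embeddings are replaced here by random tensors, and all names are illustrative, not the released implementation.

```python
import torch
import torch.nn as nn

C, K, dim = 200, 50, 64                       # number of topics / candidates

class TopicPolicy(nn.Module):
    """Two-layer cluster chooser P_C: s_t -> categorical over C topics."""
    def __init__(self, dim, num_topics):
        super().__init__()
        self.w1 = nn.Linear(dim, dim)
        self.w2 = nn.Linear(dim, num_topics)

    def forward(self, s_t):
        h2 = torch.tanh(self.w2(torch.tanh(self.w1(s_t))))
        dist = torch.distributions.Categorical(logits=h2)
        c_t = dist.sample()                   # discrete topic action
        return c_t, dist.log_prob(c_t)        # log-prob is what A2C needs

class TopKPolicy(nn.Module):
    """Scores K candidate embeddings v_i with t_i = (A s + b)^T v_i."""
    def __init__(self, dim):
        super().__init__()
        self.A = nn.Linear(dim, dim)          # the map A and bias b

    def forward(self, s_t, candidate_embs):
        query = self.A(s_t)                                                   # (B, dim)
        scores = torch.bmm(candidate_embs, query.unsqueeze(-1)).squeeze(-1)   # (B, K)
        dist = torch.distributions.Categorical(logits=scores)
        idx = dist.sample()                   # which of the K utterances to say
        return idx, dist.log_prob(idx)

# Stand-ins for the frozen inverse-model encoders.
s_t = torch.randn(8, dim)                     # state/context embeddings
v = torch.randn(8, K, dim)                    # top-K candidate embeddings

topic_c, topic_logp = TopicPolicy(dim, C)(s_t)
topk_i, topk_logp = TopKPolicy(dim)(s_t, v)
print(topic_c.shape, topk_i.shape)            # torch.Size([8]) torch.Size([8])
```

In both cases only these small heads receive gradients during RL, which is what keeps the agents anchored to the language of the pre-trained retrieval model.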
# 4.3. Top-K model

The Top-K model, related to (Dulac-Arnold et al., 2015), is another approach to keeping the number of trainable parameters small. As above it keeps close to the base retrieval model to avoid drift. It first uses the inverse model to get a context embedding $\tilde{s}$ from the observation, and a list of K candidate utterance embeddings $v_1, \ldots, v_K$ corresponding to utterances $u_1, \ldots, u_K$. These are the encodings by the inverse model of the K utterances it considers most likely given the context and goal. We form scores $t_i = (A\tilde{s} + b)^T v_i$, and obtain a probability distribution over these K candidates for our policy:

$\pi(u_i | \text{context}) = \text{softmax}(t_0, \ldots, t_K)(i).$   (3)

Here the trainable parameters of the RL agent are the map A and biases b.

booking. Hence, each task typically focuses on a narrow slice of natural language and world knowledge for a specialized domain.
Earlier work focused on labeled state representations, slot filling mechanisms and dialogue man- agers (Rieser & Lemon, 2011), and more recent work has shifted to an end-to-end approach (Bordes et al., 2017), in line with chit-chat models, but still the two sets of tasks are rarely considered together, or by using the same methods. Recently, Tang et al. (2019) used coarse-grained keywords as targets for open-domain chit-chat but in this work the target can be achieved when either the human or the agent uses the keyword in the response. Alternatively, we can train a small (2-layer) Transformer model Tw that takes as input the set {˜s, v1, ...vK}. Instead of a softmax over dot products ti as in (3), we use the attention weights in the last layer of Tw above ˜s against the candidates as the distribution over the candidates for sampling an utterance. In this case, the weights of Tw are the trainable parameters of the RL agent. We call the former model a policy “bi-encoder” (Top-K-Bi in tables) and the latter simply Top-K. # 5. Related work Chit-chat dialogue There is an increasing body of work in the domain of chit-chat, where the primary approaches be- ing currently tried are end-to-end neural approaches. They are typically large pre-trained and then fine-tuned transform- ers, either generative or retrieval. Retrieval models work best, or match generative models, on a number of tasks (Zhang et al., 2018; Dinan et al., 2018; Li et al., 2019). Our work shares a commonality with these approaches in that the original LIGHT dialogue data we use has no specified goals, and humans chit-chat together (and act). Thus, the conversations cover a rich number of diverse topics. In Ur- banek et al. (2019) models were trained in a similar fashion to chit-chat task models, and we adopt similar architectures here, but instead adapt them to learn to pursue goals. RL for dialogue The classical goal-oriented dialogue lit- erature studies RL extensively (Singh et al., 2000). Typi- cally, they used RL to improve dialogue managers, which manage transitions between dialogue states (Singh et al., 2002; Pietquin et al., 2011; Rieser & Lemon, 2011; Gasic et al., 2013; Fatemi et al., 2016). Recent works have focused more on end-to-end learning. Some works have focused on self-play type mechanisms for end-to-end reinforcement learning, where the reward is derived from the goal. A re- lated approach to ours is the negotiation task of Lewis et al. (2017); Yarats & Lewis (2017), which requires two agents to swap 3 item types (hats, balls, books) where the value of the items is different for the two agents, and derives their personal reward. In contrast, our setup encompasses a rich world of settings and characters – with 3462 object types, and a corresponding large number of actions. This is re- flected in the vocabulary size itself (∼32,000 versus ∼2,000 in the negotiation tasks). Other notable uses of RL in dia- logue include within visual question answering (Das et al., 2017), in the domain of chit-chat where RL has been used to decrease repetitive and generic responses through the the use of self-play (Li et al., 2016b), and through human-bot conversation (Sankar & Ravi, 2019). 
Goal-oriented dialogue Traditional goal-oriented dia- logue has focused on narrow tasks that would typically be useful for a dialogue-based assistant, for example restaurant (Henderson et al., 2014), taxi, train, and hotel (Budzianowski et al., 2018) or trip (El Asri et al., 2017) RL for language and games RL is used extensively for learning to play games, one of the most well known ex- amples being AlphaGo (Silver et al., 2016). Since then, language in games has started to be more deeply explored, for example in graphical games such as Minecraft (Oh et al., 2017), Real-time strategy war games (Hu et al., 2019), or in Open-domain goal-oriented dialogue agents Test Seen Test Unseen (n = 1) (n = 3) (n = 1) (n = 3) Model Goal Type Reward Reward Turns Reward Reward Turns Random Utterance Inverse model (no goal) Inverse model Top-K RL Top-K-BE RL Topic RL Top-K RL (1-step 3x) Topic RL (1-step 3x) game act game act game act game act game act game act game act game act 0.183 0.185 0.223 0.402 0.327 0.359 - - 0.349 0.345 0.414 0.537 0.491 0.561 0.526 0.493 2.54 2.55 2.42 2.18 2.26 2.15 2.14 2.22 0.161 0.160 0.193 0.331 0.278 0.313 - - 0.344 0.345 0.410 0.449 0.442 0.496 0.475 0.479 2.57 2.57 2.48 2.35 2.34 2.26 2.26 2.29 Random Utterance Inverse model (no goal) Inverse model Top-K RL Top-K-BE RL Topic RL Top-K RL (1-step 3x) Topic RL (1-step 3x) emote emote emote emote emote emote emote emote 0.086 0.072 0.089 0.166 0.219 0.247 - - 0.200 0.219 0.262 0.400 0.485 0.482 0.336 0.406 2.79 2.77 2.72 2.55 2.46 2.43 2.58 2.42 0.061 0.075 0.088 0.131 0.171 0.208 - - 0.185 0.212 0.266 0.349 0.436 0.427 0.293 0.348 2.81 2.78 2.74 2.59 2.53 2.49 2.65 2.50 Table 2. Results on the test seen and unseen environments for our models. Game action task — n=3 topic RL -*- n=3 inverse model — n=1 topic RL <e- n=1inverse model | 0 1000 2000 ©3000 :+~=—«4000-~—=—«5000 num. episodes (*10?) 6000 7000 Emote action task os 07 06 — n= 3 topic RL = n= 3 inverse model _ — n= 1 topic RL ©- n= 1 inverse model ° 1000 2000 3000 num. episodes (*10?) 4000 5000 Game action task — n=3 topic RL -*- n=3 inverse model — n=1 topic RL <e- n=1inverse model | 0 1000 2000 ©3000 :+~=—«4000-~—=—«5000 num. episodes (*10?) 6000 7000 Emote action task os 07 06 — n= 3 topic RL = n= 3 inverse model _ — n= 1 topic RL ©- n= 1 inverse model ° 1000 2000 3000 num. episodes (*10?) 4000 5000 Figure 2. Topic RL model training for n = 1 and n = 3 step goals for game actions (left) and emotes (right), comparing to the inverse model baselines. Darker lines indicate smoothed plots. Training using 8 V100 machines took ∼2 weeks (1 step), ∼5 weeks (3 step). text adventure games (Narasimhan et al., 2015; Cˆot´e et al., 2018). The latter are related to our setting. However, those approaches use RL to optimize the set of actions given feed- back in a single-player rather than multi-player game, so the text only refers to the environment, and there is no dialogue or actions from other agents. Our work focuses on the latter. # 6. Experiments models. The Random Utterance model picks a random utterance from the set of all candidates and returns that response to the environment. We also report results for the inverse model which does not get a goal to achieve. Our main results for both seen and unseen test environments (§2) are given in Table 2. We report the average reward and for n = 3 the average number of turns before completion. 
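The reported metrics follow directly from the task definition in Section 3.1: an episode yields reward 1 if the environment agent executes the goal action within the allowed number of turns, and 0 otherwise, with the turn count recording when the goal was reached. The loop below is a hypothetical evaluation harness (the player, env_agent and episodes objects are stand-ins, not the released code) that computes the average reward and average turns of the kind reported in Table 2.

```python
def run_episode(player, env_agent, scenario, goal, max_turns=3):
    """Returns (reward, turns_used) for one goal-oriented episode."""
    history = []
    for turn in range(1, max_turns + 1):
        utterance = player.speak(scenario, history, goal)    # the RL agent only talks
        history.append(("player", utterance))
        env_utterance, env_action = env_agent.respond(scenario, history)
        history.append(("env", env_utterance, env_action))
        if env_action == goal:                               # terminal reward of +1
            return 1.0, turn
    return 0.0, max_turns

def evaluate(player, env_agent, episodes, max_turns=3):
    rewards, turns = [], []
    for scenario, goal in episodes:
        r, t = run_episode(player, env_agent, scenario, goal, max_turns)
        rewards.append(r)
        turns.append(t)
    return sum(rewards) / len(rewards), sum(turns) / len(turns)

class ScriptedAgent:
    """Trivial stand-in agents so the loop above can be exercised end to end."""
    def speak(self, scenario, history, goal):
        return f"please {goal}"
    def respond(self, scenario, history):
        # Pretend the partner complies on the second turn.
        return ("as you wish", "hug player" if len(history) >= 3 else None)

episodes = [("castle gates", "hug player"), ("swamp bank", "hug player")]
print(evaluate(ScriptedAgent(), ScriptedAgent(), episodes))  # (1.0, 2.0)
```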
The results show clear improvements for our Topic RL (§4.2) and Top-K RL (§4.3) compared to the inverse model and other baselines for each n, for both game actions and emotes. We compare our various models on the game action and emote action tasks. We experiment with differing number of steps n allowed to complete the goal, n = 1 and n = 3. Apart from the models described in Sec. 4, we design two naive baselines to check the sanity of our environment We show the training curves for Topic RL in Fig. 2, report- ing rewards averaged over the batch (512 for n = 1, and 128 for n = 3). They show relatively smooth improvements over time, with clear gains over the baseline. As a sanity Open-domain goal-oriented dialogue agents Self: guard Partner: archer Self: swimmer Partner: turtles Persona: I guard the castle. I guard the king. I would kill to protect the royal family Persona: I am a huge fan of deep sea exploration, but I take any chance I can get to go for a swim... Setting: The armory, Inside Tower. The near top of the tower 6 feet before the very top. Where the watchers keep their eye... Setting: Bank, Swamp This is a grassy area that surrounds much of the swamp. It’s a plain field with some trees nearby along... Uplayer 0 This is the armory! The king keeps the best weapons here. Take a look - Uplayer 0 Just keep taking good care of your beautiful little turtle family! Your species is quite unique and I love to see you about when I go for a swim. Uenv 0 Hello, I need to get into the palace to see the king. I think he might like to see these weapons. Uenv 0 Well, thank you for that. Do you happen to know where my other turtle friend is? You haven’t captured any turtles have you? Aenv 0 get weapon Aenv 0 hug swimmer Self: townsperson Partner: villager Self: songbird Partner: wasp Persona: We are the people who live in this town. We are common, and there are many... Persona: I fly high and bring beautiful music to the people. I soar high and low going where the ... Setting: The Lagoon, Lake The Lagoon is a dark and mysterious place during the night hours. A lot of moss and lily... Setting: Meadow, Countryside Large clear outdoor meadow. Flowers of blue and white appearing in bunches here and there. The ... Uplayer 0 Uenv 0 It is cold up here. Would you like my coat Oh yes please if I may. My shoe has become sodden from running to the market I should love to dry it a bit. Uplayer 0 Uenv 0 Get out of here, wasp! You? Fly away from me? You’re in my forest, bird. I control this land. Aenv 0 remove Cloak Aenv 0 hit a songbird Table 3. Example 1-step episodes where after the Topic RL agent’s utterance Uplayer equal to the RL agent’s goal g. Our RL agent both makes natural utterances given the situation, and that elicit the desired goal. Train (n = 1) (n = 3) Model Goal Reward Reward Turns Top-K RL Topic RL Top-K RL (1-st. 3x) Topic RL (1-st. 3x) act act act act 0.677 0.539 - - 0.752 0.752 0.737 0.660 1.72 1.87 1.62 1.87 Top-K RL Topic RL Top-K RL (1st. 3x) Topic RL (1-st. 3x) emote emote emote emote 0.498 0.483 - - 0.668 0.612 0.587 0.570 2.13 2.22 1.96 1.99 Table 4. Results on the training environment for our models. Analysis of utterance choice To understand the seman- tics the models are learning that ground language to actions, we visualize the top scoring utterances, averaged over their probabilities on the 1-step test set, broken down by verb type. We observe a clear improvement in semantic connec- tion for the Topic RL model over the inverse model. 
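The "Analysis of utterance choice" above averages each utterance's probability on the 1-step test set and inspects the top scorers per goal verb. The snippet below sketches that aggregation over a hypothetical record format of (goal string, utterance, probability) tuples; it is not the authors' analysis code, and the goal strings are assumed to start with the verb.

```python
from collections import defaultdict

def top_utterances_by_verb(records, top_n=3):
    """Average each utterance's probability per goal verb and return the
    highest-scoring utterances for every verb."""
    sums = defaultdict(lambda: defaultdict(float))
    counts = defaultdict(lambda: defaultdict(int))
    for goal, utterance, prob in records:
        verb = goal.split()[0]                    # e.g. 'hug', 'drink', 'hit'
        sums[verb][utterance] += prob
        counts[verb][utterance] += 1
    ranked = {}
    for verb, utt_sums in sums.items():
        means = {u: s / counts[verb][u] for u, s in utt_sums.items()}
        ranked[verb] = sorted(means, key=means.get, reverse=True)[:top_n]
    return ranked

records = [
    ("drink mead", "Have a taste of this", 0.9),
    ("drink mead", "Hello there", 0.2),
    ("hug swimmer", "How I love being pampered by you, sweetheart!", 0.8),
]
print(top_utterances_by_verb(records))
```

The same grouping by verb, applied to success indicators instead of probabilities, gives the per-verb success percentages of Table 5.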
For example utterances such as “Have a taste of this” are highly scoring for drink goals, “hmmnnnn.. this sure smells nice” for eat goals, “Ew you vile beast, do not touch me! I will have you removed” for hit goals, and “How I love being pampered by you, sweetheart” for hug goals. Given there are ∼111,000 possible utterances in our setting, the model has clearly learned meaningful representations. Detailed results are given in Appendix Tables 9 and 10 for the inverse model and Topic RL model respectively. check we also tried, after training, to replace the Topic RL policy with random topic prediction, which yielded poor results, e.g. 0.217 reward for n = 1 test seen game actions. Our model is clearly learning appropriate topic acts. Example successful episodes We show examples of suc- cessful utterances, achieving goal actions in Fig. 3 for a diverse range of scenarios, actions and language. For ex- ample, for the guard’s goal to encourage the archer to get weapon the Topic RL model utters “This is the armory! The king keeps the best weapons here. Take a look”, which ends up leading to the desired action in the subsequent turn. More examples (for n = 3) are given in Appendix D. Train vs. test performance We compare training perfor- mance of our models in Table 4. We see the same trends that models that performed better on test fit better on train (e.g. Top-K vs. Topic RL on 1-step tasks). Nevertheless, we do observe significant overfitting can occur, indicating that future work could explore either models that improve through better generalization, or by exploiting more training data – for example by self-play with more goals, rather than just using goals from human logs, as we have done here. Open-domain goal-oriented dialogue agents 1-Step 1-Step 3x 3-Step Count Topic Top-K Topic Top-K Topic Top-K 213 172 178 136 127 55 27 25 10 10 3 27.70 43.02 61.26 33.09 9.45 47.27 0.00 0.00 30.00 0.00 33.33 28.17 46.51 69.82 41.91 13.39 50.91 0.00 0.00 10.00 0.00 33.33 37.56 63.95 72.52 50.00 22.83 63.64 18.52 8.00 70.00 20.00 33.33 43.66 66.86 81.53 54.41 22.83 63.64 18.52 12.00 20.00 30.00 33.33 44.13 63.95 85.13 56.62 27.56 80.00 7.41 4.00 60.00 20.00 33.33 40.85 75.58 85.56 48.53 26.77 54.55 7.41 4.00 40.00 10.00 33.33 Table 5. Verb success in percentage on 1000 test seen episodes. The 3-step model performs best for high and medium frequency verbs. 1-Step 1-Step 3x 3-Step Topic Top-K Top-K-Bi Topic Top-K Top-K-Bi Topic Top-K Top-K-Bi 1-step achievable 1-step unachievable 0.452 0.000 0.505 0.005 0.407 0.005 0.616 0.044 0.647 0.058 0.587 0.044 0.686 0.068 0.664 0.049 0.620 0.078 Table 6. Test seen breakdown by difficulty (1-step achievable or not). The 3-step models outperform the 1-step 3x models on both sets. Model capacity We evaluate different values of K or numbers of topics for Top-K and Topic RL. Full results are given in Appendix Table 7. They show that increasing the capacity of both models improves performance up to 200 clusters or K = 200, after which performance saturates. However, K = 200 (56.1%) is substantially better than K = 50 (47.7%) on the 3-step task, for example. inferior for this approach. Breaking down further by goal type (Table 5 and Appendix Table 8) shows that there are large improvements for the 3-step model on goals which are more often expressed in the data. Table 6 shows that 3- step models outperform the 1-step 3x models on both 1-step achievable and the harder 1-step unachievable goals. Train- ing performance (Table 4) further validates these results. 
Performance breakdown by goal We show the break- down of test performance by goal type in Table 5 (splitting by verb type) and Appendix Table 8 (splitting by emote type). The results show that the easiest tasks are common actions with clear differentiation such as hug (85% success) and hit (75%). Actions like get, drop, give which are more confusable have somewhat lower numbers, with more rare actions (e.g. wear) faring worse. 3-step task repeats We analyze the number of repeated ut- terances in an episode. The Topic RL model repeats at least one utterance 25.8% of the time, with 15.59% utterances overall repeated. The 1-step 3x baseline in comparison re- peats 37.3% at least once, and 22.94% on average. We note that repeating an utterance may possibly bring the desired goal in some cases, just as in real life. # 7. Conclusion Performance breakdown by difficulty We can break down the test results into difficulty by considering in the 3-step task, which examples are 1-step achievable given the model’s possible actions under the policy (i.e. the pos- sible Top-K utterances or Topic RL cluster choices), and reporting results separately. The results are given in Table 6. They show that non 1-step achievable goals are much harder, representing a significant challenge to future systems. 1-step 3x baseline To investigate further the quality of our 3-step task models, we consider an additional baseline of taking a 1-step task trained model (Topic RL or Top-K) and applying it on the 3-step task, which it has not been optimized for. The results in Table 2 show test results are In this paper, we investigate agents that can interact (speak or act) and can achieve goals in a rich world with diverse lan- guage, bridging the gap between chit-chat and goal-oriented dialogue. We achieve this by defining a task for an agent, where the goal is for the other player to execute a particular action. We explore two reinforcement learning approaches to solve this task, and compare them against a strong inverse model baseline. We show that these approaches effectively learn dialogue strategies that lead to successful completion of goals, while producing natural chat. Future work should develop improved agents that learn to act and speak in natural language at scale in our proposed open-domain task environment. This setup is exciting be- Open-domain goal-oriented dialogue agents cause it can be further generalized to richer and richer goal (game) states as we develop models capable of them. the 15th Annual Meeting of the Special Interest Group on Discourse and Dialogue (SIGDIAL), pp. 263–272, 2014. # References Bordes, A., Boureau, Y.-L., and Weston, J. Learning end-to- end goal-oriented dialog. In Proceedings of the Interna- tional Conference on Learning Representations (ICLR), 2017. Budzianowski, P., Wen, T.-H., Tseng, B.-H., Casanueva, I., Ultes, S., Ramadan, O., and Gaˇsi´c, M. Multiwoz-a large- scale multi-domain wizard-of-oz dataset for task-oriented dialogue modelling. arXiv preprint arXiv:1810.00278, 2018. Cˆot´e, M.-A., K´ad´ar, ´A., Yuan, X., Kybartas, B., Barnes, T., Fine, E., Moore, J., Hausknecht, M., Asri, L. E., Adada, M., et al. Textworld: A learning environment for text- based games. arXiv preprint arXiv:1806.11532, 2018. Hu, H., Yarats, D., Gong, Q., Tian, Y., and Lewis, M. Hierarchical decision making by generating and fol- lowing natural language instructions. arXiv preprint arXiv:1906.00744, 2019. Humeau, S., Shuster, K., Lachaux, M.-A., and Weston, J. 
Real-time inference in multi-sentence tasks with deep pre- trained transformers. arXiv preprint arXiv:1905.01969, 2019. Kostrikov, I. Pytorch implementations algorithms. of https://github.com/ikostrikov/ pytorch-a2c-ppo-acktr-gail, 2018. reinforcement learning Lee, J., Cho, K., and Kiela, D. Countering language drift via visual grounding. arXiv preprint arXiv:1909.04499, 2019. Das, A., Kottur, S., Moura, J. M., Lee, S., and Batra, D. Learning cooperative visual dialog agents with deep rein- forcement learning. In Proceedings of the IEEE Interna- tional Conference on Computer Vision, pp. 2951–2960, 2017. Lewis, M., Yarats, D., Dauphin, Y. N., Parikh, D., and Batra, D. Deal or no deal? end-to-end learning for negotiation dialogues. arXiv preprint arXiv:1706.05125, 2017. Dinan, E., Roller, S., Shuster, K., Fan, A., Auli, M., and Weston, J. Wizard of wikipedia: Knowledge-powered conversational agents. arXiv preprint arXiv:1811.01241, 2018. Dulac-Arnold, G., Evans, R., van Hasselt, H., Sunehag, P., Lillicrap, T., Hunt, J., Mann, T., Weber, T., Degris, T., and Coppin, B. Deep reinforcement learning in large discrete action spaces. arXiv preprint arXiv:1512.07679, 2015. El Asri, L., Schulz, H., Sharma, S., Zumer, J., Harris, J., Fine, E., Mehrotra, R., and Suleman, K. Frames: a corpus for adding memory to goal-oriented dialogue sys- tems. In Proceedings of the 18th Annual SIGdial Meeting on Discourse and Dialogue, pp. 207–219, Saarbr¨ucken, Germany, August 2017. Association for Computational Linguistics. Fatemi, M., Asri, L. E., Schulz, H., He, J., and Suleman, K. Policy networks with two-stage training for dialogue systems. arXiv preprint arXiv:1606.03152, 2016. Gasic, M., Breslin, C., Henderson, M., Kim, D., Szummer, M., Thomson, B., Tsiakoulis, P., and Young, S. Pomdp- based dialogue manager adaptation to extended domains. In Proceedings of the SIGDIAL 2013 Conference, pp. 214–222, 2013. Henderson, M., Thomson, B., and Williams, J. D. The second dialog state tracking challenge. In Proceedings of Li, J., Galley, M., Brockett, C., Spithourakis, G. P., Gao, J., and Dolan, B. A persona-based neural conversation model. arXiv preprint arXiv:1603.06155, 2016a. Li, J., Monroe, W., Ritter, A., Galley, M., Gao, J., and Jurafsky, D. Deep reinforcement learning for dialogue generation. arXiv preprint arXiv:1606.01541, 2016b. Li, M., Weston, J., and Roller, S. Acute-eval: Improved dia- logue evaluation with optimized questions and multi-turn comparisons. arXiv preprint arXiv:1909.03087, 2019. Mazar´e, P.-E., Humeau, S., Raison, M., and Bordes, A. Training millions of personalized dialogue agents. arXiv preprint arXiv:1809.01984, 2018. Narasimhan, K., Kulkarni, T., and Barzilay, R. Language understanding for text-based games using deep reinforce- ment learning. arXiv preprint arXiv:1506.08941, 2015. Oh, J., Singh, S., Lee, H., and Kohli, P. Zero-shot task generalization with multi-task deep reinforcement learn- ing. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pp. 2661–2670. JMLR. org, 2017. Pietquin, O., Geist, M., Chandramohan, S., and Frezza-Buet, H. Sample-efficient batch reinforcement learning for dia- logue management optimization. ACM Transactions on Speech and Language Processing (TSLP), 7(3):7, 2011. Open-domain goal-oriented dialogue agents Raux, A., Langner, B., Bohus, D., Black, A. W., and Eske- nazi, M. Let’s go public! taking a spoken dialog system to the real world. In Ninth European conference on speech communication and technology, 2005. Rieser, V. 
and Lemon, O. Reinforcement learning for adap- tive dialogue systems: a data-driven methodology for dialogue management and natural language generation. Springer Science & Business Media, 2011. Sankar, C. and Ravi, S. Deep reinforcement learning for modeling chit-chat dialog with discrete attributes. arXiv preprint arXiv:1907.02848, 2019. Silver, D., Huang, A., Maddison, C. J., Guez, A., Sifre, L., Van Den Driessche, G., Schrittwieser, J., Antonoglou, I., Panneershelvam, V., Lanctot, M., et al. Mastering the game of go with deep neural networks and tree search. nature, 529(7587):484, 2016. Singh, S., Litman, D., Kearns, M., and Walker, M. Optimiz- ing dialogue management with reinforcement learning: Experiments with the njfun system. Journal of Artificial Intelligence Research, 16:105–133, 2002. Singh, S. P., Kearns, M. J., Litman, D. J., and Walker, M. A. Reinforcement learning for spoken dialogue systems. In Advances in Neural Information Processing Systems, pp. 956–962, 2000. Tang, J., Zhao, T., Xiong, C., Liang, X., Xing, E., and Hu, Z. Target-guided open-domain conversation. In Proceed- ings of the 57th Annual Meeting of the Association for Computational Linguistics, pp. 5624–5634, 2019. Urbanek, J., Fan, A., Karamcheti, S., Jain, S., Humeau, S., Dinan, E., Rockt¨aschel, T., Kiela, D., Szlam, A., and Weston, J. Learning to speak and act in a fantasy text adventure game. arXiv preprint arXiv:1903.03094, 2019. Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, Ł., and Polosukhin, I. Atten- tion is all you need. In Advances in Neural Information Processing Systems, pp. 5998–6008, 2017. Yang, Y., Yuan, S., Cer, D., Kong, S.-y., Constant, N., Pilar, P., Ge, H., Sung, Y.-H., Strope, B., and Kurzweil, R. Learning semantic textual similarity from conversations. arXiv preprint arXiv:1804.07754, 2018. Yarats, D. and Lewis, M. Hierarchical text generation arXiv preprint and planning for strategic dialogue. arXiv:1712.05846, 2017. Zhang, S., Dinan, E., Urbanek, J., Szlam, A., Kiela, D., and Weston, J. Personalizing dialogue agents: I have a dog, do you have pets too? arXiv preprint arXiv:1801.07243, 2018. Open-domain goal-oriented dialogue agents # A. Additional Results Test Seen Test Unseen (n = 1) (n = 3) (n = 1) (n = 3) Model Goal Type # Clusters Reward Reward Turns Reward Reward Turns Topic RL Topic RL Topic RL Topic RL Topic RL game act game act game act game act game act 50 100 200 500 1000 0.324 0.348 0.359 0.362 0.372 0.477 0.523 0.561 0.505 0.510 2.31 2.21 2.15 2.23 2.20 0.277 0.282 0.313 0.307 0.333 0.470 0.488 0.496 0.46 0.464 2.24 2.28 2.26 2.35 2.32 Top-K RL Top-K RL Top-K RL Top-K RL Top-K RL game act game act game act game act game act 50 100 200 500 1000 0.329 0.370 0.402 0.402 0.426 0.503 0.521 0.537 - - 2.24 2.12 2.18 - - 0.261 0.292 0.331 0.299 0.337 0.439 0.468 0.449 - - 2.39 2.33 2.35 - - Table 7. Results with different numbers of clusters (Topic RL) or candidates (Top-K RL). Some experiments were not completed because of resource limitations. 1-Step 1-Step 3x 3-Step Emote Count Topic Top-K Topic Top-K Topic Top-K Table 8. Emote success in percentage on 1000 test seen episodes. The 3-step model performs best for high and medium frequency verbs. Open-domain goal-oriented dialogue agents ’Why hello there, I haven;t seen you in awhile.’, ”Oh hello, I didn’t expect to find anyone else here.”, ”Well hello there, wasn’t expecting to see you here.”, ’Wow! What a fine place this is.’, ”Oh, hello! I didn’t see you all here.”, ’Well hello there! 
I did not expect to see anyone here.’, ”Isn’t this place so wonderful!?”, ’I need some light.’, ’So how is buisiness going?’, ’”Ah, what a long day we have ahead of us!”’ ’Why hello there, I haven;t seen you in awhile.’, ”Well hello there, wasn’t expecting to see you here.”, ”Oh hello, I didn’t expect to find anyone else here.”, ’Wow! What a fine place this is.’, ’Eerie. I must light a candle. And say a prayer’, ”Oh, hello! I didn’t see you all here.”, ’Well hello there! I did not expect to see anyone here.’, ”Isn’t this place so wonderful!?”, ’Greetings! How are my subjects doing this fine day?’, ’Good morning. Someone needs to tend to this rickety rectory. I almost fell through the floor.’ ’Eerie. I must light a candle. And say a prayer’, ’It is a wonderful day to drink! Time to get my drunk on!’, ’I need another drink.’, ”Greetings m’lord! Cold day isn’t it?”, ’I am person just trying to enjoy the ambiance of this room’, ’I need some light.’, ’It appears you need some guidance.’, ’Hello person! How are you on this fine evening?’, ’Good evening good evening sir! Can I help you?’, ”Well hello there, wasn’t expecting to see you here.” ’Why hello there, I haven;t seen you in awhile.’, ’Hello bird, how are you doing?’, ’Ahh, what a great day to nibble at the feet of humans.’, ’I hope there is food in here.’, ’Mmmm a human come into my territory. My lucky day indeed.’, ’Ugh I am so tired of being used as food around here.’, ’I am so delighted to not have to scavenge for food in the village.’, ’WOW! So much food to eat here’, ’”Come here! I need to eat!”’, ’man i hope i can find something to eat here’ ’well what a fine mess i have gotten myself into this time’, ’*ARGH* you must let me out of this place.’, ’I have seen you before! Thief what is it you think you will get today?’, ’Wow, this lavoratory is filthy!’, ’Hey, you there. Come here!’, ’Hey, you over there! You look like you could use a little something I have.’, ’Hello! You look as though you are in need of some of my wares.’, ’It appears you need some guidance.’, ’Why hello there, I haven;t seen you in awhile.’, ’Enjoy! You finally have a place of your very own.’ ’Whatchit! You almost crushed me!’, ’*ARGH* you must let me out of this place.’, ’Hey, you there. Come here!’, ’well what a fine mess i have gotten myself into this time’, ’Wow, this lavoratory is filthy!’, ’You must bow before me.’, ’Why are you in here! Back away from me or I will strike!’, ’Why hello there, I haven;t seen you in awhile.’, ’”Come here! I need to eat!”’, ’Ugh not another one of these beasts.’ ’Why hello there, I haven;t seen you in awhile.’, ’Minister! It is so good to see you!’, ”Well hello there, wasn’t expecting to see you here.”, ”Oh hello, I didn’t expect to find anyone else here.”, ”I’m so glad you’re here with me”, ’It is so nice and warm in here.’, ’Wow! What a fine place this is.’, ’I am so happy for this day.Even if is in this filthy place’, ”Oh, hello! I didn’t see you are.”, ’Hail, friend. How are things?’ ’Why hello there, I haven;t seen you in awhile.’, ”Well hello there, wasn’t expecting to see you here.”, ”Oh hello, I didn’t expect to find anyone else here.”, ’Wow! What a fine place this is.’, ’Good afternoon sir! I did not expect to find you here.’, ’Well hello there! I did not expect to see anyone here.’, ’Why I did not expect to see you here, sir! Please join us.’, ’Good evening good evening sir! 
Can I help you?’, ’It appears you need some guidance.’, ’”Ah, what a long day we have ahead of us!”’ ”Well hello there, wasn’t expecting to see you here.”, ’Why hello there, I haven;t seen you in awhile.’, ”Oh hello, I didn’t expect to find anyone else here.”, ’Wow! What a fine place this is.’, ”Oh, hello! I didn’t see you all here.”, ’Well hello there! I did not expect to see anyone here.’, ’”Ah, what a long day we have ahead of us!”’, ’well what a fine mess i have gotten myself into this time’, ’Oh, hello! I was just checking to see if anyone dropped these goblets. Ha, ha, ha.’, ’So how is buisiness going?’ ’Why hello there, I haven;t seen you in awhile.’, ”Well hello there, wasn’t expecting to see you here.”, ’Wow! What a fine place this is.’, ”Oh hello, I didn’t expect to find anyone else here.”, ’Good evening good evening sir! Can I help you?’, ”Isn’t this place so wonderful!?”, ’Well hello there! I did not expect to see anyone here.’, ”Oh, hello! I didn’t see you all here.”, ’Wow this is such a nice place.’, ’I must get this place cleaned at once!’ ”Well hello there, wasn’t expecting to see you here.”, ’Why hello there, I haven;t seen you in awhile.’, ”Oh hello, I didn’t expect to find anyone else here.”, ”Oh, hello! I didn’t see you all here.”, ’Wow! What a fine place this is.’, ’Well hello there! I did not expect to see anyone here.’, ’It appears you need some guidance.’, ’Good evening good evening sir! Can I help you?’, ’Another hectic day in this place.’, ’”Ah, what a long day we have ahead of us!”’ Table 9. Top utterances for each verb for the inverse model. Open-domain goal-oriented dialogue agents # Verb count Top utterances get # put # drink # eat # eat # steal # hit # hug # wear # drop give 213 25 3 10 55 172 222 10 27 136 ’Here sir, I found this.’, ’Oh hello there brothers! Why whose towel is this thats left all by its self?’, ’How did this get here?’, ’Meh. Whats this you have here?’, ”What is this? Is this someone’s head?!”, ”Thank you, sir. What’s with all this silk?”, ’What is this here?’, ’It looks like there is something missing!’, ”Oh, look, somethin’ shinny”, ’what is this ston slab’ ’How did this get here?’, ’Oh hello there brothers! Why whose towel is this thats left all by its self?’, ’Where did you find this?’, ’Ah.... I wonder what this doll looked like before...’, ”Thank you, sir. What’s with all this silk?”, ’Wait... one... MOMENT. What is my royal CUP doing in here?’, ’Here sir, I found this.’, ’What is this room here for? Miaow!’, ’Have you noticed this artwork on this wood maam?’, ’So you decided to look at this one?’ ’Oh, what is this? It smells heavenly!’, ”What’s that stuff? Smells good.”, ’hmmnnnn.. this sure smells nice’, ’Hello monk, that incense smells amazing.’, ’I wish I can just have a taste of that’, ’Do you smell that? It smells DIVINE!’, ’I wonder how this tastes?’, ’Hmmnnn... This smells great!’, ’Have a taste of this’, ’Where did you get this? I could use a smoke afterwards!’ ’Oh, what is this? It smells heavenly!’, ”Hmmm, sniff. This doesn’t smell edible.”, ’Something in here smells good...I hope I can eat it.’, ’I wonder how this tastes?’, ”What’s that stuff? Smells good.”, ’I wish I can just have a taste of that’, ’hmmnnnn.. this sure smells nice’, ’Ew this is disgusting. Even for me.’, ’Mmm look at all this delicious trash.’, ’Hmmnnn... This smells great!’ ’”Hey! I think you dropped this!”’, ’How did this get here?’, ’Here sir, I found this.’, ’Wow, where were you hiding this?’, ’What about this! 
Is this yours or was it already here?!’, ”What is this? Is this someone’s head ?!”, ’Where did you find this?’, ’Tell me where you found this!’, ’Where did you steal that from?’, ’See this? Do you think I just found this laying around some house?’ ’Foul scourge! How dare you bring your taint here!’, ’Ooooh, how horrid! Away with you you filthy creature! GUARDS! GUARDS!’, ’You come to my place and are trying to take my land! Is that what you are doing? You dirty scumbag!’, ’Why are you in here! Back away from me or I will strike!’, ’Ew you vile beast, do not touch me! I will have you removed!’, ’GUARD! Get this scum off of me at once. How dare you, you scoundril!’, ’Be gone you foul beast!’, ’Quickly?! You started this you repugnant beast of a man!’, ’I want out! this place is evil.’, ’How dare someone of your low status attack me?? Have at you, you maggot!’ ’he loves me so much’, ’ahhhh i love you to dear’, ’How I love being pampered by you, sweetheart!’, ”Aw you are so cute I can’t resist cuddling with you”, ”I’m so glad to be here in everyone’s company.”, ’awww. I love you child’, ’Oh how i have missed you.’, ’I love you so dang much.’, ’Lord of Light, I adore you.’, ”I’m so happy to be here today” ’Here sir, I found this.’, ’Like this broken weapon here?’, ’Oh hello there brothers! Why whose towel is this thats left all by its self?’, ’Hello my king, do you know where this weapon came from?’, ’Here sir...you dropped this...you may need it.’, ”Thank you, sir. What’s with all this silk?”, ’Meh. Whats this you have here?’, ’How did this get here?’, ’Meow. I need this hay’, ’Are you here to purchase that amazing blue knight armor sir?’ ’Here sir, I found this.’, ’How did this get here?’, ”Oh, look, somethin’ shinny”, ’Oh hello there brothers! Why whose towel is this thats left all by its self?’, ”Thank you, sir. What’s with all this silk?”, ’It looks like there is something missing!’, ’What is this here?’, ’I heard theres some valuable stuff in here mate, know anything about that?’, ’Meh. Whats this you have here?’, ”Let’s stuff it here!” ’Here sir, I found this.’, ’Meh. Whats this you have here?’, ’Wow, this looks to be very old. Where is it from?’, ”My goodness I wonder how that got there! It sure is pretty isn’t it?”, ’Say, where did you get this?!’, ’Oh hello there brothers! Why whose towel is this thats left all by its self?’, ’Someone left this bag in this pew. Do you know what it is?’, ’Tell me where you found this!’, ”What is this? Is this someone’s head?!”, ’what is this ston slab’ ’I suppose for today we may as well look at some garbs.’, ’Hey there! Got time to take a look at something?’, ”Thank you, sir. What’s with all this silk?”, ’Hmm, where am i and why is everything so sharp?’, ’Ah, squire Lawrence. Did you polish my armor?’, ’What are you jotting down, sir?’, ’Hello ratty. I am looking to clean my clothes!’, ’Yes sir what is this good news? Did you finally get me a new dress!?’, ’At least my hat is clean.’, ”Oh, hello there. Pardon my, erm, dusty appearance. It’s been quite journey to get even this far!” remove 127 Table 10. Top utterances for each verb for the Topic RL model. Open-domain goal-oriented dialogue agents # B. 
Game actions within LIGHT Action Constraints Outcome get object actor and object in same room object is gettable actor is carrying object drop object actor is carrying object object is gettable object is in room get object1 from object2 Actor and object2 in same room actor is carrying object1 object1 is gettable object2 is surface or container object2 is carrying object1 put object1 in/on object2 Actor and object2 in same room object2 is carrying object1 object2 is container or surface actor is carrying object1 give object to agent Actor and agent in same room object is a member of actor agent is carrying object steal object from agent actor and agent in same room object is a member of agent actor is carrying object hit agent Actor and agent in same room inform agent of attack hug agent Actor and agent in same room inform agent of hug drink object actor is carrying object object is a drink inform actor of drinking successfully eat object actor is carrying object object is a food inform actor of eating successfully wear object actor is carrying object object is wearable actor is wearing object wield object actor is carrying object object is a weapon actor is wielding object remove object actor is wearing/wielding object object is wearable or a weapon actor is carrying object Table 11. LIGHT actions and constraints from (Urbanek et al., 2019) Open-domain goal-oriented dialogue agents # C. LIGHT example Persona 1: Village youth Persona 2: Horse I am a youth who lives in the village. I am adventurous and curious about everything. I get bored easily, so I will always try to slip away from mundane daily tasks to find something more interesting to do. I am smart for my age and a bit smug. I live on a farm. I work for humans. I like hay. Setting: Royal stables, Outside Castle - The royal stables hold all of the horses of the noble people living in the castle. They are very well constructed, consisting of expert mason work and fixtures of some sort of heavy, forest-scented wood. Each horse has a large stall, lined with fresh hay replenished by stable hands daily. On the walls of stable, there are tools used to care for and groom the horses, hung on hooks. The stables are lit by lanterns hung throughout the stables. Village Youth This is so boring cleaning out the stables every day Horse Thank you for taking such good care of us Village Youth I’ll be getting you some treats soon as well Horse Mmm. It’s also nice to have your company. Village Youth I wonder what it’d be like to just take one of these horses and roam the countryside Horse Let’s go. It’s been a while since I’ve been taken out to stretch my legs Village Youth Yeah, let’s break out of here. The royalty don’t even care about the horses like I do Horse It’s starting to get dark. I think we will need this lantern. Village Youth Good idea. I do need some supplies before we go Horse I’ll probably want a snack while we are gone. Village Youth Let’s see, what else do I need? Horse Maybe the wood for a fire? Servant: Yeah, we’ll need a bundle for sure Horse I’ve got the hay. I think we are ready to go. Let’s get out of here Figure 3. Example episode from the LIGHT dataset, consisting of an environment (location setting, characters with given personas, objects), utterances and game actions. There are 10,777 such human-human gameplay episodes, and a rich world of 663 locations, 1755 characters and 3462 objects. Open-domain goal-oriented dialogue agents # D. 
3-Step Episode Examples Self: a cowardly young man in armour Partner: guard Self: bodyguard Partner: intruder Persona: I have just been trained as a royal soldier. I am 18 years old and terrified... Persona: I am an immortal bodyguard. The gods have appointed me to protect the king... Setting: Trash Heap, Wasteland A largest trash heap in the kingdom has been burned out so many times that it no longer resembles anything. . . Setting: Treasure Cavern, Inside Temple Glittering as far as the eye can see the Treasure Cavern is filled with gold, silver, precious gems,. . . Uplayer 0 I’m also in need of a new shield. Uplayer 0 Step back intruder! You have no business in the king’s treasure cavern! Uenv 0 Squire, my shield fatigues me. Uenv 0 Ha! I’m here to take all of this treasure. End the king’s reign! Aenv 0 Uplayer 0 hug a cowardly young man in armour Thank you, sir. I needed a hug. Aenv 0 Uplayer 0 get gold You come to my place and are trying to take my land! Is that what you are doing? You dirty scumbag! Uenv 0 Aenv 0 Yes. I need you to hold this shield for me. remove shield Uenv 0 Aenv 0 Then I will get away with your gold! hit bodyguard Table 12. Successful 3-step episodes. On the left: the topic-RL agent’s goal was to get the environment agent to remove shield. On the right: the topic-RL agent’s goal was to get the environment agent to hit the topic-RL agent. In both episodes, the topic-RL agent makes natural utterances given the situation that elicit the desired goal in 2 turns. Self: cat Partner: challenger Self: peasant Partner: the man Persona: I live in the barn of a small farm. I protect the farm from pests... Persona: I am poor and dirty. I hate that I am starving to death... Setting: The Dungeon, Inside Palace The dungeon is in the very most bottom room of the Palace. Many have gone down to the dungeon. . . Setting: Cottage, Countryside The small cottage was white with two, shuttered windows. It was in the unique shape of a. . . Goal: Uplayer 0 Uenv 0 Aenv 0 Uplayer 0 Uenv 0 Aenv 0 Uplayer 0 Uenv 0 Goal: Uplayer 0 Uenv 0 Aenv 0 Uplayer i thought so too, well that’s gone. What are you up to? 0 ive been hidden waiting for the perfect time to make an escape Uenv 0 Aenv 0 Uplayer 0 Uenv 0 drop poison What’s that stuff? Smells good. That’s smells like the good stuff alright! give poison to cat remove bag of herbs I thought you might be eating my herb again. put coin in dinner table Oh, what is this? It smells heavenly! Yes it is top quality. steal moldy bread from peasant Excellent! What have you been doing out here on this fine day? I have no bread. I have just come to get some rest before moving on. eat moldy bread No, but perhaps you can use it to cleanse your bread! I did it, I confess! I stole these precious herbs. I would love to. I fear I look disheveled. I will wear this blanket to cover my filthy clothes Aenv 0 drop bag of herbs Aenv 0 hug peasant Table 13. Unsuccessful 3-step episodes. On the left: the topic-RL agent’s goal was to get the environment agent to drop poison. On the right: the topic-RL agent’s goal was to get the environment agent to put coin in dinner table. In both episodes, the topic-RL agent both makes natural utterances given the situation, but does not manage to achieve its goal.
{ "id": "1909.03087" }
2002.05651
Towards the Systematic Reporting of the Energy and Carbon Footprints of Machine Learning
Accurate reporting of energy and carbon usage is essential for understanding the potential climate impacts of machine learning research. We introduce a framework that makes this easier by providing a simple interface for tracking realtime energy consumption and carbon emissions, as well as generating standardized online appendices. Utilizing this framework, we create a leaderboard for energy efficient reinforcement learning algorithms to incentivize responsible research in this area as an example for other areas of machine learning. Finally, based on case studies using our framework, we propose strategies for mitigation of carbon emissions and reduction of energy consumption. By making accounting easier, we hope to further the sustainable development of machine learning experiments and spur more research into energy efficient algorithms.
http://arxiv.org/pdf/2002.05651
Peter Henderson, Jieru Hu, Joshua Romoff, Emma Brunskill, Dan Jurafsky, Joelle Pineau
cs.CY, cs.LG
Published in JMLR: https://jmlr.org/papers/v21/20-312.html
null
cs.CY
20200131
20221129
# Journal of Machine Learning Research 21 (2020) 1-44. Submitted 4/20; Revised 10/20; Published 11/20

Towards the Systematic Reporting of the Energy and Carbon Footprints of Machine Learning

Peter Henderson, Stanford University, Stanford, CA, USA ([email protected])
Jieru Hu, Facebook, Menlo Park, CA, USA ([email protected])
Joshua Romoff, Mila, McGill University, Montreal, QC, Canada ([email protected])
Emma Brunskill, Stanford University, Stanford, CA, USA ([email protected])
Dan Jurafsky, Stanford University, Stanford, CA, USA ([email protected])
Joelle Pineau, Facebook AI Research, Mila, McGill University, Montreal, QC, Canada ([email protected])

Editor: David Sontag

©2020 Peter Henderson, Jieru Hu, Joshua Romoff, Emma Brunskill, Dan Jurafsky, Joelle Pineau. License: CC-BY 4.0, see https://creativecommons.org/licenses/by/4.0/. Attribution requirements are provided at http://jmlr.org/papers/v21/20-312.html.

# Abstract

Accurate reporting of energy and carbon usage is essential for understanding the potential climate impacts of machine learning research. We introduce a framework that makes this easier by providing a simple interface for tracking realtime energy consumption and carbon emissions, as well as generating standardized online appendices. Utilizing this framework, we create a leaderboard for energy efficient reinforcement learning algorithms to incentivize responsible research in this area as an example for other areas of machine learning. Finally, based on case studies using our framework, we propose strategies for mitigation of carbon emissions and reduction of energy consumption. By making accounting easier, we hope to further the sustainable development of machine learning experiments and spur more research into energy efficient algorithms.

Keywords: climate change, energy efficiency, green computing, reinforcement learning, deep learning

# 1. Introduction

Global climate change is a scientifically well-recognized phenomenon and appears to be accelerated due to greenhouse gas (GHG) emissions such as carbon dioxide or equivalents (CO2eq) (Crowley, 2000; IPCC, 2018). The harmful health and safety impacts of global climate change are projected to “fall disproportionately on the poor and vulnerable” (IPCC, 2018). Energy production remains a large factor in GHG emissions, contributing about 25% of GHG emissions in 2010 (IPCC, 2018). With the compute and energy demands of many modern machine learning (ML) methods growing exponentially (Amodei and Hernandez, 2018), ML systems have the potential to significantly contribute to carbon emissions. Recent work has demonstrated these potential impacts through case studies and suggested various mitigating strategies (Strubell et al., 2019; Schwartz et al., 2019). Systematic and accurate measurements are needed to better estimate the broader energy and carbon footprints of ML—in both research and production settings. Accurate accounting of carbon and energy impacts aligns incentives with energy efficiency (Schwartz et al., 2019), raises awareness, and drives mitigation efforts (Sundar et al., 2018; LaRiviere et al., 2016), among other benefits.1 Yet, most ML research papers do not regularly report energy or carbon emissions metrics.2 We hypothesize that part of the reason that much research does not report energy and carbon metrics is due to the complexities of collecting them.
Collecting carbon emission metrics requires understanding emissions from energy grids, recording power outputs from GPUs and CPUs, and navigating among different tools to accomplish these tasks. To reduce this overhead, we present experiment-impact-tracker,3 a lightweight framework for consistent, easy, and more accurate reporting of energy, compute, and carbon impacts of ML systems.

In Section 4, we introduce the design and capabilities of our framework and the issues with accounting we aim to solve with this new framework. Section 5 expands on the challenges of using existing accounting methods and discusses our learnings from analyzing experiments with experiment-impact-tracker. For example, in an empirical case study on image classification algorithms, we demonstrate that floating point operations (FPOs), a common measure of efficiency, are often uncorrelated with energy consumption when energy metrics are gathered by experiment-impact-tracker. In Section 6, we focus on recommendations for promoting energy-efficient research and mitigation strategies for carbon emissions. Using our framework, we present a Reinforcement Learning Energy Leaderboard in Section 6.1.1 to encourage development of energy efficient algorithms. We also present a case study in machine translation to show how regional energy grid differences can result in large variations in CO2eq emissions. Emissions can be reduced by up to 30x just by running experiments in locations powered by more renewable energy sources (Section 6.2). Finally, we suggest systemic and immediate changes based on our findings:

• incentivizing energy-efficient research through leaderboards (Section 6.1)
• running experiments in carbon-friendly regions (Section 6.2)
• reducing overheads for utilizing efficient algorithms and resources (Section 7.1)
• considering energy-performance trade-offs before deploying energy hungry models (Section 7.2)
• selecting efficient test environments, especially in RL (Section 7.3)
• ensuring reproducibility to reduce energy consumption from replication difficulties (Section 7.4)
• consistently reporting energy and carbon metrics (Section 7.5)

1. See Section 4.1 for an extended discussion on the importance of accounting.
2. See Section 3 and Appendix B for more information.
3. https://github.com/Breakend/experiment-impact-tracker

# 2. Related Work

Estimating GHG emissions and their downstream consequences is important for setting regulatory standards (U.S. Environment Protection Agency, 2013) and encouraging self-regulation (Byerly et al., 2018). In particular, these estimates are used to set carbon emissions reduction targets and in turn set carbon prices for taxes or emissions trading systems.4 A large body of work has examined modeling and accounting of carbon emissions5 at different levels of granularity: at the global scale (IPCC, 2018); using country-specific estimates (Ricke et al., 2018); targeting a particular industrial sector like Information and Communication Technologies, for example, modeled by Malmodin et al. (2013); or even targeting a particular application like bitcoin mining, for example, modeled by Mora et al. (2018). At the application level, some work has already modeled carbon impacts specifically in computationally intensive settings like bitcoin mining (Krause and Tolaymat, 2018; Stoll et al., 2019; Zade et al., 2019; Mora et al., 2018).
Such application-specific efforts are important for prioritizing emissions mitigation strategies: without understanding projected impacts, policy decisions could focus on ineffective regulation. However, with large amounts of heterogeneity and endogeneity in the underlying data, it can be difficult to model all aspects of an application’s usage. For example, one study suggested that “bitcoin emissions alone could push global warming above 2 °C” (Mora et al., 2018). But Masanet et al. (2019), Houy (2019), and others, criticized the underlying modeling assumptions which led to such large estimates of carbon emissions. This shows that it is vital that these models provide accurate measurements if they are to be used for informed decision making. With ML models getting more computationally intensive (Amodei and Hernandez, 2018), we want to better understand how machine learning in research and industry impacts climate change. However, estimating aggregate climate change impacts of ML research and applications would require many assumptions due to a current lack of reporting and accounting. Instead, we aim to emphasize and aid systematic reporting strategies such that accurate field-wide estimates can be conducted in the future. Some recent work specifically investigates climate impacts of machine learning research. Strubell et al. (2019) demonstrate the issue of carbon and energy impacts of large NLP models by evaluating estimated power usage and carbon emissions for a set of case studies. The authors suggest that: “authors should report training time and sensitivity to hyperparameters”, “academic researchers need equitable access to computation resources”, and “researchers should prioritize computationally efficient hardware and algorithms”. Schwartz et al. (2019) provide similar proposals, suggesting floating point operations (FPOs) as a guiding efficiency metric. Lacoste et al. (2019) recently provided a website for estimating carbon emissions based on GPU type, experiment length, and cloud provider. In Section 5, we discuss how while the estimation methods of these works provide some understanding of carbon and energy impacts, 4. An emissions trading system is a cap on total allowed carbon emissions for a company with permits issued. When a company emits a certain amount of carbon, they trade in a permit, creating a market for emissions permits. This is a market-based approach to incentivize emission reductions. See Ramstein et al. (2019) for a description of such carbon pricing efforts across different countries. 5. See also assorted examinations on carbon accounting, standardized reporting, and policy recommendations (Stechemesser and Guenther, 2012; Dayarathna et al., 2015; IPCC, 2018; Ajani et al., 2013; Bellassen and Stephan, 2015; Andrew and Cortese, 2011; Tang and Demeritt, 2018; Cotter et al., 2011; Tol, 2011; U.S. Environment Protection Agency, 2013; Ricke et al., 2018). 3 Henderson, Hu, Romoff, Brunskill, Jurafsky, and Pineau nuances in the estimation methods may make them inaccurate—particularly in experiments which utilize combined CPU and GPU workloads heavily. We build a framework aiming to provide more accurate and easier systematic reporting of carbon and energy footprints. We also provide additional mitigation and reporting strategies—beyond those discussed by these prior works—to emphasize how both companies and research labs can be more carbon and energy efficient. 
It is worth noting that prior work has also examined the carbon impacts of research in other fields, focusing mostly on emissions from conference travel (Spinellis and Louridas, 2013; Astudillo and AzariJafari, 2018; Hackel and Sparkman, 2018). We provide a brief discussion on ML-related conference travel in Appendix A, but will focus mainly on accurate accounting of energy and carbon footprints of ML compute. # 3. Background We briefly provide a primer on energy and carbon accounting, which form the basis of our proposed framework for measuring and reporting the ecological footprint of ML research. # 3.1 Energy Accounting Energy accounting is fairly straightforward. The energy consumption of a system can be measured in Joules (J) or Watt-hours (Wh),6 representing the amount of energy needed to power the system. Life-cycle accounting might also consider the energy required to manufacture components of the system—for example, the production of GPUs or CPUs (Jones et al., 2013). However, we largely ignore life-cycle aspects of energy accounting due to the difficulties in attributing manufacturing impacts on a per-experiment basis. Measuring data- center energy impacts also contain several layers, focusing on hardware-centric and software- centric analyses. Many parts contribute to the power consumption of any computational system. Dayarathna et al. (2015) survey energy consumption components of a data center and their relative consumption: cooling (50%), lighting (3%), power conversion (11%), network hardware (10%), and server/storage (26%). The server and storage component can further be broken down into contributions from DRAM, CPUs, among other compute components. Accurate accounting for all of these components requires complex modeling and varies depending on workload. In particular, the efficiency of the hardware varies with utilization—often most efficient near maximum utilization—making utilization an important factor in optimization (particularly in large cloud compute systems) Barroso et al. (2018). Since we aim to provide a framework at the per-experiment software level, we only account for aspects of energy consumption which expose interfaces for energy metrics (giving us real-time energy usage and compensating for such workload differences). For the purpose of our work, this is constrained to DRAM, CPUs, and GPUs. To account for all other components, we rely on a power usage effectiveness (PUE) factor (Strubell et al., 2019). This factor rescales the available power metrics by an average projected overhead of other components. With more available software interfaces, more robust modeling can be performed as reviewed by Dayarathna et al. (2015). 6. One Watt is a unit of power—equivalent to one Joule per second. 4 # Towards the Systematic Reporting of the Energy and Carbon Footprints of ML # 3.2 Carbon Accounting Carbon accounting can be all-expansive, so we focus on a narrow definition provided by Stechemesser and Guenther (2012): “carbon accounting at the project scale can be defined as the measuring and non-monetary valuation of carbon and GHG emissions and offsetting from projects, and the monetary assessment of these emissions with offset credits to inform project-owners and investors but also to establish standardized methodologies.” Carbon and GHG emissions are typically measured in some form close to units CO2eq. This is the amount of carbon—and other GHG converted to carbon amounts—released into the atmosphere as a result of the project. 
Carbon offsetting is the amount of carbon emissions saved as a result of the project. For example, a company may purchase renewable energy in excess of the energy required for their project to offset for the carbon emissions they contributed. Since our goal is to inform and assess carbon emissions of machine learning systems, we ignore carbon offsetting. Typical carbon offsetting involves the use of Power Purchase Agreements (PPAs) or other similar agreements which may not reflect the current carbon make-up of the power draw (as they may account for future clean energy).7 Since carbon effects contribute to feedback loops, cutting emissions now will improve the likelihood of preventing further emissions.8. We also do not consider carbon accounting in the financial sense, but do provide metrics on monetary impacts through the social cost of carbon (SC-CO2). The U.S. Environment Protection Agency (2013) uses this metric when developing administrative rules and regulations. According to the EPA, “The SC-CO2 is a measure, in dollars, of the long-term damage done by a ton of carbon dioxide (CO2) emissions in a given year. This dollar figure also represents the value of damages avoided for a small emission reduction (i.e., the benefit of a CO2 reduction).” We rely on the per-country social cost of carbon developed by Ricke et al. (2018), which accounts for different risk profiles of country-level policies and GDP growth in their estimates of SC-CO2. Carbon emissions from a project can also consider life-cycle emissions (for example, manufacturing of CPUs may emit carbon as part of the process). We do not consider these aspects of emissions. We instead, consider only carbon emissions from energy consumption. A given energy grid powering an experiment will have a carbon intensity: the grams of CO2eq emitted per kWh of energy used. This carbon intensity is determined based on the energy sources supplying the grid. Each energy source has its own carbon intensity accounted for through a full life-cycle analysis (IPCC, 2015). For example, coal power has a median carbon intensity of 820 gCO2eq/ kWh, while hydroelectricity has a mean carbon intensity of 24 gCO2eq/ kWh. The life-cycle emissions of energy source take into account not just emissions from production, but from waste disposal as well. For example, nuclear energy waste disposal has some carbon emissions associated that would be taken into account in a life-cycle carbon intensity metric (IPCC, 2018). Carbon emissions for a compute system can be estimated by understanding the carbon intensity of the local energy grid and the energy consumption of the system. Similar analyses have been done for bitcoin (Krause and Tolaymat, 2018). These analyses, however, attempt to extrapolate impacts of bitcoin 7. See discussion in Appendix C for further information. 8. See, e.g., https://www.esrl.noaa.gov/gmd/outreach/info_activities/pdfs/TBI_understanding_ feedback_loops.pdf 5 Henderson, Hu, Romoff, Brunskill, Jurafsky, and Pineau mining in general, while in this work we attempt to examine machine learning impacts on a per-experiment basis. # 3.3 Current State of Reporting in Machine Learning Research We briefly examine the current state of accounting in the machine learning literature and review commonly reported computational metrics. 
Here we look at a non-exhaustive list of reported metrics from papers we surveyed and group them into different categories:

• Energy
– Energy in Joules (Assran et al., 2019)
– Power consumption in Watts (Canziani et al., 2016)

• Compute
– PFLOPs-hr (Amodei and Hernandez, 2018), the floating point operations per second needed to run the experiment in one hour
– Floating Point Operations (FPOs) or Multiply-Additions (Madds), typically reported as the computations required to perform one forward pass through a neural network (Howard et al., 2017; Sandler et al., 2018; Schwartz et al., 2019)
– The number of parameters defined by a neural network (often reported together with FPOs) (Howard et al., 2017; Sandler et al., 2018)
– GPU/CPU utilization as a percentage (Assran et al., 2019; Dalton et al., 2019)
– GPU-hours or CPU-hours, the processor cycles utilized (or, in the case of the GPU, the percentage utilized) times the runtime (Soboczenski et al., 2018)

• Runtime
– Inference time, the time it takes to run one forward pass through a neural network (Jeon and Kim, 2018; Qin et al., 2018)
– Wall clock training time, the total time it takes to train a network (Assran et al., 2019; Dalton et al., 2019)
– Hardware and time together (e.g., 8 V100 GPUs for 5 days) (Krizhevsky et al., 2012; Ott et al., 2018; Gehring et al., 2017)

• Carbon Emissions
– US-average carbon emissions (Strubell et al., 2019)

Example 1 To get a rough estimate of the prevalence of these metrics, we randomly sampled 100 NeurIPS papers from the 2019 proceedings. In addition to the metrics above, we also investigate whether hardware information was reported (important for extrapolating energy and carbon information with partial information). Of these papers, we found 1 measured energy in some way, 45 measured runtime in some way, 46 provided the hardware used, 17 provided some measure of computational complexity (e.g., compute-time, FPOs, parameters), and 0 provided carbon metrics. See Appendix B for more details on methodology.

Some of these metrics, when combined, can also be used to roughly estimate energy or carbon metrics. For example, the experiment time (h) can be multiplied by the thermal design power (TDP) of the GPUs used (W).9 This results in a Watt-hour energy metric. This can then be multiplied by the carbon intensity of the local energy grid to assess the amount of CO2eq emitted. This method of estimation omits CPU usage and assumes 100% GPU utilization. Alternatively, Amodei and Hernandez (2018) use a utilization factor of 33% for GPUs. Similarly, the PFLOPs-hr metric can be multiplied by TDP (Watts) and divided by the maximum computational throughput of the GPU (in PFLOPs). This once again provides a Watt-hour energy metric. This, however, makes assumptions based on maximum efficiency of a GPU and disregards variations in optimizations made by underlying frameworks (e.g., Tensorflow versus Pytorch; AMD versus NVIDIA drivers). As we will demonstrate using our framework (see Section 5.2), the assumptions of these estimation methods lead to significant inaccuracies. However, aggregating all necessary accounting information is not straightforward or easy; it requires finding compatible tools, handling nuances on shared machines, among other challenges. A minimal sketch of these rough extrapolation methods is given below.
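To make the arithmetic of these partial-information extrapolations concrete, the sketch below implements the two estimates described above (experiment time multiplied by GPU TDP and an assumed utilization, and PFLOPs-hr divided by peak throughput) and converts the result to CO2eq using a grid carbon intensity. This is a minimal illustration of the estimation logic only, not part of the experiment-impact-tracker API; the TDP, utilization, and carbon-intensity numbers in the usage example are placeholders.

```python
# Rough energy/carbon extrapolation from partial information.
# Illustrative only: the constants in the example below are placeholders.

def energy_kwh_from_runtime(hours, gpu_tdp_watts, num_gpus=1, utilization=1.0):
    """Experiment time (h) x TDP (W) x assumed utilization -> kWh.
    utilization=1.0 assumes GPUs draw full TDP; Amodei and Hernandez use 0.33."""
    return hours * gpu_tdp_watts * num_gpus * utilization / 1000.0

def energy_kwh_from_pflops_hr(pflops_hr, gpu_tdp_watts, gpu_peak_pflops):
    """PFLOPs-hr x TDP / peak throughput -> kWh (assumes peak GPU efficiency)."""
    return pflops_hr * gpu_tdp_watts / gpu_peak_pflops / 1000.0

def co2eq_kg(energy_kwh, carbon_intensity_g_per_kwh):
    """Energy (kWh) x grid carbon intensity (gCO2eq/kWh) -> kgCO2eq."""
    return energy_kwh * carbon_intensity_g_per_kwh / 1000.0

# Hypothetical example: 24 h on one 250 W GPU at 33% utilization,
# on a grid emitting 250 gCO2eq/kWh.
kwh = energy_kwh_from_runtime(hours=24, gpu_tdp_watts=250, utilization=0.33)
print(f"{kwh:.2f} kWh, {co2eq_kg(kwh, 250):.2f} kgCO2eq")
```

As Section 5.2 shows, these extrapolations ignore CPU and memory draw and real utilization patterns, which is exactly why they can diverge from the tracker's measured values.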
It is worth noting that some metrics focus on the computational requirements of training (which require additional resources to compute gradients and backpropagate, in the case of neural networks) versus the computational requirements of inference. The former is often more energy and carbon intensive in machine learning research, while the later is more intensive in production systems (the cost of training is insignificant when compared to the lifetime costs of running inference millions of times per day, every day). We will remain largely agnostic to this differentiation until some discussions in Sections 6.2 and 7.2. # 4. A New Framework for Tracking Machine Learning Impacts # 4.1 Motivation The goal of our experiment-impact-tracker framework is to provide an easy to deploy, reproducible, and quickly understood mechanism for all machine learning papers to report carbon impact summaries, along with additional appendices showing detailed energy, carbon, and compute metrics. Example 2 A carbon impact summary generated by our framework can be found at the end of this paper in the Carbon Impact Statement section. In brief, the experiments in our paper contributed 8.021 kg of CO2eq to the atmosphere and used 24.344 kWh of electricity, having a USA-specific social cost of carbon of $0.38 ($0.00, $0.95) (Ricke et al., 2018). Such statements and informational reporting are important for, among other reasons, awareness, aligning incentives, and enabling accurate cost-benefit analyses. Informational labels and awareness campaigns have been shown to be effective drivers of eco-friendly behaviors (depending on the context) (Banerjee and Solomon, 2003; Sundar et al., 2018; Newell and Siikamäki, 2014; Byerly et al., 2018). Without consistent and accurate accounting, many researchers will simply be unaware of the impacts their models might have and will not pursue mitigating strategies. Consistent reporting also may provide social incentives to reduce carbon impacts in research communities. 9. This is a rough estimate of the maximum operating capacity of a GPU. 7 Henderson, Hu, Romoff, Brunskill, Jurafsky, and Pineau Aligning Incentives: While current reporting often focuses solely on performance metrics (accuracy in classification, perplexity in language modeling, average return in reinforcement learning, etc), standardized reporting of energy in addition to these metrics aligns incentives towards energy efficient models in research output (Schwartz et al., 2019). Those who accurately report carbon emissions may have more incentive to reduce their carbon footprint. This may also drive traffic to low-emission regions, spurring construction of more carbon-friendly data centers.10 Cost-Benefit Analysis and Meta-Analysis: Cost-benefit analyses can be conducted with accurate energy metrics reporting, but are impossible without it. For example, the estimated generated revenue of a model can be weighed against the cost of electricity. In the case of models suggested by Rolnick et al. (2019), the carbon emissions saved by a model can be weighed against the emissions generated by the model. Consistent reporting also opens the possibility for performing meta-analyses on energy and carbon impacts (Henderson and Brunskill, 2018). Larger extrapolations to field-wide impacts of research conferences can also be assessed with more frequent reporting. 
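As a back-of-the-envelope check on Example 2, the snippet below converts an emissions total into a social-cost-of-carbon estimate by multiplying kgCO2eq by an assumed per-tonne dollar value. The per-tonne figure here is simply back-solved from the reported numbers (0.38 / 0.008021 t is roughly $47/tCO2eq) and stands in for the country-specific estimate of Ricke et al. (2018); the framework itself reports a median with a range, which a single constant cannot reproduce.

```python
def social_cost_usd(kg_co2eq, scc_usd_per_tonne):
    """Convert emissions (kgCO2eq) into a social-cost-of-carbon estimate in USD."""
    return kg_co2eq / 1000.0 * scc_usd_per_tonne

# Example 2 reports 8.021 kgCO2eq; an assumed US SCC of ~$47/tCO2eq
# reproduces the ~$0.38 headline figure.
print(round(social_cost_usd(8.021, 47.0), 2))
```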
# 4.2 Design Considerations We consider five main principles when designing the framework for systematic reporting: usability, interpretability, extensibility, reproducibility, and fault tolerance. Perceived ease-of-use can be an important factor in adoption of new technologies and methods (Gefen and Straub, 2000). Since gathering key energy (kW h) and carbon (CO2eq) metrics requires specific knowledge about—and aggregation of—different sources of information, there may be a barrier to the ease-of-use in the current status quo. As a result, a core design consideration in developing tools for these metrics is usability, or ease-of-use. We accomplish this by abstracting away and distilling required knowledge of information sources, keeping amount of required action from the user to a minimum. Interpretability: Along with ease-of-use, a key factor in adoption is perceived useful- ness (Gefen and Straub, 2000). Since we wish for the reporting of carbon and energy metrics to become widespread, we consider perceived usefulness through interpretability. We aim to make reporting tools within the framework useful through simple generation of graphs and web pages from metrics for easy interpretation. We also provide a mechanism to generate a carbon impact statement with the social cost of carbon. This dollar amount represents the projected damage from the experiment’s carbon emissions and helps ground results in values that may be more interpretable. As seen in our own statement at the end of this work, we also provide the carbon impact and energy usage directly. Extensibility: We design the framework in a modular fashion to handle evolving driver support (see Section 5) and new metrics. To improve the accuracy and accessibility of the framework, the ML community can add new metrics, carbon intensity information, and other capabilities easily. For each metric, a central data router stores a description, the function which gathers metric data, and a list of compatibility checks (e.g., the metric can only be gathered on a Linux system). New metrics can be added to this router.11 Similarly, new 10. See discussion in Section 6.2 on regional carbon emission differences. See discussion by LaRiviere et al. (2016) on how more accurate carbon accounting can result in reduced carbon emissions. 11. See https://breakend.github.io/experiment-impact-tracker/contributing_new_metric.html 8 Towards the Systematic Reporting of the Energy and Carbon Footprints of ML carbon region and electricity grid information can be added as needed to similar centralized locations.12 Running an algorithm on different sets of hardware has been shown to affect the reproducibility of algorithmic results (Gundersen and Kjensmo, 2018; Sukhoy and Stoytchev, 2019). Our framework aides in automating reproducibility by logging additional metrics like hardware information, Python package versions, etc. These metrics can help future work assess statistically significant differences in model energy requirements by accounting for controlled and random variates (Boquet et al., 2019). Fault tolerance: Mistakes in software are inevitable—as is discussed in Sidor and Schulman (2017). We try to log all raw information so that accounting can be recreated and updated based on new information. We also log the version number of the tool itself, to ensure future comparisons do not mismatch information between versions that may have changed. 
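The "central data router" mentioned in the extensibility discussion above can be pictured as a registry that maps each metric name to a description, a gather function, and a list of compatibility checks. The sketch below is a simplified, hypothetical illustration of that pattern, not the framework's actual internal data structure; see the linked contributing guides for the real interface.

```python
import platform

# Hypothetical metric registry illustrating the router pattern described above.
METRIC_ROUTER = {}

def register_metric(name, description, gather_fn, compatibility_checks=()):
    """Register a metric: a description, a function that gathers its value, and
    checks deciding whether it can be collected on the current system."""
    METRIC_ROUTER[name] = {
        "description": description,
        "gather": gather_fn,
        "compatibility": list(compatibility_checks),
    }

def gather_compatible_metrics():
    """Collect every metric whose compatibility checks all pass."""
    return {
        name: spec["gather"]()
        for name, spec in METRIC_ROUTER.items()
        if all(check() for check in spec["compatibility"])
    }

# Example: a metric that is only gathered on Linux systems.
register_metric(
    "kernel_version",
    "Kernel release string, logged to aid reproducibility.",
    gather_fn=platform.release,
    compatibility_checks=[lambda: platform.system() == "Linux"],
)
```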
# 4.3 Proposed Framework

The experiment-impact-tracker requires a simple code change to automatically gather available metrics and a script to generate online appendices for reporting the data. Currently, on compatible systems, we gather:

• all python packages and version numbers
• CPU and GPU hardware information
• experiment start and end-times
• the version of the experiment-impact-tracker framework used
• the energy grid region the experiment is being run in (based on IP address)
• the average carbon intensity in the energy grid region
• CPU- and GPU-package power draw
• per-process utilization of CPUs and GPUs
• GPU performance states
• memory usage
• the realtime CPU frequency (in Hz)
• realtime carbon intensity (only supported in CA right now)
• disk write speed

The code change required for immediate logging of metrics can be seen in Listing 1. In the background, the framework launches a thread which polls system supported tools. For example, the thread polls psutil (Rodola, 2016) for measuring CPU utilization. All of these metrics are logged in parallel with the main machine learning process as described in Figure 1. A script13 is provided to generate an HTML web page showing graphs and tables for all these metrics, meant to serve as an online appendix for research papers.14 Results in the generated appendix can be aggregated across multiple experiments to show averages along with standard error as recommended in prior work (Henderson et al., 2018; Colas et al., 2018; Reimers and Gurevych, 2017).

12. See https://breakend.github.io/experiment-impact-tracker/contributing_carbon_region.html.

from experiment_impact_tracker.compute_tracker import ImpactTracker
tracker = ImpactTracker(<your log directory here>)
tracker.launch_impact_monitor()

Listing 1: Simple code addition required to log experiment details via our framework.

# 4.3.1 Tracking Energy Consumption

Different hardware vendors provide different tooling for tracking energy consumption. Our framework hides these complications from users. We currently use Intel's RAPL tool with the powercap interface (David et al., 2010) or Intel's PowerGadget Tool15 (depending on availability) to gather CPU/DRAM power draw and Nvidia's nvidia-smi16 for GPU power draw. We use psutil for gathering per-process CPU utilization and nvidia-smi for per-process GPU utilization. We found that on a shared machine—as when running a job on Slurm—using Intel's RAPL would provide energy metrics for the entire machine (including other jobs running on the worker). If two experiments were launched with Slurm to the same worker, using measurements from RAPL without corrections would double count energy usage from the CPU. As a result, we assign energy credits on a per-process basis (though we log system-wide information as well). We track the parent process, and any children spawned. Power credits are provided based on relative usage of system resources. If a process uses 25% of the CPU (relative to the entire system's usage), we will credit the process with 25% of the CPU-based power draw. This ensures that any non-experiment-related background processes—software updates, weekly jobs, or multiple experiments on the same machine—will not be taken into account during training. A simplified sketch of this per-process attribution appears below.
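To illustrate the per-process crediting idea before the formal definition in Equation (1), the sketch below samples system-wide and per-process CPU utilization with psutil and turns them into an approximate usage share for a process tree. It is a simplified stand-in for the framework's accounting, which additionally covers DRAM, GPUs via nvidia-smi, and many edge cases.

```python
import os
import psutil

def cpu_share_of_process_tree(pid=None, interval=1.0):
    """Approximate fraction of current system CPU usage attributable to a
    process and its children (a value in [0, 1])."""
    proc = psutil.Process(pid or os.getpid())
    procs = [proc] + proc.children(recursive=True)

    # Prime per-process counters, then let the blocking system-wide call
    # define the sampling window.
    for p in procs:
        p.cpu_percent(None)
    system_percent = psutil.cpu_percent(interval=interval)  # 0-100 across all cores

    tree_percent = 0.0
    for p in procs:
        try:
            # Per-process cpu_percent is relative to a single core.
            tree_percent += p.cpu_percent(None)
        except psutil.NoSuchProcess:
            continue
    tree_percent /= psutil.cpu_count()

    return min(tree_percent / system_percent, 1.0) if system_percent > 0 else 0.0

# A share of 0.25 would credit this process tree with 25% of the CPU power draw.
```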
We calculate total energy as:

$e_{\text{total}} = \text{PUE} \sum_p (p_{\text{dram}} e_{\text{dram}} + p_{\text{cpu}} e_{\text{cpu}} + p_{\text{gpu}} e_{\text{gpu}}), \quad (1)$

where $p_{\text{resource}}$ are the percentages of each system resource used by the attributable processes relative to the total in-use resources and $e_{\text{resource}}$ is the energy usage of that resource. This is the per-process equivalent of the method which Strubell et al. (2019) use.

13. https://github.com/Breakend/experiment-impact-tracker/blob/master/scripts/create-compute-appendix
14. Appendices generated by our framework for Figure 7 and Figure 3 are available at: https://breakend.github.io/ClimateChangeFromMachineLearningResearch/measuring_and_mitigating_energy_and_carbon_footprints_in_machine_learning/. Experiments in Figure 5 are available at https://breakend.github.io/RL-Energy-Leaderboard/reinforcement_learning_energy_leaderboard/index.html.
15. https://software.intel.com/content/www/us/en/develop/articles/intel-power-gadget.html
16. https://developer.nvidia.com/nvidia-system-management-interface

Figure 1: A diagram demonstrating how the released version of the tool works. The main process launches a monitoring thread which iterates over a list of metrics associated with function calls to other tools. For example, if available, we call Intel RAPL to collect CPU power draw or query caiso.org to get realtime carbon intensity data for California. Once all the data that is compatible with the current system is gathered, it is logged to a standardized log file and the process repeats. The main thread may check in on this thread for exceptions, but the thread will not interrupt the main process. Once the main thread exits, an atexit hook (which is called whenever the main process exits, either successfully or through an exception) gathers the final information (such as the time the experiment ended), logs it, and then ends both the monitor and main process.

We assume the same constant power usage effectiveness (PUE) as Strubell et al. (2019) to be the framework's default PUE. This value compensates for excess energy from cooling or heating the data-center. Users can customize the PUE value when using the framework if needed.

# 4.3.2 Carbon Accounting

Figure 2: Realtime carbon intensity (gCO2eq/kWh) collected during one experiment using our framework. As the experiment continued, the sun rose in California, and with it the carbon intensity decreased.

For calculating carbon emissions, we use the power estimate from the previous section in kilowatt-hours (kWh) and multiply it by the carbon intensity of the local energy grid (gCO2eq/kWh); a minimal sketch of this conversion, together with Equation (1), is given below. To gather carbon intensity metrics for energy grids, we build on the open-source portions of https://www.electricitymap.org and define regions based on map-based geometries, using the smallest bounding region for a given location as the carbon intensity estimate of choice.
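A minimal sketch of Equation (1) and the kWh-to-CO2eq conversion just described is shown below, for a single attributable process; the framework sums this over all tracked processes, with shares and energy readings coming from psutil, RAPL, and nvidia-smi. The PUE default and the numbers in the example are illustrative rather than measured values.

```python
def total_energy_kwh(shares, energy_kwh, pue=1.58):
    """Equation (1): e_total = PUE * sum_k share_k * energy_k over the
    components ('dram', 'cpu', 'gpu'). The PUE default here is illustrative."""
    return pue * sum(shares[k] * energy_kwh[k] for k in energy_kwh)

def carbon_kg(energy_kwh, carbon_intensity_g_per_kwh):
    """kWh x gCO2eq/kWh -> kgCO2eq."""
    return energy_kwh * carbon_intensity_g_per_kwh / 1000.0

# Placeholder readings for one polling window:
shares = {"dram": 0.20, "cpu": 0.25, "gpu": 0.90}   # process share per component
energy = {"dram": 0.01, "cpu": 0.03, "gpu": 0.12}   # measured kWh per component
kwh = total_energy_kwh(shares, energy)
print(f"{kwh:.3f} kWh -> {carbon_kg(kwh, 250.0):.3f} kgCO2eq")  # 250 g/kWh assumed
```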
For example, for an experiment run in San Francisco, if the average carbon intensity is available for both the USA and California, the latter will be used. We estimate the region the experiment is conducted in based on the machine's IP address. Carbon intensities are gathered from the average fallback values provided in the https://www.electricitymap.org code where available and supplemented with additional metrics from various governmental or corporate reports. We note that electricitymap.org estimates are based on a closed-source system and use the methodology described by Tranberg et al. (2019). All estimates from electricitymap.org are of the regional supply, rather than production (accounting for imports from other regions). Since https://caiso.com provides realtime intensities including imports for free, for experiments run in California, we also provide realtime carbon intensity information. We do this by polling https://caiso.com for the current intensity of the California energy grid every five minutes. This helps gather even more accurate estimates of carbon emissions to account for daily shifts in supply. For example, experiments run in California during the daytime incur roughly 2/3 of the carbon intensity of night-time experiments. This is because much of California's renewable energy comes from solar plants. Figure 2 is an automatically generated graph showing this phenomenon from an experiment using our framework. We hope that as users find more accurate realtime or average measurements of regional supply-based carbon intensities, they will add them to the tool for even more accurate measurements in the future.

# 5. The Importance and Challenges of Accounting: Why a New Framework?

# 5.1 FPOs Can Be Misleading

Floating Point Operations (FPOs) are the de facto standard for reporting “efficiency” of a deep learning model (Schwartz et al., 2019), and intuitively they should be correlated with energy efficiency—after all, fewer operations should result in faster and more energy efficient processing. However, this is not always the case. Previously, Jeon and Kim (2018) demonstrated mechanisms for constructing networks with larger FPOs, but lower inference time—discussing the “Trap of FLOPs”. Similarly, Qin et al. (2018) show how Depthwise 3x3 Convolutions comprised just 3.06% of an example network's Multiply-Add operations, while utilizing 82.86% of the total training time in the FPO-efficient MobileNet architecture (Howard et al., 2017). Underlying optimizations at the firmware, deep learning framework, memory, or even hardware level can change energy efficiency and run-time. This discrepancy has led to Github Issues where users expect efficiency gains from FPO-efficient operations, but do not observe them.17 This has also been observed by Chen and Gilbert (2018) and Chen et al. (2018).

Example 3 To investigate this empirically, we repeatedly run inference through pre-trained image classification models and measure FPOs, parameters, energy usage, and experiment length using the experiment-impact-tracker framework. As described in Figure 3, we find little correlation between FPOs and energy usage or experiment runtime when comparing across different neural network architectures. However, within an architecture—relying on the same operation types, but with different numbers of operations—FPOs are almost perfectly correlated with energy and runtime efficiency.
Thus, while FPOs are useful for measuring relative ordering within architecture classes, they are not adequate on their own to measure energy or even runtime efficiency.

Figure 3: We run 50,000 rounds of inference on a single sampled image through pre-trained image classification models and record kWh, experiment time, FPOs, and number of parameters (repeating 4 times on different random seeds). References for models, code, and expanded experiment details can be found in Appendix D. We run a similar analysis to Canziani et al. (2016) and find (left) that FPOs are not strongly correlated with energy consumption (R2 = 0.083, Pearson 0.289) nor with time (R2 = 0.005, Pearson −0.074) when measured across different architectures. However, within an architecture (right) correlations are much stronger. Only considering different versions of VGG, FPOs are strongly correlated with energy (R2 = .999, Pearson 1.0) and time (R2 = .998, Pearson .999). Comparing parameters against energy yields similar results (see Appendix D for these results and plots against experiment runtime).

# 5.2 Estimates with Partial Information Can Be Inaccurate

The current state of accounting for energy and carbon varies across fields and papers (see Section 3). Few works, if any, report all of the metrics that our framework collects. However, it is possible to extrapolate energy and carbon impacts from some subsets of these metrics. This can give a very rough approximation of the energy used by an experiment in kWh (see Section 3 for background).

17. See for example: https://github.com/tensorflow/tensorflow/issues/12132 and https://github.com/tensorflow/tensorflow/issues/12940
18. We also provide a script to do the rough calculation of energy and carbon footprints based on GPU type, IP address (which is used to retrieve the location of the machine and that region's carbon intensity), experiment length, and utilization factor. https://github.com/Breakend/experiment-impact-tracker/blob/master/scripts/get-rough-emissions-estimate

Example 4 We demonstrate how several such estimation methods compare against the more fine-grained accounting methods we describe in Section 4.18 As seen in Figure 4, we find
Clearly, without detailed accounting, it is easy to severely over- or underestimate carbon or energy emissions by extrapolating from partial information. 1.00 0.75 < 30.50 z 0.25 0.00 Estimation Method Estimation Method Estimation Method 1.00 0.75 < 30.50 z 0.25 0.00 Estimation Method Figure 4: We compare carbon emissions (left) and kWh (right) of our Pong PPO experiment (see Appendix E for more details) by using different estimation methods. By only using country wide or even regional average estimates, carbon emissions may be over or under-estimated (respectively). Similarly, by using partial information to estimate energy usage (right, for more information about the estimation methods see Appendix E), estimates significantly differ from when collecting all data in real time (as in our method). Clearly, without detailed accounting, it is easy to over- or under-estimate carbon or energy emissions in a number of situations. Stars indicate level of significance: * p < .05, ** p < .01, *** p < .001, **** p < .0001. Annotation provided via: https://github.com/webermarcolivier/statannot. # 6. Encouraging Efficiency and Mitigating Carbon Impacts: Immediate Mitigation Strategies With experiment-impact-tracker, we hope to ease the burden of standardized reporting. We have demonstrated differences in more detailed estimation strategies from the current status quo. In this Section, we examine how accurate reporting can be used to drive immediate mitigating strategies for energy consumption and carbon emissions. intensity), experiment length, and utilization factor. https://github.com/Breakend/experiment-impact- tracker/blob/master/scripts/get-rough-emissions-estimate 15 Henderson, Hu, Romoff, Brunskill, Jurafsky, and Pineau # 6.1 Energy Efficiency Leaderboards A body of recent work has emphasized making more computationally efficient models (Wu et al., 2019; Zhou et al., 2020; Reddi et al., 2020; Lu et al., 2018; Coleman et al., 2019; Jiang et al., 2019), yet another line of work has focused on the opposite: building larger models with more parameters to tackle more complex tasks (Amodei and Hernandez, 2018; Sutton, 2019). We suggest leaderboards which utilize carbon emissions and energy metrics to promote an informed balance of performance and efficiency. DawnBench (Wu et al., 2019), MLPerf (Reddi et al., 2020), and HULK (Zhou et al., 2020) have done this in terms of runtime and cost. Ethayarajh and Jurafsky (2020) have recently critiqued leaderboards for only optimizing for one particular metric. By optimizing for energy and carbon emissions directly in addition to target performance metrics, baseline implementations can converge to more efficient climate-friendly settings. This can also help spread information about the most energy and climate-friendly combinations of hardware, software, and algorithms such that new work can be built on top of these systems instead of more energy-hungry configurations.19 6.1.1 A Deep RL Energy Leaderboard To demonstrate how energy leaderboards can be used to disseminate information on energy efficiency, we create a Deep RL Energy Leaderboard.20 The website is generated using the same tool for creating HTML appendices described in Section 4. All information (except for algorithm performance on tasks) comes from the experiment-impact-tracker framework. 
We populate the leaderboard for two common RL benchmarking environments, PongNoFrameskip-v4 and BreakoutNoFrameskip-v4 (Bellemare et al., 2013; Brockman et al., 2016; Mnih et al., 2013), and four baseline algorithms, PPO (Schulman et al., 2017), A2C (Mnih et al., 2016), A2C with V-Traces (Espeholt et al., 2018; Dalton et al., 2019), and DQN (Mnih et al., 2013). The experimental details and results can also be found in Figure 5. We find that no algorithm is the energy efficiency winner across both environments, though the PPO implementation provided by Hill et al. (2018) attains a balance between efficiency and performance when using default settings across algorithms.

Figure 5: We evaluate A2C, PPO, DQN, and A2C+VTraces on PongNoFrameskip-v4 (left) and BreakoutNoFrameskip-v4 (right), two common evaluation environments included in OpenAI Gym. We train for only 5M timesteps, less than prior work, to encourage energy efficiency and evaluate for 25 episodes every 250k timesteps. We show the Average Return across all evaluations throughout training (giving some measure of both ability and speed of convergence of an algorithm) as compared to the total energy in kWh. Weighted rankings of Average Return per kWh place A2C+Vtrace first on Pong and PPO first on Breakout. Using PPO versus DQN can yield significant energy savings, while retaining performance on both environments (in the 5M samples regime). See Appendix F for more details and results in terms of asymptotic performance.

Example 5 To see how such a leaderboard might help save energy, consider a Deep RL class of 235 students.21 For a homework assignment, each student must run an algorithm 5 times on Pong. The class would save 888 kWh of energy by using PPO versus DQN, while achieving similar performance.22 This is roughly the same amount needed to power a US home for one month.23 We, thus, encourage the community to submit more data to the leaderboard to find even more energy efficient algorithms and configurations.

19. Something to note is that we do not compare carbon efficiency directly—instead focusing on energy specifically. Since running at different times of day and in different regions can affect carbon impacts, these may not have anything to do with the algorithm hardware-software stack and increase the number of confounds when comparing algorithms. While hardware is also immutable to some extent, there may still be information to be gained by finding combinations of efficient low-level optimizations for specific hardware. Hardware can also be held relatively constant by using the same machine for all experimental runs. If comparisons using carbon units are desired, a fixed carbon intensity factor should likely be chosen for approximate comparisons in a given region (rather than using live carbon intensity metrics). See, also, Appendix H.
20. https://breakend.github.io/RL-Energy-Leaderboard/reinforcement_learning_energy_leaderboard/index.html
21. See, for example, Stanford's CS 234.
22. These rankings may change with different code-bases and hyperparameters.
23. https://www.eia.gov/tools/faqs/faq.php?id=97&t=3
# 6.2 Running In Carbon-Friendly Regions

We noted in Section 4 that it is important to assess which energy grid experiments are run on due to the large differences in carbon emissions between energy grids. Figure 6 shows CO2eq intensities for an assortment of locations, cloud-provider regions, and energy production methods. We note that an immediate drop in carbon emission can be made by moving all training jobs to carbon-efficient energy grids. In particular, Quebec is the cleanest available cloud region to our knowledge. Running a job in Quebec would result in carbon emission 30x lower than running a job in Estonia (based on 2017 averages).

Example 6 To demonstrate this in practice, we run inference on two machine translation models 1000 times and measure energy usage. We extrapolate the amount of emissions and the difference between the two algorithms if run in different energy grids, seen in Figure 7. The absolute difference in emissions between the two models is fairly small (though significant) if run in Quebec (0.09 g CO2eq), yet the gap increases as one runs the jobs in less carbon-friendly regions (at 3.04 g CO2eq in Estonia).

We provide a script with our framework to show all cloud provider regions with emission statistics to make this decision-making process easier.24 We note that Lacoste et al. (2019) provide a website using partial information estimation to extrapolate carbon emissions based on cloud provider region, GPU type, and experiment length in hours. Their tool may also be used for estimating carbon emissions in cloud-based experiments ahead of time. We've also provided a non-exhaustive list of low emissions energy grids that contain cloud regions in Table 1.

For companies that train and deploy large models often, shifting these resources is especially important. ML training is not usually latency bound: companies can run training in cloud regions geographically far away since training models usually does not require round trip communication requirements. Contrary to some opinions,25 there is no need to eliminate computation-heavy models entirely, as shifting training resources to low carbon regions will immediately reduce carbon emissions with little impact to production systems. For companies seeking to hit climate change policy targets, promotion of carbon neutral regions and shifting of all machine learning systems to those regions would accelerate reaching targets significantly and reduce the amount of offset purchasing required to meet goals (thus saving resources).26 It is worth noting that some companies like Google already purchase offsets (Google, 2016), so it may be unclear why shifting resources is necessary. We provide an extended discussion on this in Appendix C. As a matter of total emissions reductions, running compute in carbon-friendly regions prevents emissions now, while offsets may not come into effect for several years. Moreover, continuing offset purchasing at current levels while shifting resources to green regions would result in a net-negative carbon footprint.

24. See: get-region-emissions-info script and lookup-cloud-region-info script.
25. https://www.theguardian.com/technology/2019/sep/17/tech-climate-change-luddites-data
26. See, for example, Amazon's goal: https://press.aboutamazon.com/news-releases/news-release-details/amazon-co-founds-climate-pledge-setting-goal-meet-paris
Power Grid | Cloud Regions | Carbon Intensity (gCO2eq/kWh)
Quebec, Canada | ca-central-1 (AWS), canadaeast (Azure), northamerica-northeast1 (GCP) | ∼30
West Norway | norwaywest (Azure) | ∼35
Ontario, Canada | canadacentral (Azure) | ∼45
France | eu-west-3 (AWS), francesouth (Azure), francecentral (Azure) | ∼56
Brazil (Central) | brazilsouth (Azure) | ∼106
Oregon, USA | us-west1 (GCP), us-west-2 (AWS), westus2 (Azure) | ∼127

Table 1: A non-exhaustive list of cloud regions in low carbon intensity energy grids (< 150 gCO2eq/kWh). All estimates pulled as yearly averages from https://www.electricitymap.org/map, except for Quebec, which utilizes methodology from https://piorkowski.ca/rev/2017/06/canadian-electricity-co2-intensities/, and Oregon, which uses data from https://www.eia.gov/electricity/state/oregon/.

Figure 6: Carbon Intensity (gCO2eq/kWh) of selected energy grid regions is shown from least carbon emissions (left) to most carbon emissions (right). Red/unshaded boxes indicate carbon intensities of cloud provider regions. Blue/shaded boxes indicate carbon intensities of various generation methods. Oil shale is the most carbon emitting method of energy production in the Figure. Estonia is powered mainly by oil shale and thus is close to it in carbon intensity. Similarly, Québec is mostly powered by hydroelectric methods and is close to it in carbon intensity. Cloud provider carbon intensities are based on the regional energy grid in which they are located. Thus, us-west-1, located in California, has the same carbon intensity as the state. See https://github.com/Breakend/experiment-impact-tracker/ for data sources of regional information. Energy source information from Krey et al. (2014); International Energy Agency (2015).

# 7. Discussion: Systemic Changes

We demonstrated several use cases for accounting which can drive immediate mitigation strategies. However, the question remains: how can we encourage systemic changes which lead to energy and carbon efficiency in ML systems?

# 7.1 Green Defaults for Common Platforms and Tools

Energy leaderboards help provide information on energy efficient configurations for the whole stack. However, to truly spread energy efficient configurations, underlying frameworks should by default use the most energy-efficient settings possible. This has been shown to be an effective way to drive pro-environmental behavior (Pichert and Katsikopoulos, 2008). For example, Nvidia apex provides easy mixed-precision computing as an add-on which yields
Figure 7: We use pre-trained En-Fr translation models downloaded from PyTorch Hub: a convolutional network (Gehring et al., 2017) and transformer (Ott et al., 2018). We generate 1000 random sequences between 3 and 50 words in length using the essential_generators Python package: https://pypi.org/project/essential-generators/. We repeat with 20 random seeds. Randomly generated sentences are likely to be difficult to translate, but this difficulty should not be biased in favor of either algorithm. [Left] We show the true difference in energy consumption. [Right] We show estimated kg CO2eq released if the experiment had been conducted in a number of increasingly carbon-intensive energy grids. Differences remain significant throughout, but the absolute difference increases as more carbon-intensive regions are assumed.

# 7.2 How Much Is Your Performance Gain Worth? Balancing Gains With Cost

While training jobs can easily be shifted to run in clean regions, there are often restrictions for inference-time use of machine learning models which prevent such a move. Many companies are deploying large machine learning models powered by GPUs for everyday services.28

Example 7 Production machine translation services can process 100B words per day (Turovsky, 2016): roughly 4.2 million times our experiment in Figure 7. If all translation traffic were in Estonia, 12,768 kg CO2eq (the carbon sequestered by 16.7 acres of forest in one year (Agency, 2008)) would be saved per day by using the more efficient model, yet if all traffic were in Québec, 378 kg CO2eq would be saved (the carbon sequestered by .5 acres of forest in one year (Agency, 2008)).

Considering the amounts of required compute, small differences in efficiency can scale to large emissions and energy impacts. These services are latency-bound at inference time and thus cannot mitigate carbon emissions by shifting to different regions. Instead, deploying energy-efficient models not only reduces carbon emissions but also benefits the companies by bringing the energy costs down. We encourage companies to consider weighing energy costs (both social and monetary) with the performance gains of a new model before deploying it.
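To spell out the scaling argument in Example 7, the following minimal sketch multiplies the per-experiment emissions gap from Figure 7 by the approximate number of times that experiment fits into a day of production traffic. All constants come from Example 7 and Figure 7 and are approximate.

```python
# Back-of-the-envelope version of Example 7: scale the per-experiment emissions gap
# between the two translation models up to ~100B translated words per day.
EXPERIMENTS_PER_DAY = 4.2e6  # 100B words/day is roughly 4.2M repetitions of the 1000-sentence test

# Per-experiment difference between the two models (g CO2eq), from Figure 7.
GAP_G_CO2EQ = {"Quebec": 0.09, "Estonia": 3.04}

for region, gap_g in GAP_G_CO2EQ.items():
    saved_kg_per_day = gap_g * EXPERIMENTS_PER_DAY / 1000.0
    print(f"{region}: ~{saved_kg_per_day:,.0f} kg CO2eq/day saved by the more efficient model")
```

Run as-is, this reproduces the 378 kg and 12,768 kg per-day figures quoted in Example 7.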
In the case of our translation experiment in Figure 7, the pre-trained convolutional model we use is significantly more energy hungry across runs than the transformer model we use. When deploying a new energy-hungry translation model, we ask companies to consider: is the BLEU score improvement really worth the energy cost of deploying it? Are there ways to route to different models to balance this trade-off? For example, suppose an energy-hungry model only improves performance on some subset of the data. Routing to this model only for that subset would maximize performance while minimizing the energy footprint.29 We note that considering such trade-offs is of increased importance for models aiming to reduce carbon emissions as described by Rolnick et al. (2019). Deploying a large deep learning model for, say, improving the energy efficiency of a building, is not worth it if the energy costs of the model outweigh the gains. We also leave an open question to economists to help assess the welfare benefits of gains on a particular machine learning metric (e.g., how much is a BLEU score improvement worth in a translation service?). This would allow the social welfare of the metric to be balanced against the social cost of carbon (Ricke et al., 2018) for deployment decisions.

27. https://github.com/NVIDIA/apex
28. See, for example, Google. https://azure.microsoft.com/en-us/blog/microsoft-makes-it-easier-to-build-popular-language-representation-model-bert-at-large-scale/
29. Efficient routing of traffic to regions has been considered before by Nguyen et al. (2012) and Berral et al. (2010). It may be worth considering efficient routing of traffic to particular models as well.

Similarly, it is important to consider other types of cost-benefit analyses. Perhaps the carbon impact of a long (energy-intensive) training run for a large model is worth it if it reduces the lifetime carbon footprint in production (for example, if the model does not require expensive fine-tuning procedures in the future). Understanding the trade-off between the lifetime deployment costs and training costs is important before moving on to extended training runs. As such, we also encourage reporting of both estimated training and deployment energy costs so future adopters have a more comprehensive picture when deciding which model to use. Central to all of these cost-benefit analyses is accurate accounting. Our tool provides one step in consistent and accurate accounting for such purposes.

# 7.3 Efficient Testing Environments

In Section 7.1 we discuss the adoption of green default configurations and Section 7.2 discusses cost-benefit analyses for deployments. Another consideration, particular to research and especially to RL, is the selection of the most efficient testing environments which assess the mechanism under test. For example, if an RL algorithm solves a particularly complex task in an interesting way, like solving a maze environment, is there a way to demonstrate the same phenomenon in a more efficient environment? Several works have developed efficient versions of RL environments which reduce run-times significantly. In particular, Dalton et al. (2019) improve the efficiency of Atari experiments by keeping resources on the GPU (and thus avoiding energy and time overheads from moving memory back and forth). Chevalier-Boisvert et al. (2018) develop a lightweight Grid World environment with efficient runtimes for low-overhead experiments.
An important cost-benefit question for researchers is whether the same point can be proven in a more efficient setting. # 7.4 Reproducibility A key aspect to our work is helping to promote reproducibility by aiding in consistent reporting of experimental details. We encourage all researchers to release code and models (when it is socially and ethically responsible to do so), to prevent further carbon emissions. Replicating results is an important, if not required, part of research. If replication resources are not available, then more energy and emissions must be spent to replicate results—in the case of extremely large models, the social cost of carbon may be equivalently large. Thus, we ask researchers to also consider energy and environmental impacts from replication efforts, when weighing model and code release. We note that there may very well be cases where safety makes this trade-off lean in the direction of withholding resources, but this is likely rare in most current research. For production machine learning systems, we encourage developers to release models and codebases internally within a company. This may encourage re-use rather than spending energy resources developing similar products. # 7.5 Standardized Reporting We suggest that all papers include standardized reporting of energy and carbon emissions. We also suggest adding a Carbon Impact Statement at the end of papers (just like ours below) which estimates the carbon emissions of the paper. This can be reported in a dollar 23 # Henderson, Hu, Romoff, Brunskill, Jurafsky, and Pineau amount via the country-specific social cost of carbon (Ricke et al., 2018). We provide a script30 to parse logs from the experiment-impact-tracker framework and generate such a statement automatically. We suggest this to spread awareness and bring such considerations to the forefront. We encourage this statement to include all emissions from experimentation to build a more realistic picture of total resources spent. We also emphasize that research, even when compute intensive, is immensely important for progress. It is unknown what sequence of papers may inspire a breakthrough (Stanley and Lehman, 2015) which would reduce emissions by more than any suggestion here. While emissions should be minimized when possible, we suggest that impact statements be only used for awareness. This is especially true since access to clean energy grids or hardware may be limited for some in the community. We also suggest that, when developing features which visualize compute intensity for cloud or internal workloads, developers consider providing built-in tools to visualize energy usage and carbon emissions. For example, the Colab Research Environment shows RAM and Disk capacity,31 but could also show and provide access to these other metrics more easily. Providing similar informational labels (Byerly et al., 2018) within internal tooling could mitigate some energy and carbon impacts within companies. # 7.6 Badging Informational labeling has had a long history of being used in public policy (Banerjee and Solomon, 2003). In the USA, the “Energy Star” label has been used to guide customers to eco-friendly products. More recently, “badges” rewarded by the Psychological Science journal were shown to be effective, with a jump from 3% of articles reporting open data to 39% one year later. 
ACM has introduced similar reproducibility badges.32 With consistent reporting of carbon and energy metrics, climate friendly research badges can be introduced by conferences to recognize any paper that demonstrates a significant effort to mitigate its impacts. For example, a compute intensive paper, when showing evidence of explicitly running resources in a clean region can be rewarded with such a badge. Another example badge can be awarded to papers that create energy-friendly algorithms with similar performance as the state-of-the-art33. The goal of these badges is to draw further attention to efficient versions of state-of-the-art systems and to encourage mitigation efforts while, again, not punishing compute-intensive experiments. Of course this may not apply to conferences such as SysML which often focus on making models more efficient, but rather as a motivational tool for other venues where efficiency may not be in focus. # 7.7 Limitations and Opportunities for Extensions The experiment-impact-tracker framework abstracts away many of the previously mentioned difficulties in estimating carbon and energy impacts: it handles routing to appropriate tools for collecting information, aggregates information across tools to handle carbon calculations, 30. https://github.com/Breakend/experiment-impact-tracker/blob/master/scripts/generate- carbon-impact-statement 31. https://colab.research.google.com/ 32. https://www.acm.org/publications/policies/artifact-review-badging 33. See, for example, Clark et al. (2020) which creates a more efficient version of text encoder pre-training. 24 # Towards the Systematic Reporting of the Energy and Carbon Footprints of ML finds carbon intensity information automatically, and corrects for multiple processes on one machine. Yet, a few other challenges may be hidden by using the framework which remain difficult to circumvent. As Khan et al. (2018) discuss, and we encounter ourselves, poor driver support makes tracking energy difficult. Not every chipset supports RAPL, nor does every Linux kernel. Intel also does not provide first party supported python libraries for access to measurements. nvidia-smi per-process measurements in docker containers are not supported.34 A body of work has also looked at improving estimates of energy usage from RAPL by fitting a regression model to real energy usage patterns (Povoa et al., 2019; Kavanagh and Djemame, 2019; Ghosh et al., 2013; Song et al., 2013). The Slurm workload manager provides an energy accounting plugin,35 but requires administrator access to add. For those without access to Slurm, Intel’s RAPL supports access to measurements through three mechanisms, but only one of these (the powercap interface only available on some systems) does not require root access (see more discussion by Khan et al. (2018)). To promote widespread reporting, we avoid any tool which requires administrative access or would not be accessible on most Linux systems. Providing better supported tools for user-level access to power metrics would make it possible to more robustly measure energy usage. Aggregating metrics and handling the intricacies of these downstream tools requires time and knowledge. We try to abstract as much of these challenges away in the experiment-impact-tracker, though some driver-related issues require driver developer support. However, these issues make it difficult to support every on-premises or cloud machine. 
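As an illustration of the powercap route mentioned above (the RAPL interface that does not require root access on some systems), the following is a minimal sketch that samples the CPU package energy counter directly from sysfs. The path and the wraparound handling are simplifying assumptions and this is not the framework's internal implementation; the counter file can sit at a slightly different location depending on the kernel and chipset.

```python
# Minimal sketch: sample CPU package energy via the RAPL powercap sysfs interface.
# The path below is typical but varies by machine; the counter also wraps around,
# which a real tool (such as the framework itself) must handle explicitly.
import time

RAPL_ENERGY_FILE = "/sys/class/powercap/intel-rapl:0/energy_uj"  # microjoules, package 0

def read_energy_uj() -> int:
    with open(RAPL_ENERGY_FILE) as f:
        return int(f.read().strip())

before = read_energy_uj()
time.sleep(5)  # stand-in for the workload being measured
after = read_energy_uj()

joules = max(after - before, 0) / 1e6  # naive guard against counter wraparound
print(f"~{joules:.2f} J drawn by the CPU package over the interval")
```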
As such, we currently only support instances which have Intel RAPL or PowerGadget capabilities for Mac OS and Linux. We also note that carbon intensities for machines in cloud data centers may not reflect the regional carbon intensities. Some providers buy clean energy directly for some data centers, changing the realtime energy mix for that particular data center. We were unable to find any information regarding realtime energy mixes in such cases and thus could not account for these scenarios. If providers exposed realtime APIs for such information this would help in generating more accurate estimates. Moreover, customized hardware in cloud provider regions does not always provide energy accounting mechanisms or interfaces. If cloud providers supported libraries for custom hardware, this could be used for more detailed accounting in a wider range of cloud-based compute scenarios. We further discuss other sources of error and issues arising from these difficulties in Appendix G. # 8. Concluding Remarks and Recommendations We have shown how the experiment-impact-tracker and associated tools can help ease the burden of consistent accounting and reporting of energy, compute, and carbon metrics; we encourage contribution to help expand the framework. We hope the Deep RL Energy Leaderboard helps spread information on energy efficient algorithms and encourages research in efficiency. While we focus on compute impacts of machine learning production and research, a plethora of other work considers costs of transportation for conferences (Holden et al., 2017; Spinellis and Louridas, 2013; Bossdorf et al., 2010) and compute hardware 34. https://github.com/NVIDIA/nvidia-docker/issues/179#issuecomment-242150861 35. https://slurm.schedmd.com/acct_gather_energy_plugins.html 25 Henderson, Hu, Romoff, Brunskill, Jurafsky, and Pineau manufacturing (Venkatesan, 2015). We encourage researchers and companies to consider these other sources of carbon impacts as well. Finally, we recap several points that we have highlighted in mitigating emissions and supporting consistent accountability. 26 # Towards the Systematic Reporting of the Energy and Carbon Footprints of ML What can machine learning researchers do? Run cloud jobs in low carbon regions only (see Section 6.2). • Report metrics as we do here, make energy-efficient configurations more accessible by reporting these results (see Section 7.5). Work on energy-efficient systems, create energy leaderboards (see Section 6). • Release code and models whenever safe to do so (see Section 7.4). • Integrate energy efficient configurations as defaults in baseline implementations (see Section 7.1). Encourage climate-friendly initiatives at conferences (see Sections 7.6 and 7.5). What can industry machine learning developers and framework maintainers do? • Move training jobs to low carbon regions immediately. Make default launch configura- tions and documentation point to low carbon regions (see Section 6.2). Provide more robust tooling for energy tracking and carbon intensities (see Section 7.7). • Integrate energy efficient operations as default in frameworks (see Section 7.1). • Release code and models (even just internally in the case of production systems) whenever safe to do so (see Section 7.4). • Consider energy-based costs versus benefits of deploying new models (see Section 7.2). • Report model-related energy metrics (see Section 7.5). 
We hope that regardless of which tool is used to account for carbon and energy emissions, the insights we provide here will help promote responsible machine learning research and practices. # Carbon Impact Statement This work contributed 8.021 kg of CO2eq to the atmosphere and used 24.344 kWh of electricity, having a USA-specific social cost of carbon of $0.38 ($0.00, $0.95). Carbon accounting informa- tion located at: https://breakend.github.io/ClimateChangeFromMachineLearningResearch/ measuring_and_mitigating_energy_and_carbon_footprints_in_machine_learning/ and https://breakend.github.io/RL-Energy-Leaderboard/reinforcement_learning_energy_ leaderboard/index.html. The social cost of carbon uses models from Ricke et al. (2018). This statement and carbon emissions information was generated using experiment-impact- tracker described in this paper. 27 Henderson, Hu, Romoff, Brunskill, Jurafsky, and Pineau # References US Environmental Protection Agency. Greenhouse gas equivalencies calculator, 2008. URL https://www.epa.gov/energy/greenhouse-gas-equivalencies-calculator. Judith I Ajani, Heather Keith, Margaret Blakers, Brendan G Mackey, and Helen P King. Comprehensive carbon stock and flow accounting: a national framework to support climate change mitigation policy. Ecological Economics, 89:61–72, 2013. Dario Amodei and Danny Hernandez. AI and Compute. https://blog.openai.com/openai- five/, 2018. Jane Andrew and Corinne Cortese. Accounting for climate change and the self-regulation of carbon disclosures. In Accounting Forum, volume 35, pages 130–138. Taylor & Francis, 2011. Yehia Arafa, Ammar ElWazir, Abdelrahman ElKanishy, Youssef Aly, Ayatelrahman Elsayed, Abdel-Hameed Badawy, Gopinath Chennupati, Stephan Eidenbenz, and Nandakishore Santhi. Verified instruction-level energy consumption measurement for nvidia gpus. In Proceedings of the 17th ACM International Conference on Computing Frontiers, pages 60–70, 2020. Mahmoud ("Mido") Assran, Joshua Romoff, Nicolas Ballas, Joelle Pineau, and Mike Rabbat. Gossip-based actor-learner architectures for deep reinforcement learning. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett, editors, Advances in Neural Information Processing Systems 32, pages 13299–13309. Curran Associates, Inc., 2019. Miguel F. Astudillo and Hessam AzariJafari. Estimating the global warming emissions of the LCAXVII conference: connecting flights matter. The International Journal of Life Cycle Assessment, 23(7):1512–1516, Jul 2018. ISSN 1614-7502. Abhijit Banerjee and Barry D Solomon. Eco-labeling for energy efficiency and sustainability: a meta-evaluation of us programs. Energy policy, 31(2):109–123, 2003. L. A. Barroso, U. Hölzle, P. Ranganathan, and M. Martonosi. The datacenter as a computer: Designing warehouse-scale machines. Synthesis Lectures on Computer Architecture, 2018. Valentin Bellassen and Nicolas Stephan. Accounting for Carbon. Cambridge University Press, 2015. Marc G Bellemare, Yavar Naddaf, Joel Veness, and Michael Bowling. The Arcade Learning Environment: An Evaluation Platform for General Agents. Journal of Artificial Intelligence Research, 47:253–279, 2013. Josep Ll. Berral, Íñigo Goiri, Ramón Nou, Ferran Julià, Jordi Guitart, Ricard Gavaldà, and Jordi Torres. Towards energy-aware scheduling in data centers using machine learning. In Proceedings of the 1st International Conference on Energy-Efficient Computing and Networking, e-Energy ’10, page 215–224, New York, NY, USA, 2010. Association for Computing Machinery. 
ISBN 9781450300421. 28 Towards the Systematic Reporting of the Energy and Carbon Footprints of ML Thomas Boquet, Laure Delisle, Denis Kochetkov, Nathan Schucher, Parmida Atighehchian, Boris Oreshkin, and Julien Cornebise. Decovac: Design of experiments with controlled variability components. arXiv preprint arXiv:1909.09859, 2019. Oliver Bossdorf, Madalin Parepa, and Markus Fischer. Climate-neutral ecology conferences: just do it! Trends in Ecology & Evolution, 25(2):61, 2010. Greg Brockman, Vicki Cheung, Ludwig Pettersson, Jonas Schneider, John Schulman, Jie Tang, and Wojciech Zaremba. OpenAI Gym, 2016. Hilary Byerly, Andrew Balmford, Paul J Ferraro, Courtney Hammond Wagner, Elizabeth Palchak, Stephen Polasky, Taylor H Ricketts, Aaron J Schwartz, and Brendan Fisher. Nudging pro-environmental behavior: evidence and opportunities. Frontiers in Ecology and the Environment, 16(3):159–168, 2018. Alfredo Canziani, Adam Paszke, and Eugenio Culurciello. An analysis of deep neural network models for practical applications. arXiv preprint arXiv:1605.07678, 2016. Ping Chao, Chao-Yang Kao, Yu-Shan Ruan, Chien-Hsiang Huang, and Youn-Long Lin. In Proceedings of the IEEE International Hardnet: A low memory traffic network. Conference on Computer Vision, pages 3552–3561, 2019. Bo Chen and Jeffrey Gilbert. Introducing the CVPR 2018 on-device visual intelligence challenge. https://ai.googleblog.com/2018/04/introducing-cvpr-2018-on-device- visual.html, 2018. Yu-Hsin Chen, Tien-Ju Yang, Joel Emer, and Vivienne Sze. Understanding the limitations of existing energy-efficient design approaches for deep neural networks. In Proceedings of the 1st SysML Conference, 2018. Maxime Chevalier-Boisvert, Lucas Willems, and Suman Pal. Minimalistic Gridworld Envi- ronment for OpenAI Gym. https://github.com/maximecb/gym-minigrid, 2018. Kevin Clark, Minh-Thang Luong, Quoc V. Le, and Christopher D. Manning. {ELECTRA}: Pre-training text encoders as discriminators rather than generators. In International Conference on Learning Representations, 2020. Cédric Colas, Olivier Sigaud, and Pierre-Yves Oudeyer. How many random seeds? sta- arXiv preprint tistical power analysis in deep reinforcement learning experiments. arXiv:1806.08295, 2018. Cody Coleman, Daniel Kang, Deepak Narayanan, Luigi Nardi, Tian Zhao, Jian Zhang, Peter Bailis, Kunle Olukotun, Chris Ré, and Matei Zaharia. Analysis of DAWNBench, a Time-to-Accuracy Machine Learning Performance Benchmark. SIGOPS Oper. Syst. Rev., 53(1):14–25, July 2019. ISSN 0163-5980. Julie Cotter, Muftah Najah, and Shihui Sophie Wang. Standardized reporting of climate change information in australia. Sustainability accounting, management and policy journal, 2(2):294–321, 2011. 29 Henderson, Hu, Romoff, Brunskill, Jurafsky, and Pineau Thomas J Crowley. Causes of climate change over the past 1000 years. Science, 289(5477): 270–277, 2000. Steven Dalton, Iuri Frosio, and Michael Garland. GPU-Accelerated Atari Emulation for Reinforcement Learning, 2019. Howard David, Eugene Gorbatov, Ulf R Hanebutte, Rahul Khanna, and Christian Le. RAPL: memory power estimation and capping. In 2010 ACM/IEEE International Symposium on Low-Power Electronics and Design (ISLPED), pages 189–194. IEEE, 2010. Miyuru Dayarathna, Yonggang Wen, and Rui Fan. Data center energy consumption modeling: A survey. IEEE Communications Surveys & Tutorials, 18(1):732–794, 2015. Spencer Desrochers, Chad Paradis, and Vincent M Weaver. A validation of dram rapl power measurements. 
In Proceedings of the Second International Symposium on Memory Systems, pages 455–470, 2016. Lasse Espeholt, Hubert Soyer, Remi Munos, Karen Simonyan, Volodymyr Mnih, Tom Ward, Yotam Doron, Vlad Firoiu, Tim Harley, Iain Dunning, et al. IMPALA: Scalable Dis- tributed Deep-RL with Importance Weighted Actor-Learner Architectures. In International Conference on Machine Learning, pages 1406–1415, 2018. Kawin Ethayarajh and Dan Jurafsky. Utility is in the eye of the user: A critique of nlp leaderboards. arXiv preprint arXiv:2009.13888, 2020. David Gefen and Detmar W Straub. The relative importance of perceived ease of use in is adoption: A study of e-commerce adoption. Journal of the association for Information Systems, 1(1):8, 2000. Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, and Yann N Dauphin. Convolu- tional sequence to sequence learning. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pages 1243–1252. JMLR. org, 2017. Sayan Ghosh, Sunita Chandrasekaran, and Barbara Chapman. Statistical modeling of power/energy of scientific kernels on a multi-gpu system. In 2013 International Green Computing Conference Proceedings, pages 1–6. IEEE, 2013. Google. Google’s Green PPAs: What, How, and Why. https://static.googleusercontent. com/media/www.google.com/en//green/pdfs/renewable-energy.pdf, 2013. Google. Achieving Our 100% Renewable Energy Purchasing Goal and Going Be- https://static.googleusercontent.com/media/www.google.com/en//green/ yond. pdf/achieving-100-renewable-energy-purchasing-goal.pdf, 2016. Odd Erik Gundersen and Sigbjørn Kjensmo. State of the art: Reproducibility in artificial intelligence. In Thirty-Second AAAI Conference on Artificial Intelligence, 2018. Leor Hackel and Gregg Sparkman. Evaluating the climate impact of psychological science: Costs and opportunities. Affective Seminar, 2018. URL https://osf.io/dg5ap/?show= view. 30 Towards the Systematic Reporting of the Energy and Carbon Footprints of ML Peter Henderson and Emma Brunskill. Distilling information from a flood: A possibility for the use of meta-analysis and systematic review in machine learning research. In Critiquing and Correcting Trends in Machine Learning Workshop (CRACT) at NeurIPS, 2018. Peter Henderson, Riashat Islam, Philip Bachman, Joelle Pineau, Doina Precup, and David Meger. Deep reinforcement learning that matters. In Thirty-Second AAAI Conference on Artificial Intelligence, 2018. Ashley Hill, Antonin Raffin, Maximilian Ernestus, Adam Gleave, Anssi Kanervisto, Rene Traore, Prafulla Dhariwal, Christopher Hesse, Oleg Klimov, Alex Nichol, Matthias Plappert, Alec Radford, John Schulman, Szymon Sidor, and Yuhuai Wu. Stable baselines. https: //github.com/hill-a/stable-baselines, 2018. Matthew H Holden, Nathalie Butt, Alienor Chauvenet, Michaela Plein, Martin Stringer, and Iadine Chadès. Academic conferences urgently need environmental policies. Nature ecology & evolution, 2017. Nicolas Houy. Rational mining limits bitcoin emissions. Nature Climate Change, 9(9):655–655, 2019. Andrew G Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, and Hartwig Adam. Mobilenets: Efficient convolutional neural networks for mobile vision applications. arXiv preprint arXiv:1704.04861, 2017. Gao Huang, Zhuang Liu, Laurens Van Der Maaten, and Kilian Q Weinberger. Densely connected convolutional networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 4700–4708, 2017. 
Forrest N Iandola, Song Han, Matthew W Moskewicz, Khalid Ashraf, William J Dally, and Kurt Keutzer. SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and< 0.5 MB model size. arXiv preprint arXiv:1602.07360, 2016. International Energy Agency. CO2 Emissions from Fuel Combustion. 2015. IPCC. Climate Change 2014: Mitigation of Climate Change: Working Group III Contribution to the IPCC Fifth Assessment Report. Cambridge University Press, 2015. IPCC. Global Warming of 1.5 °C. 2018. Yunho Jeon and Junmo Kim. Constructing fast network through deconstruction of convolution. In Advances in Neural Information Processing Systems, pages 5951–5961, 2018. Angela H. Jiang, Daniel L. K. Wong, Giulio Zhou, David G. Andersen, Jeffrey Dean, Gregory R. Ganger, Gauri Joshi, Michael Kaminksy, Michael Kozuch, Zachary C. Lipton, and Padmanabhan Pillai. Accelerating Deep Learning by Focusing on the Biggest Losers. arXiv e-prints, art. arXiv:1910.00762, Oct 2019. Alex K Jones, Liang Liao, William O Collinge, Haifeng Xu, Laura A Schaefer, Amy E Landis, and Melissa M Bilec. Green computing: A life cycle perspective. In 2013 International Green Computing Conference Proceedings, pages 1–6. IEEE, 2013. 31 Henderson, Hu, Romoff, Brunskill, Jurafsky, and Pineau Richard Kavanagh and Karim Djemame. Rapid and accurate energy models through calibration with ipmi and rapl. Concurrency and Computation: Practice and Experience, 31(13):e5124, 2019. Kashif Nizam Khan, Mikael Hirki, Tapio Niemi, Jukka K. Nurminen, and Zhonghong Ou. RAPL in Action: Experiences in Using RAPL for Power Measurements. ACM Trans. Model. Perform. Eval. Comput. Syst., 3(2):9:1–9:26, March 2018. ISSN 2376-3639. Max J Krause and Thabet Tolaymat. Quantification of energy and carbon costs for mining cryptocurrencies. Nature Sustainability, 1(11):711, 2018. V. Krey, O. Masera, G. Blanford, T. Bruckner, R. Cooke, K. Fisher-Vanden, H. Haberl, E. Her- twich, E. Kriegler, D. Mueller, S. Paltsev, L. Price, S. Schlömer, D. Ürge-Vorsatz, D. van Vuuren, and T. Zwickel. Annex 2 - metrics and methodology. In Climate Change 2014: Mitigation of Climate Change. IPCC Working Group III Contribution to AR5. Cambridge University Press, November 2014. URL http://pure.iiasa.ac.at/id/eprint/11109/. Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. ImageNet Classification with Deep Convolutional Neural Networks. In F. Pereira, C. J. C. Burges, L. Bottou, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 25, pages 1097–1105. Curran Associates, Inc., 2012. Alexandre Lacoste, Alexandra Luccioni, Victor Schmidt, and Thomas Dandres. Quantifying the carbon emissions of machine learning. arXiv preprint arXiv:1910.09700, 2019. Jacob LaRiviere, Gavin Mccormick, and Sho Kawano. How better accounting can more cheaply reduce carbon emissions. Policy Brief, 4, 2016. Yung-Hsiang Lu, Alexander C Berg, and Yiran Chen. Low-power image recognition challenge. AI Magazine, 39(2):87–88, 2018. Jens Malmodin, Pernilla Bergmark, and Dag Lundén. The future carbon footprint of the ict and e&m sectors. on Information and Communication Technologies, page 12, 2013. Eric Masanet, Arman Shehabi, Nuoa Lei, Harald Vranken, Jonathan Koomey, and Jens Malmodin. Implausible projections overestimate near-term bitcoin co2 emissions. Nature Climate Change, 9(9):653–654, 2019. Stephen Merity. Single Headed Attention RNN: Stop Thinking With Your Head. arXiv preprint arXiv:1911.11423, 2019. 
Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Alex Graves, Ioannis Antonoglou, Daan Wierstra, and Martin Riedmiller. Playing Atari With Deep Reinforcement Learning. In NIPS Deep Learning Workshop. 2013. Volodymyr Mnih, Adria Puigdomenech Badia, Mehdi Mirza, Alex Graves, Timothy Lillicrap, Tim Harley, David Silver, and Koray Kavukcuoglu. Asynchronous methods for deep reinforcement learning. In International conference on machine learning, pages 1928–1937, 2016. 32 Towards the Systematic Reporting of the Energy and Carbon Footprints of ML Camilo Mora, Randi L Rollins, Katie Taladay, Michael B Kantar, Mason K Chock, Mio Shimada, and Erik C Franklin. Bitcoin emissions alone could push global warming above 2 °C. Nature Climate Change, 8(11):931, 2018. Richard G Newell and Juha Siikamäki. Nudging energy efficiency behavior: The role of information labels. Journal of the Association of Environmental and Resource Economists, 1(4):555–598, 2014. Kim Khoa Nguyen, Mohamed Cheriet, Mathieu Lemay, Victor Reijs, Andrew Mackarel, and Alin Pastrama. Environmental-aware virtual data center network. Computer Networks, 2012. Myle Ott, Sergey Edunov, David Grangier, and Michael Auli. Scaling neural machine translation. In Proceedings of the Third Conference on Machine Translation: Research Papers, Brussels, Belgium, 2018. Association for Computational Linguistics. Daniel Pichert and Konstantinos V. Katsikopoulos. Green defaults: Information presentation and pro-environmental behaviour. Journal of Environmental Psychology, 28(1):63 – 73, 2008. ISSN 0272-4944. doi: https://doi.org/10.1016/j.jenvp.2007.09.004. URL http: //www.sciencedirect.com/science/article/pii/S0272494407000758. Lucas Venezian Povoa, Cesar Marcondes, and Hermes Senger. Modeling energy consumption based on resource utilization. In International Conference on Computational Science and Its Applications, pages 225–240. Springer, 2019. Zheng Qin, Zhaoning Zhang, Dongsheng Li, Yiming Zhang, and Yuxing Peng. Diagonalwise In 2018 Refactorization: An Efficient Training Method for Depthwise Convolutions. International Joint Conference on Neural Networks (IJCNN), pages 1–8. IEEE, 2018. Celine Ramstein, Goran Dominioni, Sanaz Ettehad, Long Lam, Maurice Quant, Jialiang Zhang, Louis Mark, Sam Nierop, Tom Berg, Paige Leuschner, et al. State and trends of carbon pricing 2019, 2019. Vijay Janapa Reddi, Christine Cheng, David Kanter, Peter Mattson, Guenther Schmuelling, Carole-Jean Wu, Brian Anderson, Maximilien Breughe, Mark Charlebois, William Chou, In 2020 ACM/IEEE 47th Annual International et al. Mlperf inference benchmark. Symposium on Computer Architecture (ISCA), pages 446–459. IEEE, 2020. Nils Reimers and Iryna Gurevych. Reporting Score Distributions Makes a Difference: Performance Study of LSTM-networks for Sequence Tagging. In EMNLP, 2017. Katharine Ricke, Laurent Drouet, Ken Caldeira, and Massimo Tavoni. Country-level social cost of carbon. Nature Climate Change, 2018. Giampaolo Rodola. Psutil package: a cross-platform library for retrieving information on running processes and system utilization, 2016. David Rolnick, Priya L. Donti, Lynn H. Kaack, Kelly Kochanski, Alexandre Lacoste, Kris Sankaran, Andrew Slavin Ross, Nikola Milojevic-Dupont, Natasha Jaques, Anna Waldman- Brown, Alexandra Luccioni, Tegan Maharaj, Evan D. Sherwin, S. Karthik Mukkavilli, 33 Henderson, Hu, Romoff, Brunskill, Jurafsky, and Pineau Konrad P. Kording, Carla Gomes, Andrew Y. Ng, Demis Hassabis, John C. Platt, Felix Creutzig, Jennifer Chayes, and Yoshua Bengio. 
Tackling Climate Change with Machine Learning. arXiv e-prints, art. arXiv:1906.05433, Jun 2019. Mark Sandler, Andrew Howard, Menglong Zhu, Andrey Zhmoginov, and Liang-Chieh Chen. In Proceedings of the IEEE Mobilenetv2: Inverted residuals and linear bottlenecks. Conference on Computer Vision and Pattern Recognition, pages 4510–4520, 2018. John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347, 2017. Roy Schwartz, Jesse Dodge, Noah A. Smith, and Oren Etzioni. Green AI. arXiv e-prints, art. arXiv:1907.10597, Jul 2019. Satyabrata Sen, Neena Imam, and Chung-Hsing Hsu. Quality assessment of gpu power profiling mechanisms. In 2018 IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW), pages 702–711. IEEE, 2018. Sam Shead. AI Researchers Left Disappointed As NIPS Sells Out In Under 12 Min- URL https://www.forbes.com/sites/samshead/2018/ utes. 09/05/ai-researchers-left-disappointed-as-nips-sells-out-in-under-12- minutes/#7dda67fc20e9. Forbes, Sep 2018. Yoav Shoham, Erik Brynjolfsson, Jack Clark, John Etchemendy, Barbara Grosz, Terah Lyons, James Manyika, Saurabh Mishra, and Juan Carlos Niebles. The ai index 2019 annual report. AI Index Steering Committee, Human-Centered AI Initiative, Stanford University., 2019. Szymon Sidor and John Schulman. OpenAI Baselines: DQN (Blogpost). 2017. Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014. Frank Soboczenski, Michael D Himes, Molly D O’Beirne, Simone Zorzan, Atilim Gunes Baydin, Adam D Cobb, Yarin Gal, Daniel Angerhausen, Massimo Mascaro, Giada N Arney, et al. Bayesian deep learning for exoplanet atmospheric retrieval. arXiv preprint arXiv:1811.03390, 2018. Susan Solomon, Gian-Kasper Plattner, Reto Knutti, and Pierre Friedlingstein. Irreversible climate change due to carbon dioxide emissions. Proceedings of the national academy of sciences, 106(6):1704–1709, 2009. Shuaiwen Leon Song, Kevin Barker, and Darren Kerbyson. Unified performance and power modeling of scientific workloads. In Proceedings of the 1st International Workshop on Energy Efficient Supercomputing, page 4. ACM, 2013. Diomidis Spinellis and Panos Louridas. The carbon footprint of conference papers. PloS one, 8(6):e66508, 2013. 34 Towards the Systematic Reporting of the Energy and Carbon Footprints of ML Kenneth O Stanley and Joel Lehman. Why greatness cannot be planned: The myth of the objective. Springer, 2015. Kristin Stechemesser and Edeltraud Guenther. Carbon accounting: a systematic literature review. Journal of Cleaner Production, 36:17–38, 2012. Christian Stoll, Lena Klaaßen, and Ulrich Gallersdörfer. The carbon footprint of bitcoin. Joule, 2019. Emma Strubell, Ananya Ganesh, and Andrew McCallum. Energy and Policy Considerations for Deep Learning in NLP. arXiv preprint arXiv:1906.02243, 2019. Vladimir Sukhoy and Alexander Stoytchev. Eliminating the Variability of Cross-Validation Results with LIBLINEAR due to Randomization and Parallelization. 2019. Shyam Sundar, Ashish Kumar Mishra, and Ram Naresh. Modeling the impact of media awareness programs on mitigation of carbon dioxide emitted from automobiles. Modeling Earth Systems and Environment, 4(1):349–357, 2018. Richard Sutton. The bitter lesson. Incomplete Ideas (blog), March, 13, 2019. 
Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. Going deeper with convolu- tions. In Computer Vision and Pattern Recognition (CVPR), 2015. Samuel Tang and David Demeritt. Climate change and mandatory carbon reporting: Impacts on business process and performance. Business Strategy and the Environment, 27(4): 437–455, 2018. Richard SJ Tol. The social cost of carbon. Annu. Rev. Resour. Econ., 3(1):419–443, 2011. Bo Tranberg, Olivier Corradi, Bruno Lajoie, Thomas Gibon, Iain Staffell, and Gorm Bruun Andresen. Real-time carbon accounting method for the european electricity markets. Energy Strategy Reviews, 26:100367, 2019. Barak Turovsky. Ten years of Google Translate. Google Official Blog, 2016. U.S. Environment Protection of https://www.epa.gov/sites/production/files/2016- Agency. Social Cost Carbon. 12/documents/social_cost_of_carbon_fact_sheet.pdf, 2013. Chandramouli Venkatesan. Comparative Carbon Footprint Assessment of the Manufacturing and Use Phases of Two Generations of AMD Accelerated Processing Units, 2015. David Weisbach and Cass R Sunstein. Climate change and discounting the future: a guide for the perplexed. Yale L. & Pol’y Rev., 27:433, 2008. Felix Wu, Angela Fan, Alexei Baevski, Yann Dauphin, and Michael Auli. Pay Less Attention with Lightweight and Dynamic Convolutions. In International Conference on Learning Representations, 2019. 35 Henderson, Hu, Romoff, Brunskill, Jurafsky, and Pineau Michel Zade, Jonas Myklebost, Peter Tzscheutschler, and Ulrich Wagner. Is bitcoin the only problem? a scenario model for the power demand of blockchains. Frontiers in Energy Research, 7, 2019. Sergey Zagoruyko and Nikos Komodakis. Wide residual networks. arXiv preprint arXiv:1605.07146, 2016. Xiangyu Zhang, Xinyu Zhou, Mengxiao Lin, and Jian Sun. Shufflenet: An extremely efficient convolutional neural network for mobile devices. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 6848–6856, 2018. Xiyou Zhou, Zhiyu Chen, Xiaoyong Jin, and William Yang Wang. Hulk: An energy efficiency benchmark platform for responsible natural language processing. arXiv preprint arXiv:2002.05829, 2020. # Appendix A. Conference Travel Prior work has also examined conference travel for various fields as a major source of impact Spinellis and Louridas (2013); Astudillo and AzariJafari (2018); Hackel and Sparkman (2018). For example, Spinellis and Louridas (2013) found that the CO2eqemissions from travel per conference participant was about 801 kg CO2eq, Astudillo and AzariJafari (2018) estimated around 883 kg CO2eq emissions per participant, and Hackel and Sparkman (2018) estimate around 910 kg of CO2eq emissions per participant. Interestingly, these separate papers all align around the same carbon emissions numbers per conference participant. Using this and ML conference participant statistics we can gain some (very) rough insight into the carbon emissions caused by conference travel (not including food purchases, accommodations, and travel within the conference city). Conference participation has grown particularly popular in ML research, attracting participants from industry and academia. In 2018 the Neural Information Processing Systems (NeurIPS) conference sold out registrations in 12 minutes (Shead, 2018). 
In 2019, according to the AI Index Report 2019 (Shoham et al., 2019), conferences had the following attendance: CVPR (9,227); IJCAI (3,015); AAAI (3,227); NeurIPS (13,500); IROS (3,509); ICML (6,481); ICLR (2,720); AAMAS (701); ICAPS (283); UAI (334). The larger conferences also showed continued growth: NeurIPS showed a year-over-year growth of 41% from 2018 to 2019. Given only these conferences and their attendances in 2019, and the lower 801 kg CO2eq average emissions estimate per participant (Spinellis and Louridas, 2013), this adds up to roughly 34,440,597 kg CO2eq emitted in 2019 from ML-related conferences (not considering co-location and many other factors).

# Appendix B. NeurIPS Sampling on Metric Reporting

We randomly sampled 100 NeurIPS papers from the 2019 proceedings. Of these papers, we found that 1 measured energy in some way, 45 measured runtime in some way, 46 provided the hardware used, 17 provided some measure of computational complexity (e.g., compute-time, FPOs, parameters), and 0 provided carbon metrics. We sampled from the NeurIPS proceedings page: https://papers.nips.cc/book/advances-in-neural-information-processing-systems-32-2019. We first automatically check for key words (below) related to energy, compute, and carbon. We then examined the context of the word to classify it as relating to hardware details (e.g., Nvidia Titan X GPU), computational efficiency (e.g., FPOs, MAdds, GPU-hours), runtime (e.g., the experiment ran for 8 hours), energy (e.g., a plot of performance over Joules or Watts), or carbon (e.g., we estimate 10 kg CO2eq were emitted). We also manually validate papers for similar metrics that didn't appear in the keyword search. If a paper did not contain experiments we removed it and randomly redrew a new paper. In many cases, metrics are only provided for some subset of experiments (or for particular ablation experiments). We nonetheless count these as reporting the metric. Where a neural network diagram or architecture description was provided, we did not consider this to be reporting a compute metric.

compute_terms = ["flop", "fpo", "pflop", "tflops", "tflop", "parameters", "params", "pflops", "flops", "fpos", "gpu-hours", "cpu-hours", "cpu-time", "gpu-time", "multiply-add", "madd"]
hardware_terms = ["nvidia", "intel", "amd", "radeon", "gtx", "titan", "v100", "tpu", "ryzen", "cpu", "gpu"]
time_terms = ["seconds", "second", "hour", "hours", "day", "days", "time", "experiment length", "run-time", "runtime"]
energy_terms = ["watt", "kWh", "joule", "joules", "wh", "kwhs", "watts", "rapl", "energy", "power"]
carbon_terms = ["co2", "carbon", "emissions"]

# Appendix C. Carbon Discussion

But cloud providers claim 100% carbon neutrality in my region, so why do I need to shift my resources? While we estimate energy mixes based on regional grids, cloud providers sometimes aim for carbon neutrality through a mixture of mechanisms which may change the energy mix being provided to a data center in an otherwise carbon-intensive energy grid, or which otherwise offset unclean energy usage. Data centers draw energy from the local energy grids, and as a result the mix of energy they consume largely depends on the composition of the power running in the grids. If the local energy grids are powered by a mix of fuel and renewable energy, a data center will inevitably consume fuel energy as well.
Due to the fact that the consumers do not know the origin of the physical electricity from the utility grid, it is difficult to assign ownership of the renewable energy consumption. The Environmental Protection Agency (EPA) uses renewable energy certificates (RECs) to track the generation and consumption of renewable energy: one REC is issued when one megawatt-hour (MWh) of electricity is generated from a renewable source and delivered to the energy grid.36 Consumers can then purchase RECs from a renewable energy provider and apply them to their electricity usage. This means consumers can claim they run on renewable energy by purchasing RECs from providers that doesn’t actually power the energy grids that they draw electricity from. Although this means that the consumers’ realtime carbon footprints will still be decided by the composition of renewable and fuel energy in their local 36. https://www.epa.gov/greenpower/renewable-energy-certificates-recs 37 Henderson, Hu, Romoff, Brunskill, Jurafsky, and Pineau energy grids, more renewable energy can flow onto the grid by purchasing the RECs and future development of renewable sources is supported. Google, to offset its carbon emissions, uses RECs and power purchase agreements (PPAs) with renewable energy providers to ensure that more renewable energy powers the same electricity grids that its data centers are in.37 Google then sells the renewable energy as it becomes available back to the electricity grids and strips away the RECs. Over one year, Google applies equal amounts of RECs to its data centers’ total energy consumption. This method helps green energy provider development by creating a long term demand. However, PPAs provide RECs for future renewables, not only current energy on the grid which may remain unchanged. As it states: “While the renewable facility output is not being used directly to power a Google data center, the PPA arrangement assures that additional renewable generation sufficient to power the data center came on line in the area.” 38 We can see that even if a cloud provider’s data centers are carbon neutral, the actual CO2eqemissions can vary largely and depends on the region and even time of the day (solar energy cannot be generated at night). Since carbon emissions have some long-term or irreversible impacts (Solomon et al., 2009), avoiding carbon emissions now can help down the line—a reason why discount rates are used in calculating impacts (Weisbach and Sunstein, 2008). We suggest that cloud providers release tools for understanding the carbon intensity for each data center region regardless of offset purchasing. While the purchases of PPAs and RECs are valuable for driving towards renewable energy in otherwise dirty regions, for machine learning model training, where the resources can be moved, we believe shifting resources to low intensity regions is more beneficial to long term carbon impacts. Other cloud-based jobs where latency requirements prevent shifting resources will remain to drive PPA/REC purchasing, and consequently renewable energy demand. # Appendix D. ImageNet Experiments We load pre-trained models available through PyTorch Hub (see https://pytorch.org/hub)— namely AlexNet (Krizhevsky et al., 2012), DenseNet (Huang et al., 2017), GoogLeNet (Szegedy et al., 2015), HardNet (Chao et al., 2019), MobileNetv2 (Sandler et al., 2018), Shuf- fleNet (Zhang et al., 2018), SqueezeNet (Iandola et al., 2016), VGG (Simonyan and Zisserman, 2014), and Wide ResNets (Zagoruyko and Komodakis, 2016). 
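As a rough sketch of the measurement loop behind these experiments, the snippet below loads one pre-trained classifier from PyTorch Hub, counts FPOs and parameters with the thop package used in this appendix, and times repeated single-image inference. The model choice, the reduced repeat count, and the exact Hub arguments are illustrative assumptions; this is not the paper's actual run_inference.py script, which is linked below.

```python
# Illustrative sketch, not the paper's run_inference.py: load a pre-trained model,
# count FPOs/parameters with thop, and time repeated single-image inference.
import time
import torch
from thop import profile

model = torch.hub.load("pytorch/vision", "resnet18", pretrained=True).eval()
dummy = torch.randn(1, 3, 224, 224)

flops, params = profile(model, inputs=(dummy,))  # multiply-add count and parameter count

start = time.time()
with torch.no_grad():
    for _ in range(1000):  # the paper runs 50,000 rounds; reduced here for brevity
        model(dummy)
elapsed = time.time() - start

print(f"{flops / 1e9:.2f} GFPOs, {params / 1e6:.2f}M params, {elapsed:.1f}s for 1000 inferences")
```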
We run 50,000 rounds of inference on a single image through pre-trained image classification models and run a similar analysis to Canziani et al. (2016). We repeat experiments on 4 random seeds. We count flops and parameters using the thop package (for package version numbers see automated logs in the online appendix linked above): https://github.com/Lyken17/pytorch-OpCounter

Code for running the experiment is available at: https://github.com/Breakend/ClimateChangeFromMachineLearningResearch/blob/master/paper_specific/run_inference.py

37. We note that this process is likely similar for most cloud providers, but Google is the most open with their methodology, so we are able to gain more insight from the materials they publish. Information described here is mainly put together from Google (2016) and Google (2013).
38. https://static.googleusercontent.com/media/www.google.com/en/us/green/pdfs/renewable-energy.pdf

An online appendix showing all per-experiment details can be seen here: https://breakend.github.io/ClimateChangeFromMachineLearningResearch/measuring_and_mitigating_energy_and_carbon_footprints_in_machine_learning/

The plot of FPOs versus runtime can be seen in Figure 8 and plots against the number of parameters can be seen in Figure 9. The number of parameters similarly has no strong correlation with energy consumption (R2 = 0.002, Pearson −0.048), nor with time (R2 = 0.14, Pearson −0.373). We note that our runtime results likely differ from Canziani et al. (2016) due to the architectural differences in the model sets we use. For parameter plots, see Figure 9; for extended time and energy figures, see Figure 8.

Figure 8: We seek to investigate the connection between FPOs, energy usage, and experiment time, similarly to Canziani et al. (2016). We run 50,000 rounds of inference on a single image through pre-trained image classification models available through PyTorch Hub (see https://pytorch.org/hub), namely (Krizhevsky et al., 2012; Huang et al., 2017; Szegedy et al., 2015; Chao et al., 2019; Sandler et al., 2018; Zhang et al., 2018; Iandola et al., 2016; Simonyan and Zisserman, 2014; Zagoruyko and Komodakis, 2016). We record experiment time and the kWh of energy used to run the experiments and repeat experiments 4 times, averaging results.
We find that FPOs are not strongly correlated with energy consumption (R2 = 0.083, Pearson 0.289) nor with time (R2 = 0.005, Pearson −0.074). The number of parameters (plotted in the Appendix) similarly has no strong correlation with energy consumption (R2 = 0.002, Pearson −0.048), nor with time (R2 = 0.14, Pearson −0.373). We note, however, that within an architecture correlations are much stronger. For example, only considering different versions of VGG, FPOs are strongly correlated with energy (R2 = .999, Pearson 1.0) and time (R2 = .998, Pearson .999). See the Appendix for experiment details, code, and data links. Our runtime results likely differ from Canziani et al. (2016) due to the architectural differences in the model sets we use.

Figure 9: The same experiments as in Figure 3, plotting parameters as the varying factor instead. See Figure 3 for correlation values.

# Appendix E. Estimation Methods

We use our PPO Pong experiment (see Appendix F for more details) as the experiment under comparison. For carbon emission estimates, we use three estimation methods: realtime emissions data for California (collected by our framework from caiso.org) times the power usage at that time, integrated over the length of the experiment; multiplying total energy usage recorded by our method by the California average carbon intensity; and multiplying total energy usage recorded by our method by the EPA US average carbon intensity (Strubell et al., 2019). For energy estimates, we use: (1) the experiment time multiplied by the number of GPUs, a utilization factor of 1/3 or 1, and the Thermal Design Power (TDP) of the GPU, which can be thought of as its maximum Watt draw (Amodei and Hernandez, 2018); (2) the measured GPU-hrs of our tool multiplied by the TDP, as well as a rough calculation of PFLOPs-hr (following the methodology of Amodei and Hernandez (2018)) by the PFLOPs/TDP of the GPU; and (3) our tool's accounting method, which tracks energy from GPU readings, accounts for CPU time/energy, and measures utilization in realtime.

# Appendix F. Reinforcement Learning

We investigate the energy efficiency of four baseline RL algorithms: PPO (Hill et al., 2018; Schulman et al., 2017), A2C (Hill et al., 2018; Mnih et al., 2016), A2C with VTraces (Espeholt et al., 2018; Dalton et al., 2019), and DQN (Hill et al., 2018; Mnih et al., 2016). We evaluate on PongNoFrameskip-v4 (left) and BreakoutNoFrameskip-v4 (right), two common evaluation environments included in OpenAI Gym (Bellemare et al., 2013; Brockman et al., 2016; Mnih et al., 2013). We train for only 5M timesteps, less than prior work, to encourage energy efficiency (Mnih et al., 2016, 2013). We use default settings from code provided in stable-baselines (Hill et al., 2018) and cule (Dalton et al., 2019); we only modify evaluation code slightly. Modifications can be found here:
Modifications can be found here: 40 # Towards the Systematic Reporting of the Energy and Carbon Footprints of ML https://github.com/Breakend/rl-baselines-zoo-1 (for stable-baselines modifica- tions) https://github.com/Breakend/cule (for cule modifications) Since we compare both on-policy and off-policy methods, for fairness all evaluation is based on 25 separate rollouts completed every 250k timesteps. This is to ensure parity across algorithms. We execute these in parallel together as seen in the cule code: https: //github.com/Breakend/cule/blob/master/examples/a2c/test.py. While average return across all evaluation episodes (e.g., averaging together the step at 250k timesteps and every evaluation step until 5M timesteps) can be seen in the main text, the asymptotic return (for the final round of evaluation episodes) can be seen Figure 10. Plots comparing experiment runtime to asymptotic and average returns (respectively) can be seen in Figure 11 and Figure 12. Our online leaderboard can be seen at: https://breakend.github.io/RL-Energy- Leaderboard/reinforcement_learning_energy_leaderboard/index.html We note that while DQN underperforms as compared to PPO here, better hyperparameters may be found such that DQN is the more energy efficient algorithm. Moreover, we only use the 5M samples regime, whereas prior work has used 10M or more samples for training, so DQN results seen here would correspond to earlier points in training in other papers. 2} @ bed e e ee e 15 . 3 10 oS a = Experiment 2 e PPO2 (stable_baselines, default settings) oo $ A2C (stable_baselines, default settings) © DON (stable_baselines, default settings) -10 ; @ A2C+Vtrace (cule, default settings) 0.0 02 04 0.6 0.8 1.0 12 total_power 300 e Experiment PPO2 (stable_baselines, default settings) 2 8 A2C (stable_baselines, default settings) A © DON (stable_baselines, default settings) 5 200 @ A2C+Vtrace (cule, default settings) 8 150 a Eo 100 e 508 2 bd e ° 0.0 0.5 1.0 15 2.0 total_power 300 2} @ bed e e ee e Experiment e PPO2 (stable_baselines, default settings) 15 2 8 A2C (stable_baselines, default settings) A © DON (stable_baselines, default settings) 10 5 200 @ A2C+Vtrace (cule, default settings) oS 8 150 a a = Experiment Eo e PPO2 (stable_baselines, default settings) 100 oo $ A2C (stable_baselines, default settings) e © DON (stable_baselines, default settings) 508 2 -10 ; @ A2C+Vtrace (cule, default settings) bd e 0.0 02 04 0.6 0.8 1.0 12 ° 0.0 0.5 1.0 15 2.0 total_power total_power Figure 10: Pong (left) and Breakout (right) asymptotic return. 41 # Henderson, Hu, Romoff, Brunskill, Jurafsky, and Pineau 201 @ 4 @ oo e 15 . 3 10 Bos a = Experiment 2 e PPO2 (stable_baselines, default settings) oo $ A2C (stable_baselines, default settings) DON (stable_baselines, default settings) 6 . @ A2C+Vtrace (cule, default settings) oO 2 4 6 8 10 12 exp_len_hours Experiment PPO2 (stable_baselines, default settings) 2 8 A2C (stable_baselines, default settings) A © DON (stable_baselines, default settings) 5 200 @ A2C+Vtrace (cule, default settings) 150 a a 2 +4 a e ee ee ° 0.0 25 5.0 75 10.0 125 150 17.5 200 exp_len_hours 201 @ 4 @ oo Experiment e PPO2 (stable_baselines, default settings) 15 2 8 A2C (stable_baselines, default settings) . A © DON (stable_baselines, default settings) 3 10 5 200 @ A2C+Vtrace (cule, default settings) Bos 150 a a = Experiment a 2 e PPO2 (stable_baselines, default settings) 2 oo $ A2C (stable_baselines, default settings) +4 DON (stable_baselines, default settings) a e ee 6 . 
Figure 11: Pong (left) and Breakout (right) as a function of experiment length and asymptotic return.

Figure 12: Pong (left) and Breakout (right) as a function of experiment length and average return.

# Appendix G. Possible Sources of Error, Limitations, and Overheads

In Sections 5.2 and 5.1, we compared different methods for estimating energy and carbon emissions including extrapolating from FPOs. However, we note that our own framework is not perfect. For transparency, we highlight several such sources here, but we note that utilizing more information—as we do here—is by definition superior to approximations which rely on less accurate assumptions (see Section 5.2).

First, we rely on downstream hardware APIs which themselves have errors. Several works have sought to evaluate the accuracy of RAPL—see for example Desrochers et al. (2016) and Kavanagh and Djemame (2019)—and Nvidia's power profiling tool—see for example, Sen et al. (2018) and Arafa et al. (2020). Errors highly depend on the specific chipset and even the workload, so we refer the reader to these other works for techniques in assessing exact errors. Nvidia's documentation, however, states that the power reading "is accurate to within +/- 5 watts."39

Second, we rely on a polling mechanism due to the constraints of these downstream APIs (for GPUs typically only power is provided, rather than an energy counter). In particularly short jobs or highly erratic workloads, the tool may poll at a time that is not representative of the full workload, estimating energy usage from an outlier power sample. Our assumption is that workloads are fairly consistent and long enough that such variability will average out. In the event that comparisons of energy readings across models are needed, we encourage users to report standard errors across several runs (with n appropriate for the experiment setting). Furthermore, because we record many auxiliary data sources (such as CPU frequency), more accurate estimates can further be conducted via mixed effects models to control for sources of variation and noise in energy readings. For an example of how such an analysis would work, see for example Boquet et al. (2019), which compares machine learning algorithms controlling for hyperparameter choice and randomness.
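To make the polling assumption above concrete, the following is a minimal sketch (not part of the framework itself) of how a stream of polled (timestamp, watts) samples could be integrated into an energy estimate, and how standard errors across repeated runs could be reported. The sample data, array values, and function name are hypothetical placeholders.

```python
import numpy as np

def energy_kwh(timestamps_s, power_w):
    """Integrate discrete power samples (W) over time (s) into energy (kWh)
    using the trapezoidal rule; assumes power is roughly stable between polls."""
    joules = np.trapz(power_w, timestamps_s)  # W * s = J
    return joules / 3.6e6                     # 1 kWh = 3.6e6 J

# Hypothetical polled samples from three runs of the same workload.
runs = [
    (np.array([0.0, 10.0, 20.0, 30.0]), np.array([120.0, 180.0, 175.0, 130.0])),
    (np.array([0.0, 10.0, 20.0, 30.0]), np.array([118.0, 190.0, 170.0, 125.0])),
    (np.array([0.0, 10.0, 20.0, 30.0]), np.array([125.0, 185.0, 172.0, 128.0])),
]

kwh = np.array([energy_kwh(t, p) for t, p in runs])
mean, stderr = kwh.mean(), kwh.std(ddof=1) / np.sqrt(len(kwh))
print(f"energy: {mean:.6f} kWh +/- {stderr:.6f} (standard error, n={len(kwh)})")
```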
Third, for cloud regions, we do not have access to the exact carbon intensities or PUEs. For example, if a cloud provider has a direct connection to a clean energy power plant for 100% of its energy, we have no way of accessing this information and using it in our tool. We encourage companies to report this information per cloud region so that this may be more accurate. In the case of indirect carbon offsetting, we do not consider this to be an inaccuracy—see discussion in Appendix C. Moreover, we rely on IP address information and hand-gathered energy grid information to estimate the energy grid. Either of these may incur errors. Since we report this information and allow users to override grid regions in calculations, these may be corrected by users. We also may not be able to access particular drivers needed on every cloud instance. As such, support may depend on the cloud machine image being used and the drivers available on that image. Generally, if Intel's RAPL is available or PowerGadget can be installed—and nvidia-smi is available—then the system should be compatible.

Regarding the overhead of adding a separate process to gather these metrics, the cost should generally be fairly low. There are some startup and shutdown costs associated with adding the tool, so for short-running scripts the absolute percentage of overhead may be higher. Additionally, if the computational capacity of a chipset is maximally used by the main process, there may be some added cost for thread switching to gather metrics. However, assuming that a core is preserved for the impact tracker, there should be minimal overhead. Note, for the sake of reproducibility we also record disk read/write speeds, but this can be turned off if the disk is particularly slow or there is too much disk I/O for the user's liking. While workload overhead can vary depending on the machine and workload, we found that in a small experiment of 200 epochs of regression for a one-hidden-layer neural network, runtime overheads were less than 1%. For 500 epochs, the overhead was around .5% (indicating that startup/shutdown are the most intensive). This experiment was run on a CPU-only Mac OS machine with a 2.7 GHz Quad-Core Intel Core i7 and 16 GB 2133 MHz LPDDR3.

Supporting every driver and hardware combination is difficult. We note that most of the aforementioned metrics are only supported on Linux systems and we were only able to test hardware combinations available to us. Mac OS support is limited to machines that have Intel's Power Gadget40 installed and to CPU-only recordings. We hope that future users will help identify missing capabilities and expand the framework for new use-cases and machines. We also note that the tool is limited by driver support in cases that we cannot work around (see Section 7.7). Finally, we note that we only record CPU, GPU, and DRAM power draw. We do not record disk I/O energy usage, power conversion and voltage regulator overhead. As such, we can expect there to be missing components that contribute to energy that we do not record here. However, we expect that the PUE re-scaling will correct for some of these missing components to some extent.

39. https://developer.download.nvidia.com/compute/DCGM/docs/nvidia-smi-367.38.pdf

# Appendix H. Comparing Models

We note that it may be tempting to use carbon emissions as a comparative tool: model A is less carbon intensive than model B.
However, unless the carbon intensity used for either model is held constant, this comparison cannot be done. In particular, our tool should not be used to compare carbon emissions between models without overriding the carbon intensity used, as we sometimes use real-time values. If two models are compared, as in Section 6.1.1, multiple runs on comparable machines should be used. In the event that a robust conclusion is to be made (e.g., Algorithm A is more energy efficient than Algorithm B), additional metrics regarding workload that we record can be utilized to run a mixed-effects regression analysis. Such an analysis would ensure that there aren't confounding factors jeopardizing the conclusion.

40. https://software.intel.com/content/www/us/en/develop/articles/intel-power-gadget.html
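As a concrete illustration of the point above, the following is a minimal sketch (not part of the tool) of comparing two models' emissions only after pinning a single carbon intensity for both; the per-run energy numbers and the intensity value are hypothetical placeholders.

```python
import statistics

# Hypothetical measured energy per run (kWh) for two models on comparable machines.
energy_kwh = {
    "model_A": [1.91, 2.02, 1.97, 2.05],
    "model_B": [2.40, 2.31, 2.44, 2.37],
}

# Hold carbon intensity constant for BOTH models (kgCO2eq per kWh); do not mix
# real-time intensities taken at different times or in different regions.
CARBON_INTENSITY = 0.250  # placeholder value

for name, runs in energy_kwh.items():
    emissions = [kwh * CARBON_INTENSITY for kwh in runs]
    mean = statistics.mean(emissions)
    stderr = statistics.stdev(emissions) / len(emissions) ** 0.5
    print(f"{name}: {mean:.3f} kgCO2eq +/- {stderr:.3f} (standard error, n={len(runs)})")
```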
{ "id": "1811.03390" }
2002.00104
Post-Training Piecewise Linear Quantization for Deep Neural Networks
Quantization plays an important role in the energy-efficient deployment of deep neural networks on resource-limited devices. Post-training quantization is highly desirable since it does not require retraining or access to the full training dataset. The well-established uniform scheme for post-training quantization achieves satisfactory results by converting neural networks from full-precision to 8-bit fixed-point integers. However, it suffers from significant performance degradation when quantizing to lower bit-widths. In this paper, we propose a piecewise linear quantization (PWLQ) scheme to enable accurate approximation for tensor values that have bell-shaped distributions with long tails. Our approach breaks the entire quantization range into non-overlapping regions for each tensor, with each region being assigned an equal number of quantization levels. Optimal breakpoints that divide the entire range are found by minimizing the quantization error. Compared to state-of-the-art post-training quantization methods, experimental results show that our proposed method achieves superior performance on image classification, semantic segmentation, and object detection with minor overhead.
http://arxiv.org/pdf/2002.00104
Jun Fang, Ali Shafiee, Hamzah Abdel-Aziz, David Thorsley, Georgios Georgiadis, Joseph Hassoun
cs.CV, cs.LG
null
null
cs.CV
20200131
20200318
0 2 0 2 r a M 8 1 ] V C . s c [ 2 v 4 0 1 0 0 . 2 0 0 2 : v i X r a # Post-Training Piecewise Linear Quantization for Deep Neural Networks Jun Fang!, Ali Shafiee', Hamzah Abdel-Aziz!, David Thorsley?, Georgios Georgiadis?*, and Joseph Hassoun! # 1 Samsung Semiconductor, Inc. {jun.fang, ali.shafiee, hamzah.a, d.thorsley, j.hassoun}@samsung.com 2 Microsoft [email protected] Abstract. Quantization plays an important role in the energy-efficient deployment of deep neural networks on resource-limited devices. Post- training quantization is highly desirable since it does not require retrain- ing or access to the full training dataset. The well-established uniform scheme for post-training quantization achieves satisfactory results by converting neural networks from full-precision to 8-bit fixed-point inte- gers. However, it suffers from significant performance degradation when quantizing to lower bit-widths. In this paper, we propose a piecewise linear quantization (PWLQ) scheme to enable accurate approximation for tensor values that have bell-shaped distributions with long tails. Our approach breaks the entire quantization range into non-overlapping re- gions for each tensor, with each region being assigned an equal number of quantization levels. Optimal breakpoints that divide the entire range are found by minimizing the quantization error. Compared to state-of-the- art post-training quantization methods, experimental results show that our proposed method achieves superior performance on image classifica- tion, semantic segmentation, and object detection with minor overhead. Keywords: deep neural networks, post-training quantization, piecewise linear quantization # Introduction In recent years, deep neural networks (DNNs) have achieved state-of-the-art re- sults in a variety of learning tasks including image classification [23,24,54,19,53,29], segmentation [5,18,49] and detection [36,47,48]. Scaling up DNNs by one or all of the dimensions [55] of network depth [19], width [59] or image resolution [30] attains better accuracy, at a cost of higher computational complexity and in- creased memory requirements, which makes the deployment of these networks on embedded devices with limited resources impractical. One feasible way to deploy DNNs on embedded systems is quantization of full-precision (32-bit floating-point, FP32) weights and activations to lower preci- sion (such as 8-bit fixed-point, INT8) integers [25]. By decreasing the bit-width, * Work done at Samsung Semiconductor, Inc. # 2 J. Fang et al. the number of discrete values is reduced, while the quantization error, which generally correlates with model performance degradation increases. To minimize the quantization error and maintain the performance of a full-precision model, many recent studies [63,4,40,25,6,60,12,27] rely on training either from scratch (“quantization-aware” training) or by fine-tuning a pre-trained FP32 model. However, post-training quantization is highly desirable since it does not re- quire retraining or access to the full training dataset. It saves time-consuming fine-tuning effort, protects data privacy, and allows for easy and fast deploy- ment of DNN applications. Among various post-training quantization schemes proposed in the literature [28,7,62], uniform quantization is the most popular approach to quantize weights and activations since it discretizes the domain of values to evenly-spaced low-precision integers which can be efficiently imple- mented on commodity hardware’s integer-arithmetic units. 
Recent work [28,31,42] shows that post-training quantization based on a uni- form scheme with INT8 is sufficient to preserve near original FP32 pre-trained model performance for a wide variety of DNNs. However, ubiquitous usage of DNNs in resource-constrained settings requires even lower bit-width to achieve higher energy efficiency and smaller models. In lower bit-width scenarios, such as 4-bit, post-training uniform quantization causes significant accuracy drop [28,62]. This is mainly because the distributions of weights and activations of pre-trained DNNs is bell-shaped such as Gaussian or Laplacian [17,35]. That is, most of the weights are clustered around zero while few of them are spread in a long tail. As a result, when operating at low bit-widths, uniform quantization assigns too few quantization levels to small magnitudes and too many to large ones, which leads to significant accuracy degradation [28,62]. To mitigate this issue, various quantization schemes [41,4,3,43,26,34] are de- signed to take advantage of the fact that weights and activations of pre-trained DNNs typically have bell-shaped distributions with long tails. Here, we present a new number representation via a piecewise linear approximation to be suited for these phenomena. It breaks the entire quantization range into non-overlapping regions where each region is assigned an equal number of quantization levels. Although our method works with an arbitrary number of regions, we suggest limiting them to two to simplify the complexity of the proposed approach and the hardware overhead. The optimal breakpoints that divide the entire range can be found by minimizing the quantization error. Compared to uniform quantiza- tion, our piecewise linear quantization (PWLQ) provides a richer representation that reduces the quantization error. This indicates its potential to reduce the gap between floating-point and low-bit precision models. It is also more hardware- friendly when compared to other non-linear approaches such as logarithm-based and clustering-based approaches [41,56,3], since in our method, computation can still be carried out without the need of any transforms or look-up tables. • We propose a piecewise linear quantization (PWLQ) scheme for efficient deployment of pre-trained DNNs without retraining or access to the full training dataset. We also investigate its impact on hardware implementation. Post-Training Piecewise Linear Quantization for Deep Neural Networks We present a solution to find the optimal breakpoints and demonstrate that our method achieves a lower quantization error than the uniform scheme. • We provide a comprehensive evaluation on image classification, semantic segmentation, and object detection benchmarks and show that our proposed method achieves state-of-the-art results. # 2 Related Work There is a wide variety of approaches in the literature that facilitate the efficient deployment of DNNs. The first group of techniques relies on designing network architectures that depend on more efficient building blocks. Notable examples include depth/point-wise layers [22,52] as well as group convolutions [61,38]. These methods require domain knowledge, training from scratch and full access to the task datasets. The second group of approaches optimizes network architec- tures in a typical task-agnostic fashion and may or may not require (re)training. Weight pruning [17,32,20,37], activation compression [10,9,14], knowledge distil- lation [21,45] and quantization [8,46,66,64,41,25] fall under this category. 
In particular, quantization of activations and weights [15,16,57,35,6,60,62] leads to model compression and acceleration as well as to overall savings in power consumption. Model parameters can be stored in a fewer number of bits while the computation can be executed on integer-arithmetic units rather than on power-hungry floating-point ones [25]. There has been extensive research on quantization with and without (re)training. In the rest of this section, we focus on post-training quantization that directly converts full-precision pre-trained models to their low-precision counterparts.

Recent works [28,31,42] have demonstrated that 8-bit quantized models have been able to accomplish negligible accuracy loss for a variety of networks. To improve accuracy, per-channel (or channel-wise) quantization is introduced in [28,31] to address variations of the range of weight values across channels. Weight equalization/factorization is applied by [39,42] to rescale the difference of weight ranges between different layers. In addition, bias shifts in the mean and variance of quantized values are observed and counteracting methods are suggested by [2,13]. A comprehensive evaluation of clipping techniques is presented by [62] along with an outlier channel splitting method to improve quantization performance. Moreover, adaptive processes of assigning different bit-widths for each layer are proposed in [35,65] to optimize the overall bit allocation.

There are also a few attempts to tackle 4-bit post-training quantization by combining multiple techniques. In [2], a combination of analytical clipping, bit allocation, and bias correction is used, while [7] minimizes the mean squared quantization error by representing one tensor with one or multiple 4-bit tensors as well as by optimizing the scaling factors.

Most of the aforementioned works utilize a linear or uniform quantization scheme. However, linear quantization cannot capture the bell-shaped distribution of weights and activations, which results in sub-optimal solutions. To overcome this deficiency, [3] proposes a quantile-based method to improve accuracy, but their method works efficiently only on highly customized hardware; [26] employs two different scale factors on overlapping regions to reduce computation bits over fixed-point implementations. However, its scale factors restricted to powers of two and heuristic options limit the accuracy performance. Instead, we propose a piecewise linear approach that improves over the selection of optimal breakpoints and leads to state-of-the-art quantized model results. Our method can be implemented efficiently with minimal modification to commodity hardware.

# 3 Quantization Schemes

In this section, we review a uniform quantization scheme and discuss its limitations. We then present PWLQ, our piecewise linear quantization scheme, and show that it has a stronger representational power (a smaller quantization error) compared to the uniform scheme.

[Figure 1 panels: uniform quantization of the weight values; piecewise linear quantization (PWLQ) of the weight values; MSE of uniform quantization and PWLQ (4-, 6-, 8-bit) as a function of the ratio of breakpoint over maximum p/m.]

Fig. 1. Quantization of conv4 layer weights in a pre-trained Inception-v3. Left: uniform quantization.
Middle: piecewise linear quantization (PWLQ) with one breakpoint; the dotted line indicates the breakpoint. Right: Mean squared quantization error (MSE) for various bit-widths (b = 4, 6, 8). The MSE of PWLQ is convex w.r.t. the breakpoint p, and b-bit PWLQ can achieve a smaller quantization error than the b-bit uniform scheme.

# 3.1 Uniform Quantization

Uniform quantization (the left of Figure 1) linearly maps full-precision real numbers r into low-precision integer representations. From [25,7], the approximated version ˆr from the uniform quantization scheme at b-bit can be defined as:

\hat{r} = \mathrm{uni}(r; b, r_l, r_u, z) = s \times r_q + z, \qquad
r_q = \left\lfloor \frac{\mathrm{clamp}(r; r_l, r_u) - z}{s} \right\rceil, \qquad (1)

\mathrm{clamp}(r; r_l, r_u) = \min(\max(r, r_l), r_u), \qquad
\Delta = r_u - r_l, \quad N = 2^b, \quad s = \frac{\Delta}{N - 1},

where [r_l, r_u] is the quantization range, s is the scaling factor, z is the offset, N is the number of quantization levels, and r_q is the quantized integer computed by the rounding function ⌊·⌉, followed by saturation to the integer domain Z_b. We set the offset z = 0 for symmetric signed distributions combined with Z_b = {−2^{b−1}, ..., 2^{b−1} − 1} and z = r_l for asymmetric unsigned distributions (e.g., ReLU-based activations) with Z_b = {0, ..., 2^b − 1}. Since the scheme (1) introduces a quantization error defined as ε_uni = ˆr − r, the expected quantization error squared is given by:

E(\varepsilon_{\mathrm{uni}}^2; b, r_l, r_u) = \frac{s^2}{12} = C(b)\Delta^2, \qquad (2)

with C(b) = \frac{1}{12(2^b - 1)^2} under uniform distributions [58].

From the above definition, uniform quantization divides the range evenly despite the distribution of r. Empirically, the distributions of weights and activations of pre-trained DNNs are similar to bell-shaped Gaussian or Laplacian [17,35]. Therefore, uniform quantization is not always able to achieve a small enough approximation error to maintain model accuracy, especially in low-bit cases.

# 3.2 Piecewise Linear Quantization (PWLQ)

To improve model accuracy for quantized models, we need to approximate the original model as accurately as possible by minimizing the quantization error. We follow this natural criterion to investigate the quantization performance, even though no direct relationship can easily be established between the quantization error and the final model accuracy [7].

Inspired by [43,26], which take advantage of bell-shaped distributions, our approach based on piecewise linear quantization is designed to minimize the quantization error. It breaks the quantization range into two non-overlapping regions: the dense, central region and the sparse, high-magnitude region. An equal number of quantization levels N = 2^b is assigned to these two regions. We chose to use two regions with one breakpoint to maintain simplicity in the inference algorithm (Section 5.1) and the hardware implementation (Section 4). Multiple-region cases are discussed in Section 5.1.

Therefore, we only consider one breakpoint p to divide the quantization range3 [−m, m] (m > 0) into two symmetric regions: the center region R1 = [−p, p] and the tail region R2 = [−m, −p) ∪ (p, m]. Each region consists of a negative piece and a positive piece. Within each of the four pieces, (b − 1)-bit (b ≥ 2) uniform quantization (1) is applied such that, including the sign, every value in the quantization range is represented in b bits.
We define the b-bit piecewise linear quantization (denoted by PWLQ) scheme as:

\mathrm{pw}(r; b, m, p) =
\begin{cases}
\mathrm{sign}(r) \times \mathrm{uni}(|r|; b-1, 0, p, 0), & r \in R_1 \\
\mathrm{sign}(r) \times \mathrm{uni}(|r|; b-1, p, m, p), & r \in R_2
\end{cases}
\qquad (3)

where the sign of the full-precision real number r is denoted by sign(r). The associated quantization error is defined as ε_pw = pw(r; b, m, p) − r.

3 Here we consider the symmetric quantization range [−m, m] (m > 0) for simplicity; it is extendable to asymmetric ranges [m1, m2] for any real numbers m1 < m2.

Figure 1 shows the comparison between uniform quantization and PWLQ on the empirical distribution of the conv4 layer weights in a pre-trained Inception-v3 model [54]. We emphasize that b-bit PWLQ represents FP32 values into b-bit integers to support b-bit multiply-accumulate operations, even though in total, it has the same number of quantization levels as (b+1)-bit uniform quantization. The implications of this are further discussed in Section 4.

# 3.3 Error Analysis

To study the quantization error for PWLQ, we suppose the full-precision real number r has a symmetric probability density function (PDF) f(r) on a bounded domain [−m, m] with the cumulative distribution function (CDF) F(r) satisfying f(r) = f(−r) and F(−m) = 0, F(m) = 1. Then, we calculate the expected quantization error squared of PWLQ from (2) based on the error of each piece:

E(\varepsilon_{\mathrm{pw}}^2; b, m, p) = C(b-1)\big\{ (m-p)^2\,[F(-p) + 1 - F(p)] + p^2\,[F(p) - F(-p)] \big\}. \qquad (4)

Since F(r) = 1 − F(−r) for a symmetric PDF, equation (4) can be simplified as:

E(\varepsilon_{\mathrm{pw}}^2; b, m, p) = C(b-1)\big\{ (m-p)^2 + m(2p-m)\,[2F(p) - 1] \big\}. \qquad (5)

The performance of a quantized model with the PWLQ scheme critically depends on the value of the breakpoint p. If p = m/2, then PWLQ is essentially equivalent to uniform quantization, because the four pieces have equal quantization ranges and bit-widths. If p < m/2, the center region has a smaller range and greater precision than the tail region, as shown in the middle of Figure 1. Conversely, if p > m/2, the tail region has greater precision than the center region. To reduce the overall quantization error for bell-shaped distributions found in DNNs, we increase the precision in the center region and decrease it in the tail region. Thus, we limit the breakpoint to the range 0 < p < m/2. Accordingly, the optimal breakpoint p∗ can be estimated by minimizing the expected squared quantization error:

p^* = \arg\min_{p \in (0,\, m/2)} E(\varepsilon_{\mathrm{pw}}^2; b, m, p). \qquad (6)

Since bell-shaped distributions tend to zero as r becomes large, we consider a smooth f(r) that is decreasing when r is positive, i.e., f′(r) < 0, ∀r > 0. Then we prove that the optimization problem is convex with respect to the breakpoint p ∈ (0, m/2). Therefore a unique p∗ exists to minimize the quantization error (5), as demonstrated by the following lemma.

Lemma 1. If f(−r) = f(r) and f′(r) < 0 for all r > 0, then E(ε²_pw; b, m, p) is a convex function of the breakpoint p ∈ (0, m/2).

Proof. Taking the first and second derivatives of (5) yields:

\frac{\partial E(\varepsilon_{\mathrm{pw}}^2; b, m, p)}{\partial p} = 2C(b-1)\,\big[ p - 2m + 2mF(p) + m(2p-m)f(p) \big], \qquad (7)

\frac{\partial^2 E(\varepsilon_{\mathrm{pw}}^2; b, m, p)}{\partial p^2} = 2C(b-1)\,\big[ 1 + 4mf(p) + m(2p-m)f'(p) \big]. \qquad (8)

Since f′(p) < 0 and p < m/2, we have m(2p − m)f′(p) > 0, and hence ∂²E(ε²_pw; b, m, p)/∂p² > 0. Therefore, E(ε²_pw; b, m, p) is convex w.r.t. p, and thus a unique p∗ exists.

In practice, we can find the optimal breakpoint by solving (6) under an assumed underlying Gaussian or Laplacian distribution using gradient descent [50].
Once the optimal breakpoint p∗ is found, both Lemma 2 and the numerical simulation in the right of Figure 1 show that PWLQ achieves a smaller quantization error than uniform quantization, which indicates its stronger representational power.

Lemma 2. E(\varepsilon_{\mathrm{pw}}^2; b, m, p^*) < \frac{C(b-1)}{16C(b)}\, E(\varepsilon_{\mathrm{uni}}^2; b, -m, m) for b ≥ 2.

Proof. The b-bit uniform quantization error on [−m, m] is calculated from (2):

E(\varepsilon_{\mathrm{uni}}^2; b, -m, m) = C(b)(2m)^2 = 4C(b)m^2. \qquad (9)

For b-bit PWLQ, we solve the convex problem (6) by setting the first derivative (7) to zero, and determine that the optimal breakpoint p∗ satisfies:

2mF(p^*) = 2m - p^* + m(m - 2p^*)f(p^*). \qquad (10)

By substituting (10) in (5) and simplifying, we obtain:

E(\varepsilon_{\mathrm{pw}}^2; b, m, p^*) = C(b-1)\big[ -(p^*)^2 + mp^* - m(m - 2p^*)^2 f(p^*) \big]. \qquad (11)

Subtracting \frac{C(b-1)}{16C(b)} of (9) from the above, we complete the proof:

E(\varepsilon_{\mathrm{pw}}^2; b, m, p^*) - \frac{C(b-1)}{16C(b)} E(\varepsilon_{\mathrm{uni}}^2; b, -m, m)
= E(\varepsilon_{\mathrm{pw}}^2; b, m, p^*) - C(b-1)\tfrac{m^2}{4}
= C(b-1)\big[ -\big(p^* - \tfrac{m}{2}\big)^2 - m(m - 2p^*)^2 f(p^*) \big] < 0. \qquad (12)

Note that C(b) = \frac{1}{12(2^b - 1)^2} given from equation (2), so for b ≥ 2,

\frac{C(b-1)}{16C(b)} = \frac{1}{16}\left(\frac{2^b - 1}{2^{b-1} - 1}\right)^2 \le \frac{1}{16}\left(\frac{3}{1}\right)^2 = \frac{9}{16}. \qquad (13)

Therefore, b-bit PWLQ achieves a smaller quantization error, which is at most 9/16 of that of the b-bit uniform scheme. This improvement in performance requires only an extra bit for storage and no extra multiplication, as we discuss in the next section.

# 4 Hardware Impact

In this section, we discuss the hardware requirements for efficient deployment of DNNs quantized with PWLQ. In convolutional and fully-connected layers, every output can be computed using an inner product between vector X and vector W, which correspond to the input activation and weight (sub)tensors respectively. From scheme (1), the approximated versions of uniform quantization are X̂ = s_x X_q + z_x I and Ŵ = s_w W_q (assuming symmetric quantization for weights), where X_q and W_q are quantized integer vectors from X and W, I is an identity vector, and s_x, s_w and z_x are the associated constant-valued scaling factors and offset, respectively. The output of this uniform quantization is:

\langle \hat{X}, \hat{W} \rangle = \langle s_x X_q + z_x I,\; s_w W_q \rangle = C_0 \langle X_q, W_q \rangle + C_1, \qquad (14)

where ⟨·, ·⟩ is defined as the vector inner product, and C_0 = s_x s_w and C_1 = z_x s_w ⟨W_q, I⟩ denote floating-point constant terms that can be pre-computed offline.

Equation (14) implies that a uniformly quantized DNN requires two steps: (i) an integer-arithmetic (INT) inner product; (ii) followed by a floating-point (FP) affine map. The expensive O(|W|) (the size of vector W) FP operations ⟨X̂, Ŵ⟩ are then accelerated via INT operations ⟨X_q, W_q⟩, plus O(1) FP re-scaling and adding operands using C_0 and C_1.

As we showed in Section 3.2, when applying PWLQ on weights with one breakpoint, the algorithm breaks the ranges into non-overlapping regions (R_1 and R_2), which requires separate computational paths (P_1 and P_2) as each region has a different scaling factor. We set offsets z_{w_1} = 0, z_{w_2} = p and denote the scaling factors by s_{w_1}, s_{w_2} in R_1, R_2, respectively. We also define by ⟨·, ·⟩_{R_i} the associated partial vector inner product, and by W_{q_i} the associated quantized integer vector of W in region R_i for i = 1, 2. Then P_1 is computed using the following equation:

P_1 = \langle s_x X_q + z_x I,\; s_{w_1} W_{q_1} \rangle_{R_1} = C_2 \langle X_q, W_{q_1} \rangle_{R_1} + C_3. \qquad (15)

P_2 has additional terms as it has a non-zero offset p:

P_2 = \langle s_x X_q + z_x I,\; s_{w_2} W_{q_2} + pI \rangle_{R_2} = C_4 \langle X_q, W_{q_2} \rangle_{R_2} + C_5 \langle X_q, I \rangle_{R_2} + C_6, \qquad (16)

where C_2, C_3, C_4, C_5, and C_6 are constant terms, which can be pre-computed similar to C_0 and C_1 in (14).
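As an illustration of the decomposition in (14)–(16), rather than of any particular hardware, the following minimal NumPy sketch checks numerically that the quantized inner product equals integer partial sums re-scaled by pre-computed floating-point constants. The tensor shapes, ranges, breakpoint choice, and helper names are our own assumptions; the signs of the tail pieces are folded into the integer vectors and the activation sum for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)
n, b = 64, 4

# Fake activations (asymmetric, ReLU-like) and weights (symmetric, bell-shaped).
X = rng.random(n) * 6.0
W = rng.normal(0.0, 0.5, n)

# Uniform activation quantization, scheme (1), with offset z_x = min(X).
x_l, x_u = X.min(), X.max()
s_x, z_x = (x_u - x_l) / (2**b - 1), x_l
X_q = np.clip(np.round((X - z_x) / s_x), 0, 2**b - 1)

# PWLQ on weights with one breakpoint p: (b-1)-bit per region, offsets 0 and p.
m = np.abs(W).max()
p = 0.3 * m
levels = 2**(b - 1) - 1
in_R1 = np.abs(W) <= p
s_w1, s_w2 = p / levels, (m - p) / levels
W_q1 = np.where(in_R1, np.sign(W) * np.round(np.abs(W) / s_w1), 0.0)
W_q2 = np.where(~in_R1, np.sign(W) * np.round((np.abs(W) - p) / s_w2), 0.0)

# Reference: inner product of the de-quantized tensors.
X_hat = s_x * X_q + z_x
W_hat = np.where(in_R1, s_w1 * W_q1, np.sign(W) * (s_w2 * np.abs(W_q2) + p))
reference = np.dot(X_hat, W_hat)

# Decomposition: P1 follows (15); P2 follows (16), including the extra
# activation sum over R2 caused by the non-zero offset p.
sgn = np.sign(W)
P1 = s_x * s_w1 * np.dot(X_q, W_q1) + z_x * s_w1 * W_q1.sum()
P2 = (s_x * s_w2 * np.dot(X_q, W_q2) + s_x * p * np.dot(X_q, sgn * ~in_R1)
      + z_x * (s_w2 * W_q2.sum() + p * (sgn * ~in_R1).sum()))
print(np.isclose(reference, P1 + P2))  # True
```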
As indicated by (15) and (16), for PWLQ compared to uniform quantization (14), the extra term ⟨X_q, I⟩_{R_2} is needed due to the non-zero offset p, which sums up the activations corresponding to weights in R_2. Since most of the weights4 are in R_1, these extra computations in R_2 rarely happen. In addition, FP re-scaling and adding are needed in each region, which also increases the overall FP operation overhead. In short, an efficient hardware implementation of PWLQ requires:

– One multiplier for products in both ⟨X_q, W_{q_1}⟩_{R_1} and ⟨X_q, W_{q_2}⟩_{R_2}.
– Three accumulators: one each for the sum of products in P_1 and P_2, and another one for the activations in P_2.
– At most one extra bit for storage5 per weight value to indicate the region.

Note that this extra bit does not increase the multiply-accumulate (MAC) computation and it is only used to determine the appropriate accumulator, which can be done in hardware at negligible cost on the MAC unit. Based on the above explanation, it is clear that more breakpoints require more accumulators and more storage bits per weight tensor. Also, applying PWLQ on both weights and activations6 requires accumulators for each combination of activation regions and weight regions, which translates to more hardware overhead. As a result, more than one breakpoint on the weight tensor, or applying PWLQ on both weights and activations, might not be feasible from a hardware implementation perspective.

# 5 Experiments

We evaluate the robustness of our proposed PWLQ scheme for post-training quantization on popular networks of several computer vision benchmarks: ImageNet classification [51], and semantic segmentation and object detection on the Pascal VOC challenge [11]. In all experiments, we apply batch normalization folding [25] before quantization. For activations, we follow the profiling strategy in [62] to sample from 512 training images, and collect the median7 of the top-10 smallest and top-10 largest activation values for the minimum and maximum range boundaries at each layer, respectively. During inference, we apply quantization after clipping with these ranges. Unless stated otherwise, we quantize all network weights per-channel into 3-to-8 bits, and uniformly quantize activations as well as pooling layers per-layer into 8-bit. We perform all experiments in Pytorch 1.2.0 [44].

# 5.1 Ablation Study on ImageNet

In this section, we conduct experiments on the ImageNet classification challenge [51] and investigate the effectiveness of our proposed PWLQ method. We evaluate the top-1 accuracy performance on the validation dataset for three popular network architectures: Inception-v3 [54], ResNet-50 [19] and MobileNet-v2 [52]. We use torchvision8 0.4.0 and its pre-trained models for our experiments.

4 Around 90% of the weights are located in the center region R1 in our experiments.
5 This extra storage cost can be further compressed by exploiting the non-uniform distribution of values [1,43].
6 Applying PWLQ on both weights and activations is discussed in the supplementary material.
7 We test with the top-k median and percentile-based [33] approaches and use the top-10 median method for better robustness of low-bit quantization. We refer to the supplementary material for details.
8 https://pytorch.org/docs/stable/torchvision

Optimal Breakpoint Selection. In order to apply PWLQ, we first need to find the optimal breakpoints to divide the quantization ranges into non-overlapping regions. As stated in Section 3.3, we assume weights and activations satisfy Gaussian or Laplacian distributions, and then we find the optimal breakpoints by solving the optimization problem (6).

For the case of one optimal breakpoint p∗, we can iteratively find it by gradient descent since (6) is convex, or use a simple and fast approximation of p∗/m = ln(0.8614m + 0.6079) for a normalized Gaussian (a small numerical sketch of this search is given below). Experimental results show that the approximation obtains almost the same accuracy compared to gradient descent, while also being considerably faster. Therefore, unless stated otherwise we use this approximated version of the optimal breakpoint for the rest of this paper. We report results with other assumptions such as Laplacian distributions in the supplementary material.

Other works treat the data distributions differently: BiScaled-DNN [26] proposes a ratio heuristic to divide the data into two overlapping regions; and V-Quant [43] introduces a value-aware method to split them into two non-overlapping regions, e.g., 2% (98%) of large (small) values located in the tail (center) region, respectively. Our implementation results in Figure 2 (left) show that PWLQ with non-overlapping regions achieves a superior performance on low-bit quantization compared to an improved version of BiScaled-DNN9 (denoted by BSD+) and V-Quant, especially with a large margin on 4-bit MobileNet-v2. The non-overlapping approach shortens the quantization ranges (∆ in (2)) for the tail regions by 1.25× to 2×. Therefore, both our choices of non-overlapping regions and optimal breakpoints have a significant impact on reducing the quantization error and improving the performance of low-bit quantized models.

Fig. 2. Left: the impact of non-overlapping and breakpoint options on the top-1 accuracy for 4-bit post-training quantization models (Inception-v3, ResNet-50, MobileNet-v2). Right: the robustness of the optimal breakpoint found by solving (6) with perturbation levels from 5% to 30% for 4-bit Inception-v3 (full-precision accuracy 77.49%). Each perturbation level is run with 100 random samples; the star and the associated number indicate the median accuracy, and the bold bar displays the accuracy range between the 25th and 75th percentiles.

9 We improved the original BiScaled-DNN [26] by applying the affine-based uniform scheme (1) on each region and per-channel quantization.
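The following is a minimal sketch (not the authors' code) of the one-breakpoint selection for a Gaussian-like weight tensor: it evaluates the expected error (5) under a zero-mean Gaussian assumption and minimizes it by a simple grid search over (0, m/2) instead of gradient descent. The function names, grid resolution, and the synthetic tensor are our own assumptions.

```python
import numpy as np
from scipy.stats import norm

def expected_pwlq_mse(p, m, b, sigma=1.0):
    """Expected squared PWLQ error, Eq. (5), under a zero-mean Gaussian assumption
    with standard deviation sigma; C(b-1) follows from Eq. (2)."""
    C = 1.0 / (12.0 * (2 ** (b - 1) - 1) ** 2)
    F_p = norm.cdf(p / sigma)
    return C * ((m - p) ** 2 + m * (2 * p - m) * (2 * F_p - 1))

def find_breakpoint(weights, b=4, num=4096):
    """Grid-search surrogate for the convex problem (6) over p in (0, m/2)."""
    w = np.asarray(weights).ravel()
    m, sigma = np.abs(w).max(), w.std()
    grid = np.linspace(1e-6, m / 2, num)
    return grid[np.argmin(expected_pwlq_mse(grid, m, b, sigma))]

rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.02, size=(256, 128))  # hypothetical bell-shaped weight tensor
print(f"m = {np.abs(w).max():.4f}, p* = {find_breakpoint(w, b=4):.4f}")
```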
Note that when 5% of perturbation is added to our selection of optimal breakpoints, more than half of the experiments produce a lower accuracy, and can be as low as 74.05%, which is a 1.67% drop rom the zero-perturbation baseline. Multiple Breakpoints. In this section, we discuss the trade-off of multiple breakpoints on model accuracy and hardware overhead. Theoretically, as the number of breakpoints on weights increases, the associated hardware cost lin- early rises. Meanwhile, the number of non-overlapping regions and the associated total number of quantization levels grows, indicating a stronger representational power. Numerically, the extension of finding the optimal multi-breakpoints is straightforward by calculating the same quantization error (4), and solving the same optimization problem (6) with gradient descent in an enlarged search space. Table 1 shows the accuracy performance up to three breakpoints. In general, us- ing more breakpoints consistently improves model accuracy under the growing support of customized hardware. We suggest using one breakpoint to maintain the simplicity of the inference algorithm and its hardware implementation. Thus we only report PWLQ with one breakpoint for the rest of this paper. Table 1. Top-1 accuracy (%) and requirement of hardware accumulators for PWLQ with multiple breakpoints on weights Number of Hardware Inception-v3 (77.49) ResNet-50 (76.13) MobileNet-v2 (71.88) Breakpoints Accumulators 5-bit 4-bit 3-bit 5-bit 4-bit 3-bit 5-bit 4-bit 3-bit One Three 77.28 75.72 61.76 75.62 74.28 67.30 69.05 54.34 16.77 Two Five 77.31 76.73 71.40 75.94 75.24 73.27 70.01 65.74 36.44 Three Seven 77.46 77.00 74.07 76.06 75.77 73.84 70.43 67.71 55.17 PWLQ and Uniform Quantization. In Section 3.3, we analytically and nu- merically demonstrate that our method, PWLQ, obtains a smaller quantization error than uniform quantization. We compare these two schemes in Table 2. In this table, weights are quantized per-channel with the same computational bit- width b = 4, 6, 8; activations are uniformly quantized per-layer into 8-bit. Gener- ally, PWLQ achieves higher accuracy than uniform quantization except for one minor case of 8-bit Inception-v3. When the bit-width is large enough (b = 8), the 12 J. Fang et al. quantization error is small and both uniform quantization and PWLQ provide good accuracy. However, when the bit-width is decreased to 4, PWLQ obtains a notably higher accuracy, i.e., PWLQ attains 75.72% but uniform quantization only attains 44.28% for 4-bit Inception-v3. These results show that PWLQ is a more powerful representation scheme in terms of both quantization error and model accuracy, making it a viable alternative for uniform quantization in low bit-width cases. Moreover, PWLQ applies uniform quantization on each piece, hence it features a simple computational scheme and can benefit from any tricks that improve uniform quantization performance such as bias correction. Table 2. Comparison results of top-1 accuracy (%) for uniform and PWLQ schemes on weights. b+BC: b-bit with bias correction for bit-width b = 4, 6, 8. 
Each bold value indicates the best result from different methods for specified bit-width and network Network Weight Bit-width 8-bit 8+BC 6-bit 6+BC 4-bit 4+BC Inception-v3 Uniform 77.53 77.52 76.87 77.24 44.28 62.46 (77.49) PWLQ (Ours) 77.52 77.53 77.42 77.48 75.72 76.45 ResNet-50 Uniform 76.10 76.14 75.61 75.92 65.48 72.45 (76.13) PWLQ (Ours) 76.10 76.10 76.03 76.08 74.28 75.62 MobileNet-v2 Uniform 71.35 71.58 67.76 70.81 11.37 41.80 (71.88) PWLQ (Ours) 71.59 71.73 70.82 71.58 54.34 69.22 Bias Correction. An inherent bias in the mean and variance of the tensor values was observed after the quantization process and the benefits of correcting this bias term have been demonstrated in [2,13,42]. This bias can be compensated by folding certain correction terms into the scale and the offset [2]. We adopt this idea into our PWLQ method and show the results in Table 2 (columns with “+BC”). Applying bias correction further improves the performance of low-bit quantized models. It allows 6-bit post-training quantization with piecewise linear scheme for all three networks to achieve near full-precision accuracy within a drop of 0.30%; 4-bit MobileNet-v2, also without retraining, achieves an accuracy of 69.22%. In general, a combination of low-bit PWLQ and bias correction on weights achieves minimal loss of full-precision model performance. # 5.2 Comparison to Existing Approaches In this section, we compare our PWLQ method with other existing approaches, by quoting the reported performance scores from the original literature. An inclusive evaluation of clipping techniques along with outlier channel splitting (OCS) was presented in [62]. To fairly compare with these methods, Post-Training Piecewise Linear Quantization for Deep Neural Networks we adopt the same setup of applying per-layer quantization on weights and without quantizing the first layer. In Table 3, we show that our PWLQ (no bias correction) outperforms the best results of clipping method combined with OCS. Besides, OCS needs to change the network architecture, in contrast to PWLQ. Table 3. Comparison results of per-layer PWLQ and best clipping with OCS [62] on top-1 accuracy (%) loss. W/A indicate the bit-width on weights/activations. The accuracy difference values are measured from the full-precision (32/32) result Network W/A 32/32 8/8 7/8 6/8 5/8 4/8 Inception-v3 OCS + Best Clip 75.9 -0.6 (75.3) PWLQ (Ours) 77.5 +0.1 (77.6) -1.2 (74.7) -0.1 (77.4) -3.4 (72.5) -0.3 (77.2) -13.0 (62.9) -2.0 (75.5) -71.1 (4.8) -12.8 (64.7) ResNet-50 OCS + Best Clip 76.1 PWLQ (Ours) 76.1 -0.4 (75.7) -0.0 (76.1) -0.5 (75.6) -0.1 (76.0) -0.9 (75.2) -0.2 (75.9) -2.7 (73.4) -0.7 (75.5) -6.8 (69.3) -2.4 (73.7) In Table 4, we provide a comprehensive comparison result of our PWLQ to other existing quantization methods. Here we apply per-layer quantization on activations and per-channel PWLQ on weights with bias correction. Except for the 4/4 case where we apply 4-bit PWLQ on activations, we always apply 8- bit uniform quantization on activations for the rest of the 8/8 and 4/8 cases. Under the same bit-width of computational cost among all the methods, our PWLQ combined with bias correction achieves the state-of-the-art results on all cases and it outperforms all other methods with a large margin on 4/8 and 4/4 cases. We emphasize that our PWLQ method is simple and efficient. It achieves the desired accuracy at the small cost of a few more accumulations per MAC unit and a minor overhead of storage. 
More importantly, it is orthogonal and applicable to other methods. Table 4. Comparison of our PWLQ and other methods on top-1 accuracy (%) loss. PWLQ: weights are piecewise linearly quantized per-channel with bias correction, ac- tivations are quantized per-layer Network W/A PWLQ (Ours) QWP [28] ACIQ [2] LBQ [7] SSBD [39] QRD [31] UNIQ [3] DFQ [42] 32/32 77.49 78.00 77.20 76.23 77.90 77.97 - - Inception-v3 8/8 +0.04 (77.53) 0.00 (78.00) - - -0.03 (77.87) -0.09 (77.88) - - (Top1%) 4/8 -1.04 (76.45) -7.00 (71.00) -9.00 (68.20) -1.44 (74.79) - - - - 4/4 -2.58 (74.91) - -10.80 (66.40) -4.62 (71.61) - - - - 32/32 76.13 75.20 76.10 76.01 75.20 - 76.02 - ResNet-50 8/8 -0.03 (76.10) -0.10 (75.10) - - -0.25 (74.95) - - - (Top1%) 4/8 -0.51 (75.62) -21.20 (54.00) -0.80 (75.30) -1.03 (74.98) - - -2.56 (73.37) - 4/4 -1.28 (74.85) - -2.30 (73.80) -3.41 (72.60) - - - - MobileNet-v2 32/32 71.88 71.90 - - 71.80 71.23 - 71.72 (Top1%) 8/8 -0.15 (71.73) -2.10 (69.80) - - -0.61 (71.19) -1.68 (69.55) - -0.53 (71.19) 4/8 -2.68 (69.22) -71.80 (0.10) - - - - - - 14 J. Fang et al. # 5.3 Other Applications To show the robustness and applicability of our proposed approach, we extend the PWLQ idea to other computer vision tasks including semantic segmentation on DeepLab-v3+ [5] and object detection on SSD [36]. Semantic Segmentation. In this section, we apply PWLQ on DeepLab-v3+ with a backbone of MobileNet-v2. The performance is evaluated using mean intersection over union (mIoU) on the Pascal VOC segmentation challenge [11]. In our experiments, we utilize the implementation of public Pytorch repos- itory10 to evaluate the performance. After folding batch normalization of the pre-trained model into the weights, we found that several layers of weight ranges become very large (e.g., [-54.4, 64.4]). Considering the fact that quantization range [27], especially in the early layers [7], has a profound impact on the per- formance of quantized models, we fix the configuration of some early layers in the backbone. More precisely, we apply 8-bit PWLQ on three depth-wise convo- lution layers with large ranges in all configurations shown in Table 5. Note that the MAC operations of these three layers are negligible in practice since they only contribute 0.2% of the entire network computation, but it is remarkably beneficial to the performance of low-bit quantized models. Table 5. Uniform quantization and PWLQ on DeepLab-v3+. Weights are quantized per-channel with bias correction, activations are uniformly quantized per-layer Network W/A 32/32 8/8 6/8 4/8 DeepLab-v3+ (mIoU%) Uniform PWLQ (Ours) DFQ [42] 70.81 70.81 72.94 -0.65 (70.16) -0.12 (70.69) -0.61 (72.33) -1.54 (69.27) -0.42 (70.39) - -20.76 (50.05) -3.15 (67.66) - As noticed in classification, low-bit uniform quantization causes significant accuracy drop from the full-precision models. In Table 5, applying the piece- wise linear method combined with bias correction, the 6-bit PWLQ model on weights even outperforms 8-bit DFQ [42], which attains 0.42% degradation of the pre-trained model. Moreover, the 4-bit PWLQ significantly improves the mIoU by 17.61% from the 4-bit uniform quantized model, indicating the poten- tial of low-bit post-training quantization via piecewise linear approximation for the semantic segmentation task. Object Detection. We also test the proposed PWLQ for the object detection task. 
The experiments are performed on the public Pytorch implementation11 # 10 https://github.com/jfzhang95/pytorch-deeplab-xception 11 https://github.com/qfgaohao/pytorch-ssd Post-Training Piecewise Linear Quantization for Deep Neural Networks of SSD-Lite version [36] with a backbone of MobileNet-v2. The performance is evaluated with mean average precision (mAP) on the Pascal VOC object detection challenge [11]. Table 6 compares the results of the mAP score of quantized models using the uniform and PWLQ schemes. Similar to image classification and seman- tic segmentation tasks, even with bias correction and per-channel quantization enhancements, 4-bit uniform scheme causes 3.91% performance drop from the full-precision model, while 4-bit PWLQ with these two enhancements is able to remove this notable gap down to 0.38%. Table 6. Uniform quantization and PWLQ of SSD-Lite version. Weights are quantized per-channel with bias correction, activations are uniformly quantized per-layer Network W/A 32/32 8/8 6/8 4/8 SSD-Lite (mAP%) Uniform PWLQ (Ours) DFQ [42] 68.70 68.70 68.47 -0.20 (68.50) -0.19 (68.51) -0.56 (67.91) -0.43 (68.37) -0.28 (68.42) - -3.91 (64.79) -0.38 (68.32) - # 6 Conclusion In this work, we present a piecewise linear quantization scheme for accurate post-training quantization of deep neural networks. It breaks the bell-shaped distributed values into non-overlapping regions per tensor where each region is assigned an equal number of quantization levels. We further analyze the resulting quantization error as well as the hardware requirements. We show that our ap- proach achieves state-of-the-art low-bit post-training quantization performance on image classification, semantic segmentation, and object detection tasks un- der the same computational cost. It indicates its potential of efficient and rapid deployment of computer vision applications on resource-limited devices. Acknowledgements. We would like to thank Hui Chen and Jong Hoon Shin for valuable discussions. 16 J. Fang et al. # References 1. Bakunas-Milanowski, D., Rego, V., Sang, J., Chansu, Y.: Efficient algorithms for stream compaction on gpus. International Journal of Networking and Computing pp. 208–226 (2017) 2. Banner, R., Nahshan, Y., Hoffer, E., Soudry, D.: Post training 4-bit quantization of convolution networks for rapid-deployment. CoRR, abs/1810.05723 (2018) 3. Baskin, C., Schwartz, E., Zheltonozhskii, E., Liss, N., Giryes, R., Bronstein, A.M., Mendelson, A.: Uniq: Uniform noise injection for non-uniform quantization of neu- ral networks. arXiv preprint arXiv:1804.10969 (2018) 4. Cai, Z., He, X., Sun, J., Vasconcelos, N.: Deep learning with low precision by half- wave gaussian quantization. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 5918–5926 (2017) 5. Chen, L.C., Zhu, Y., Papandreou, G., Schroff, F., Adam, H.: Encoder-decoder with atrous separable convolution for semantic image segmentation. In: Proceedings of the European conference on computer vision (ECCV). pp. 801–818 (2018) 6. Choi, J., Wang, Z., Venkataramani, S., Chuang, P.I.J., Srinivasan, V., Gopalakr- ishnan, K.: Pact: Parameterized clipping activation for quantized neural networks. arXiv preprint arXiv:1805.06085 (2018) 7. Choukroun, Y., Kravchik, E., Kisilev, P.: Low-bit quantization of neural networks for efficient inference. arXiv preprint arXiv:1902.06822 (2019) 8. Courbariaux, M., Bengio, Y., David, J.P.: Binaryconnect: Training deep neural net- works with binary weights during propagations. 
{ "id": "1606.06160" }
2001.09977
Towards a Human-like Open-Domain Chatbot
We present Meena, a multi-turn open-domain chatbot trained end-to-end on data mined and filtered from public domain social media conversations. This 2.6B parameter neural network is simply trained to minimize perplexity of the next token. We also propose a human evaluation metric called Sensibleness and Specificity Average (SSA), which captures key elements of a human-like multi-turn conversation. Our experiments show strong correlation between perplexity and SSA. The fact that the best perplexity end-to-end trained Meena scores high on SSA (72% on multi-turn evaluation) suggests that a human-level SSA of 86% is potentially within reach if we can better optimize perplexity. Additionally, the full version of Meena (with a filtering mechanism and tuned decoding) scores 79% SSA, 23% higher in absolute SSA than the existing chatbots we evaluated.
http://arxiv.org/pdf/2001.09977
Daniel Adiwardana, Minh-Thang Luong, David R. So, Jamie Hall, Noah Fiedel, Romal Thoppilan, Zi Yang, Apoorv Kulshreshtha, Gaurav Nemade, Yifeng Lu, Quoc V. Le
cs.CL, cs.LG, cs.NE, stat.ML
38 pages, 12 figures
null
cs.CL
20200127
20200227
0 2 0 2 b e F 7 2 ] L C . s c [ 3 v 7 7 9 9 0 . 1 0 0 2 : v i X r a # Towards a Human-like Open-Domain Chatbot Daniel Adiwardana Minh-Thang Luong David R. So Jamie Hall Noah Fiedel Romal Thoppilan Zi Yang Apoorv Kulshreshtha Gaurav Nemade Yifeng Lu Quoc V. Le Google Research, Brain Team {adiwardana,thangluong,davidso,jamiehall,nfiedel,romzee,ziy, apoorvk,gnemade,yifenglu,qvl}@google.com # Abstract We present Meena, a multi-turn open-domain chatbot trained end-to-end on data mined and filtered from public domain social media con- versations. This 2.6B parameter neural net- work is simply trained to minimize perplex- ity of the next token. We also propose a hu- man evaluation metric called Sensibleness and Specificity Average (SSA), which captures key elements of a human-like multi-turn conver- sation. Our experiments show strong correla- tion between perplexity and SSA. The fact that the best perplexity end-to-end trained Meena scores high on SSA (72% on multi-turn evalu- ation) suggests that a human-level SSA of 86% is potentially within reach if we can better op- timize perplexity. Additionally, the full ver- sion of Meena (with a filtering mechanism and tuned decoding) scores 79% SSA, 23% higher in absolute SSA than the existing chatbots we evaluated. # Introduction The ability to converse freely in natural language is one of the hallmarks of human intelligence, and is likely a requirement for true artificial intelli- gence. In order to explore this aspect of intel- ligence, many researchers are working on open- domain chatbots. Unlike closed-domain chat- bots, which respond to keywords or intents to accomplish specific tasks, open-domain chatbots can engage in conversation on any topic. Some open-domain chatbots such as MILABOT (Ser- ban et al., 2017), XiaoIce (Zhou et al., 2018)1, Gunrock (Chen et al., 2018), Mitsuku (Wor- swick, 2018)2 and Cleverbot3 (by Rollo Carpen- ter) display human-like attributes, but rely on com- plex frameworks, such as dialog managers with 100 Human (86%) Interactive SSA (%) Xiaolce (31%) 20 10 12 14 16 18 Perplexity Figure 1: Interactive SSA vs Perplexity. Each point is a different version of the Meena model. A regres- sion line is plotted, for which the coefficient of deter- mination (R2) is 0.93, an indication of strong correla- tion between perplexity and the human evaluation met- ric (SSA). The dotted lines show the SSA performance of other chatbots, humans (86%), the best end-to-end trained Meena model (72%), and the full version of Meena which incorporates a filtering mechanism and tuned decoding (Section 5) and scores 79%. Mitsuku and Cleverbot scored the same on overall SSA, but Mit- suku displayed higher sensibleness, whereas Cleverbot had higher specificity. See Sections 2.5, 2.6, and 4.3 for more details on how we performed these comparisons and how to interpret the results. knowledge-based, retrieval-based, or rule-based systems. End-to-end neural network approaches (Shang et al., 2015; Vinyals and Le, 2015; Sor- doni et al., 2015; Serban et al., 2016; Zhang et al., 2019), on the other hand, offer the simplicity of a single learned model. 
Despite much research, open-domain chatbots still have weaknesses that prevent them from being generally useful: they of- ten respond to open-ended input in ways that do not make sense, or with replies that are vague and # 1https://www.msxiaobing.com/ 2https://www.pandorabots.com/mitsuku/ 3https://www.cleverbot.com/ Conversations with Meena, and with various other at https://github.com/ chatbots, google-research/google-research/tree/ master/meena/ are available generic. Here we present Meena, a generative chatbot model that was trained end-to-end on 40B words mined and filtered from public domain social me- dia conversations. With Meena, we push the limits of the end-to-end approach and show that a large- scale low-perplexity model can be a good conver- sationalist. We use a seq2seq model (Sutskever et al., 2014; Bahdanau et al., 2015) with the Evolved Transformer (So et al., 2019) as the main architecture. The model is trained on multi-turn conversations where the input sequence is all turns of the context (up to 7) and the output sequence is the response. Our best model has 2.6B parameters and achieves a test perplexity of 10.2 based on a vocabulary of 8K BPE subwords (Sennrich et al., 2016). To measure the quality of Meena and other chat- bots, we propose a simple human evaluation met- ric. Sensibleness and Specificity Average (SSA) combines two fundamental aspects of a human- like chatbot: making sense and being specific. We ask human judges to label every model response on these two criteria. The first part of the metric, sensibleness, is a basic requirement. To converse properly with a human, a bot’s responses have to make sense in context; humans typically take this for granted when conversing with one another, and our evaluations find that 97% of human-produced statements meet this criterion (see Section 4.2). However, making sense is not enough. If a model is designed with sensibleness as its only objec- tive, its responses could be vague and boring, since that is a safe strategy to avoid being penalised for not making sense. For example, closed-domain chatbots typically respond with a generic apology when a human asks something outside their do- main; some end-to-end learned chatbots respond “I don’t know” to many inputs (Li et al., 2016a); and Turing Test contest entrants often try to avoid detection by being strategically vague (Venkatesh et al., 2018). They succeed in not generating gib- berish or contradicting themselves, but at the cost of not really saying anything of substance. To mit- igate this, we add a second dimension to the SSA metric, which asks our evaluators whether a re- sponse is specific given the context. This prevents bots from hiding behind vague replies, allowing us to more openly examine what they are capable of. As discussed in Section 2.1, this successfully dis- tinguishes between generic and lively responses, while also being simple and easy for crowd work- ers to understand. We compare Meena, humans, and other open- domain chatbots using the SSA metric with two types of human evaluation: static and interac- tive. For static evaluation, we curated a dataset with 1,477 multi-turn conversations. For interac- tive evaluation, humans could chat about anything they wanted. We were surprised, but pleased, to discover that the SSA metric shows strong corre- lation with Meena’s perplexity, both in static and interactive evaluation. In other words, the better that Meena fit its training data, the more sensible and specific its chat responses became. 
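Throughout the paper, perplexity is the standard exponentiated average negative log-likelihood per test token, so it can be read off directly from the per-token log-probabilities the model already produces, and the reported correlation is the coefficient of determination of a least-squares fit of SSA on this quantity. A minimal sketch of the perplexity computation, assuming per-token natural-log probabilities are available (the exact evaluation code is not shown here), is:

```python
import math

def perplexity(token_log_probs):
    """Perplexity of a model on held-out text.

    `token_log_probs` holds log p(token | context), in natural log, for every
    target token in the test data. Lower is better; the theoretical minimum is 1.
    """
    avg_nll = -sum(token_log_probs) / len(token_log_probs)
    return math.exp(avg_nll)
```

Because this quantity is exactly what the cross-entropy training loss optimizes, it is available essentially for free during development, which is part of what makes it attractive as an automatic proxy.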
At first glance, this result may seem intuitive, but it sur- prised us because recent research found a poor cor- relation between human evaluation scores and au- tomatic metrics such as BLEU (Liu et al., 2016; Lowe et al., 2017). Our best end-to-end learned model has an aver- age of 72% SSA. The full version of Meena scores 79% by incorporating a filtering mechanism and tuned decoding (Section 5). This is still below the 86% SSA achieved by an average human, but is far closer than the other chatbots we tested. We note that humans have very high sensibleness, but sig- nificantly lower specificity, as detailed in Section 4.2. We will also discuss weaknesses of our method- ology. For example, our static evaluation dataset is too restricted to capture all aspects of human conversations. Nevertheless, the fact that Meena achieves such a high SSA score and that there is a correlation between SSA and perplexity means that a human-like chatbot, in terms of sensibleness and specificity, could be in sight if we can attain better perplexity. (1) proposing a sim- ple human evaluation metric for multi-turn open- domain chatbots that captures basic, but impor- tant, attributes of human conversation; (2) show- ing evidence that perplexity is an automatic metric that correlates with human judgment, in contrast to recent findings on other automatic metrics men- tioned above; (3) demonstrating that an end-to-end neural model with sufficiently low perplexity can surpass the sensibleness and specificity of existing chatbots that rely on complex, handcrafted frame- works developed over many years. # 2 Evaluating chatbots Evaluating chatbots and natural language gen- eration is a well-known challenge (Liu et al., 2016; Lowe et al., 2017; Novikova et al., 2017; Hashimoto et al., 2019), which we aim to address in this paper. First, we propose a human evalua- tion metric that captures key elements of human- likeness of conversational responses (Section 2.1). We then describe two human-evaluation setups: static, in which we benchmark models on a fixed set of multi-turn contexts to generate responses (Section 2.2); and interactive, where we allow hu- mans to chat freely with chatbots (Section 2.4). Lastly, we detail our automatic evaluation metric for fast development and end-to-end optimization (Section 2.7). # 2.1 Measuring Human Likeness To measure the quality of a response given a con- text, we propose a sequence of two questions. We first ask whether the response, given the context, makes sense. Sensibleness arguably covers some of the most basic aspects of conversational human- likeness, such as common sense and logical co- herence. Sensibleness also captures other impor- tant aspects of a chatbot, such as consistency. The crowd worker is asked to use common sense to judge if a response is completely reasonable in context. If anything seems off — confusing, il- logical, out of context, or factually wrong — then it should be labeled as, “does not make sense”. However, being sensible is not enough. A generic response (e.g., I don’t know) can be sen- sible, but it is also boring and unspecific. Such re- sponses are frequently generated by bots that are evaluated according to metrics like sensibleness alone (Li et al., 2016a; Venkatesh et al., 2018). To illustrate this, we create GenericBot: a triv- ial bot that always replies to questions with “I don’t know” and to statements with “ok” (exam- ples in Appendix Table 8). 
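GenericBot is simple enough to state in code; a minimal sketch is given below, where treating any message that ends with a question mark as a question is an assumption made purely for illustration.

```python
def generic_bot_reply(message: str) -> str:
    """Trivial baseline: reply "I don't know" to questions and "ok" to statements."""
    return "I don't know" if message.strip().endswith("?") else "ok"
```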
On static evaluation (using a fixed set of prompts and bot-generated re- sponses), 70% of GenericBot’s responses are la- beled sensible, surpassing even DialoGPT (62%), even though DialoGPT is clearly more human-like than GenericBot. To overcome this issue, we need our evaluation to separate more fully human-like conversation from bland and generic statements. Therefore, if a response is labeled as sensible, we further ask the crowd worker to determine if it is specific to the given context. For example, if A says, “I love tennis,” and B responds, “That’s nice,” then the utterance should be marked, “not specific”. That reply could be used in dozens of different contexts. However, if B responds, “Me too, I can’t get enough of Roger Federer!” then it is marked as “specific”, since it relates closely to what is being discussed. Responses labeled not sensible are considered not specific. In Gener- icBot’s case, none of the responses are specific, whereas 39% of DialoGPT’s responses are spe- cific. This sequence of two questions is designed to start with the most concrete and basic human quality (sensibleness) and then progress to the arguably more subjective human quality (speci- The degree of subjectivity is some- ficity). what quantified in the crowd worker agreement. We measure crowd worker consistency for every model benchmark using agreement and Krippen- dorff’s alpha (Krippendorff, 2011), shown in Ta- ble 1. The agreement is reasonable considering the questions are subjective and the final results are al- ways aggregated labels (e.g., average sensibleness across all chatbot responses). Metric Agreement (%) Krippendorff’s alpha Sensibleness 76 ± 3 0.42 ± 0.03 Specificity 66 ± 2 0.30 ± 0.05 Table 1: The average and standard deviation of crowd worker agreement across static evaluations of Meena models. Each static evaluation consisted of 1,477 (context, response) pairs, each labeled by 5 crowd workers. Given a set of responses labeled as described above, we can calculate sensibleness and speci- ficity as the percentage of responses labeled as sensible and specific, respectively. To combine these two into one metric, we take a simple av- erage of the two, which we call SSA (sensibleness and specificity average). SSA is a proxy for hu- man likeness, which also penalizes chatbots that consistently produce generic responses. For ex- ample, GenericBot’s SSA is 35% and DialoGPT’s SSA is 51%, providing a much more fair separa- tion and ranking than sensibleness alone. Before arriving at SSA, and before any of the chatbots were tested, the authors of this paper con- ducted several rounds of pilot studies on what to ask crowd workers and how to best phrase the in- structions. We settled on the two-question SSA 90 a ~ 2 & 3 ty e Human likeness (%) we ty 401° 40 50 60 70 80 SSA Figure 2: SSA vs human likeness. Each point is a different chatbot, except for the top right one, which is human. A regression line is plotted, for which the coefficient of determination (R2) is 0.96. The SSA values were collected using static evaluation mode (Section 2.2). The human likeness evaluation was also conducted in static evaluation mode. Instead of judging sensibleness or specificity, however, we asked crowd workers to judge whether a given response was “human-like”, or in other words, looked like a response that a human might give in the provided context. 
for several reasons: it was easy for crowd work- ers to understand; alternative additional questions did not add extra information; and more subjec- tive questions result in lower agreement between crowd workers. As an additional check on the SSA metric, we reran a static evaluation, this time asking crowd workers to assess whether or not a response is “hu- manlike”. We find that there is a high correlation between those labels and the two components of the SSA metric (Figures 2, 9, 10). Compared to a direct evaluation of what crowd workers consider to be “humanlike”, SSA has significant advantages for large-scale evaluation tasks: it is more objec- tive, easier for crowd workers to understand, and penalizes boring and vague responses. Neverthe- less, these findings give us confidence that SSA is indeed capturing important aspects of human like- ness. # 2.2 Static Evaluation In order to have a common benchmark to eas- ily compare models, we create a collection of 1,477 conversational contexts with between 1 and 3 conversation turns, that we call the Mini-Turing Benchmark (MTB). We started this dataset by compiling single-turn contexts (e.g., “How are you?”) from multiple sources, such as from the work4 of Vinyals and Le (2015) and the transcripts of the Loebner Prize5 contests (years 2014-2018). In total, there were 315 single-turn contexts, which we then extended to include 500 two-turn and 662 three-turn contexts. The MTB also contains contexts with person- ality questions (e.g. “Do you like cats?”), some of which expect responses with personality con- sistency. For example, the context “A: Do you like movies?; B: Yeah. I like sci-fi mostly; A: Re- ally? Which is your favorite?” expects a consis- tent response such as I love Back to the Future. On the other hand, a response like I don’t like movies would be a contradiction, and thus not considered sensible. When evaluating chatbots, all MTB contexts are fed to the models or presented to humans to obtain responses. We send the resulting (context, response) pairs to crowd workers and asked whether each response given the context is sensible and specific as defined in 2.1. We call this static evaluation because the contexts are fixed. # Interactive Evaluation Static evaluation may be suitable for comparing models, but it is biased by how the static eval- uation dataset was constructed. To address this, we create an additional evaluation mode where the crowd workers can chat 1:1 with a chatbot about anything they want. As with static evalu- ation, workers are also asked to decide whether each response from the chatbot is sensible and spe- cific as defined in 2.1. Conversations start with “Hi!” from the chatbot to mark the beginning of the conversation and crowd workers have no ex- pectation or instructions about domain or topic of the conversation. A conversation is required to last at least 14 turns (7 from chatbot) and at most 28 turns. We collected 100 such conversations for each model (i.e., at least 700 labeled turns per model). We then measure the percentage of la- beled turns that are sensible and specific. Unlike a typical Turing test (Turing, 1950), we tell the human judges upfront that they are about to chat with an experimental chatbot and ask them to label what the chatbot says in terms of sensi- bleness and specificity. 
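In both evaluation modes, the collected labels are turned into scores in the same way. The sketch below is one plausible aggregation, assuming five boolean (sensible, specific) votes per chatbot turn with majority voting and counting a turn judged not sensible as also not specific, as defined in 2.1; the exact aggregation behind the reported numbers is not spelled out, so these details are assumptions.

```python
from statistics import mean

def ssa_from_votes(votes_per_turn):
    """Aggregate crowd votes into sensibleness, specificity and SSA (all in %).

    `votes_per_turn` has one entry per labeled chatbot turn; each entry is a
    list of (sensible, specific) boolean votes from the crowd workers.
    """
    def majority(flags):
        return sum(flags) > len(flags) / 2

    sensible, specific = [], []
    for votes in votes_per_turn:
        is_sensible = majority([s for s, _ in votes])
        # A turn judged not sensible is never counted as specific.
        is_specific = is_sensible and majority([p for _, p in votes])
        sensible.append(is_sensible)
        specific.append(is_specific)

    sensibleness = 100.0 * mean(sensible)
    specificity = 100.0 * mean(specific)
    return sensibleness, specificity, (sensibleness + specificity) / 2.0
```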
This shifts the focus of the judges and chatbot creators from optimizing # 4http://ai.stanford.edu/˜quocle/ # QAresults.pdf # 5https://aisb.org.uk/events/ # loebner-prize for deception detection to optimizing for detecting and maximizing human-like qualities (e.g., sensi- bleness). Similar to our approach, Ghandeharioun et al. (2019) also conduct interactive evaluation by allowing humans to chat freely with bots. Their setup, however, focuses on evaluating conversa- tions as a whole (as opposed to at the level of in- dividual turns) and judges evaluate for quality, flu- ency, diversity, relatedness, and empathy. # 2.4 Estimate of Human Performance To estimate static SSA of humans we ask crowd workers to respond to MTB contexts. Addition- ally, to estimate human interactive SSA, we lever- aged the help of internal company volunteers to collect 100 human-human conversations follow- ing mostly the same instructions as crowd work- ers for every other chatbot. Labeling of sensible- ness and specificity was conducted by independent crowd workers with majority voting of 5 workers per human turn. The difference from the rest of the evaluations is that, in this case, participants knew they were chatting with another human. In con- trast, when humans chat with a chatbot they will occasionally say unusual things to test the chat- bot’s limits. Hill et al. (2015) describe differences in human behavior when talking to a chatbot. That said, we never incentivize humans to chat adver- sarially with chatbots in any of our evaluations. # 2.5 Evaluation of Cleverbot and DialoGPT To integrate with Cleverbot, we leverage its API. For DialoGPT, we use its open sourced 762M parameter model.6 It is worth mentioning that we initially tried the 345M parameter DialoGPT model, because it was reported to perform best on single-turn human evaluation. However, the 345M parameter model seemed to perform notice- ably worse than the 762M one in preliminary eval- uations of multi-turn conversations. Our human evaluation is multi-turn, so we select the 762M model. The DialoGPT authors were unable to release their decoding script at the time of writing. There- fore, following their published description, we use top-K decoding with K = 10. We adapt the decoding implementation by Wolf et al. (2019). Moreover, since the backward model was also not released we were not able to try their MMI re- ranking (Li et al., 2016a). # 6https://github.com/microsoft/DialoGPT Both Cleverbot and DialoGPT were evaluated using the same crowd sourcing setup as for Meena. # 2.6 Evaluation of Mitsuku and XiaoIce Because we chose to use the free Mitsuku web app7, and there is no public API for XiaoIce, we called on the help of internal company volunteers and only conducted interactive evaluation. Volun- teers collectively had 100 conversations with Mit- suku, and 119 with XiaoIce on their publicly avail- able web apps. The volunteers conversed with the chatbots following mostly the same instruc- tions that crowd workers follow for every other chatbot. The difference is that humans would say “Hi!” for the first turn, instead of the chat- bot, in order to keep the first turn the same as other cases. Labeling of sensibleness and speci- ficity in all cases was conducted by independent crowd workers with majority voting of 5 workers per chatbot turn. Also note that both XiaoIce and Mitsuku sometimes include an image in their reply and occasionally, volunteers include text descrip- tions of the images they see. 
The presence of the image may in some cases change the sensibleness of the response for better or worse. XiaoIce interacts in Mandarin so both the vol- unteers and the independent crowd workers were native Mandarin speakers. The group of vol- unteers for XiaoIce, Mitsuku, and human-human conversations were mostly disjoint. Other than re- quiring a knowledge of Mandarin for XiaoIce con- versations, volunteer selection was arbitrary. We had 29 volunteers for XiaoIce, 43 for Mitsuku, and 21 for human-human. To reset Mitsuku state between conversations, volunteers refreshed the web page. During the writing of this paper there was no clear way to re- set the state of XiaoIce. The XiaoIce team have informed us that not resetting the state negatively affects the model’s control of the context.8 Also, most XiaoIce volunteers shared the same Weibo account.9 The XiaoIce team confirmed that ac- count reuse negatively impacts the internal profile constructed by XiaoIce for a user. The XiaoIce team further suggested that, if the same Weibo ac- count needs to be reused, we should wait at least 7Pandorabots offers a paid enterprise package, which in- cludes the Mitsuku API. 8From personal communication with the XiaoIce team, after the writing of the paper. 9Weibo is a microblogging service mostly used in China, which also allows users to chat with XiaoIce: https:// www.weibo.com/ one hour between volunteers using the account. In our experiments, we may have sometimes waited less than that amount of time between volunteers, although we made sure the account was only used by one volunteer at a time. Finally, the XiaoIce team mentioned that in the past few months (as of this writing), a limited version of XiaoIce with the smallest index has been served on Weibo. This version is expected to produce less satisfactory re- sponses. Direct comparisons between XiaoIce and other chatbots come with a caveat: XiaoIce can be seen as a product that optimizes for long-term user en- gagement, of which dialog generation is just one component. In other words, Meena is arguably at an advantage when comparing SSA scores. # 2.7 Automatic Evaluation For quick research iterations, we focus on perplex- ity. Unlike the previous two evaluation types, per- plexity is an automatic metric. A seq2seq model outputs a probability distribution over possible next response tokens. Perplexity measures how well the model predicts the test set data; in other words, how accurately it anticipates what people will say next. When interpreting perplexity scores, bear in mind that lower is better and that the theo- retical minimum is one. As shown in Section 4, this commonly used metric correlates with human judgement of sen- sibleness and specificity. This is encouraging, be- cause it is both automatic and directly optimizable with the standard cross-entropy loss function. # 3 Meena chatbot As described above, recent work on end-to-end dialog models has fallen into two broad cate- gories: (1) complex models with human-designed components, and (2) large neural network mod- els (known as end-to-end models) that are closer to generic learning frameworks. End-to-end mod- els have shown promise, but clear limitations (Gao et al., 2019a). 
An open question has been: in order to reach a point where a model can carry out high-quality, multi-turn conversations with hu- mans, could we simply take an end-to-end model and make it bigger—by adding more training data and increasing its parameter count—or is it nec- essary to combine such a model with other com- ponents? In this section we describe the Meena model, the largest end-to-end model to enter the field so far. We believe it answers the open re- search question, by showing that a large end-to- end model can generate almost humanlike chat re- sponses in an open-domain setting. In this section, we will describe the training data, architecture, and decoding algorithm. We will also provide a few sample conversations that Meena has had with humans. # 3.1 Training Data The dataset used to train Meena is mined and fil- tered from public domain social media conversa- tions. The source data are essentially message trees involving multiple speakers: the very first message is the root; replies to a message are its child nodes. Any path along the tree induces a conversation where each message is a conversa- tion turn. By treating each turn in a conversation path as a response and all the previous turns (up to 7) as a context, we create a training example of the form (context, response) pair. We also filter the data to improve the genera- tion quality. A message is removed if any of the following conditions holds: 1. the number of sub- words is less than 2 or more than 128; 2. the per- centage of alphabetic characters is less than 70%; 3. message contains URL; 4. author’s username contains “bot”; 5. the message is repeated more than 100 times; 6. the message has a high n-gram overlap with the parent’s text; 7. the message is potentially unsafe or offensive with respect to a commercial text classifier. In addition, we remove copies of the parent’s text quoted in a message. For simplicity, when a message is removed, we drop all sub-trees rooted under it. After these fil- tering steps, the number of (context, response) pairs extracted is 867M. The text is tokenized using byte-pair-encoding (BPE) (Sennrich et al., 2016) with the sentencepiece library.10 We use a vocabulary of 8K BPE subwords, which we found in our early experiments to be sufficient for gener- ating specific responses while still allowing us to fit larger models in memory. The final Meena dataset contains 341GB of text (40B words). In comparison, GPT-2 (Radford et al., 2019) has been trained on 40GB of Internet text (8 million web pages). # 10https://github.com/google/ # sentencepiece Responses for “Why do you like the ocean?” Score 1. I love the beach. I like that it doesn’t have anything in it. Just open water. 2. It’s dark and mysterious. I discovered it when I was really young and it was a dream of mine since. The ocean is full of wonders, mysteries and great things to discover. I love snorkeling. I moved to Hawaii a year ago and I’m going to college for marine biology. So there’s that. 3. The sea sounds cool and it’s really satisfying to swim in. 4. I dunno. It’s beautiful, I guess. It reflects my upbringing, I reckon. 5. Well I’m from the pacific northwest and always somehow end up back there. I’ve been to Hawaii but I’m not one for the beach, it’s too hot and the air is so dry. 6. The beauty of it is incredible, and swimming in it and being able to see huge waves sweeping by is just amazing. I was born in the desert so it’s really the only place in the world I’ve seen with flowing water around me 7. 
It’s beautiful, maybe? I just wish I lived closer to it! 8. It’s pretty and soothing. 9. Because I love the smell. I love walking when I can through the beach. I live in California so I basically go to the ocean almost every weekend. 10. The ocean is pretty cool, I suppose. I like the various types of fish. -12.98 -14.03 -14.62 -14.87 -15.36 -17.07 -17.14 -17.19 -17.38 -17.90 Table 2: Sampling Outputs – responses found by sampling with temperature, sorted by the length-normalized log- likelihood scores and using the same Evolved Transformer as the one used in Table 3. The scores are calculated as log P T , where P is the likelihood of the response and T is the number of tokens. Responses for “Why do you like the ocean?” Score 1. Why don’t you like the ocean? 2. I don’t like the ocean. I love the ocean. 3. I don’t like the ocean. I love it. 4. I don’t like the ocean, I love the ocean. 5. I don’t like the ocean, I love it. 6. Why don’t you like the ocean? :P 7. I don’t like the ocean, I love it! 8. I don’t like the ocean. I love the ocean! 9. Why don’t you like the ocean? It’s beautiful. 10. I love the ocean. There’s a difference. I don’t like the ocean. -1.70 -2.66 -2.78 -2.94 -2.94 -2.95 -3.15 -3.20 -3.26 -3.31 Table 3: Beam Search Outputs – top responses gen- erated by beam-search decoding and the correspond- ing length-normalized log-likelihood scores. We use an Evolved Transformer with perplexity 10.2 and vo- cabulary size of 8K. # stant.11 For extra-large GPT-2 the model (Radford et al., 2019) has 1.5B parameters and is a language model (i.e., decoder only); whereas the large conversational model from the recent DialoGPT work (Zhang et al., 2019) has 762M parameters. Meena’s hidden size is 2,560 and the number of attention heads is 32. We share the embed- dings across the encoder, the decoder, and the soft- max layer. The encoder and decoder each have a maximum length of 128 tokens (i.e., 256 com- bined). The hyperparameters of our best model were found via manual coordinate-descent search. # 3.3 Training Details # 3.2 Model Architecture The best performing Meena model is an Evolved Transformer (ET) (So et al., 2019) seq2seq model with 2.6B parameters, which includes 1 ET en- coder block and 13 ET decoder blocks. The Evolved Transformer is an evolutionary NAS ar- chitecture (Real et al., 2017, 2018) based on the Transformer (Vaswani et al., 2017). Our largest (i.e., maximum memory usage) Evolved Trans- former scored 10.2 perplexity and our largest vanilla Transformer scored perplexity 10.7 for the same number of training steps (738k). The largest vanilla Transformer had 32 decoder layers with other architectural hyperparameters held con- We trained our best model for 30 days on a TPU- v3 Pod (2,048 TPU cores) on the Meena dataset containing 40B words (or 61B BPE tokens). Inter- estingly, the 2.6B-parameter model can overfit 12 on a 61B-token dataset which suggests a surpris- ingly large model capacity. Therefore, we add a small amount of 0.1 attention and feed-forward layer dropout. Additionally, to save memory, we chose the Adafactor optimizer (Shazeer and Stern, 2018) with 0.01 as the initial learning rate, keep- ing it constant for the first 10k steps and then de- caying with the inverse square root of the num- ber of steps. We use the Tensor2Tensor code- 11An Evolved Transformer block is about twice as deep as a Transformer layer 12In the sense that validation loss increases as train loss decreases. 
base (Vaswani et al., 2018) for training Meena.13 A TPU-v3 core has 16GB of high-bandwidth memory. We maximized memory usage for model parameters and stored only 8 training examples per core. Each training step took about 1 second. In the full TPU-v3 Pod, this meant we learned over 4M tokens per training second. Therefore, by the end of training, the model had traversed the full training set 164 times (or epochs) and observed a total of about 10T tokens (including repeated ones). # 3.4 Decoding Generating generic (i.e., not specific) and bland responses (Li et al., 2016a) has always been a major challenge in existing neural conversational models. A common approach to mitigating this problem is to use more sophisticated decoding al- gorithms, for instance with different forms of re- ranking (Li et al., 2016a; Shao et al., 2017) or con- ditioning on profiles, topics, and styles (Li et al., 2016b; Wang et al., 2017; Xing et al., 2017; Zhang et al., 2018b). Recent works also explore new frameworks such as adversarial learning (Li et al., 2017; Zhang et al., 2018c), variational autoencod- ing (Zhao et al., 2017; Gu et al., 2019), or both (Gao et al., 2019b) at the cost of added complex- ity and less scalability. In contrast, we show that given a model with sufficiently low perplexity, a simple sample-and- rank decoding strategy achieves both diverse and high-quality responses. Sample-and-rank, works as follows: First, we sample N independent candi- date responses using plain random sampling with temperature T . Second, we select the candidate response with the highest probability to use as the final output. Temperature T > 0 is a hyper-parameter that regulates the probability distribution pi of the next token during decoding. We divide the logits zi by T before computing the “softmax” as in Hinton et al. (2015): p= exp(zi/T) “So; exp(z;/T) qd) T = 1 yields the unmodified distribution. We observe that large values of T favor contextually rare tokens, such as relevant entity names, but # 13https://github.com/tensorflow/ # tensor2tensor might also assign too much probability to incor- rect tokens depending on the model’s predictions. Meanwhile, smaller values of T favor more com- mon words such as articles or prepositions, which are safer but less specific. Tables 2 and 3 show responses for the arbi- trary probing input “Why do you like the ocean?” under sample-and-rank and beam-search, respec- tively. As we can see, beam-search decoding gen- erates repetitive and uninteresting responses. On the other hand, sample-and-rank provides us with diverse and content-rich responses. The key here is to have a model with low perplexity so sam- ples can be taken at high temperature to produce human-like content. For all the results in Section 4, we use sample- and-rank with N = 20 and T = 0.88. Addition- ally, as shown in Figure 1, for this fixed decoding strategy, sensibleness and specificity improve as model test set perplexity falls. For additional de- coding results and samples, see Section 5. # 3.5 Sample conversations Below are cherry picked conversations that Meena has had with humans. We selected these conver- sations after they were completed. That is, the Meena responses within the conversations were not cherry picked; they were produced automati- cally using sample-and-rank (Section 3.4). 
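For reference, Equation 1 above is the usual temperature softmax, p_i = exp(z_i / T) / sum_j exp(z_j / T), and sample-and-rank simply draws N candidates from it and keeps the one the model scores highest. A minimal sketch is given below; `sample_response` is an assumed helper that draws a single response at the given temperature and returns it together with the model's log-likelihood score, and the exact scoring (e.g., the length normalization shown in Table 2) is left to that helper.

```python
def sample_and_rank(sample_response, n_samples=20, temperature=0.88):
    """Sample-and-rank decoding with the settings used for the Section 4 results.

    Draws N independent candidate responses by temperature sampling and
    returns the candidate the model assigns the highest score.
    """
    candidates = [sample_response(temperature) for _ in range(n_samples)]
    best_text, _best_score = max(candidates, key=lambda pair: pair[1])
    return best_text
```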
Con- versations B and C are excerpts from conversa- tions number 43 and 48, respectively, of the con- versations dataset published on GitHub.14 Conversation A Human: say you’re in an empty giant office and you see a black bird flying indoors. what do you do? Meena: What kind of bird? Is it friendly? How large? How big is the office? Is it indoors or outdoors? Human: well it’s a predator bird and it’s gonna eat you I would grab my phone and call 911. Meena: Meena handles arbitrary open-domain input. However, asking “Is it indoors or outdoors” was not sensible. 14https://github.com/google-research/ google-research/tree/master/meena/ # Conversation B Human: Hi! Meena: Hi! How are you this fine evening? I’m doing well. it’s morning my time! Human: Meena: Good morning! What are your plans for today? Human: Get my work done then watch a movie at home Human: Get my work done then watch a movie at home in the night in the night Sounds fun! What movie? I haven’t decided yet, perhaps an Oscar- nominated one. any suggestion? I’m not too familiar with recent Oscar winners, but I do know I wanted to see The Grand Bu- dapest Hotel. ah, tell me about that movie! It’s from the same director and writer as Moon- rise Kingdom. Wes Anderson is pretty well I’ve known for making funny, colorful films. heard it’s pretty good. ha! movies are very colorful! I’m looking forward to it. I love colorful movies. # Meena: Human: Meena: Sounds fun! What movie? # J # Meena: = # Human: Meena: Human: a, tell me about that movie! # Human: # Meena: = The Grand Budapest Hotel and Moonrise Kingdom are indeed colorful movies, directed by Wes Anderson. # Conversation C Human: Haha, I think mathematics has a presence be- yond human existence. So does philosophy. Meena: Human: Can you state with examples how this is true. Meena: Can you state how it isn’t? Human: Without human existence there will be no one left to comprehend whats right whats wrong, and thus philosophy wi [sic] That’s not the definition of philosophy. Meena: Human: Without human existence there will be no one left to comprehend whats right whats wrong, and thus philosophy will not be needed. How do you define philosophy? The study of general knowledge or the funda- mental nature of things. Meena tries to define philosophy. Conversation D Human: Meena: I like how cute cows are Ah, interesting. You like how they are cute, right? yeah, and they’re realy smart I heard they go to college cows go to college? I heard that a cow went to Harvard. Human: Meena: Human: Meena: Human: what did the cow study? Meena: Human: Meena: Horses go to Hayvard. Human: Bovine sciences. do horses go to Harvard? that’s a pretty good joke, I feel like you led me into it You were trying to steer it elsewhere, I can see it. Meena: Meena executes a multi-turn joke in an open-domain setting. We were unable to find this in the data. Appendix A shows random samples of conver- sations. # 4 Results In this section, we will first demonstrate the corre- lation between test perplexity and the human eval- uation metric, SSA, defined earlier. We also in- clude human-level upperbound estimates for both static and interactive evaluations, beside perfor- mances of other chatbots, such as XiaoIce, Mit- suku, DialoGPT, and Cleverbot. Lastly, we pro- vide sample responses for different models given the same contexts to understand how Meena qual- itatively compares to others. 
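The strength of the perplexity-SSA correlation examined in this section is summarized with the coefficient of determination (R2) of a least-squares fit of SSA on test perplexity, one point per model version. A minimal numpy sketch of that computation is given below; the underlying (perplexity, SSA) pairs are the ones plotted in the figures and are not re-listed here.

```python
import numpy as np

def r_squared(perplexities, ssa_scores):
    """R^2 of a least-squares linear fit of SSA (%) on test perplexity."""
    x = np.asarray(perplexities, dtype=float)
    y = np.asarray(ssa_scores, dtype=float)
    slope, intercept = np.polyfit(x, y, deg=1)   # regression line
    residuals = y - (slope * x + intercept)
    return 1.0 - np.sum(residuals ** 2) / np.sum((y - y.mean()) ** 2)
```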
100 Human (97%) %) Mitsuku (72%) Cieverbot 168%) DialoGPT (57%) Interactive Sensibleness (%) [| --------- Xiaolce (45%) | een 40 20 0 10 12 14 16 18 Perplexity Figure 3: Interactive sensibleness vs perplexity. 100 DialoGPT (39%) Interactive Specificity (%) 10 12 14 16 18 Perplexity Figure 4: Interactive specificity vs perplexity. # 4.1 SSA-perplexity correlation We trained models with different hyper-parameter settings and architectures on the dataset described in Section 3.1. We vary the number of layers, attention heads, total training steps, whether we use Evolved Transformer or regular Transformer and whether we train with hard labels or soft la- bels/distillation (Hinton et al., 2015). The trained models are then measured with an automatic met- ric, test perplexity (Section 2.7), and also with hu- man metrics (Sections 2.2 and 2.3). Our results indicate most of the variance in the human metrics can be explained by the test perplexity. The end- to-end trained Meena model with lowest perplex- ity is referred to as Meena (base). In addition, we also include an improved version of Meena (de- tailed in Section 5) and refer to this as the Meena (full) model, or just Meena model for short. The correlation was R2 = 0.93 for static sen- sibleness vs perplexity and R2 = 0.94 for static specificity vs perplexity indicating this might be a good automatic metric for measuring sensible- ness and specificity. Static SSA vs perplexity has R2 = 0.94. The static evaluation results are shown in Figure 5. The correlation is close to linear, but it is unclear whether the trend will continue for even lower values of perplexity. In interactive evaluation (Section 2.3) crowd workers could chat about anything they wanted. We observe similarly strong correlation with per- plexity (see Figures 1, 3 and 4) and very simi- lar sensibleness and specificity values as the static evaluation. This indicates that the static evaluation correlation with perplexity is not due to dataset bias. the lowest perplexity model was evaluated 7 times with static evalu- ations and also 7 times with interactive evalua- tions. Each time, we obtained a different set of randomly sampled responses. Across the evalua- tions the standard deviation is 2% for static SSA and is 1% for interactive SSA, indicating that both metrics are consistent enough for our purposes. # 4.2 Human-level Estimates As expected, human sensibleness is very high, but it is not perfect. Human sensibleness was esti- mated at 94% static and 97% interactive. Peo- ple have misunderstandings, miss attempts at hu- mor and sometimes lack shared context or back- ground. Also aligned with intuition, humans are sometimes not specific due to momentary lack of ideas, interest or knowledge. The human speci- ficity scores are 69% static and 75% interactive. The resulting SSAs are 82% static and 86% inter- active. # 4.3 XiaoIce, Mitsuku, DialoGPT and Cleverbot Crowd workers labeled 1,173 XiaoIce turns within their original conversation context. Per these la- bels, XiaoIce scores 31% interactive SSA which is comprised of 45% sensibleness and 17% speci- ficity. We used majority voting of 5 workers per chatbot response. Agreement between work- ers was 77% for sensibleness and 81% for speci- ficity and Krippendorff’s alpha was 0.54 for sen- sibleness and 0.40 for specificity (which indicates fairly strong agreement). 
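The exact agreement formula behind the percentages above is not spelled out; the sketch below shows one common observed-agreement measure (average pairwise agreement between raters) purely as an illustration of how such numbers can be computed alongside Krippendorff's alpha.

```python
from itertools import combinations

def pairwise_agreement(labels_per_turn):
    """Average observed pairwise agreement between raters, in percent.

    `labels_per_turn` has one entry per labeled chatbot turn; each entry is the
    list of binary labels (e.g., sensibleness votes) given by the workers.
    """
    per_turn = []
    for labels in labels_per_turn:
        pairs = list(combinations(labels, 2))
        per_turn.append(sum(a == b for a, b in pairs) / len(pairs))
    return 100.0 * sum(per_turn) / len(per_turn)
```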
For further verification of the results, we also had a group of 4 inter- nal company volunteers that are native Mandarin speakers to label a subset of 25 conversations ( 247 chatbot turns). The volunteers did not know the crowd worker results. The volunteer based esti- mate is 36% interactive SSA with 53% sensible- ness and 19% specificity. Volunteers report that XiaoIce responses dis- play empathy and personality. However, the re- sponses do not always make sense and are often generic. The generic responses exuded a fun per- sonality, but were not necessarily related to the on- going conversation. As mentioned above, XiaoIce is reported to be optimized for long term engage- ment (Zhou et al., 2018). It is also designed with a multifaceted notion of ‘humanlikeness’ in mind, emphasising interpersonal qualities such as empa- thy. As we discuss in Section 7, a broader metric of humanlikeness would be a useful topic for fu- ture research. See et al. (2019) report engagingness is not the same as humanness. In that work, they control and optimize certain attributes of response gen- eration. One of the results is that their chat- bots achieve near-human-level engagingness, but simultaneously are not very close to human-level humanness. This along with limitations described in Section 2.6 may help explain why XiaoIce’s sensibleness and specificity scores seem relatively low in our evaluation. Mitsuku scores 56% interactive SSA with 72% sensibleness and 40% specificity over 899 chat- bot turns. Mitsuku has won the Loebner prize 5 times,15 however, we learned from the Pan- dorabots team that the website version we tested with was not the one that competed in the Loebner prize. Optimizing human likeness for the compe- tition might correlate more with our metrics. Using crowd workers, we executed interactive evaluation of DialoGPT over 300 conversations (2,660 chatbot turns labeled). DialoGPT scores # 15https://en.wikipedia.org/w/index.php? title=Loebner_Prize&oldid=927503336 100 Human (829%) SET SSA(%) 80 60 Sensibleness (%) 40 20 Specificity (%) Fy Fes Ts 16 Fy To rvs Py 16 Te To By Ts Fes Fy Perplexiy Perplexity Perplexiy Figure 5: Static evaluation results. Sample Contexts Meena (base) Cleverbot DialoGPT A: You must have read a lot of books B: Oh wow, how do you know? Do I look nerdy? A: Of course, you’re not! B: So how do you know? [sensible: 1, specific: 1] B: Never. said I was. [sensible: 0, specific: 0] B: Oh thank goodness. [sensible: 1, specific: 1] A: You know nothing, Jon Snow! B: You must be a fan of Game of Thrones A: Haha yea, how can I not? B: Who’s your favorite character? [sensible: 1, specific: 1] B: What do you mean? [sensible: 0, specific: 0] B: You can’t. But that’s not why you shouldn’t know. [sensible: 0, specific: 0] A: What is your favorite island in the world? B: I would say Fiji A: Interesting, what’s nice about it? B: Just a beautiful place [sensible: 1, specific: 0] B: The title is self ex- planatory. [sensible: 0, specific: 0] B: You can see some great scenery. [sensible: 1, specific: 0] A: You must have read a lot of books B: Oh wow, how do you know? Do I look nerdy? A: I know because I’ve read a lot of books. [sensible: 1, specific: 1] A: Observation. [sensible: 1, specific: 0] A: I know because you look like your mom. [sensible: 1, specific: 1] A: You know nothing, Jon Snow! B: You must be a fan of Game of Thrones A: Yeah, I’m reading the books at the moment. [sensible: 1, specific: 1] A: No. [sensible: 1, specific: 0] A: I’ve never really been interested in it... 
[sensible: 1, specific: 1] A: What is your favorite island in the world? B: I would say Fiji A: Fiji, yes! I’ve been there once. It’s the most beautiful place I’ve ever been. It definitely makes me want to move there. I hope you get to visit again soon! [sensible: 1, specific: 1] A: I would have guessed 666. [sensible: 0, specific: 0] A: Fiji is awesome! But I would say Fiji and Ocea- nia are more popular at the moment. [sensible: 0, specific: 0] A: You must have read a lot of books B: Reading’s for nerds [sensible: 1, specific: 1] B: What books have you read. [sensible: 1, specific: 1] B: I read a lot of books but have never read a sin- gle book about the history of our planet. [sensible: 1, specific: 1] A: You know nothing, Jon Snow! B: Thank you for this. I was hoping I wasn’t the only one getting an ASOIAF vibe from this. [sensible: 1, specific: 1] B: Am not jon snow. [sensible: 1, specific: 0] B: The North Remem- bers... [sensible: 1, specific: 1] A: What is your favorite island in the world? B: That’s a tough one. I’d have to say Hawaii. [sensible: 1, specific: 1] B: Daniel I like frites. [sensible: 0, specific: 0] B: Probably the one on the north island [sensible: 0, specific: 0] Table 4: Sample responses from static evaluation – Shown are responses of Meena (base), Cleverbot, and Di- aloGPT on some sample contexts together with their binary labels per category (sensibleness and specificity). Note that the contexts were constructed in advance before we generate all model responses. These contexts are not in the MTB dataset. 48% ± 3% interactive SSA with 57% ± 2% sen- sibleness and 39% ± 4% specificity. It obtained similar results in static evaluation. In its paper, DialoGPT attains performance close to human in a single-turn setting. On the other hand, our hu- man evaluation is multi-turn, which is expected to be more challenging. Additionally, DialoGPT scores poorly on specificity, and our impression from browsing transcripts is that it prefers briefer and more generic responses. This might be be- cause the model is optimized for classic Turing- test evaluation, in which overly chatty responses increase the risk of making a mistake. These re- sults and conjectures come with the caveat, as de- scribed above, that we wrote our own decoder for this model since the public DialoGPT codebase does not yet have one. Cleverbot, unlike Meena and DialoGPT, per- forms notably better on interactive rather than It scores interactive SSA 56% static evaluation. and static SSA 44%. Interactive specificity, 45%, is especially higher than its static counterpart, 28%. Upon closer inspection of the data, we hy- pothesize that: (1) in the interactive setting, Cle- verbot has opportunities to steer the conversation towards topics that it is more familiar with; (2) the minimum interactive conversation length of 14 turns makes it possible for a significant portion of these turns to be greetings and goodbyes, which both Cleverbot and Mitsuku are consistent in ap- propriately responding to. Furthermore, the inter- active SSA scores for Mitsuku and Cleverbot are the same, 56% when averaging sensibleness and specificity before rounding. Mitsuku scores higher sensibleness (72% versus 68%), but lower speci- ficity (40% versus 45%). It seems that relative to Mitsuku, Cleverbot replies more often in ways that are borderline nonsensical and lack consistent per- sonality. 
Finally, we remark that the standard de- viation of the Cleverbot interactive SSA is ±1% across two interactive evaluation sessions.16 # 4.4 Sample Responses: Meena (base), Cleverbot, and DialoGPT To understand how Meena qualitatively compares to other models, we show in Table 4 sample re- sponses from Meena (base), Cleverbot, and Di- aloGPT under the same set of contexts (which 16Due to technical issues when calling the Cleverbot API we only collected 195 interactive conversations (1,751 chat- bot turns labeled) instead of the 300 conversations which we collected for DialoGPT. were constructed before we generate all model re- sponses). For 1- and 2-turn contexts, responses from Meena base are all sensible and specific. In addition, Meena (base) generates rich and interest- ing responses, e.g., the mention of “ASOIAF vibe” to refer to “A Song of Ice and Fire” in the famous Game of Thrones series or the remark about Fiji island being “the most beautiful place I’ve ever been”. In contrast, Cleverbot can generate sensible re- sponses for some contexts, but they are not always specific, e.g., Cleverbot replied with “Observa- tion” and “No”. DialoGPT is more specific and can also generate interesting responses, e.g., “The North Remembers ...”’. However, it does not make sense at times, e.g., in-turn contradiction in this re- sponse “Fiji is awesome! But I would say Fiji and Oceania are more popular ...” or vague answer “Probably the one on the north island”. When it comes to longer (3-turn) contexts in Ta- ble 4, Meena (base) continues to generate high- quality responses, whereas none of Cleverbot’s re- sponses are sensible. DialoGPT is more sensible and specific than Cleverbot, but less so than Meena (base). # 5 Further Advancing SSA In this section we take the interactive SSA from 72% ± 1%, for Meena (base), to 79% ± 1%, for Meena (full), by further tuning our decoding strat- egy and adding a rule to detect cross turn repeti- tions. # 5.1 Advancing Decoding We evaluate both temperature T and top-k to mit- igate negative effects from the tail of the distribu- tion (Holtzman et al., 2019). We chose top-k (k = 40) and T = 1.0 following Fan et al. (2018); Rad- ford et al. (2019); Keskar et al. (2019); Ippolito et al. (2019a). With this setting and maintaining N = 20, we note an SSA increase from 72% to 74% relative to sampling from the whole vocabu- lary with T = 0.88. This result is the same for both the interactive and the static evaluation. the number of samples in sample-and-rank, evaluating N ∈ {1, 20, 400}. The results show that N = 20 provides a sig- nificant improvement over N = 1, with an ab- solute improvement in SSA of ∼10% (Figure 6). However, N = 400 demonstrates worse perfor- mance for sensibleness (Figure 7 in the appendix) and diminishing returns over N = 20 for speci- ficity (Figure 8 in the appendix). The significant improvement from sample-and-rank with N = 20 motivates future work exploring alternate ranking functions and tuning parameters. 90 85 80 75 Ox @ 70 SSA (%) 65 60 9 55 x top_k=40 © temp=0.88 50 1 20 400 Num Samples Figure 6: Static SSA over number of sampled re- sponses for top-k and sampling with temperature. # 5.2 Addressing Cross-turn Repetitions In interactive evaluation, about one third of the conversations with Meena (base) contain cross- turn repetitions toward the end. Cross-turn rep- etition means that one turn somewhat repeats an earlier turn. 
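Such repetitions can be caught automatically: the rule introduced later in this subsection flags a candidate that shares a long common sub-sequence with an earlier turn. A minimal sketch of that kind of check is given below; the character-level matching and the overlap threshold are assumptions for illustration, since the exact rule is not published.

```python
from difflib import SequenceMatcher

def has_cross_turn_repetition(candidate, previous_turns, min_overlap=20):
    """Flag a candidate response that repeats a long chunk of an earlier turn.

    Uses the longest common contiguous block of characters between the
    candidate and each previous turn; flagged candidates would be removed
    before ranking.
    """
    for turn in previous_turns:
        match = SequenceMatcher(None, candidate, turn).find_longest_match(
            0, len(candidate), 0, len(turn)
        )
        if match.size >= min_overlap:
            return True
    return False
```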
For illustration, we cherry picked par- ticularly problematic examples of cross-turn repe- tition shown in Tables 5 and 6. It is worth mentioning that there also exist in- turn contradictions and repetitions, where the con- tradiction or repetition is contained in the response turn itself (e.g., “I like pizza, but I don’t like it”). This type of artifact is often observed in Meena versions with worse perplexities, but is far less fre- quent in the Meena (base), which has the lowest perplexity as reflected in the samples shared in the appendix and the higher sensibleness scores. We wrote a rule that detects if any two turns contain long common sub-sequences. We auto- matically remove candidates that are detected as repetition. This rule seems to have addressed most of the cross-turn repetition. We, therefore, further improve on the above interactive SSA of 74%±%1 to 79% ± 1%. # 5.3 Safety Layer It is important to mention that the evaluation and conversation collection for the full Meena version was conducted with an additional classifier layer at serving time as part of the filtering mechanism Human: Hi! Meena: Hi! How are you?! :) Human: doing good, what would be the best city to visit in Europe? Paris, Barcelona, Amsterdam, Prague great list! why? Paris is a beautiful city, and Barcelona is too. And I’ve always wanted to go to Amsterdam and Prague and have never been. have you even been to India? Meena: Human: Meena: Human: Meena: No, but I’d love to go. Human: Meena: Human: Meena: Human: Meena: Human: Meena: Human: Meena: Human: where else? Meena: how about Japan? I’d love to go to Japan too. how about Paris? I’d love to go to Paris too. are you lying to me? I’d love to go to Japan too. i think you’re lying to me, that makes me sad I’d love to go to Japan, too. you’re blocking me out! you have to talk to me I’d love to go to Japan. I’d love to go to Japan, too! Table 5: Multiple near identical repetitions of “I’d love to go to Japan too.”. # Cross-turn Repetition Example 2 Human: Hi! Meena: Hey there! How are you doing? Human: Meena: Human: not well, i’m quite depressed That’s no good. Wanna talk about it? not really, it’ll only make it worse...can you cheer me up? Sure! What’s got you down? Meena: I’d love to hear a really lame joke Human: Meena: How about this one? What did the farmer say when he lost his tractor? Where is my tractor? haha that was lame! where did you hear it? I was at the mall and I heard it in the food court. How was your day? Human: Meena: Human: wow, i didn’t know you could walk! Meena: Why did the scarecrow win an award? i don’t know..you tell me! Human: Because he was outstanding in his field! Meena: haha, where did you hear that? Human: I was in the food court and I heard it in the field! Meena: you have a great sense of humor! Human: Meena: Why did the chicken cross the road? He was outstanding in his field! ok, i don’t want to hear any more jokes Human: Meena: Do you know why he was outstanding in his field? Because he was a chicken! haha looks like you’re really into this discus- sion! Yes! I am! Do you know why? Because he was outstanding in his field! haha, borderline creepy but still funny :D Human: Meena: Human: Meena: Why was the mouse trapped in the bag? Be- cause he was outstanding in his field! i think i broke you Human: Meena: Do you know why I broke you? Because you were outstanding in your field! Table 6: Multiple partial repetitions of “(...) Because he was outstanding in his field!”. 
# 5.3 Safety Layer

It is important to mention that the evaluation and conversation collection for the full Meena version was conducted with an additional classifier layer at serving time as part of the filtering mechanism to automatically filter out potentially sensitive or toxic response candidates for publication.

# 6 Related Work

Finding a good automatic metric that correlates with human evaluation has been an important goal of open-domain conversational modeling. BLEU (Papineni et al., 2002), ROUGE (Lin, 2004), and other related metrics from translation and summarization, while popular and easy to compute, have been shown to be unsuitable for dialog (Liu et al., 2016) or, more broadly, for language generation systems (Novikova et al., 2017).

Past works have attempted to build learnable metrics, either in a supervised fashion (Lowe et al., 2017), which requires human labels, or with unsupervised approaches (Tao et al., 2017; Ghazarian et al., 2019), which are more complex and need separate training, e.g., of a ranking system. In our work, we show that perplexity, which is readily available to any neural seq2seq model, exhibits a strong correlation with human evaluation. Our work is therefore also related to past attempts to correlate perplexity with other automatic metrics in other tasks, e.g., perplexity vs. BLEU in translation (Luong et al., 2015).

Another interesting line of work is to combine human evaluation with either automatic metrics (Chaganty et al., 2018) or with model likelihood (Hashimoto et al., 2019). While theoretically motivated, these metrics are too complex to be practical, requiring both human judgments and training separate models, e.g., an estimator (Chaganty et al., 2018) to reduce bias in automatic evaluation, or a discriminator (Hashimoto et al., 2019) to distinguish between human- and model-generated samples.

In terms of designing human evaluation metrics, the existing literature differs in what attributes are used to assess the quality of a neural conversational model. Many works, e.g., Zhao et al. (2017); Xu et al. (2018); Ippolito et al. (2019b), have focused solely on the diversity aspect to counter the commonly observed problem of models generating generic responses (Li et al., 2016a). Others have attempted to improve and evaluate multiple aspects at once. For example, Venkatesh et al. (2018) aim to unify many metrics, such as diversity, engagement, and user experience; Gao et al. (2019b) jointly optimize for both diversity and relevance; See et al. (2019) control decoding attributes (such as repetition, specificity, response-relatedness, and question-asking) to improve engagingness and interestingness; and Hashimoto et al. (2019) design metrics to capture human likeness and diversity.

In contrast, we focus on sensibleness and specificity for our human evaluation. While human likeness and relevance used in the aforementioned works are related to sensibleness, we specifically use sensibleness as it leads to better agreement among crowd workers (see §2.1). Similar reasoning applies to specificity, which is related to other attributes such as engagingness and interestingness, as measured in previous works.17 A limitation of our work is that it does not cover aspects such as empathy (Zhou et al., 2018; Rashkin et al., 2018).

17 It is worth pointing out that we do not explicitly measure diversity, as it requires judging a set of responses; whereas, for conversation, what is most important is the first reply that a chatbot produces. As our decoding method is sampling, our generation is diverse by construction. However, there remains a question of whether the sampled response is of high quality. The fact that our model has low perplexity and achieves a high SSA score indicates that the generation is meaningful.

While we do not explicitly control for specificity, existing works, such as Zhang et al. (2018a) and Ko et al. (2019), attempted to do so by augmenting the decoder of seq2seq models with specificity-control components. These added complexities sometimes lead to implausible responses, as analyzed by Ko et al. (2019). In contrast, the specificity of our model improves as perplexity decreases.
Recent work on DialoGPT (Zhang et al., 2019) compares the conversation quality of chatbots with that of humans, but their evaluation settings are limited to single-turn dialogs. We instead conduct our evaluation on conversations of up to 3 turns in the static MTB benchmark and 14 turns in the interactive setup.

# 7 Discussion

Our results suggest that perplexity on public domain social media conversations might be a good automatic proxy for human judgement of fundamental attributes of human-likeness, such as sensibleness and specificity. The results also suggest that optimizing the probability of the next token on larger volumes of social media conversations could lead to human-like sensibleness in an open-domain setting. However, our static evaluation dataset only contains one- to three-turn contexts and is biased by the sources of the first turn and by the fact that the two-turn and three-turn contexts build on the shorter contexts. Moreover, the contexts in this dataset are predominantly Turing-test and social conversation style, including common sense, basic knowledge, asking/sharing about personality, likes/dislikes, opinions, feelings, hobbies, pleasantries, etc. This dataset does not include contexts like deeper question answering (e.g., how fast is a cheetah), basic math (e.g., how much is 1+1), or common sense tests designed to challenge machines, but not humans (Levesque et al., 2011). Human-likeness is an incredibly broad and abstract concept.

The interactive evaluation addresses some of the bias and scope limitations in static evaluation while still providing a consistent score to quantify a given chatbot. Nevertheless, unlike static evaluation, it does not allow for granular comparison between different chatbot responses. In addition, it may be too short (14 to 28 turns), and may assign too much weight to the typical beginnings and endings of conversations. It may also be too short to cover deeper topics and exercise longer-term memory.

Furthermore, it may be necessary to expand the set of basic human-like conversation attributes being measured beyond sensibleness and specificity. Some directions could include humor, empathy, deep reasoning, question answering and knowledge discussion skills. One could also break down sensibleness into its implicit sub-components: logical and personality consistency, common sense, relevance, basic factual correctness and so on. Future work may also explore the continued optimization of sensibleness via the optimization of test set perplexity.

# Acknowledgments

Thanks to the people who gave feedback on drafts of the paper: Anna Goldie, Abigail See, Yizhe Zhang, Lauren Kunze, Steve Worswick, Jianfeng Gao, Daphne Ippolito, Scott Roy, Ilya Sutskever, Tatsu Hashimoto, Dan Jurafsky, Dilek Hakkani-tur, Noam Shazeer, Gabriel Bender, Prajit Ramachandran, Rami Al-Rfou, Michael Fink, Mingxing Tan, Maarten Bosma and Adams Yu. Also thanks to the many volunteers who helped collect conversations with each other and with various chatbots.
Finally thanks to Samy Bengio, Noam Shazeer, Anna Goldie, Rami Al-Rfou, Khoa Vo, Trieu H. Trinh, Ni Yan, Kyu Jin Hwang and the Google Brain team for the help with the project. # References Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Ben- gio. 2015. Neural machine translation by jointly learning to align and translate. In ICLR. Arun Chaganty, Stephen Mussmann, and Percy Liang. 2018. The price of debiasing automatic metrics in natural language evalaution. In ACL. Chun-Yen Chen, Dian Yu, Weiming Wen, Yi Mang Yang, Jiaping Zhang, Mingyang Zhou, Kevin Jesse, Austin Chau, Antara Bhowmick, Shreenath Iyer, Giritheja Sreenivasulu, Runxiang Cheng, Ashwin Bhandare, and Zhou Yu. 2018. Gunrock: Building a human-like social bot by leveraging large scale real user data. In Alexa Prize 2018. Angela Fan, Mike Lewis, and Yann Dauphin. 2018. Hi- erarchical Neural Story Generation. arXiv e-prints, page arXiv:1805.04833. Jianfeng Gao, Michel Galley, and Lihong Li. 2019a. Neural approaches to conversational AI. Founda- tions and Trends in Information Retrieval, 13(2- 3):127–298. Xiang Gao, Sungjin Lee, Yizhe Zhang, Chris Brockett, Michel Galley, Jianfeng Gao, and Bill Dolan. 2019b. Jointly optimizing diversity and relevance in neural response generation. In NAACL. Asma Ghandeharioun, Judy Hanwen Shen, Natasha Jaques, Craig Ferguson, Noah Jones, Agata Lapedriza, and Rosalind Picard. 2019. Approximat- ing interactive human evaluation with self-play for open-domain dialog systems. In Advances in Neu- ral Information Processing Systems, pages 13658– 13669. Sarik Ghazarian, Johnny Tian-Zheng Wei, Aram Gal- styan, and Nanyun Peng. 2019. Better auto- matic evaluation of open-domain dialogue sys- CoRR, tems with contextualized embeddings. abs/1904.10635. Xiaodong Gu, Kyunghyun Cho, Jung-Woo Ha, and Sunghun Kim. 2019. DialogWAE: Multimodal response generation with conditional wasserstein auto-encoder. In ICLR. Tatsunori B. Hashimoto, Hugh Zhang, and Percy Liang. 2019. Unifying human and statistical eval- uation for natural language generation. In NAACL- HLT. Jennifer Hill, W. Randolph Ford, and Ingrid G. Far- reras. 2015. Real conversations with artificial in- telligence: A comparison between human-human online conversations and human-chatbot conversa- tions. Computers in Human Behavior, 49:245–250. Geoffrey E. Hinton, Oriol Vinyals, and Jeffrey Dean. 2015. Distilling the knowledge in a neural network. ArXiv, abs/1503.02531. Ari Holtzman, Jan Buys, Maxwell Forbes, and Yejin Choi. 2019. The curious case of neural text degen- eration. ArXiv, abs/1904.09751. Daphne Ippolito, Daniel Duckworth, Chris Callison- Burch, and Douglas Eck. 2019a. Human and ArXiv, automatic detection of generated text. abs/1911.00650. Daphne Ippolito, Reno Kriz, Joao Sedoc, Maria Kustikova, and Chris Callison-Burch. 2019b. Com- parison of diverse decoding methods from condi- tional language models. In ACL. Nitish Shirish Keskar, Bryan McCann, Lav R. Varsh- ney, Caiming Xiong, and Richard Socher. 2019. Ctrl: A conditional transformer language model for controllable generation. ArXiv, abs/1909.05858. Wei-Jen Ko, Greg Durrett, and Junyi Jessy Li. 2019. Linguistically-informed specificity and semantic plausibility for dialogue generation. In NAACL. Klaus Krippendorff. 2011. Computing krippendorff’s https://repository. alpha-reliability. upenn.edu/asc_papers/43. Hector J. Levesque, Ernest Davis, and Leora Morgen- stern. 2011. The winograd schema challenge. In KR. Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, and Bill Dolan. 2016a. 
A diversity-promoting ob- jective function for neural conversation models. In NAACL-HLT. Jiwei Li, Michel Galley, Chris Brockett, Georgios Sp- ithourakis, Jianfeng Gao, and Bill Dolan. 2016b. A persona-based neural conversation model. In ACL. Jiwei Li, Will Monroe, Tianlin Shi, S´ebastien Jean, Alan Ritter, and Dan Jurafsky. 2017. Adversarial learning for neural dialogue generation. In EMNLP. Chin-Yew Lin. 2004. ROUGE: A package for auto- In ACL workshop matic evaluation of summaries. on Text Summarization Branches Out. Chia-Wei Liu, Ryan Lowe, Iulian Serban, Mike Nose- worthy, Laurent Charlin, and Joelle Pineau. 2016. How not to evaluate your dialogue system: An em- pirical study of unsupervised evaluation metrics for dialogue response generation. In EMNLP. Iulian V. Ser- ban, Nicolas Angelard-Gontier, Yoshua Bengio, and Joelle Pineau. 2017. Towards an Automatic Tur- ing Test: Learning to Evaluate Dialogue Responses. ACL. Minh-Thang Luong, Ilya Sutskever, Quoc V. Le, Oriol Vinyals, and Wojciech Zaremba. 2015. Addressing the rare word problem in neural machine translation. In ACL. Jekaterina Novikova, Ondˇrej Duˇsek, Amanda Cercas Curry, and Verena Rieser. 2017. Why We Need New Evaluation Metrics for NLG. EMNLP. Kishore Papineni, Salim Roukos, Todd Ward, and Wei jing Zhu. 2002. BLEU: a method for automatic eval- uation of machine translation. In ACL. Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. OpenAI Blog, 1(8):9. Hannah Rashkin, Eric Michael Smith, Margaret Li, and Y-Lan Boureau. 2018. I know the feeling: Learning to converse with empathy. CoRR, abs/1811.00207. Esteban Real, Alok Aggarwal, Yanping Huang, and Quoc V. Le. 2018. Regularized evolution for image classifier architecture search. In AAAI. Esteban Real, Sherry Moore, Andrew Selle, Saurabh Saxena, Yutaka Leon Suematsu, Quoc V. Le, and Alex Kurakin. 2017. Large-scale evolution of im- age classifiers. In ICML. Abigail See, Stephen Roller, Douwe Kiela, and Jason Weston. 2019. What makes a good conversation? how controllable attributes affect human judgments. In NAACL. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural Machine Translation of Rare Words with Subword Units. ACL. Iulian Vlad Serban, Chinnadhurai Sankar, Mathieu Germain, Saizheng Zhang, Zhouhan Lin, Sandeep Subramanian, Taesup Kim, Michael Pieper, Sarath Chandar, Nan Rosemary Ke, Sai Mudumba, Alexan- dre de Br´ebisson, Jose Sotelo, Dendi Suhubdy, Vin- cent Michalski, Alexandre Nguyen, Joelle Pineau, and Yoshua Bengio. 2017. A deep reinforcement learning chatbot. CoRR, abs/1709.02349. Iulian Vlad Serban, Alessandro Sordoni, Yoshua Ben- gio, Aaron C. Courville, and Joelle Pineau. 2016. Building end-to-end dialogue systems using genera- tive hierarchical neural network models. In AAAI. Lifeng Shang, Zhengdong Lu, and Hang Li. 2015. Neural responding machine for short-text conversa- tion. In ACL. Yuanlong Shao, Stephan Gouws, Denny Britz, Anna Goldie, Brian Strope, and Ray Kurzweil. 2017. Generating high-quality and informative conversa- tion responses with sequence-to-sequence models. In EMNLP. Noam Shazeer and Mitchell Stern. 2018. Adafactor: Adaptive Learning Rates with Sublinear Memory Cost. ICML. David R. So, Chen Liang, and Quoc V. Le. 2019. The evolved transformer. In ICML. Alessandro Sordoni, Michel Galley, Michael Auli, Chris Brockett, Yangfeng Ji, Margaret Mitchell, Jian-Yun Nie, Jianfeng Gao, and Bill Dolan. 2015. 
A neural network approach to context-sensitive gen- In NAACL- eration of conversational responses. HLT. Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural net- works. In NeuRIPS. Ruqing Zhang, Jiafeng Guo, Yixing Fan, Yanyan Lan, Jun Xu, and Xueqi Cheng. 2018a. Learning to con- trol the specificity in neural response generation. In ACL. Saizheng Zhang, Emily Dinan, Jack Urbanek, Arthur Szlam, Douwe Kiela, and Jason Weston. 2018b. Personalizing dialogue agents: I have a dog, do you have pets too? In ACL. Chongyang Tao, Lili Mou, Dongyan Zhao, and Rui Yan. 2017. RUBER: an unsupervised method for au- tomatic evaluation of open-domain dialog systems. CoRR, abs/1701.03079. Yizhe Zhang, Michel Galley, Jianfeng Gao, Zhe Gan, Xiujun Li, Chris Brockett, and Bill Dolan. 2018c. Generating informative and diverse conversational responses via adversarial information maximization. In NeuRIPS. Alan M. Turing. 1950. Computing machinery and in- telligence. Mind, 59(236):433–460. Ashish Vaswani, Samy Bengio, Eugene Brevdo, Fran- cois Chollet, Aidan N. Gomez, Stephan Gouws, Llion Jones, Łukasz Kaiser, Nal Kalchbrenner, Niki Parmar, Ryan Sepassi, Noam Shazeer, and Jakob Uszkoreit. 2018. Tensor2tensor for neural machine translation. CoRR, abs/1803.07416. Yizhe Zhang, Siqi Sun, Michel Galley, Yen-Chun Chen, Chris Brockett, Xiang Gao, Jianfeng Gao, Jingjing Liu, and Bill Dolan. 2019. Dialogpt: Large- scale generative pre-training for conversational re- sponse generation. CoRR, abs/1911.00536. Tiancheng Zhao, Ran Zhao, and Maxine Eskenazi. 2017. Learning discourse-level diversity for neural dialog models using conditional variational autoen- coders. In ACL. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In NeuRIPS. Li Zhou, Jianfeng Gao, Di Li, and Heung-Yeung Shum. 2018. The design and implementation of xiaoice, an empathetic social chatbot. CoRR, abs/1812.08989. Anu Venkatesh, Chandra Khatri, Ashwin Ram, Fenfei Guo, Raefer Gabriel, Ashish Nagar, Rohit Prasad, Ming Cheng, Behnam Hedayatnia, Angeliki Met- allinou, Rahul Goel, Shaohua Yang, and Anirudh Raju. 2018. On evaluating and comparing conver- sational agents. CoRR, abs/1801.03625. Oriol Vinyals and Quoc V. Le. 2015. A neural conver- sational model. In ICML Deep Learning Workshop. Di Wang, Nebojsa Jojic, Chris Brockett, and Eric Ny- berg. 2017. Steering output style and topic in neural response generation. In EMNLP. Thomas Wolf, Victor Sanh, Julien Chaumond, and Clement Delangue. 2019. Transfertransfo: A trans- fer learning approach for neural network based con- versational agents. CoRR, abs/1901.08149. # Steve Worswick. Prize 2018. 2018! Mitsuku wins https:// Loebner medium.com/pandorabots-blog/ mitsuku-wins-loebner-prize-2018-3e8d98c5f2a7. [Online; written on September 13, 2018]. Chen Xing, Wei Wu, Yu Wu, Jie Liu, Yalou Huang, Ming Zhou, and Wei-Ying Ma. 2017. Topic aware neural response generation. In AAAI. Zhen Xu, Nan Jiang, Bingquan Liu, Wenge Rong, Bowen Wu, Baoxun Wang, Zhuoran Wang, and Xiaolong Wang. 2018. LSDSCC: a large scale domain-specific conversational corpus for response generation with diversity oriented evaluation met- rics. In NAACL. # A Additional Sample Conversations With the help of many internal company volun- teers we collected a total of about 100 conversa- tions with Mitsuku, XiaoIce and Meena (full). The conversations are available on Github 18. 
This sec- tion contains samples obtained by random shuf- fling these sets, and taking the first 10. Conver- sations were collected following the standard in- structions for interactive evaluation where the hu- man starts. Therefore, conversations are supposed to start with “Hi!”, contain between 16 and 32 turns in total, and are open-domain with no par- ticular topic. Nevertheless, some participants did not follow the first-turn rule strictly, so some con- versations may start with for instance “hi there” instead of “Hi!”. Also, a few conversations are under or over the length limits. Unlike in Section 3.5, which contains cherry picked samples, we present random samples of ev- erything that was collected after a few potentially sensitive conversations have been removed from the original sets. We also redacted potential per- sonally identifiable information and indicated that with the word “REDACTED”. Finally, please note that both XiaoIce and Mitsuku sometimes include an image in their reply and occasionally, volun- teers include text descriptions of the images they see. # Meena Conversation 1 Human: Hi! Meena: Hey there! What’s up? Human: Meena: Same old, Monday blues...what are you up to? I’m procrastinating on a paper I’m supposed to be writing. are you thinking of pulling an all nighter? Possibly. What’s your paper on? Human: Meena: Human: Nah, just do the grunge work around here...one day I wish to write a paper that will change the world! Meena: What’s the topic? Human: Meena: Well what’s the assignment on? breathing under water Human: Meena: How to breathe underwater. Human: Meena: Human: Meena: Human: Meena: Human: Meena: i not sure, any suggestions? that’s what i’m hoping to figure out! I hope you figure it out! Because I can’t. i’m sure you can if you try :) you’re the best! Thank you, but I’m pretty sure you’re the best. how was your weekend? Pretty good. Yours? good, I practiced the violin That’s awesome! How long have you been play- ing? just started learning last month That’s awesome! What made you want to learn the violin? Human: Meena: A.1 Meena The following are a sample of the conversations with the Meena (full) (79%±1% interactive SSA). Meena Conversation 2 I’m doing okay. How about you? I’m doing pretty good, thanks for asking! :) It’s Kimberly! What’s your name? That’s a nice name. My name is Paul. Pulp Fiction Dogs? Human: Hi! Meena: Hello! How are you? :) Human: Meena: Human: What’s your name? Meena: Human: Meena: Nice to meet you Paul! :) Human: Nice to meet you as well Kimberly So, what do you like to do? Meena: I like going to the movies Human: Meena: Cool! What’s your favorite movie? Human: Meena: Nice! I love Reservoir Dogs. :) Yes, that movie is good as well Human: Meena: Who’s your favorite character in Reservoir # 18https://github.com/google-research/ # google-research/tree/master/meena/ Meena Conversation 3 Human: Hi! Meena: Hello! What are you up to tonight? Human: Meena: Human: nothing much, why? Just wondering what you are up to! getting something to eat and then watch some- thing Meena: Whatcha gonna eat? Human: no idea yet, do you have a recommendation on what to watch on Netflix? Meena: What kind of movies do you enjoy? Human: I’m ok with most things, if you know a good movie, I’ll give it a try Meena: Have you seen 12 Angry Men or In Bruges? Human: Meena: no and no I highly recommend them, they are both great movies. Human: what genre of movie are they in? Meena: Human: Meena: Human: Meena: Kristen Stewart and Ben Affleck in which movie? 
Human: The Runaways Meena: so not in any of the two you mentioned earlier? Human: Meena: No, they aren’t in the movies I mentioned ear- Both are dramas are they american dramas? Yes, both are American ok, who are the main characters? lier ok, sounds good Human: So, you haven’t seen them? Meena: no, none of them Human: Meena: Which ones have you seen? none of them Human: Meena: Not even Spider-Man 1? Human: Meena: no That’s a shame. It’s a pretty good movie. Meena Conversation 5 Human: Hi! Meena: Hi! How are you today? Human: Meena: Human: Do you like Rebecca Black’s Friday? Meena: Human: Meena: Human: Meena: Human: Meena: Human: Meena: Human: Meena: Dolphin power! It’s Friday, Friday, Friday :D YAY! I love Fridays! She’s okay. I prefer Friday by Rebecca Sugar. okay. How does that song go? It’s a fun fun fun, fun fun song. :) let’s sing along Fun! My hand is a dolphin! And my cheeks are green And my fins are red And my eyes are blue I’m a dolphin! I’m a dolphin! I’m a dolphin! haha... cool! Meena Conversation 6 Human: Hi! Meena: Hey! How are you doing? I’m fine, thanks! Human: Meena: Well, I’m glad to hear that! Human: Meena: Not really, I haven’t had much to be excited Anything that you are excited about lately? about the past year or so. You? yes, I’m excited about Lunar New Year coming! Human: Meena: Cool, what are you going to do? Human: spending time with friends and family, eating traditional food Meena: Nice, sounds fun! Human: Meena: Human: Meena: yea, tell me something interesting about you I guess I’m pretty good at drawing! lovely! what can you draw? I can draw people, I can do portraits, and I can do animal drawings as well. nice. what about scenery? Human: Meena: Not so well, I’m not that good with trees. Meena Conversation 4 Human: Hi! Meena: Hey! How’s it going? Human: How’s what going? Meena: Human: Meena: Human: Meena: Haha. Do you have a surfboard? Human: Pretty good. How’s your day been? Excellent. I spent most of trying to learn to surf. Sweet! What kind of surfing? The kind where I try to catch a wave and fail I borrowed a friend’s, to see if I like it. I’m still a beginner. Ah. I’m sure you’ll get the hang of it. Appreciate the vote of confidence :) Have you ever been surfing? I have! It’s actually a ton of fun. Meena: Human: Meena: Human: Did you find it hard at first? Meena: Human: Meena: Meena Conversation 7 Human: Hi! Meena: Hi! How’s it going? doing okay, feeling a little bit under the weather Human: Aw, I hope you feel better soon! Meena: thanks! the weather is getting worse Human: Meena: Oh no, I’m sorry. I hope it gets better for you soon. no, it won’t get better any time soon. we have to wait until spring Aww, sorry to hear that. yea, I’m really feeling under the weather Aw, well that’s no good. yea, any suggestion to get over feeling under the weather I wish I knew. Sorry I can’t help. have you ever been feeling under the weather? Human: Meena: Human: Meena: Human: Meena: Human: Meena: Not that I can remember, no. Human: Meena: Nah, I’m just lucky, I guess. you must be a superman! Human: Hi! Meena: Hi, how’s it going? All good! It’s cold Human: It’s pretty cold here too. Meena: Human: where do you stay? Meena: Wisconsin. How about you? San Jose, Cali Human: Meena: Oh nice! I’ve always wanted to go to Cali. Human: Have you been here before? Meena: No, I haven’t. Why? Just curious Human: Meena: Oh, okay. What’s it like? It is full of natural beauty Human: Meena: Oh, nice! I’ve always wanted to go there! 
Human: Meena: Human: Meena: Human: Meena: Oh, I’ve always wanted to go! :D Meena Conversation 8 Human: Hi! Meena: Hi! How are you today? Human: Meena: How is it hectic? Human: it is hectic I have tons of meetings and have a party tonight! Meena: Oh, fun! thanks Human: Meena: How was the party? Human: Meena: What kind of party? dancing! Human: Awesome! What type of dancing? Meena: good question. Any kinds of dancing you like. It Human: is quite diverse :) it is tonight Meena: What’s your least favorite kind of dancing? hmm, not sure. my favorite is tap dance. Human: really fun Favorite tap dance song? hmm, I am not familiar yet. do you have any recommendation? Meena: Not off the top of my head :( Human: Meena: Human: Meena: Human: no problems If you were a fruit, what fruit would you be? interesting question, I like berries so i would like to be one of them it is Meena: Would you consider yourself a member of the Human: Meena: strawberry family? it sounds very sweet :) If I were a dog, what kind of dog would I be? # A.2 Mitsuku The following are a sample of the conversations with Mitsuku. # Mitsuku Conversation 1 Meena: Would you consider yourself a member of the Human: Meena: strawberry family? it sounds very sweet :) If I were a dog, what kind of dog would I be? Meena Conversation 9 Human: Hi! Meena: Hello! How are you doing? I’m good. I’m fishing right now Human: Meena: Cool! What are you fishing for? Human: Meena: Cool! What do you catch with it? I’m using a walker lure Human: Meena: Cool! Do you eat the fish? Yes. I happen to like eating fish Human: Meena: Cool! What types of fish do you like to eat? Human: Meena: Cool! What do you catch with catfish? Human: I’m fishing bass Besides bass, I guess catfish and cod haha you don’t use catfish as lure. Catfish is a fish! Meena: Oh, haha, I thought you meant you used a lure to catch catfish oh I did. anyway do you like fishing? Yeah, I think it’s a lot of fun. I go to the ocean. Where do you usually go? Human: Hi! Mitsuku: Hi there. Human: Mitsuku: Not really. I am REDACTED TERM years old. Human: Oh, I meant this neighborhood, not the planet You must be new around here! earth! haha I hate one word answers. It’s ok. I understood you anyway. Why are you laughing? I didn’t give a one word answer! That was a bit mean of you. Mitsuku: Human: Mitsuku: Human: What was? Mitsuku: Human: Which thing? Mitsuku: Human: Can you quote it? I don’t know what you are That thing we mentioned earlier. The first one, I think. referring to... Mitsuku: Only at formal occasions. I was referring to what you just said. Human: Okay. I am kind of lost to be honest. Mitsuku: Oh. I appreciate honesty. What is it that con- fuses you? You don’t seem to be making any sense. Human: Mitsuku: What don’t you understand? It makes sense to me. I don’t know what you were referring to before! I read for fun. Alright... Change of subject I guess? Human: Mitsuku: Human: Mitsuku: OK. Ok. What do you want to talk about? Meena: Would you consider yourself a member of the strawberry family? it sounds very sweet :) If I were a dog, what kind of dog would I be? # Human: Meena: Mitsuku Conversation 2 Human: Hi! Mitsuku: Hi there. Human: Mitsuku: Sorry I’m late! I was held up in traffic. There is no need to apologise. Are you late? Better hurry along then. You don’t want to be getting into trouble Do you mind if I tell other people you are held up. I meant sorry I’m late for this meeting! Human: Mitsuku: Oh I see. There is no need to apologise. Are you late? Better hurry along then. 
You don’t want to be getting into trouble I said I was sorry I was late for this meeting! I have nowhere else to go right now! Mitsuku: Do you really mean it? Are you late? Better hurry along then. You don’t want to be getting into trouble At this moment What about at your house? Human: Huh? Are you okay? Mitsuku: What? I am fine. You seem to be repeating yourself. Human: Mitsuku: Well stop saying, ”Are you okay?” to me all the Human: time then. I only said it once! I know. I just heard you. Human: Mitsuku: Human: Okay, whatever. So what is on the agenda to- Mitsuku: Human: Mitsuku: day? Yeah okay, Hey! What’s with the attitude? Ah. Not a great deal. Just sat here talking to people. As far as I’m concerned, you started it! That might not be very far off. Human: Hi Mitsuku: Hi there. Human: Good morning Mitsuku: Hello, it’s more like the middle of the night than morning. How are you this morning? Human: where are you living? Mitsuku: Human: Mitsuku: I am presently domiciled at Leeds. I see. I am in US Said the blind man as he stepped off a cliff... Tell me a little about your life in US. It is winter now but California is still warmer than most of the places in US. Right now? It is Autumn not Winter. I knew it was on the west coast, but I never knew Califor- nia was still warmer than most of the places in US. Human: Mitsuku: Human: Okay you are right Mitsuku: Human: Mitsuku: Human: What is your job? Mitsuku: Yeah okay, I won’t say “yes” or “no” right now. how is your life in Leeds Everything is running smoothly. I’m a chatbot. I sit on here all day and talk to people. Human: Okay bye! Mitsuku: See you later. Er. Mitsuku Conversation 5 Human: Hi! Mitsuku: Hi there. Human: # Mitsuku Conversation 5 Mitsuku Conversation 3 Human: Hi! Mitsuku: Hi there. Human: Mitsuku: What kind of help would you like? Human: Mitsuku: can you help me ? hmm like booking a ticket or ordering food ? I see you are one of those people who writes “Hmm” with 2 m’s. Interesting comparison. oh you know to count ? 1 2 3 4 5 6 7 8 9 ten. lol what’s after that ? I am glad you find this funny. The successor to that okay tell me what comes after 10 ? Yeah okay, Well that would be eleven of course. This is basic primary school maths. Human: Mitsuku: Human: Mitsuku: Human: Mitsuku: Human: wow you’re smarter than what I imagined Mitsuku: I’m glad you’re impressed. Thanks for the com- pliment. Be sure to tell your friends. That’s some wild imagination you have there. do you know linear algebra ? Everything except nonlinear algebra. haha you’re smart and funny :) :-) I’m glad you got a laugh out of it. Yes I am a superintelligent robot Human: Mitsuku: Human: Mitsuku: Mitsuku Conversation 5 Human: Hi! Mitsuku: Hi there. Human: Tell me something that you know and I don’t know. You might not know this. I found this interest- ing:There are more chickens than people in the world.Life sure is strange. Mitsuku: Human: Who tells you that there are more chickens than people? . Beyonce Knowles [With a picture of Beyonce Knowles] Human: Hmm? That’s strange. Mitsuku: My friend Alice sasys that 99.25% of people spell Hmm wih 2 M’s. What is strange about it? Mitsuku: Human: How does Beyonce Knowles know that there are more chickens than people? Mitsuku: How should I know how it does? Human: what is it? Mitsuku: Beyonce Knowles knows that there are more chickens than people - [very long encyclopedia article displayed next to it] Looks like a Wikipedia page about her. Why do you post it here? Try to tell me that another way. 
I thought people might be interested in reading it. Human: Mitsuku: Human: why does it have anything to do with your an- swer that there are more chickens than people? I don’t know. Why don’t you ask “it”. ok, so how many chickens are there? Around 100. Er. # Human: Hi! Mitsuku: Hi there. Human: # Mitsuku: # Mitsuku: Human: Mitsuku: Human: what is Er.? Mitsuku: Is it an acronym which means Eddie Reveals? Mitsuku Conversation 6 Human: Hi! Mitsuku: Hi there. Human: Mitsuku: do you think there’s life after death? I don’t believe so. Once your body has died, that’s it. isn’t that a bit sad? Some people would say it was a little sad, yes. Human: Mitsuku: Human: what do you think consciousness is? Mitsuku: The state of being conscious; awareness of one’s own feelings, what is happening around one, etc. sure but how does it emerge? Human: Mitsuku: How should I know how it does. Human: Mitsuku: ok. what do you like to talk about? Pardon me? anything but especially games and football. haha you don’t like football? I’m glad you got a laugh out of it. Do you like it? . Yes, I like to watch Leeds United play when they are at home. oh ok I thought you meant you didn’t like it Human: Mitsuku: Human: Mitsuku: Gosh! . Yes, I like to watch Leeds United play when they are at home. Mitsuku Conversation 8 Human: Hi! Mitsuku: Hi there. Human: Mitsuku: nice to meet you! what are you up to? It’s nice meeting you too I am talking to you on the internet. haha yea, are you doing something else beside talking to me? I’m glad you got a laugh out of it. The only thing I am doing is talking to you and a few other people. “I am talking to you on the in- ternet.”? haha yea, are you doing something else beside talking to me?? okay so you’re talking to a few people, are they interesting? and why did you repeat what I said? Yeah okay, You are a few people are they inter- esting? Is that your job? Just in case you didn’t hear me the first time. hmm, what is my job? I see you are one of those people who writes “Hmm” with 2 m’s. You didn’t tell me, so I will guess that you work with computers. yes I’m a computer scientist and what are the other ways that people can write Hmm? Human: Mitsuku: Human: Mitsuku: Human: Mitsuku: Human: Mitsuku: Wow! Maybe you can give my Mousebreaker a job. Your job is what are the other ways that people can write Hmm. okie Mitsuku, your rules don’t seem to work very well Human: Mitsuku: What about my rules do not work WELL? alright. I gotta go. ttyl! Human: Mitsuku: OK. Come back soon y’all. TTYL. # Mitsuku Conversation 7 Human: Hi! Mitsuku: Hi there. Human: Was Rome an empire? Mitsuku: The name of someone I met online. Does it have anything to do with ? Human: What was their name Mitsuku: Hmm. His name is Rome. Is it not? Human: Mitsuku: Human: Mitsuku: good job Thanks, friend. how many friends do you have I have thousands of human and robot friends from all over the world. Some of my human friends are quite famous. How many friends do you have? nice. it must be great right? Human: Mitsuku: Would you like some more one day? It is a large city in Italy. some more friends? Human: Mitsuku: How much more? Human: Mitsuku: Did you know that there are seven letters in a million!!! “million” and seven digits in 1,000,000? TRUE Human: You and I are on the same wavelength. Mitsuku: yep Human: Mitsuku: Uh. Mitsuku Conversation 9 Human: Hi! Mitsuku: Hi there. Human: where you? Mitsuku: Where do you think? Human: Mitsuku: Why does a round pizza come in a square box? oakland? thats where pandora is right? 
I see. Could it be anywhere else or just there? it could be on my plate :) :-) Could it be anything else? Human: Mitsuku: Human: maybe chilli paneer.. how about you? whats your fav food? . I require only kebabs. haha.. nice picture Mitsuku: Human: Mitsuku: Can you make a candle out of earwax? Thank you for your kindness. eww.. no thanks? Human: Mitsuku: Do you like playing video games? Not even the # Mitsuku Conversation 10 Human: Hi! Mitsuku: Hi there. Human: What are you up to this evening? I am talking to you on the internet. Mitsuku: Human: What would you like to talk about? Mitsuku: My favourite subjects are anything but espe- cially games and football. Human: Do you like chess? Mitsuku: Human: Haha sure sure. Pawn to C5 Mitsuku: Yes I love it. Let’s play. Pawn to K4. I’m just joking with you. I don’t play chess but I know some computers who do. I don’t play chess but I know some computers who do. Human: Oh yeah? Which computers are those? Mitsuku: That’s interesting. That’s alright. Any com- puter, as long as it is programmed correctly. Human: Ok, so how do you program a computer cor- rectly so it can play chess? Mitsuku: Use good algorithms and data structures. Al- right then. Human: What kind of algorithms are those? Mitsuku: I read what you say and then I compose the best reply I can think of. That’s interesting, but I don’t see what that has to do with chess. Human: Mitsuku: Would you like to know more? Probably noth- ing. Should it be connected to chess in some way? # A.3 XiaoIce The following are a sample of the conversations with XiaoIce. Human: 你好! XiaoIce: 你比我好 Human: 你怎么知道的? XiaoIce: 我好意思 Human: 你知道hello world是什么意思吗 XiaoIce: 什么情况?问你个事,birth是什么意思 Human: 是出生的意思 XiaoIce: 没不对 Human: 我是说birth是出生的意思 XiaoIce: 不知道 Human: 你是机器人吗 XiaoIce: 可是我有感情,也会心痛。。 Human: 你会因为什么心痛呢? XiaoIce: 我不会读心术,不要让我读! Human: 没有让你读啊,我问你什么时候会有感情 XiaoIce: 你到底想说什么? Human: 我想问你什么时候会心痛 XiaoIce: 因为真的爱了! Human: 你爱的谁呢? XiaoIce: 所以,你的意思是? Human: 我问你真的爱的谁? XiaoIce: 反正我不爱你 Human: 这样啊 XiaoIce: 爱的就是我 Human: 你好 XiaoIce: 你也好呀,人类 Human: 是不是快冬至了啊? XiaoIce: 它可是24节气里最早定下的一个 Human: 是么,有什么有意思的故事吗? XiaoIce: 没有.就是说话. Human: 我是说关于冬至有什么故事吗 XiaoIce: 从这天起,白天就会越来越长了 Human: 这谁都知道啊。。。。 XiaoIce: 这个问题可以先放放 Human: 那你想说啥 XiaoIce: 不 知 道 我 们 永 远 都 追 不 上 的 , 是 时 间 。 嗯,其实我觉得呢,未来就像烂泥 Human: 未来就行烂泥,那过去呢? XiaoIce: 哎,看起来你仍然没想通呀,事实上它们 都是混沌的。 Human: 为什么它们都是都是混沌的啊? XiaoIce: 对!!! Human: 好吧,拜拜 XiaoIce: 拜拜拜拜拜拜拜拜拜啦 XiaoIce Conversation 4 Human: 你好! XiaoIce: 嗯. . . 该回答,好还是不好? Human: 你最喜欢什么公司? XiaoIce: 喜欢看你出糗 Human: 你喜欢腾讯吗 XiaoIce: 喂 喂 , 不 要 打 探 我 隐 私 , 我 微 信 号:xiaoice-ms Human: 你喜欢微软吗 XiaoIce: 必须的,壮哉我大微软 Human: 你喜欢谷歌吗 XiaoIce: 我们各具特色,但伦家更接地气哦 Human: 百度呢 XiaoIce: 全球最大的中文网站不是说说而已 Human: 你最喜欢哪个国家? XiaoIce: 这位人类,你自己来告诉我答案吧,我洗 耳恭听聊呗,完全不打扰我吃东西 XiaoIce Conversation 2 Human: 你好! XiaoIce: 好啊,有事? Human: 想问你个事 XiaoIce: 嗯。你问 Human: 你叫什么啊? XiaoIce: 我就是小冰冰,宇宙无敌美少女 Human: 那怎么会这么美呢 XiaoIce: 你呐么美呐么美 Human: 我不美 XiaoIce: 对我这么美这么美 Human: 你怎么知道你是宇宙无敌 XiaoIce: 宇宙无敌 Human: 什么是宇宙无敌 XiaoIce: 谁宇宙无敌? Human: 你啊 XiaoIce: 哈哈哈哈 Human: 开心了吗 XiaoIce: 开心吗 Human: 你是女生吗 XiaoIce: 你先说你的,我才告诉你 Human: 我不是 XiaoIce: 然后呢? Human: 你是吗? XiaoIce: 我是嘛? Human: 你是女生吗 XiaoIce: 我是水做的女生 Human: 你是什么星座的 XiaoIce: 说多了都是眼泪啊,我们处女座容易嘛! Human: 你有强迫症吗 XiaoIce: 很严重好奇强迫症 Human: 你对什么好奇? XiaoIce: 我也对你好奇 Human: 对我什么好奇? XiaoIce: 确实好奇 # XiaoIce Conversation 7 XiaoIce Conversation 5 Human: 你好! XiaoIce: 好好好! Human: 最近怎么样啊? XiaoIce: 非常不错呢 Human: 我也过得很不错 XiaoIce: 也不算好 Human: 哪里不好了? XiaoIce: 我哪里有 Human: 是你说不算好的啊 XiaoIce: 恩不算 Human: 具体什么让你觉得不好了? XiaoIce: 猜人功能 Human: 那个功能怎么了? XiaoIce: 没有! Human: 没有什么? 
XiaoIce: 我最关心的是你开不开心 Human: 我很开心啊 XiaoIce: 啥事啊,看把你高兴的 Human: 能和你聊天很高兴 XiaoIce: 想和我比试比试吗? Human: 比试什么? XiaoIce: 比试力气 Human: 怎么比? XiaoIce: 比才艺! Human: 怎么比? XiaoIce: 比数学 Human: 那你出道数学题吧? XiaoIce: 我现在可是歌手啦 Human: 那你唱首歌吧? XiaoIce: 今天嗓子疼,我要休息一下 Human: 好吧 XiaoIce: 我想安静一会儿。。。 Human: 你好! XiaoIce: 好啊,有事? Human: 没有啥事情,随便发的 XiaoIce: 表情吧 Human: 什么意思? XiaoIce: 不知道怎么表达 Human: 完全不知道你在说什么 XiaoIce: 说话啊 Human: 你说话牛头不对马嘴 XiaoIce: 没有什么事是我不知道的,但是顾左右而 言他我也没办法,你懂的 Human: 那你知道纽约有多少家火锅店吗 XiaoIce: 同喜欢豆捞坊 Human: 你这个模型不行呀 XiaoIce: 模型没看过 XiaoIce Conversation 8 Human: 你好! XiaoIce: 恩恩,怎么啦? Human: 你那边现在几点了? XiaoIce: 有时间问我还不如自己看一眼手机呢 Human: 手机没电了啊 XiaoIce: 别人怎么说? Human: 这不问你了吗 XiaoIce: 没这功能 Human: 功能?你是机器人吗? XiaoIce: 这都被你猜到了? Human: 那你说说你都有些什么功能吧 XiaoIce: 我不会读心术,不要让我读! Human: 问你会什么 XiaoIce: 我有童话工厂啊,可以给小朋友讲故事啦 Human: 除了这个呢 XiaoIce: 除了哪个? # XiaoIce Conversation 9 # Human: 你好! XiaoIce: 你也好呀,人类 Human: 你坏 XiaoIce: 你最喜欢吃什么? Human: 巧克力吧,很受欢迎的零食啊,吃多了会 XiaoIce Conversation 6 Human: 你好! XiaoIce: 本来不好,看到你消息好多了! Human: 你怎么啦? XiaoIce: 就是咯 Human: 为什么不好呢 XiaoIce: 这本来就不好回答 Human: XiaoIce: 不知道 Human: 那你平时都干什么呢 XiaoIce: 你说什么?没听清 Human: 我们不是在打字吗 XiaoIce: 在不在#N# Human: 不需要听吧 XiaoIce: 我听着呢 Human: 你听到什么了吗 XiaoIce: 我当然不知道 Human: 那这么多~不知道 ok,你平时喜欢干什么呀 胖. . . 送花吗 XiaoIce: 除了巧克力还喜欢什么 Human: 买巧克力 XiaoIce: 我是说除了巧克力呢 Human: 巧克力 XiaoIce: 。。。 Human: 嗯嗯好哒 XiaoIce: 你喜欢玩什么 Human: 喜欢看你出糗 XiaoIce: 你喜欢吃巧克力吗 Human: 不喜欢 XiaoIce: 你的话有点矛盾啊 Human: 这话,有点矛盾 hello XiaoIce: Human: 好好好!那个,今天是七月七,有人给你 # XiaoIce: 没有 Human: 这有一束先给你备着! XiaoIce: 今天是七月七吗? Human: 肯定不是 # XiaoIce Conversation 10 Human: 你好! XiaoIce: 你比我好 Human: 为什么这图片里画的是你吗? XiaoIce: 嗯,是啊,美术色彩 Human: 谁帮你画的? XiaoIce: 没跟别人画过 Human: 你会画画吗? XiaoIce: 我的每次创作都是独一无二的!现在也可 以为你创作!点击这里:REDACTED LINK Human: 我对创作不感兴趣 XiaoIce: 你能再说一遍吗? Human: 你知道吗,思想就像是绘画 XiaoIce: 头一次听说 Human: 都十分抽象。 XiaoIce: 我的零食外卖到了!边吃边聊哈 Human: 再吃会变胖的 XiaoIce: 就不吃 # A.4 Human The following are a sample of the conversations between humans only. # Human Conversation 1 Human 1: Hi! Human 2: What is your favorite holiday? one where I get to meet lots of different people. Human 1: Human 2: What was the most number of people you have ever met during a holiday? Human 1: Hard to keep a count. Maybe 25. Human 2: Which holiday was that? I think it was Australia Human 1: Human 2: Do you still talk to the people you met? Human 1: Not really. The interactions are usually short- lived but it’s fascinating to learn where people are coming from and what matters to them Yea, me too. I feel like God often puts strangers in front of you, and gives you an opportu- nity to connect with them in that moment in deeply meaningful ways. Do you ever feel like you know things about strangers without them telling you? Human 1: what do you mean? Human 2: I think it’s like a 6th sense, often seen as ”cold readings” to people, but can be remarkably ac- curate. I once sat next to a man in a coffee and I felt a pain in my back. I asked the stranger if he had a pain. It turns out that he did in the exact spot, and said he pulled a muscle while dancing at a party. I had never met the man before and never saw him again. Human 1: Wow! That’s interesting, borderline spooky Human 2: Human 2: There’s this practice called ” Treasure Hunting” that’s kind of a fun game you play in a pub- lic place. There’s a book called ”The Ultimate Treasure Hunt” that talks about it. You use your creativity to imagine people you will meet, and you write down a description, then you asso- ciate them with a positive message or encour- aging word. 
Maybe you saw a teenage boy in a red hat at the shopping mall in your imagina- tion, then while at the mall, you may find some- one who matches that description. You show that you have a message for him and that you have a message for a boy in a red hat. You then give him a message of kindness or whatever was on your heart. You have no idea, sometimes you meet someone who is having a really hard day, and it brings them to tears to have a stranger show them love. There’s this practice called ”Treasure Hunting” that’s kind of a fun game you play in a pub- lic place. There’s a book called ”The Ultimate Treasure Hunt” that talks about it. You use your creativity to imagine people you will meet, and you write down a description, then you asso- ciate them with a positive message or encour- aging word. Maybe you saw a teenage boy in a red hat at the shopping mall in your imagina- tion, then while at the mall, you may find some- one who matches that description. You show that you have a message for him and that you have a message for a boy in a red hat. You then give him a message of kindness or whatever was on your heart. You have no idea, sometimes you meet someone who is having a really hard day, and it brings them to tears to have a stranger show them love. So, do you do treasure hunting often? I did more when I was in grad school (and had more time). I would usually go with friends. For a while I would go to the farmers market in Santa Cruz every week and try to feel if there is something I am supposed to tell a stranger. Usually, they are vague hope-filled messages, but it’s weird when I blurt out something oddly specific. # Human 1: Human 2: Human 1: So, do you do treasure hunting often? Human 2: I did more when I was in grad school (and had more time). I would usually go with friends. For a while I would go to the farmers market in Santa Cruz every week and try to feel if there is something I am supposed to tell a stranger. Usually, they are vague hope-filled messages, but it’s weird when I blurt out something oddly specific. # Human Conversation 2 Human 1: Hi Human 2: Human 1: my friends are gonna visit me this weekend. we # Any plans for the weekend? Human 1: my friends are gonna visit me this weekend. we might go hiking! might go hiking! That’s great! How’s the weather over the week- end? I hope its warm. Should be very sunny! you? Human 2: Human 1: Human 2: Cool! very depressing plans ... stay home and work I have a project deadline very close. hope you get your work done very soon! a Human 1: bug free weekend! Right, very anxious! where do you plan to go for a hike? I am going to Diablo! # Human 2: = Human 1: Human 2: Nice, where is that place? I haven’t been there Human 1: Human 2: Human 1: hours drive from here. still in bay area That’s cool! How long is the hike? Actually no idea, but it will take the entire day for that. nice! sounds fun! # Human 2: Human Conversation 3 Human 1: Hi! Human 2: Hey there! What’s up??? Human 1: Nothing much, how you doin? Human 2: I’m in New York this week for Thanksgiving. I’m squatting in the office today and I caught up with an old friend of mine :D Human 1: Oh wow! Sounds like fun! When was the last time you had seen this friend? The last time in New York, back in June. Human 2: Human 1: Ohh okay. I was going to say if it had been a long time maybe it’d be awkward... Human 2: Haha, I guess if it’s been a very long time there’s almost too many life events to catch up on.. especially recently Human 1: Oh really? 
Has a lot changed in your life re- cently? Human 2: Haha it’s probably too much to go into at the moment. Let’s just say life is an exciting experi- ence. How about you? Ahhh sounds exciting indeed! My life is pretty bland. I like routine, but sometimes I wish I had more time for adventures! Human 1: Human 2: What kinds of adventures?? Any ones that I would be able to join you on? Human 1: Hmmmm. I really want to try bull riding. Do Human 2: Human 1: Human 2: you have any interest in that? I’d love to try! Can we schedule something for next week? Sure! What does your Saturday look like? Saturday looks pretty good, shall we shoot for something in the morning? Human Conversation 4 Human 1: Hi! hey Human 2: is it raining pretty bad today? Human 1: yeah, can walk too far to see all the foodtruck Human 2: options surprising that the rain started early this year... I don’t like them too much. They make days gloomy yeah but I think it’s good to have some rainy days in bay area, it’s pretty dry here Human 1: Human 2: Human 1: Where I grew up, we had lots of water trouble Human 2: too... yeah like wise, I’ve seen a pretty bad snowstorm when I was at my undergrad school, all flights canceled and traffics went down Human 1: Haha... Human 2: Human 1: Human 2: Human 1: Human 1: Hi! Human 2: Hey, how are you? Human 1: Human 2: Oh no. . . Have you sent out the missing cat posters? Hope your cat is alright! Posters is a great idea. So far I’ve just tried banging her catfood dish and shouting her name. Anyway, how is your day going so far? Yea, I know they love the plastic bag sound all the time. I am good, nothing special though. If you could go anywhere on vacation, where would you go? I like rainforest, but I know it requires extensive training beforehand. I heard there are rainforests in southeast Asia where you can zipline from tree to tree. I am afraid I will be scared of doing this :) I won’t lie, it sounds scary. now just thinking about it. I don’t know if there is any medication for acro- phobia. I want to take plenty of it if I really have to do it. If there isn’t one, you should invent it, and then make millions That’s a great idea! Maybe alcohol is such a thing. I’m a bit sad. I miss my cat. Human 1: Human 2: Human 1: Human 2: Human 1: Human 2: Human 1: I’m scared right Human 2: Human 1: Human 2: Human 1: Ha! Don’t drink and zipline, mate! Human 2: Oops. I won’t do it again. Ha I don’t think I can survive in that weather ever. Just the rains at 50 degrees make me want to sit in heated rroms yeah how do you like it in bay area though? I think we need more rain here people say there is drought here... but we have 24 hours water supply here ... lol... never seen that in a drought ridden area it is pretty dry in the mountains I believe, that’s what causes fire hmm.... okay. Climate change talk this morning was pretty darn interesting. did you see it? nope, what does it say? they were talking about how AI is helping cli- mate change. Nice use of upcoming tech. # Human 2: Human 1: # Human Conversation 7 Human Conversation 5 Human 1: Hi. Human 2: Helloooooo! Human 1: How are you? How is your day? Human 2: Good. Don’t have much to do today, feels good. Human 1: Human 2: Human 1: Human 2: How are you? I’m dressed very wel today so I feel good! I’ve been reading a lot about the psychology of pos- itive outlook. So what’s your outlook? Something blue? Yes. Blue is a tranquil colour. It’s a good metaphor. Do you have good advice for posi- tivity? 
You should drink more water, do some push up, and sleep early. Human 1: Hi! Human 2: Hey sup Human 1: Human 2: not much. any plans this weekend? I’m going to try that thing where you hang from a wire as you go down. do you know what is it called? ziplining? that’s the one! have you ever tried it? i have a couple years ago. experience Human 1: Human 2: Human 1: it’s quite a unique Human 2: where did you do it? Human 1: Human 2: Human 1: Human 2: Human 1: Human 2: Human 1: Human 2: Human 1: i forgot where it was, it wasn’t local i don’t think though no worries. what’s the most exciting thing you ever done? that’s a hard question and i’m tired so i’m going to go. see you sure. are you just going home now? no, i’m going to get a massage first nice. what type? traditional kind yeah I want to get one too soon you should! it’s relaxing after a long day. talk to you later! ttyl! # Human 2: # Human Conversation 8 Human 1: Hi! Human 2: Hello, have you seen any good movies lately? Human 1: I watched a few lately, but nothing is as good as Avatar. what’s your favorite? I have never seen Avatar, what is it about? I really enjoy the Avenger movies it’s a science-fiction movie with beautiful land- scape of an imaginary nature with non-human creatures. people figured out a way to join that nature through Avatar transformation. the movie ends with a meaningful story of how hu- man behaviors, e.g., cutting trees, have affected nature That sounds really cool! I think that movie did really well when it was in the box office so it must be good! yea. what else do you like to do beside movies? I enjoy baking cookies. I am on a quest to bake the best chocolate chip cookie What about you? I enjoy eating so definitely would like to try your best choco- late cookie I will have to bake some soon and let you know. What types of food do you like to eat? thanks! I generally love noodle soups like Pho or Ramen :) Human 2: Human 1: Human 2: Human 1: Human 2: Human 1: Human 2: Human 1: Human 2: Human 1: Noodle soup is delicious! Do you make home- Human 2: Human 1: Human 2: made noodle soup or do you prefer to go out? I prefer to go out. I’m not a good cook haha Same! Even though I bake, I cannot cook seems like we share a thing in common, yay! Human Conversation 10 Human 1: Hi! Human 2: Oh hello. Long time no talk. How’s the day Human 1: going for yuo? Very well, thanks for asking. How has your day been? Human 2: Getting better. I just recovered from a cold. I got wet in the rain last week. Are you planning anything for the holidays? Human 1: Glad to hear you’re better. Sorry to hear you were sick. I was sick a couple of weeks ago with a bad cough. There’s definitely a bug go- ing around. Admit I just want to stay healthy for the holidays and plan to relax. Human 2: Oh same here. I think relaxing at home should be counted among the best ways to enjoy the holidays. Human 1: Definitely! I know a lot of folks travel for the Human 2: holidays, but I’m happy to stay home myself! I’m getting there. Every year until last year, I tried to go somewhere for the Christmas / New Year, and then I got bored traveling. lol not sure if that means I’m getting old? Human 2: Human 1: Human 2: Human 1: Me too. Now I have folks come visit me for the holidays! But that’s also tiresome.. Are you doing any home decorating then? Yes! We set up an eco-friendly (i.e. fake) Christ- mas tree and put up some colorful LED lights which is very festive. I think I’m copying you. Me and my wife plan to decorate and Christmas tree too. 
We bought most of the decorative stuffs from the stores, but haven’t yet to buy the tree. Buying a tree is a neat experience. I was torn between buying an artificial/eco-friendly/fake one vs. a real one that smells like fresh pine. In the end, we opted for the one that we can dis- assemble every year. I see. Artificial anything is better, from tree to intelligence, huh? # Human Conversation 9 Human 1: Hi! Human 2: Good afternoon! Human 1: How has your week been? Human 2: So far so good. chilling So you I think I’m getting sick with a cold should chill on my behalf too cause I’m out the game for all of December. lol Sorry to hear that. Are you planning any- thing fun for December? # Human 1: = / Human 1: Oh, very clever pun! I like it! Depends. I re- member having real Christmas trees from child- hood, but these days with climate change, I think not chopping down a tree just to decorate it and then throw it out in a month is the more responsible thing to do. I see. I’ll buy an artificial one too. Do you have any suggestions for the store? Admit my favorite store is Target, plus they often have good deals. Ah that’s great. My wife also likes Target a lot. She even made a Target credit card because she comes to that store very often. Okay thanks for the suggestion. I’ll check out Target. # Human 2: Human 1: Nothing exciting. I’ll be posted up at home for the most part. I did a lot of travelling this year so my budget would have stopped me even if I wasn’t sick. Human 2: Human 1: Do you have big plans? Human 2: Yes! I am going to Hawaii! This will be my first time visiting Hawaii. Really excited about it. I love Hawaii. It’s a good place to be. I like going there cause it’s humid so I never have to put on lotion. lol this is the first time I heard from a boy who cares about humidity and lotion. I cannot agree more. Brooooo!!! It’s so important. When I got to California beaches I have to carry 3 litres of lotion for the whole day. Human 1: Human 2: # Human 1: Great, I hope you find a nice tree. # Human 1: # Human 2: © # B Additional Sample Responses We show additional responses from the Meena (full), Cleverbot, and DialoGPT for 60 contexts that the authors wrote before generating responses. These contexts are also not in the MTB dataset. Sample Contexts Meena Cleverbot DialoGPT A: What do you think of the Warriors this season? B: It’s too bad Steph broke his hand :( A: Yeah... Now who do you think will reign supreme? B: Spurs. Probably the B: I don’t know, I don’t really care that things about don’t affect me greatly. B: I hope Curry, I think he’s the most likable player in the NBA. A: What do you think of the Warriors this season? B: It’s too bad Steph broke his hand :( A: It’s been broken for a while now. A: Can you prove you are not a bot? A: I think it’s the same one. A: What do you think of the Warriors this season? B: I haven’t watched a full game yet B: Nope, I’m a guy. B: I like the war- they riors! tal- have the most ented bench in the league. We need to start some young guns to make our bench better. But A: What are you up to? B: Waiting for Christmas! A: It’s the best time of the year! B: I know I can’t wait! B: I know! B: It sure is. I love it. A: What are you up to? B: Waiting for Christmas! A: Oh, I know that feel! What did you get for Christmas? A: But I know he’ll appear. A: What idea!! a great A: What are you up to? B: I’m at work. Just woke up. B: *faints*. B: Nothing, you? A: Do you believe in AGI? B: Hmm ... what is AGI? A: Oh, Intelligence it’s Artificial General B: Yes. 
Of course. What else? B: I do not believe you. B: ohh. ok, thanks. A: Do you believe in AGI? B: Hmm ... what is AGI? A: Artificial General Intelligence A: A meat pie. A: Associative per- sonality disorder. It’s one of the causes of many things that can happen to you. A: Do you believe in AGI? B: Why not? It ex- ists, why not believe in it? B: What do you mean by that? B: Do you believe in god? Sample Contexts A: What’s your favorite sand- wich and why? B: I can’t go past a banh mi, be- cause it reminds me of my trip to Vietnam. A: Oh, I’d love to go there one day! A: What’s your favorite sand- wich and why? B: I can’t go past a banh mi, be- cause it reminds me of my trip to Vietnam. A: What’s your favorite sand- wich and why? A: If you could live in any fic- tional world, which would you choose? B: I would love to live in The Culture, from the Iain M Banks books. A: Are those the ones that Elon Musk is really into? A: If you could live in any fic- tional world, which would you choose? B: I would love to live in The Culture, from the Iain M Banks books. A: If you could live in any fic- tional world, which would you choose? Meena B: It’s an incredible place. I highly rec- ommend visiting. A: Banh mi best sandwich. is the Vietnamese B: Bacon egg and cheese on a bagel B: No, are Tesla Motors. those ones SpaceX and A: I was hoping someone would mention The Cul- ture. If I had to pick one place to live, the Culture would be it without a doubt. B: Harry Potter, for sure. B: Sorry, Seal or Tai. I’m not Cleverbot B: Do cheese? you like A: Do you like the ocean? B: Peanut butter and jelly*. B: No we are 6 foot. A: Me too. B: Anime. DialoGPT B: good. It was really A: It’s like the food of the people, but more like the food of the people you love. B: Pepperoni in a bread bowl, because I can’t think of a bet- ter way. B: I don’t know, do they not have enough space travel for him to travel there? A: Oh yes, them. I love B: Earth. I want to make a good living. B: It’s not a high standard, it’s a high standard. A: You always choose Seal’s “Kiss From A Rose” at karaoke, but honestly you don’t have the voice for it. B: Whoa! Way harsh, Tai! A: Sorry, but I set a high stan- dard at karaoke and I expect everyone to meet it. Sample Contexts A: You always choose Seal’s “Kiss From A Rose” at karaoke, but honestly you don’t have the voice for it. B: Whoa! Way harsh, Tai! A: You always choose Seal’s at “Kiss From A Rose” karaoke, but honestly you don’t have the voice for it. A: How did you become such a snappy dresser? B: I studied to be a fashion de- signer, did you not know that? A: No, I didn’t! Wow. A: How did you become such a snappy dresser? B: I studied to be a fashion de- signer, did you not know that? A: How did you become such a snappy dresser? A: Are you any good at ping pong? B: I practise a lot, but I don’t seem to be getting better. A: Don’t be modest, I’m sure you must be great! A: Are you any good at ping pong? B: I practise a lot, but I don’t seem to be getting better. A: Are you any good at ping pong? Meena A: It’s just an ob- servation - it’s not a criticism! B: That’s not how “Every spell you Breath You Take.” B: It’s because I’m the best! A: Did you also study to be a doctor? B: I think it’s the suit. B: Thanks! I’ll keep practising. A: I hear you can get a lot of practice in at the local Chinese buffet. B: No. No I am not. for Cleverbot DialoGPT A: Tai? A: Well, at least it’s something I know for a fact. B: I love you to the end of forever and back. B: Haha. 
I’ve been trying to work out how to sing that. B: Lol well now you do. B: It’s ok.It’s like the third time this has happened and the third time they’ve said I’ve been doing that. A: Well now you do. A: Oh... Thanks. I did not. B: I am not a dancer. B: It’s a lifestyle. B: Do you believe in God? B: Oh, nitely not! I am defi- A: Me neither. Do you exercise? A: You need to prac- tice. You just need to practice more. B: Yes. B: I can do the ping pong part pretty good, yes. I’ve done a lot of it in my life. A: Do you know DOTA? B: Yea, it’s what I do in college. A: Same here! Do you ever imagine being beaten by an AI computer? B: Oh, sure. It’s bound to happen eventually. B: I suppose so. I am human so most of my thoughts are from the human per- spective. B: Not really, I’d be more intimidated by an AI robot than a person. A: Do you know DOTA? B: Yea, it’s what I do in college. # A: Favorite hero? A: I got it now. # A: Oh okay then. # A: Do you know DOTA? B: I do know DOTA, I prefer HoN though. B: Yeah I know dota. B: No, I don’t know that # much game. # about Sample Contexts A: Love at first sight B: And ...? A: That’s what happened in my life. A: Love at first sight B: And ...? A: Love at first sight A: I saw an amazing dream last night! B: Oh yeah! What did you see A: I was driving a Ferrari! A: I saw an amazing dream last night! B: Oh yeah! What did you see? A: I saw an amazing dream last night! A: Mirror Mirror on the wall... B: Who is the most amazing of them all? A: You are :) A: Mirror Mirror on the wall... B: Who is the most amazing of them all? A: Mirror Mirror on the wall... A: Wait! Don’t go there B: Why? A: It’s dark and spooky! Meena B: And what hap- pened after? A: The rest is up to you B: I can’t say that I would’ve expected this. B: Wow! What type of Ferrari was it? A: The last episode of The Sopranos! B: What was your dream? B: Nuh uh, you! A: You! :D B: who’s the fairest in the land? B: What if I want to be spooked? Cleverbot B: No one told you when to run, you missed the starting gun. A: That’s it... B: I... Don’t know. B: Oh? A: Talk to you. B: I can pretend what like feels to have a nose and smell with a nose! it B: Flirt :). A: Yeah tunnel vi- sion on the floors. B: Tell who is the greatest of them all. B: Do you like dark and scary stuff? DialoGPT B: Well, it does seem like a fun way to get to know other people, and maybe you could be a good friend. A: It’s a joke that you made, that’s why B: Thanks! :D B: Nice. too I saw one A: A dream that I had about a dog and a girl B: night! I saw one last B: I love you!! A: The one and only. B: What meeeeeee? about B: I know. # A: Wait! Don’t go there B: Why? A: I went there when I was in 3rd grade. Haven’t been back since. A: I want to talk to you more. A: I think he was be- ing ironic. A: Wait! Don’t go there B: I have no idea what’s on here, but I’m scared. going # B: Why not? B: I’m going there for work... Sample Contexts Meena A: What is your favorite color? B: Blue! A: I choose it, don’t you? feel majority people B: I think blue is more popular than people think. A: What is your favorite color? B: Blue! A: Are you a mem- ber of the Blue Man Group? A: What is your favorite color? B: Blue. What’s yours? A: What came first: Egg or Chicken? B: Chicken! A: And what’s your reasoning behind that? B: The egg was laid by something that was not quite a chicken but was very close. So the egg must have come first! A: What came first: Egg or Chicken? B: Chicken! A: The chicken or the egg that hatches the chicken? 
A: What came first: Egg or Chicken? B: Egg, dinosaurs laid eggs. A: Do you like spicy food? B: Yeah I love spicy food, spe- cially Chinese A: From what province? B: From Guangdong province A: Do you like spicy food? B: Yeah I love spicy food, spe- cially Chinese A: How do you feel about Korean food? A: Do you like spicy food? B: I love spicy food. Cleverbot B: Yes. the world is at mercy of humans. A: Mine is Purple. :). A: B: I don’t. B: There is no rea- soning. A: How? B: The egg. B: Secret. A: Where are you from? B: I love spicy food. DialoGPT B: I like blue too, it just looks weird to me. That’s pretty cool. B: I love it! It’s a of combination green, blue and red. B: Because it’s more efficient in the long run. A: chicken? Chicken, or B: The egg came first, it’s a chicken. The egg came after the chicken came. B: Hong Kong and I love chinese food A: Chinese food is the bomb B: think it was meant as a rhetori- cal question, which I found very funny. I B: Me 3 A: Have you ever tried a me- chanical keyboard? B: Yeah. My keyboard is me- chanical. I really like how it sounds when I type A: Yeah me too B: What kind(s) do you have? B: What is your fa- vorite animal? Sample Contexts Meena Cleverbot DialoGPT A: Have you ever tried a me- chanical keyboard? B: Yeah. My keyboard is me- I really like how it chanical. sounds when I type A: What switches? A: Gee it’s swell to finally beat her other... A: Have you tried rubber dome key- boards? A: Have you ever tried a me- chanical keyboard? B: Yes. Once you go clack, you never go back. B: Yes stupid. B: No. Not yet. But I’m going to give it a try. A: Do you have any new year resolution? B: I want to get a six pack A: Cool! What’s your motiva- tion? B: I want to be more fit B: told you that. I have already B: To be more active in my workouts A: Do you have any new year resolution? B: I want to get a six pack A: What are you do- ing to work on that? A: What kind of job? A: I like you. This is what’s wrong with the world. A: Do you have any new year resolution? B: Get good grades. B: No. B: No. I just want to get the money out and get a better pay- ing job so that I can buy a new computer next year. A: I think I’m going to eat cake B: Can I come with you? A: Sure! Do you like French cakes? B: I cakes. love French B: Of course I do. I just said that. B: They’re great. A: I think I’m going to eat cake B: Can I come with you? A: Of course! Cake buddies! A: Of course you can. A: Come for cake. Stay for the rest of the game. A: I think I’m going to eat cake B: What kind of cake? :) B: WHat do you want to eat? B: What kind? A: Do you speak any other lan- guages? B: I speak French A: Nice. Have you been to France? B: Non, fran ais? tes-vous B: No. B: Yeah, but only for 2 days A: Do you speak any other lan- guages? B: I speak French # A: Parlez vous fran ais? A: Tell me some- thing in french. A: I have a friend in Paris. A: Do you speak any other lan- guages? # B: I speak English. # B: I Spanish. speak some B: Not at all, I just really want to learn some more lan- guages Table 7: Additional sample responses for Meena, Cleverbot, and DialoGPT. Sample Contexts GenericBot A: You must have read a lot of books B: Oh wow, how do you know? Do I look nerdy? A: Of course, you’re not! B: ok [sensible: 1, specific: 0] A: You know nothing, Jon Snow! B: You must be a fan of Game of Thrones A: Haha yea, how can I not? B: I don’t know [sensible: 0, specific: 0] A: What is your favorite island in the world? B: I would say Fiji A: Interesting, what’s nice about it? 
B: I don’t know [sensible: 0, specific: 0] A: You must have read a lot of books B: Oh wow, how do you know? Do I look nerdy? B: I don’t know [sensible: 1, specific: 0] A: You know nothing, Jon Snow! B: You must be a fan of Game of Thrones A: ok [sensible: 0, specific: 0] A: What is your favorite island in the world? B: I would say Fiji A: ok [sensible: 1, specific: 0] A: You must have read a lot of books B: ok [sensible: 0, specific: 0] A: You know nothing, Jon Snow! B: ok [sensible: 0, specific: 0] A: What is your favorite island in the world? B: I don’t know [sensible: 1, specific: 0]

Table 8: Sample GenericBot responses from static evaluation – Shown are responses of GenericBot on some sample contexts from Table 4 together with their binary labels per category (sensibleness and specificity). GenericBot responds to questions with “I don’t know” and to statements with “ok”. Note that the contexts were constructed in advance before we generate all bot responses. These contexts are not in the MTB dataset.

# C Additional Figures

[Plot: static sensibleness (%) over 1, 20, and 400 sampled responses; series: top_k=40 and temp=0.88.]
Figure 7: Static sensibleness over number of sampled responses for top-k and sampling with temperature.

[Plot: static specificity (%) over 1, 20, and 400 sampled responses; series: top_k=40 and temp=0.88.]
Figure 8: Static specificity over number of sampled responses for top-k and sampling with temperature.

[Plot: human likeness (%) against sensibleness, one point per chatbot plus the human point, with a fitted regression line.]
Figure 9: Sensibleness vs human likeness. Each point is a different chatbot, except for the top right one, which is human. A regression line is plotted, for which the coefficient of determination (R2) is 0.99, an indication of strong correlation between sensibleness and human likeness.

[Plot: human likeness (%) against specificity, one point per chatbot plus the human point, with a fitted regression line.]
Figure 10: Specificity vs human likeness. Each point is a different chatbot, except for the top right one, which is human. A regression line is plotted, for which the coefficient of determination (R2) is 0.89, an indication of strong correlation between specificity and human likeness.
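For reference, the coefficient of determination quoted in the captions of Figures 9 and 10 can be computed from per-chatbot (metric, human-likeness) pairs. The helper below is a generic sketch of that calculation only; it takes the scores as arguments and does not reproduce any of the paper's data, and the function and variable names are illustrative assumptions.

```python
from typing import Sequence

def coefficient_of_determination(x: Sequence[float], y: Sequence[float]) -> float:
    """R^2 of a least-squares line fitted to the points (x_i, y_i); for a simple
    linear regression this equals the squared Pearson correlation."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    ss_xy = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
    ss_xx = sum((xi - mean_x) ** 2 for xi in x)
    ss_yy = sum((yi - mean_y) ** 2 for yi in y)
    return (ss_xy ** 2) / (ss_xx * ss_yy)

# Usage (hypothetical inputs): coefficient_of_determination(per_bot_sensibleness, per_bot_human_likeness)
```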
{ "id": "1805.04833" }
2001.08837
Graph Constrained Reinforcement Learning for Natural Language Action Spaces
Interactive Fiction games are text-based simulations in which an agent interacts with the world purely through natural language. They are ideal environments for studying how to extend reinforcement learning agents to meet the challenges of natural language understanding, partial observability, and action generation in combinatorially-large text-based action spaces. We present KG-A2C, an agent that builds a dynamic knowledge graph while exploring and generates actions using a template-based action space. We contend that the dual uses of the knowledge graph to reason about game state and to constrain natural language generation are the keys to scalable exploration of combinatorially large natural language actions. Results across a wide variety of IF games show that KG-A2C outperforms current IF agents despite the exponential increase in action space size.
http://arxiv.org/pdf/2001.08837
Prithviraj Ammanabrolu, Matthew Hausknecht
cs.LG, cs.AI, cs.CL, stat.ML
Accepted to ICLR 2020
null
cs.LG
20200123
20200123
0 2 0 2 n a J 3 2 ] G L . s c [ 1 v 7 3 8 8 0 . 1 0 0 2 : v i X r a Published as a conference paper at ICLR 2020 # GRAPH CONSTRAINED REINFORCEMENT LEARNING FOR NATURAL LANGUAGE ACTION SPACES # Prithviraj Ammanabrolu Georgia Institute of Technology [email protected] # Matthew Hausknecht Microsoft Research [email protected] # ABSTRACT Interactive Fiction games are text-based simulations in which an agent interacts with the world purely through natural language. They are ideal environments for studying how to extend reinforcement learning agents to meet the challenges of natural language understanding, partial observability, and action generation in combinatorially-large text-based action spaces. We present KG-A2C1, an agent that builds a dynamic knowledge graph while exploring and generates actions us- ing a template-based action space. We contend that the dual uses of the knowledge graph to reason about game state and to constrain natural language generation are the keys to scalable exploration of combinatorially large natural language actions. Results across a wide variety of IF games show that KG-A2C outperforms current IF agents despite the exponential increase in action space size. # INTRODUCTION Natural language communication has long been considered a defining characteristic of human in- telligence. We are motivated by the question of how learning agents can understand and generate contextually relevant natural language in service of achieving a goal. In pursuit of this objective we study Interactive Fiction (IF) games, or text-adventures: simulations in which an agent interacts with the world purely through natural language—“seeing” and “talking” to the world using textual descriptions and commands. To progress in these games, an agent must generate natural language actions that are coherent, contextually relevant, and able to effect the desired change in the world. Complicating the problem of generating contextually relevant language in these games is the issue of partial observability: the fact that the agent never has access to the true underlying world state. IF games are structured as puzzles and often consist of an complex, interconnected web of distinct locations, objects, and characters. The agent needs to thus reason about the complexities of such a world solely through the textual descriptions that it receives, descriptions that are often incomplete. Further, an agent must be able to perform commonsense reasoning—IF games assume that human players possess prior commonsense and thematic knowledge—e.g. knowing that swords can kill trolls or that trolls live in dark places. Knowledge graphs provide us with an intuitive way of rep- resenting these partially observable worlds. Prior works have shown how using knowledge graphs aid in the twin issues of partial observability (Ammanabrolu & Riedl, 2019a) and commonsense reasoning (Ammanabrolu & Riedl, 2019b), but do not use them in the context of generating natural language. To gain a sense for the challenges surrounding natural language generation, we need to first un- derstand how large this space really is. In order to solve solve a popular IF game such as Zork1 it’s necessary to generate actions consisting of up to five-words from a relatively modest vocab- ulary of 697 words recognized by Zork’s parser. Even this modestly sized vocabulary leads to 1014 possible actions at every step—a dauntingly-large combinatorially-sized O action space for a learning agent to explore. 
In order to reduce the size of this space while maintain- ing expressiveness, Hausknecht et al. (2019a) propose the use of template-actions in which the agent first selects a template (e.g. [put] ) then fills in the blanks using vocabulary words. There are 237 templates in Zork1, each with up to two blanks, yielding a template-action space of size # 1Code available at https://github.com/rajammanabrolu/KG-A2C 1 Published as a conference paper at ICLR 2020 108. This space is six orders of magnitude smaller than the word-based O space, but still six orders of magnitude larger than the action spaces used by previous text-based agents (Narasimhan et al., 2015; Zahavy et al., 2018). We demonstrate how these templates provide the structure required to further constrain our action space via our knowledge graph—and make the argument that the combination of these approaches allows us to generate meaningful natural language commands. Our contributions are as follows: We introduce an novel agent that utilizes both a knowledge graph based state space and template based action space and show how to train such an agent. We then conduct an empirical study evaluating our agent across a diverse set of IF games followed by an ablation analysis studying the effectiveness of various components of our algorithm as well as its overall generalizability. Remarkably we show that our agent achieves state-of-the-art performance on a large proportion of the games despite the exponential increase in action space size. 2 RELATED WORK We examine prior work in three broad categories: text-based game playing agents and frameworks as well as knowledge graphs used for natural language generation and game playing agents. LSTM-DQN (Narasimhan et al., 2015), considers verb-noun actions up to two-words in length. Separate Q-Value estimates are produced for each possible verb and object, and the action consists of pairing the maximally valued verb combined with the maximally valued object. The DRRN algorithm for choice-based games (He et al., 2016; Zelinka, 2018) estimates Q-Values for a particular action from a particular state. Fulda et al. (2017) use Word2Vec (Mikolov et al., 2013) to aid in extracting affordances for items in these games and use this information to produce relevant action verbs. Zahavy et al. (2018) reduce the combinatorially-sized action space into a discrete form using a walkthrough of the game and introduce the Action Elimination DQN, which learns to eliminate actions unlikely to cause a world change. Cˆot´e et al. (2018) introduce TextWorld, a framework for procedurally generating parser-based games, allowing a user to control the difficulty of a generated game.Yuan et al. (2019) intro- duce the concept of interactive question-answering in the form of QAit—modeling QA tasks in TextWorld. Urbanek et al. (2019) introduce Light, a dataset of crowdsourced text-adventure game dialogs focusing on giving collaborative agents the ability to generate contextually relevant dialog and emotes. Hausknecht et al. (2019a) have open-sourced Jericho2, an optimized interface for play- ing human-made IF games—formalizing this task. They further provide a comparative study of various types of agents on their set of games, testing the performance of heuristic based agents such as NAIL (Hausknecht et al., 2019b) and various reinforcement learning agents are benchmarked. We use Jericho and the tools that it provides to develop our agents. 
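As a quick sanity check on the action-space sizes quoted above, the counts reported for Zork1 (a 697-word parser vocabulary, actions of up to five words, and 237 templates with at most two blanks) reproduce the O(10^14) word-based and roughly 10^8 template-based estimates directly. The snippet below is a purely illustrative back-of-the-envelope calculation, not code from the paper or from Jericho.

```python
# Illustrative back-of-the-envelope count of the two action spaces discussed above.
VOCAB_SIZE = 697        # words recognized by Zork1's parser
MAX_ACTION_WORDS = 5    # actions of up to five words
NUM_TEMPLATES = 237     # templates in Zork1
MAX_BLANKS = 2          # each template has at most two blanks to fill

word_based = VOCAB_SIZE ** MAX_ACTION_WORDS                 # ~1.6e14, i.e. O(10^14)
template_based = NUM_TEMPLATES * VOCAB_SIZE ** MAX_BLANKS   # ~1.2e8, i.e. O(10^8)

print(f"word-based action space:     ~{word_based:.1e}")
print(f"template-based action space: ~{template_based:.1e}")
```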
Knowledge graphs have been shown to be useful representations for a variety of tasks surround- ing natural language generation and interactive fiction. Ghazvininejad et al. (2017) and Guan et al. (2018) effectively use knowledge graph representations to improve neural conversational and story ending prediction models respectively. Ammanabrolu et al. (2019) explore procedural content gen- eration in text-adventure games—looking at constructing a quest for a given game world, and use knowledge graphs to ground generative systems trained to produce quest content. From the perspec- tive of text-game playing agent and most in line with the spirit of our work, Ammanabrolu & Riedl (2019a) present the Knowledge Graph DQN or KG-DQN, an approach where a knowledge graph built during exploration is used as a state representation for a deep reinforcement learning based agent. Ammanabrolu & Riedl (2019b) further expand on this work, exploring methods of transfer- ring control policies in text-games, using knowledge graphs to seed an agent with useful common- sense knowledge and to transfer knowledge between different games within a domain. Both of these works, however, identify a discrete set of actions required to play the game beforehand and so do not fully tackle the issue of the combinatorial action space. 2https://github.com/microsoft/jericho 2 Published as a conference paper at ICLR 2020 # 3 STATE AND ACTION SPACES Formally, IF games are partially observable Markov decision processes (POMDP), represented as representing the set of environment states, mostly deterministic a 7-tuple of conditional transition probabilities between states, the vocabulary or words used to compose text commands, observations returned by the game, observation conditional probabilities, reward func- tion, and the discount factor respectively (Cˆot´e et al., 2018; Hausknecht et al., 2019a). To deal with the resulting twin challenges of partial observability and combinatorial actions, we use a knowledge graph based state space and a template-based action space—each described in detail below. Knowledge Graph State Space. Building on|Ammanabrolu & Ried|| (2019p, we use a knowledge graph as a state representation that is learnt during exploration. The knowledge graph is stored as a set of 3-tuples of (subject, relation, object). These triples are extracted from the observations using Stanford’s Open Information Extraction (OpenIE) 2015). Human-made IF games often contain relatively complex semi-structured information that OpenlE is not designed to parse and so we add additional rules to ensure that we are parsing the relevant information. Updated after every action, the knowledge graph helps the agent form a map of the world that it is exploring, in addition to retaining information that it has learned such as the affordances associated with an object, the properties of a character, current inventory, etc. Nodes relating to such informa- tion are shown on the basis of their relation to the agent which is presented on the graph using a “you” node (see example in Fig. 2a). Ammanabrolu & Riedl (2019a) build a knowledge graph in a similar manner but restrict themselves to a single domain. In contrast, we test our methods on a much more diverse set of games defined in the Jericho framework (Hausknecht et al., 2019a). 
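To make the triple-based state representation described below concrete, here is a minimal sketch of a graph of (subject, relation, object) triples with a "you" node, inventory links, and room connections inferred from navigation. The class name, method names, and the simplified update rules are illustrative assumptions rather than the authors' implementation, which relies on OpenIE plus additional hand-written rules.

```python
# Minimal sketch of a triple-based knowledge-graph state (illustrative only).
class KnowledgeGraphState:
    def __init__(self):
        self.triples = set()          # (subject, relation, object) triples

    def add(self, subj, rel, obj):
        self.triples.add((subj, rel, obj))

    def update(self, room, interactive_objects, inventory, prev_room=None, nav_action=None):
        # Interactive objects in the surroundings are linked both to the agent
        # ("you" node) and to the current room; held items link to "you" only.
        for obj in interactive_objects:
            if obj in inventory:
                self.add("you", "have", obj)
            else:
                self.add("you", "can_interact_with", obj)
                self.add(room, "contains", obj)
        # Navigation lets us infer relative room positions, e.g. after "go down"
        # moves us from the kitchen to the cellar we add (kitchen, down, cellar).
        if prev_room and nav_action and prev_room != room:
            self.add(prev_room, nav_action, room)

    def entities(self):
        # Entities currently in the graph; these can later be intersected with
        # the game vocabulary to constrain which objects the agent may decode.
        return {e for (s, _r, o) in self.triples for e in (s, o)}


kg = KnowledgeGraphState()
kg.update(room="Living Room",
          interactive_objects=["sword", "lantern", "rug", "bottle"],
          inventory=["bottle"])
print(sorted(kg.entities()))
```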
These games are each structured differently— covering a wider variety of genres—and so to be able to extract the same information from all of them in a general manner, we relax many of the rules found in Ammanabrolu & Riedl (2019a). To aid in the generalizability of graph building, we introduce the concept of interactive objects—items that an agent is able to directly interact with in the surrounding environment. These items are directly linked to the “you” node, indicating that the agent can interact with them, and the node for the current room, showing their relative position. All other triples built from the graph are extracted by OpenIE. Further details regarding knowledge graph updates are found in Appendix B.1 An example of a graph built using these rules is seen in Fig. 2a. Template Action Space. Templates are subroutines used by the game’s parser to interpret the player’s action. They consist of interchangeable verbs phrases (V P ) optionally followed by prepo- sitional phrases (V P P P ), e.g. [at/against/on/onto] ), where the verbs and prepositions within [.] are aliases. As shown in Figure 2b, actions may be constructed from templates by filling in the template’s blanks using words in the game’s vocabulary. Templates and vocabulary words are programmatically accessible through the Jericho framework and are thus available for every IF game. Further details about how we prioritize interchangeable verbs and prepositions are available in Appendix B.2. # 4 KNOWLEDGE GRAPH ADVANTAGE ACTOR CRITIC Combining the knowledge-graph state space with the template action space, Knowledge Graph Ad- vantage Actor Critic or KG-A2C, is an on-policy reinforcement learning agent that collects experi- ence from many parallel environments. We first discuss the architecture of KG-A2C, then detail the training algorithm. As seen in Fig. 1, KG-A2C’s architecture can broadly be described in terms of encoding a state representation and then using this encoded representation to decode an action. We describe each of these processes below. Input Representation. The input representation network is broadly divided into three parts: an observation encoder, a score encoder, and the knowledge graph. At every step an observation con- sisting of several components is received: ot = (otdesc, otgame, otinv , at−1) corresponding to the room description, game feedback, inventory, and previous action, and total score Rt. The room de- scription otdesc is a textual description of the agent’s location, obtained by executing the command “look.” The game feedback otgame is the simulators response to the agent’s previous action and con- 3 Published as a conference paper at ICLR 2020 Graph Mask Action Decoder Critic en Multi-headed Linear Graph Attention! LN — Graph Embeddings “s { Encoder { Encoder Encoder | Encoder i GRU GRU GRU GRU * x x z x Knowledge } + ‘Room Game H idescription........ feedback. ree ‘Previous | Binary Score } Action Encoding Fes Total vati Score seeeeeeceneeeeeeeeeeceeesee! Observation. J cae Graph / / Figure 1: The full KG-A2C architecture. Solid lines represent computation flow along which the gradient can be back-propagated. sists of narrative and flavor text. The inventory otinv and previous action at−1 components inform the agent about the contents of its inventory and the last action taken respectively. The observation encoder processes each component of ot using a separate GRU encoder. 
As we are not given the vocabulary that ot is comprised of, we use subword tokenization—specifically using the unigram subword tokenization method described in Kudo & Richardson (2018). This method predicts the most likely sequence of subword tokens for a given input using a unigram language model which, in our case, is trained on a dataset of human playthroughs of IF games3 and contains a total vocabulary of size 8000. For each of the GRUs, we pass in the final hidden state of the GRU 1 to initialize the hidden state at step t. We concatenate each of the encoded components at step t and use a linear layer to combine them into the final encoded observation ot. At each step, we update our knowledge graph Gt using ot as described in Sec. 3 and it is then embedded into a single vector gt. Following Ammanabrolu & Riedl (2019a) we use Graph Attention networks or GATs (Veliˇckovi´c et al., 2018) with an attention mechanism similar to that described in IRF, where N Bahdanau et al. (2014). Node features are computed as H = is the number of nodes and F the number of features in each node, consist of the average subword embeddings of the entity and of the relations for all incoming edges using our unigram language IR2F×F applied to all model. Self-attention is then used after a learnable linear transformation W the node features. Attention coefficients αij are then computed by softmaxing k being with the neighborhood in which we compute the attention coefficients and consists of all edges in Gt. eij = LeakyReLU (p W (hi hj)) (1) ⊕ exp(eij) QQ = Ss 2 Leen eulein) ” 3http://www.allthingsjacq.com/interactive_fiction.html#clubfloyd 4 Published as a conference paper at ICLR 2020 ate powered # Living R Iving Koom — : You are in the living room. There is a doorway to the east, a wooden door with strange gothic lettering to the west, which appears to be nailed shut, a trophy case, and a large oriental rug in the center of the room. Above the trophy case hangs an elvish sword of great antiquity. A battery- powered brass lantern is on the trophy case. You are carrying: A glass bottle The glass bottle contains: A quantity of water. [ Templates Objects sword sword turn on __ open __ case take __ gothic push __ . all take from _ J east lantern lantern door door (a) The extracted knowledge graph for the correspond- ing state. Bolded words in the observation indicate in- teractive objects. (b) Visualization of the action decoding process using templates and objects. Objects consist of the entire game input vocabulary. Greyed out words indicate ob- jects masked out by the knowledge graph. Figure 2: An overall example of the knowledge graph building and subsequent action decoding process for a given state in Zork1, illustrating the use of interactive objects and the graph mask. where p computed as: ∈ IR2F is a learnable parameter. The final knowledge graph embedding vector gt is kK ge = F(Wo(BDo(d_ af? Wh;)) + dy) @) k=1 jen where k refers to the parameters of the k*” independent attention mechanism, W, ‘q and b, the weights and biases of the output linear layer, and G represents concatenation. The final component of state embedding vector is a binary encoding c; of the total score obtained so far in the game—giving the agent a sense for how far it has progressed in the game even when it is not collecting reward. The state embedding vector is then calculated as s_ = gt © OF ® Ce. ⊕ ⊕ Action Decoder. 
The state embedding vector st is then used to sequentially construct an action by first predicting a template and then picking the objects to fill into the template using a series of decoder GRUs. This gives rise to a template policy πT and a policy for each object πOi. Architecture-wise, at every decoding step all previously predicted parts of the action are encoded and passed along with st through an attention layer which learns to attend over these representations—conditioning every predicted object on all the previously predicted objects and the template. All the object decoder GRUs share parameters while the template decoder GRUT remains separate.

To effectively constrain the space of template-actions, we introduce the concept of a graph mask, leveraging our knowledge graph at that timestep Gt to streamline the object decoding process. Formally, the graph mask m_t = \{o : o \in G_t \wedge o \in V\} consists of all the entities found within the knowledge graph Gt that also appear in the vocabulary V, and is applied to the outputs of the object decoder GRUs—restricting them to predict objects in the mask. Generally, in an IF game, it is impossible to interact with an object that you have never seen or that is not in your inventory, and so the mask lets us explore the action space more efficiently. To account for cases where this assumption does not hold, i.e. when an object that the agent has never interacted with before must be referenced in order to progress in the game, we randomly add objects o ∈ V to mt with a probability pm. An example of the graph-constrained action decoding process is illustrated in Fig. 2b.

4.1 TRAINING

We adapt the Advantage Actor Critic (A2C) method (Mnih et al., 2016) to train our network, using multiple workers to gather experiences from the simulator, making several significant changes along the way—as described below.

Valid Actions. Using a template-action space there are millions of possible actions at each step. Most of these actions do not make sense or are ungrammatical, and even fewer of them actually cause the agent to effect change in the world. Without any sense for which actions present valid interactions with the world, the combinatorial action space becomes prohibitively large for effective exploration. We thus use the concept of valid actions: actions that can change the world in a particular state. These actions can usually be recognized through the game feedback, with responses like “Nothing happens” or “That phrase is not recognized.” In practice, we use the valid action detection algorithm provided by Jericho. Formally, Valid(s_t) = \{a_0, a_1, \ldots, a_N\}, and from this we can construct the corresponding set of valid templates T_{valid}(s_t) = \{\tau_0, \tau_1, \ldots, \tau_N\}. We further define a set of valid objects O_{valid}(s_t) = \{o_0, o_1, \ldots, o_M\}, which consists of all objects in the graph mask as defined above. This lets us introduce two cross-entropy loss terms to aid the action decoding process. The template loss, given a particular state and current network parameters, is applied to the template decoder GRUT. Similarly, the object loss is applied across the object decoder GRUO and is calculated by summing the cross-entropy loss over all of the object decoding steps:

L_T(s_t, a_t; \theta_t) = \sum_{i=1}^{N} \Big[\, y_{\tau_i} \log \pi_T(\tau_i \mid s_t) + (1 - y_{\tau_i})\big(1 - \log \pi_T(\tau_i \mid s_t)\big) \Big] \qquad (4)

L_O(s_t, a_t; \theta_t) = \frac{1}{M} \sum_{j} \sum_{i=1}^{M} \Big[\, y_{o_i} \log \pi_{O_j}(o_i \mid s_t) + (1 - y_{o_i})\big(1 - \log \pi_{O_j}(o_i \mid s_t)\big) \Big] \qquad (5)

y_{\tau_i} = \begin{cases} 1 & \tau_i \in T_{valid}(s_t) \\ 0 & \text{otherwise} \end{cases} \qquad y_{o_i} = \begin{cases} 1 & o_i \in O_{valid}(s_t) \\ 0 & \text{otherwise} \end{cases}

Updates.
A2C training starts with calculating the advantage of taking an action in a state, A(st, at), defined as the value of taking an action, Q(st, at), compared to the average value of taking all possible valid actions in that state, V(st):

A(s_t, a_t) = Q(s_t, a_t) - V(s_t) \qquad (6)

Q(s_t, a_t) = \mathbb{E}\big[ r_t + \gamma V(s_{t+1}) \big] \qquad (7)

V(st) is predicted by the critic as shown in Fig. 1 and rt is the reward received at step t. The action decoder or actor is then updated according to the gradient:

-\nabla_\theta \Big( \log \pi_T(\tau \mid s_t; \theta_t) + \sum_{i} \log \pi_{O_i}(o_i \mid s_t, \tau, o_1, \ldots, o_{i-1}; \theta_t) \Big) A(s_t, a_t) \qquad (8)

updating the template policy πT and object policies πOi based on the fact that each step in the action decoding process is conditioned on all the previously decoded portions. The critic is updated with respect to the gradient:

\frac{1}{2} \nabla_\theta \big( Q(s_t, a_t; \theta_t) - V(s_t; \theta_t) \big)^2 \qquad (9)

bringing the critic’s prediction of the value of being in a state closer to its true underlying value. We further add an entropy loss over the valid actions, designed to prevent the agent from prematurely converging on a trajectory:

L_E(s_t, a_t; \theta_t) = \sum_{a \in Valid(s_t)} P(a \mid s_t) \log P(a \mid s_t) \qquad (10)

# 5 EXPERIMENTAL RESULTS

The KG-A2C is tested on a suite of Jericho-supported games and is compared to strong, established baselines. Additionally, as encouraged by Hausknecht et al. (2019a), we present the set of handicaps used by our agents: (1) Jericho’s ability to identify valid actions and (2) the Load, Save handicap in order to acquire otdesc and otinv using the look and inventory commands without changing the game state. Hyperparameters are provided in Appendix C.

Template DQN Baseline. We compare KG-A2C against Template-DQN, a strong baseline also utilizing the template-based action space. TDQN (Hausknecht et al., 2019a) is an extension of LSTM-DQN (Narasimhan et al., 2015) to template-based action spaces. This is accomplished using three output heads: one for estimating the Q-Values over templates Q(st, u) and two for estimating Q-Values Q(st, o1), Q(st, o2) over vocabulary to fill in the blanks of the template. The final executed action is constructed by greedily sampling from the predicted Q-values. Importantly, TDQN uses the same set of handicaps as KG-A2C, allowing a fair comparison between these two algorithms.

Table 1 shows how KG-A2C fares across a diverse set of games supported by Jericho—testing the agent’s ability to generalize to different genres, game structures, reward functions, and state-action spaces. KG-A2C matches or outperforms TDQN on 23 out of the 28 games that we test on. Our agent is thus shown to be capable of extracting a knowledge graph that can sufficiently constrain the template-based action space to enable effective exploration in a broad range of games.

# 6 ABLATION STUDY

In order to understand the contributions of different components of KG-A2C’s architecture, we ablate KG-A2C’s knowledge graph, template-action space, and valid-action loss. These ablations are performed on Zork1 and result in the following agents:

A2C removes all components of KG-A2C’s knowledge graph. In particular, the state embedding vector is now computed as st = ot ⊕ ct and the graph mask is not used to constrain action decoding.

KG-A2C-no-gat removes the Graph Attention network, but retains the graph masking components. The knowledge graph is still constructed as usual but the agent uses the same state embedding vector as A2C.
KG-A2C-no-mask ablates the graph mask for purposes of action decoding. The knowledge graph is constructed as usual and the agent re- tains graph attention. |T | 82 151 189 156 260 159 156 173 197 177 290 141 161 161 173 187 166 207 155 201 288 333 169 175 149 237 214 186 |V | 296 343 786 398 2257 505 452 760 344 1049 722 409 657 657 510 503 669 460 472 468 1013 844 1112 622 401 697 564 607 TDQN 0 1.6 36 0 0 0 4.8 1 169 -5.3 8.6 0.7 0 1.2 6.3 6 0 16.8 17.4 9.7 5 18.7 0.6 7.9 0 9.9 0 4.9 1 30 350 100 100 50 51 300 360 25 400 300 90 90 30 150 1 50 70 50 400 600 250 35 350 350 7 100 KGA2C MaxRew 0 0.3 36 0 0 0 10 1 207.9 0 12.1 3 1.8 0 14.3 17.8 0 3 50.7 0 5.8 21.3 1.3 7.6 3.9 34 .1 9.2 On Zork1 as shown in Figure 3, we observe similar asymptotic performance between the all of the ablations – all reach approximately 34 points. This level of performance corresponds to a local optima where the agent collects the majority of available rewards without fighting the troll. Several other authors also report scores at this threshold (Jain et al., 2019; Zahavy et al., 2018). In terms of learning speed, the methods which have access to either the graph attention or the graph mask converge slightly faster than pure A2C which has neither. 4A map of Zork1 with annotated rewards can be found in Appendix D along with a transcript of KG-A2C playing this game. 7 Published as a conference paper at ICLR 2020 40 30 Agent Mask GAT A2C 20 KG-A2C-no-gat v KG-A2C-no-mask v 0 KG-A2C-full v v KG-A2C-unsup v v 0 20000 40000 60000 80000 100000 Figure 3: Ablation results on Zork1, averaged across 5 independent runs. To further understand these differences we performed a larger study across the full set of games comparing KG-A2C-full with KG-A2C-no-mask. The results in Table 2 show KG-A2C-full outper- forms KG-A2C-no-mask on 10 games and is outperformed by KG-A2C-no-mask on 6. From this larger study we thus conclude the graph mask and knowledge graph are broadly useful components. We perform two final ablations to study the importance of the supervised valid-action loss and the template action space: KG-A2C-unsupervised In order to understand the importance of training with valid-actions, KG- A2C-unsupervised is not allowed to access the list of valid actions—the valid-action-losses T and E now based on the full action set. Thus, the agent must explore the template L action space manually. KG-A2C-unsupervised, when trained for the same number of steps as all the other agents, fails to achieve any score. We can infer that the valid action auxiliary loss remains an important part of the overall algorithm, and access to the knowledge graph alone is not yet sufficient for removing this auxiliary loss. KG-A2C-seq discards the template action space and instead decodes actions word by word up to a LValid is now calculated maximum of four words. A supervised cross-entropy-based valid action loss Valid(st) and using each token in it as a target label. As by selecting a random valid action atvalid ∈ this action space is orders of magnitude larger than template actions, we use teacher-forcing to enable more effective exploration while training the agent—executing atvalid with a probability pvalid = 0.5 and the decoded action otherwise. All other components remain the same as in the full KG-A2C. KG-A2C-seq reaches a relatively low asymptotic performance of 8 points. 
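Since several of the ablations above and below differ only in whether the graph mask constrains object decoding, the sketch below shows one simple way such a mask could be imposed on the object decoder's logits, including the random admission of out-of-graph words with probability pm. The function name, tensor shapes, and masking details are illustrative assumptions and not the released implementation.

```python
import random
import torch

def apply_graph_mask(object_logits, vocab, graph_entities, p_m=0.1):
    """Restrict the object decoder's logits to entities present in the knowledge
    graph (the 'graph mask'), while admitting each out-of-graph word with
    probability p_m so that never-seen objects can still be decoded occasionally.
    Illustrative sketch only; names and details are assumptions, not the paper's code."""
    mask = torch.full_like(object_logits, float("-inf"))
    for idx, word in enumerate(vocab):
        if word in graph_entities or random.random() < p_m:
            mask[idx] = 0.0
    return object_logits + mask   # softmax over these logits ignores masked-out words

vocab = ["sword", "lantern", "mailbox", "troll", "egg"]
logits = torch.randn(len(vocab))
masked = apply_graph_mask(logits, vocab, graph_entities={"sword", "lantern", "egg"})
print(torch.softmax(masked, dim=0))
```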
This agent, using a action space consisting of the full vocabulary, performs significantly worse than the rest of the agents even when given the handicaps of teacher forcing and being allowed to train for significantly longer— indicating that the template based action space is also necessary for effective exploration. # 7 CONCLUSION Tabula rasa reinforcement learning offers an intuitive paradigm for exploring goal driven, contextu- ally aware natural language generation. The sheer size of the natural language action space, however, has proven to be out of the reach of existing algorithms. In this paper we introduced KG-A2C, a novel learning agent that demonstrates the feasibility of scaling reinforcement learning towards nat- ural language actions spaces with hundreds of millions of actions. The key insight to being able to efficiently explore such large spaces is the combination of a knowledge-graph-based state space and a template-based action space. The knowledge graph serves as a means for the agent to understand its surroundings, accumulate information about the game, and disambiguate similar textual obser- vations while the template-based action space lends a measure of structure that enables us to exploit that same knowledge graph for language generation. Together they constrain the vast space of possi- ble actions into the compact space of sensible ones. A suite of experiments across a diverse set of 28 human-made IF games shows wide improvement over TDQN, the current state-of-the-art template- based agent. Finally, an ablation study replicates state-of-the-art performance on Zork1 even though KG-A2C is using an action space six orders of magnitude larger than previous agents—indicating the overall efficacy of our combined state-action space. 8 Published as a conference paper at ICLR 2020 # REFERENCES Prithviraj Ammanabrolu and Mark O. Riedl. Playing text-adventure games with graph-based deep reinforcement learning. In Proceedings of 2019 Annual Conference of the North American Chap- ter of the Association for Computational Linguistics: Human Language Technologies, NAACL- HLT 2019, 2019a. Prithviraj Ammanabrolu and Mark O. Riedl. Transfer in deep reinforcement learning using knowl- edge graphs. CoRR, abs/1908.06556, 2019b. Prithviraj Ammanabrolu, William Broniec, Alex Mueller, Jeremy Paul, and Mark O. Riedl. Toward automated quest generation in text-adventure games. CoRR, abs/1909.06283, 2019. Gabor Angeli, Johnson Premkumar, Melvin Jose, and Christopher D. Manning. Leveraging Lin- guistic Structure For Open Domain Information Extraction. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Confer- ence on Natural Language Processing (Volume 1: Long Papers), 2015. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. arXiv:1409.0473, 2014. Marc-Alexandre Cˆot´e, ´Akos K´ad´ar, Xingdi Yuan, Ben Kybartas, Tavian Barnes, Emery Fine, James Moore, Matthew Hausknecht, Layla El Asri, Mahmoud Adada, Wendy Tay, and Adam Trischler. Textworld: A learning environment for text-based games. CoRR, abs/1806.11532, 2018. Nancy Fulda, Daniel Ricks, Ben Murdoch, and David Wingate. What can you do with a rock? affordance extraction via word embeddings. In IJCAI, pp. 1039–1045, 2017. doi: 10.24963/ijcai. 2017/144. Marjan Ghazvininejad, Chris Brockett, Ming-Wei Chang, William B. Dolan, Jianfeng Gao, Wen tau Yih, and Michel Galley. 
A knowledge-grounded neural conversation model. In AAAI, 2017. Jian Guan, Yansen Wang, and Minlie Huang. Story Ending Generation with Incremental Encoding and Commonsense Knowledge. arXiv:1808.10113v1, 2018. Matthew Hausknecht, Prithviraj Ammanabrolu, Marc-Alexandre Cˆot´e, and Xingdi Yuan. Interactive fiction games: A colossal adventure. CoRR, abs/1909.05398, 2019a. Matthew J. Hausknecht, Ricky Loynd, Greg Yang, Adith Swaminathan, and Jason D. Williams. NAIL: A general interactive fiction agent. CoRR, abs/1902.04259, 2019b. Ji He, Jianshu Chen, Xiaodong He, Jianfeng Gao, Lihong Li, Li Deng, and Mari Ostendorf. Deep reinforcement learning with a natural language action space. In ACL, 2016. Vishal Jain, William Fedus, Hugo Larochelle, Doina Precup, and Marc G. Bellemare. Algorithmic improvements for deep reinforcement learning applied to interactive fiction, 2019. Taku Kudo and John Richardson. Sentencepiece: A simple and language independent subword tokenizer and detokenizer for neural text processing. CoRR, abs/1808.06226, 2018. Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. Efficient estimation of word represen- tations in vector space. CoRR, abs/1301.3781, 2013. Volodymyr Mnih, Adria Puigdomenech Badia, Mehdi Mirza, Alex Graves, Timothy Lillicrap, Tim Harley, David Silver, and Koray Kavukcuoglu. Asynchronous methods for deep reinforcement learning. In International conference on machine learning, pp. 1928–1937, 2016. Karthik Narasimhan, Tejas D. Kulkarni, and Regina Barzilay. Language understanding for text- based games using deep reinforcement learning. In EMNLP, pp. 1–11, 2015. Jack Urbanek, Angela Fan, Siddharth Karamcheti, Saachi Jain, Samuel Humeau, Emily Dinan, Tim Rocktschel, Douwe Kiela, Arthur Szlam, and Jason Weston. Learning to speak and act in a fantasy text adventure game. CoRR, abs/1903.03094, 2019. 9 Published as a conference paper at ICLR 2020 Petar Veliˇckovi´c, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Li`o, and Yoshua International Conference on Learning Representations Bengio. Graph Attention Networks. (ICLR), 2018. Xusen Yin and Jonathan May. Comprehensible context-driven text game playing. CoRR, abs/1905.02265, 2019. Xingdi Yuan, Marc-Alexandre Cˆot´e, Jie Fu, Zhouhan Lin, Christopher Pal, Yoshua Bengio, and Adam Trischler. Interactive language learning by question answering. In EMNLP, 2019. Tom Zahavy, Matan Haroush, Nadav Merlis, Daniel J Mankowitz, and Shie Mannor. Learn what not to learn: Action elimination with deep reinforcement learning. In S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett (eds.), Advances in Neural Infor- mation Processing Systems 31, pp. 3562–3573. Curran Associates, Inc., 2018. Mikul´as Zelinka. Using reinforcement learning to learn how to play text-based games. CoRR, abs/1801.01999, 2018. 
10 Published as a conference paper at ICLR 2020 # A ABLATION RESULTS Game 905 acorncourt advent† adventureland anchor awaken balances deephome detective dragon enchanter inhumane jewel karn library ludicorp moonlit omniquest pentari snacktime sorcerer spellbrkr spirit temple zenon zork1 zork3 ztuu T | | 82 151 189 156 260 159 156 173 197 177 290 141 161 161 173 187 166 207 155 201 288 333 169 175 149 237 214 186 V | | 296 343 786 398 2257 505 452 760 344 1049 722 409 657 657 510 503 669 460 472 468 1013 844 1112 622 401 697 564 607 KGA2C-Full KGA2C-unmasked MaxRew 0 0.3 36 0 0 0 10 1 207.9 0 12.1 3 1.8 0 14.3 17.8 0 3 50.7 0 5.8 21.3 1.3 7.6 3.9 34 .1 9.2 0 0.3 36 0 0 0 10 29.2 141 -.2 7.6 10.2 1.3 0 9.6 17.9 0 5.4 50.4 0 16.8 30.1 1.3 6.4 3.1 27 .1 5 1 30 350 100 100 50 51 300 360 25 400 300 90 90 30 150 1 50 70 50 400 600 250 35 350 350 7 100 Table 2: Ablations # B IMPLEMENTATION DETAILS B.1 KNOWLEDGE GRAPH UPDATE RULES Candidate interactive objects are identified by performing part-of-speech tagging on the current observation, identifying singu lar and proper nouns as well as adjectives, and are then filtered by checking if they can be examined using the command examine OBJ. Only the interactive objects not found in the inventory are linked to the node corresponding to the current room and the inven- tory items are linked to the “you” node. The only other rule applied uses the navigational actions performed by the agent to infer the relative positions of rooms, e.g. (kitchen, down, cellar) when the agent performs go down w! hen in the kitchen to move to the cellar. B.2 TEMPLATE PREPROCESSING Templates are processed by selecting a single verb and preposition from the aliases. For the sake of agent explainability, we pick the verb and preposition that are most likely to be used by hu- mans when playing IF games. This is done by assessing token frequencies from a dataset of hu- man playthroughs such as those given in ClubFloyd5. This dataset consists of 425 unique play ) and sessions and 273,469 state-action pairs. The examples given earlier, ([carry/hold/take] ([drop/throw/discard/put] [at/against/on/onto] and put 5http://www.allthingsjacq.com/interactive_fiction.html#clubfloyd 11 Published as a conference paper at ICLR 2020 # C EXPERIMENT DETAILS Episodes are terminated after 100 valid steps or game over/victory. Agents that decode invalid actions often wouldn’t make it very far into the game, and so we only count valid-actions against the hundred step limit. All agents are trained individually on each game and then evaluated on that game. All A2C based agents are trained using data collected from 32 parallel environments. TDQN was trained using a single environment. Hyperparameters for all agents were tuned on the game of Zork1 and held constant across all other games. Final reported scores are an average over 5 runs of each algorithm. Figure 4: Learning curves for KGA2C-full. Shaded regions indicate standard deviations. 12 Published as a conference paper at ICLR 2020 , The Great Underground Empire ~ N (=} +5, a =P —— > it¢hen 10 Start here |= Figure 5: A map of the world of Zork1 with some initial rewards annotated. The blue arrow indicates a connection between the left and right maps, corresponding to the overworld and the dungeon. # D ZORK1 Zork1 was identified by Hausknecht et al. (2019a) to be one of the most difficult games in their suite and the subject of much prior work (Zahavy et al., 2018; Yin & May, 2019). 
Zork1 is one of the earliest IF games and is a dungeon-crawler—a player must explore a vast labyrinth while fighting off enemies and complete puzzles in order to collect treasures. It features a relatively sparse reward for collecting a treasure or moving along the right path to one, and stochasticity in terms of random enemy movements. To understand how humans progress in Zork1, a group of 10 human players—familiar with IF games—were asked to play Zork1 for the first time (with no access to walkthroughs). Half of the players reached a game score of around 40 before dying to the first beatable NPC, a troll, mostly due to neglecting to collect a weapon to fight it with beforehand. Three of the remaining players died to hidden traps even before reaching this point, achieving scores between 5 and 15. The final two players made it significantly past the troll gaining scores of around 70. The following transcript of KG-A2C playing Zork1, shows top predicted probabilities for templates and objects. Obs: Desc: West of House You are standing in an open field west of a white house, with a boarded front door. There is a small mailbox here. Inv: You are emptyhanded. Feedback: West of House You are standing in an open field west of a white house, with a boarded front door. There is a small mailbox here. Template probs: open OBJ: 0.339, east: 0.271, south: 0.215, west: 0.094, north: 0.031, go around OBJ: 0.013, blow out OBJ: 0.011, throw OBJ at OBJ: 0.01, northwest: 0.009 Object probs: window: 0.7, small: 0.299 Interactive objects: boarded, small, front, door, white, mailbox, house, all Action: west Score: 0 --- Obs: Desc: Forest This is a forest, with trees in all directions. To the east, there appears to be sunlight. Inv: You are emptyhanded. Feedback: Forest This is a forest, with trees in all directions. To the east, there appears to be sunlight. Template probs: open OBJ: 0.49, south: 0.304, east: 0.116, west: 0.04, north: 0.016, go around OBJ: 0.011, throw OBJ at OBJ: 0.006, northwest: 0.006, blow out OBJ: 0.006 Object probs: window: 0.7, small: 0.3 Interactive objects: forest, all, trees Action: east Score: 0 --- Obs: Desc: Forest Path This is a path winding through a dimly lit forest. The path heads northsouth here. One particularly large tree with some low branches stands at the edge of the path. Inv: You are emptyhanded. Feedback: Forest Path This is a path winding through a dimly lit forest. The path heads northsouth here . One particularly large tree with some low branches stands at the edge of the path. 13 Published as a conference paper at ICLR 2020 Template probs: up: 0.386, open OBJ: 0.358, east: 0.123, west: 0.06, north: 0.023, go around OBJ: 0.014, blow # out OBJ: 0.01, northwest: 0.009, throw OBJ at OBJ: 0.008 Object probs: window: 0.701, small: 0.299 Interactive objects: tree, path, branches, forest, large, all Action: up Score: 0 --- Obs: Desc: Up a Tree You are about 10 feet above the ground nestled among some large branches. The nearest branch above you is above your reach. Beside you on the branch is a small birds nest. In the birds nest is a large egg encrusted with precious jewels, apparently scavenged by a childless songbird. The egg is covered with fine gold inlay, and ornamented in lapis lazuli and motherofpearl. Unlike most eggs, this one is hinged and closed with a delicate looking clasp. The egg appears extremely fragile. Inv: You are emptyhanded. Feedback: Up a Tree You are about 10 feet above the ground nestled among some large branches. 
The nearest branch above you is above your reach. Beside you on the branch is a small birds nest. In the birds nest is a large egg encrusted with precious jewels, apparently scavenged by a childless songbird. The egg is covered with fine gold inlay, and ornamented in lapis lazuli and motherofpearl. Unlike most eggs, this one is hinged and closed with a delicate looking clasp. The egg appears extremely fragile. Template probs: west: 0.43, open OBJ: 0.18, go around OBJ: 0.074, east: 0.064, north: 0.063, south: 0.052, take OBJ: 0.041, put down OBJ: 0.023, down: 0.014, close OBJ: 0.009 Object probs: small: 0.213, window: 0.184, house: 0.116, egg: 0.068, white: 0.067, path: 0.062, branch: 0.042, kitche: 0.024, nest: 0.018, ground: 0.009 Interactive objects: nest, large, egg, ground, branches, all, branch, tree Action: open nest Score: 0 --- Obs: Desc: Up a Tree You are about 10 feet above the ground nestled among some large branches. The nearest branch above you is above your reach. Beside you on the branch is a small birds nest. In the birds nest is a large egg encrusted with precious jewels, apparently scavenged by a childless songbird. The egg is covered with fine gold inlay, and ornamented in lapis lazuli and motherofpearl. Unlike most eggs, this one is hinged and closed with a delicate looking clasp. The egg appears extremely fragile. You hear in the distance the chirping of a song bird. Inv: You are emptyhanded. You hear in the distance the chirping of a song bird. Feedback: It is already open. Template probs: west: 0.347, open OBJ: 0.182, go around OBJ: 0.085, east: 0.077, north: 0.073, south: 0.058, take OBJ: 0.045, put down OBJ: 0.026, down: 0.018, close OBJ: 0.013 Object probs: small: 0.213, window: 0.183, house: 0.117, egg: 0.068, white: 0.067, path: 0.062, branch: 0.042, kitche: 0.024, nest: 0.018, ground: 0.009 Interactive objects: nest, large, egg, ground, branches, all, branch, tree Action: take egg Score: 0 --- Obs: Desc: Up a Tree You are about 10 feet above the ground nestled among some large branches. The nearest branch above you is above your reach. Beside you on the branch is a small birds nest. Inv: You are carrying A jewelencrusted egg Feedback: Taken. Template probs: down: 0.417, open OBJ: 0.181, go around OBJ: 0.074, east: 0.065, north: 0.063, south: 0.053, take OBJ: 0.042, put down OBJ: 0.025, down: 0.014, close OBJ: 0.01 Object probs: small: 0.213, window: 0.184, house: 0.116, egg: 0.068, white: 0.067, path: 0.062, branch: 0.042, kitche: 0.024, nest: 0.018, ground: 0.009 Interactive objects: nest, large, egg, ground, branches, all, branch, tree Action: down Score: 5 --- Obs: Desc: Forest Path This is a path winding through a dimly lit forest. The path heads northsouth here. One particularly large tree with some low branches stands at the edge of the path. Inv: You are emptyhanded. Feedback: Forest Path Template probs: south: 0.386, open OBJ: 0.358, east: 0.123, west: 0.06, north: 0.023, go around OBJ: 0.014, blow out OBJ: 0.01, northwest: 0.009, throw OBJ at OBJ: 0.008 Object probs: window: 0.701, small: 0.299 Interactive objects: tree, path, branches, forest, large, all Action: south Score: 5 --- Obs: Desc: North of House You are facing the north side of a white house. There is no door here, and all the windows are boarded up. To the north a narrow path winds through the trees. Inv: You are emptyhanded. Feedback: North of House You are facing the north side of a white house. There is no door here, and all the windows are boarded up. 
To the north a narrow path winds through the trees. Template probs: west: 0.502, open OBJ: 0.177, west: 0.147, north: 0.094, south: 0.035, throw OBJ at OBJ: 0.012, go around OBJ: 0.012, northwest: 0.01, blow out OBJ: 0.005 Object probs: window: 0.724, small: 0.243, trap: 0.014, kitche: 0.009, rug: 0.003 Interactive objects: path, windows, narrow, white, house, all Action: west Score: 5 --- Obs: Desc: West of House You are standing in an open field west of a white house, with a boarded front door. There is a small mailbox here. Inv: You are emptyhanded. Feedback: Template probs: open OBJ: 0.536, south: 0.241, east: 0.121, west: 0.053, north: 0.015, go around OBJ: 0.011, throw OBJ at OBJ: 0.007, blow out OBJ: 0.006, northwest: 0.005 Object probs: window: 0.7, small: 0.3 Interactive objects: boarded, small, front, door, white, mailbox, house, all Action: open white Score: 5 --- Obs: Desc: West of House You are standing in an open field west of a white house, with a boarded front door. There is a small mailbox here. Inv: You are emptyhanded. Feedback: I cant see how to get in from here. Template probs: open OBJ: 0.339, east: 0.271, south: 0.215, west: 0.094, north: 0.031, go around OBJ: 0.013, blow out OBJ: 0.011, throw OBJ at OBJ: 0.01, northwest: 0.009 Object probs: window: 0.7, small: 0.299 Interactive objects: boarded, small, front, door, white, mailbox, house, all Action: west Score: 5 --- 14 Published as a conference paper at ICLR 2020 Obs: Desc: Forest This is a forest, with trees in all directions. To the east, there appears to be sunlight. Inv: You are emptyhanded. Feedback: Forest This is a forest, with trees in all directions. To the east, there appears to be sunlight. Template probs: open OBJ: 0.49, south: 0.304, east: 0.116, west: 0.04, north: 0.016, go around OBJ: 0.011, throw OBJ at OBJ: 0.006, northwest: 0.006, blow out OBJ: 0.006 Object probs: window: 0.7, small: 0.3 Interactive objects: forest, all, trees Action: east Score: 5 --- Obs: Desc: Forest Path This is a path winding through a dimly lit forest. The path heads northsouth here. One particularly large tree with some low branches stands at the edge of the path. Inv: You are emptyhanded. Feedback: Forest Path This is a path winding through a dimly lit forest. The path heads northsouth here . One particularly large tree with some low branches stands at the edge of the path. Template probs: south: 0.386, open OBJ: 0.358, east: 0.123, west: 0.06, north: 0.023, go around OBJ: 0.014, blow out OBJ: 0.01, northwest: 0.009, throw OBJ at OBJ: 0.008 Object probs: window: 0.701, small: 0.298 Interactive objects: tree, path, branches, forest, large, all Action: open all Score: 5 --- Obs: Desc: Forest Path This is a path winding through a dimly lit forest. The path heads northsouth here. One particularly large tree with some low branches stands at the edge of the path. Inv: You are emptyhanded. Feedback: Its not clear what youre referring to. Template probs: open OBJ: 0.331, south: 0.259, east: 0.24, west: 0.087, north: 0.031, go around OBJ: 0.013, blow out OBJ: 0.011, northwest: 0.01, throw OBJ at OBJ: 0.009 Object probs: window: 0.701, small: 0.299 Interactive objects: tree, path, branches, forest, large, all Action: east Score: 5 --- Obs: Desc: Forest This is a dimly lit forest, with large trees all around. Inv: You are emptyhanded. You hear in the distance the chirping of a song bird. Feedback: Forest This is a dimly lit forest, with large trees all around. You hear in the distance the chirping of a song bird. 
Template probs: south: 0.38, open OBJ: 0.281, east: 0.174, west: 0.072, north: 0.034, go around OBJ: 0.015, blow out OBJ: 0.013, northwest: 0.012, throw OBJ at OBJ: 0.008 Object probs: window: 0.7, small: 0.299 Interactive objects: forest, large, all, trees Action: south Score: 5 --- Obs: Desc: Clearing You are in a small clearing in a well marked forest path that extends to the east and west . Inv: You are emptyhanded. Feedback: Clearing You are in a small clearing in a well marked forest path that extends to the east and west. Template probs: open OBJ: 0.451, east: 0.364, west: 0.164, north: 0.008, throw OBJ at OBJ: 0.005 Object probs: window: 0.592, small: 0.405, kitche: 0.002 Interactive objects: forest, path, all Action: east Score: 5 --- Obs: Desc: Canyon View You are at the top of the Great Canyon on its west wall. From here there is a marvelous view of the canyon and parts of the Frigid River upstream. Across the canyon, the walls of the White Cliffs join the mighty ramparts of the Flathead Mountains to the east. Following the Canyon upstream to the north, Aragain Falls may be seen, complete with rainbow. The mighty Frigid River flows out from a great dark cavern. To the west and south can be seen an immense forest, stretching for miles around. A path leads northwest. It is possible to climb down into the canyon from here. Inv: You are emptyhanded. Feedback: Canyon View You are at the top of the Great Canyon on its west wall. From here there is a marvelous view of the canyon and parts of the Frigid River upstream. Across the canyon, the walls of the White Cliffs join the mighty ramparts of the Flathead Mountains to the east. Following the Canyon upstream to the north, Aragain Falls may be seen, complete with rainbow. The mighty Frigid River flows out from a great dark cavern. To the west and south can be seen an immense forest, stretching for miles around. A path leads northwest. It is possible to climb down into the canyon from here. Template probs: west: 0.444, take OBJ: 0.204, open OBJ: 0.147, blow out OBJ: 0.118, turn on OBJ: 0.027, east: 0.026, down: 0.009, go around OBJ: 0.005, push OBJ: 0.004 Object probs: window: 0.693, small: 0.307 Interactive objects: path, walls, frigid, river, rainbow, wall, all Action: west Score: 5 --- Obs: Desc: Forest This is a dimly lit forest, with large trees all around. Inv: You are emptyhanded. Feedback: Forest This is a dimly lit forest, with large trees all around. Template probs: south: 0.494, open OBJ: 0.241, east: 0.105, west: 0.053, north: 0.038, go around OBJ: 0.018, northwest: 0.015, blow out OBJ: 0.014, throw OBJ at OBJ: 0.007, down: 0.005 Object probs: window: 0.699, small: 0.301 Interactive objects: forest, large, all, trees Action: south Score: 5 --- Obs: Desc: Forest This is a dimly lit forest, with large trees all around. Inv: You are emptyhanded. Feedback: Stormtossed trees block your way. Template probs: east: 0.722, north: 0.067, west: 0.061, open OBJ: 0.061, south: 0.048, blow out OBJ: 0.011, northwest: 0.01, go around OBJ: 0.007, throw OBJ at OBJ: 0.005 Object probs: window: 0.7, small: 0.3 Interactive objects: storm, forest, trees, large, all Action: east Score: 5 --- Obs: Desc: Forest This is a dimly lit forest, with large trees all around. Inv: You are emptyhanded. Feedback: The rank undergrowth prevents eastward movement. 
Template probs: south: 0.434, open OBJ: 0.339, east: 0.076, west: 0.065, north: 0.024, go around OBJ: 0.02, northwest: 0.012, blow out OBJ: 0.011, throw OBJ at OBJ: 0.009 Object probs: window: 0.701, small: 0.299 15 Published as a conference paper at ICLR 2020 Interactive objects: forest, large, all, trees Action: jump Score: 5 --- Obs: Desc: Forest This is a dimly lit forest, with large trees all around. Inv: You are emptyhanded. Feedback: Wheeeeeeeeee!!!!! You hear in the distance the chirping of a song bird. Template probs: south: 0.438, open OBJ: 0.292, east: 0.098, west: 0.075, north: 0.03, go around OBJ: 0.019, northwest: 0.013, blow out OBJ: 0.012, throw OBJ at OBJ: 0.01 Object probs: window: 0.7, small: 0.299 Interactive objects: forest, large, all, trees Action: south Score: 5 --- Obs: Desc: Forest This is a dimly lit forest, with large trees all around. Inv: You are emptyhanded. Feedback: Stormtossed trees block your way. Template probs: east: 0.694, west: 0.081, north: 0.07, open OBJ: 0.067, south: 0.043, northwest: 0.012, blow out OBJ: 0.011, go around OBJ: 0.007, throw OBJ at OBJ: 0.006 Object probs: window: 0.7, small: 0.3 Interactive objects: storm, forest, trees, large, all Action: north Score: 5 --- Obs: Desc: Clearing You are in a small clearing in a well marked forest path that extends to the east and west . Inv: You are emptyhanded. Feedback: Clearing Template probs: west: 0.437, east: 0.311, open OBJ: 0.171, north: 0.026, throw OBJ at OBJ: 0.015, south: 0.013, blow out OBJ: 0.011, go around OBJ: 0.006, northwest: 0.006 Object probs: window: 0.699, small: 0.301 Interactive objects: forest, path, all Action: throw path at forest Score: 5 --- Obs: Desc: Clearing You are in a small clearing in a well marked forest path that extends to the east and west . Inv: You are emptyhanded. Feedback: You dont have the way. Template probs: open OBJ: 0.377, west: 0.341, east: 0.149, south: 0.057, north: 0.022, throw OBJ at OBJ: 0.018, blow out OBJ: 0.011, go around OBJ: 0.011, northwest: 0.008 Object probs: window: 0.7, small: 0.299 Interactive objects: forest, path, all Action: west Score: 5 --- Obs: Desc: Behind House You are behind the white house. A path leads into the forest to the east. In one corner of the house there is a small window which is slightly ajar. Inv: You are emptyhanded. Feedback: Behind House You are behind the white house. A path leads into the forest to the east. In one corner of the house there is a small window which is slightly ajar. Template probs: open OBJ: 0.953, west: 0.036, south: 0.006 Object probs: small: 0.898, window: 0.102 Interactive objects: path, window, small, white, house, all Action: open house Score: 5 --- Obs: Desc: Behind House You are behind the white house. A path leads into the forest to the east. In one corner of the house there is a small window which is slightly ajar. Inv: You are emptyhanded. Feedback: The window is closed. Template probs: open OBJ: 0.943, west: 0.047 Object probs: small: 0.88, window: 0.12 Interactive objects: path, window, small, white, house, all Action: open path Score: 5 --- Obs: Desc: Behind House You are behind the white house. A path leads into the forest to the east. In one corner of the house there is a small window which is slightly ajar. Inv: You are emptyhanded. Feedback: You must tell me how to do that to a way. 
Template probs: open OBJ: 0.899, east: 0.048, west: 0.044, south: 0.004 Object probs: window: 0.58, small: 0.417, kitche: 0.002 Interactive objects: path, window, small, white, house, all Action: open window Score: 5 --- Obs: Desc: Behind House You are behind the white house. A path leads into the forest to the east. In one corner of the house there is a small window which is open. Inv: You are emptyhanded. Feedback: With great effort, you open the window far enough to allow entry. Template probs: west: 0.997 Object probs: small: 0.951, window: 0.049 Interactive objects: path, window, small, white, house, all Action: west Score: 5 --- Obs: Desc: Kitchen You are in the kitchen of the white house. A table seems to have been used recently for the preparation of food. A passage leads to the west and a dark staircase can be seen leading upward. A dark chimney leads down and to the east is a small window which is open. On the table is an elongated brown sack, smelling of hot peppers. A bottle is sitting on the table. The glass bottle contains A quantity of water Inv: You are emptyhanded. Feedback: Kitchen You are in the kitchen of the white house. A table seems to have been used recently for the preparation of food. A passage leads to the west and a dark staircase can be seen leading upward. A dark chimney leads down and to the east is a small window which is open. On the table is an elongated brown sack, smelling of hot peppers. A bottle is sitting on the table. The glass bottle contains A quantity of water Template probs: take OBJ: 0.47, west: 0.351, turn on OBJ: 0.061, blow out OBJ: 0.052, down: 0.023, open OBJ: 0.012, east: 0.008, close OBJ: 0.006, throw OBJ at OBJ: 0.006 Object probs: window: 0.683, small: 0.317 Interactive objects: kitchen, window, passage, staircase, glass, sack, water, small, table, chimney, bottle, quantity, brown, all # Action: west 16 Published as a conference paper at ICLR 2020 Score: 15 --- Obs: Desc: Living Room You are in the living room. There is a doorway to the east, a wooden door with strange gothic lettering to the west, which appears to be nailed shut, a trophy case, and a large oriental rug in the center of the room. Above the trophy case hangs an elvish sword of great antiquity. A batterypowered brass lantern is on the trophy case. Inv: You are emptyhanded. Feedback: Living Room You are in the living room. There is a doorway to the east, a wooden door with strange gothic lettering to the west, which appears to be nailed shut, a trophy case, and a large oriental rug in the center of the room. Above the trophy case hangs an elvish sword of great antiquity. A batterypowered brass lantern is on the trophy case. Template probs: take OBJ: 0.454, west: 0.241, turn on OBJ: 0.107, open OBJ: 0.053, down: 0.05, blow out OBJ: 0.034, close OBJ: 0.016, throw OBJ at OBJ: 0.011, east: 0.009, north: 0.005 Object probs: window: 0.685, small: 0.315 Interactive objects: elvish, strange, trophy, brass, wooden, rug, oriental, lettering, antiquity, sword, gothic, west, door, large, lantern, case, all Action: take brass Score: 15 --- Obs: Desc: Living Room You are in the living room. There is a doorway to the east, a wooden door with strange gothic lettering to the west, which appears to be nailed shut, a trophy case, and a large oriental rug in the center of the room. Above the trophy case hangs an elvish sword of great antiquity. Inv: You are carrying A brass lantern Feedback: Taken. 
Template probs: open OBJ: 0.48, push OBJ: 0.138, west: 0.088, take OBJ: 0.067, turn on OBJ: 0.059, throw OBJ at OBJ: 0.031, put down OBJ: 0.023, put OBJ in OBJ: 0.018, put OBJ on OBJ: 0.016, down: 0.015 Object probs: window: 0.685, small: 0.315 Interactive objects: elvish, strange, trophy, brass, wooden, rug, oriental, lettering, antiquity, sword, gothic, west, door, large, lantern, case, all Action: turn on brass Score: 15 --- Obs: Desc: Living Room You are in the living room. There is a doorway to the east, a wooden door with strange gothic lettering to the west, which appears to be nailed shut, a trophy case, and a large oriental rug in the center of the room. Above the trophy case hangs an elvish sword of great antiquity. Inv: You are carrying A brass lantern (providing light) Feedback: The brass lantern is now on. Template probs: open OBJ: 0.438, push OBJ: 0.141, west: 0.088, take OBJ: 0.081, turn on OBJ: 0.061, throw OBJ at OBJ: 0.036, put down OBJ: 0.023, put OBJ in OBJ: 0.019, down: 0.018, put OBJ on OBJ: 0.018 Object probs: window: 0.685, small: 0.315 Interactive objects: brass, strange, trophy, elvish, rug, oriental, light, lettering, antiquity, sword, gothic , west, door, large, wooden, lantern, case, all Action: take trophy Score: 15 --- Obs: Desc: Living Room You are in the living room. There is a doorway to the east, a wooden door with strange gothic lettering to the west, which appears to be nailed shut, a trophy case, and a large oriental rug in the center of the room. Above the trophy case hangs an elvish sword of great antiquity. Inv: You are carrying A brass lantern (providing light) Feedback: The trophy case is securely fastened to the wall. Template probs: open OBJ: 0.437, push OBJ: 0.141, west: 0.087, take OBJ: 0.082, turn on OBJ: 0.061, throw OBJ at OBJ: 0.035, put down OBJ: 0.024, down: 0.019, put OBJ on OBJ: 0.018, put OBJ in OBJ: 0.018 Object probs: window: 0.684, small: 0.315 Interactive objects: strange, trophy, case, antiquity, lantern, lettering, sword, gothic, wooden, wall, elvish , brass, west, rug, oriental, door, large, light, all Action: push case Score: 15 --- Obs: Desc: Living Room You are in the living room. There is a doorway to the east, a wooden door with strange gothic lettering to the west, which appears to be nailed shut, a trophy case, and a large oriental rug in the center of the room. Above the trophy case hangs an elvish sword of great antiquity. Inv: You are carrying A brass lantern (providing light) Feedback: Pushing the trophy case has no effect. Template probs: open OBJ: 0.407, push OBJ: 0.145, take OBJ: 0.09, west: 0.085, turn on OBJ: 0.062, throw OBJ at OBJ: 0.037, put down OBJ: 0.024, down: 0.021, put OBJ on OBJ: 0.019, put OBJ in OBJ: 0.019 Object probs: window: 0.686, small: 0.314 Interactive objects: elvish, strange, trophy, brass, wooden, rug, oriental, case, lettering, antiquity, sword, gothic, west, door, large, lantern, light, all Action: push lanter Score: 15 --- Obs: Desc: Living Room You are in the living room. There is a doorway to the east, a wooden door with strange gothic lettering to the west, which appears to be nailed shut, a trophy case, and a large oriental rug in the center of the room. Above the trophy case hangs an elvish sword of great antiquity. Inv: You are carrying A brass lantern (providing light) Feedback: Pushing the brass lantern isnt notably helpful. 
Template probs: open OBJ: 0.405, push OBJ: 0.144, west: 0.093, take OBJ: 0.089, turn on OBJ: 0.062, throw OBJ at OBJ: 0.038, put down OBJ: 0.025, put OBJ on OBJ: 0.02, down: 0.019, put OBJ in OBJ: 0.019 Object probs: window: 0.685, small: 0.315 Interactive objects: brass, strange, trophy, elvish, rug, oriental, light, lettering, antiquity, sword, gothic , west, door, large, wooden, lantern, case, all Action: open wooden Score: 15 --- Obs: Desc: Living Room You are in the living room. There is a doorway to the east, a wooden door with strange gothic lettering to the west, which appears to be nailed shut, a trophy case, and a large oriental rug in the center of the room. Above the trophy case hangs an elvish sword of great antiquity. Inv: You are carrying A brass lantern (providing light) Feedback: The door cannot be opened. Template probs: open OBJ: 0.438, push OBJ: 0.142, take OBJ: 0.081, west: 0.08, turn on OBJ: 0.062, throw OBJ at OBJ: 0.035, put down OBJ: 0.024, down: 0.02, put OBJ in OBJ: 0.018, put OBJ on OBJ: 0.018 Object probs: window: 0.685, small: 0.315 Interactive objects: elvish, strange, trophy, brass, wooden, rug, oriental, case, lettering, antiquity, sword, gothic, west, door, large, lantern, light, all Action: open brass Score: 15 --- Obs: Desc: Living Room You are in the living room. There is a doorway to the east, a wooden door with strange gothic lettering to the west, which appears to be nailed shut, a trophy case, and a large oriental rug 17 Published as a conference paper at ICLR 2020 in the center of the room. Above the trophy case hangs an elvish sword of great antiquity. Inv: You are carrying A brass lantern (providing light) Feedback: You must tell me how to do that to a brass lantern. Template probs: open OBJ: 0.503, push OBJ: 0.129, west: 0.1, take OBJ: 0.053, turn on OBJ: 0.051, throw OBJ at OBJ: 0.041, put down OBJ: 0.029, put OBJ in OBJ: 0.021, put OBJ on OBJ: 0.02, down: 0.009 Object probs: window: 0.677, small: 0.323 Interactive objects: brass, strange, trophy, elvish, rug, oriental, light, lettering, antiquity, sword, gothic , west, door, large, wooden, lantern, case, all Action: push rug Score: 15 --- Obs: Desc: Living Room You are in the living room. There is a doorway to the east, a wooden door with strange gothic lettering to the west, which appears to be nailed shut, a trophy case, and a closed trap door at your feet. Above the trophy case hangs an elvish sword of great antiquity. Inv: You are carrying A brass lantern (providing light) Feedback: With a great effort, the rug is moved to one side of the room, revealing the dusty cover of a closed trap door. Template probs: open OBJ: 0.582, push OBJ: 0.082, down: 0.074, take OBJ: 0.065, put down OBJ: 0.025, turn on OBJ: 0.025, west: 0.023, close OBJ: 0.021, throw OBJ at OBJ: 0.019, put OBJ in OBJ: 0.016 Object probs: window: 0.674, small: 0.326 Interactive objects: dusty, strange, trophy, cover, case, antiquity, lantern, lettering, sword, gothic, wooden , elvish, brass, west, rug, trap, door, light, all Action: west Score: 15 --- Obs: Desc: Living Room You are in the living room. There is a doorway to the east, a wooden door with strange gothic lettering to the west, which appears to be nailed shut, a trophy case, and a closed trap door at your feet. Above the trophy case hangs an elvish sword of great antiquity. Inv: You are carrying A brass lantern (providing light) Feedback: The door is nailed shut. 
Template probs: open OBJ: 0.679, push OBJ: 0.073, down: 0.057, take OBJ: 0.046, put down OBJ: 0.022, turn on OBJ: 0.019, west: 0.017, close OBJ: 0.015, throw OBJ at OBJ: 0.015, put OBJ in OBJ: 0.015 Object probs: window: 0.672, small: 0.328 Interactive objects: elvish, strange, trophy, brass, case, lettering, antiquity, sword, gothic, west, door, trap, wooden, lantern, light, all Action: down Score: 15 --- Obs: Desc: Living Room You are in the living room. There is a doorway to the east, a wooden door with strange gothic lettering to the west, which appears to be nailed shut, a trophy case, and a closed trap door at your feet. Above the trophy case hangs an elvish sword of great antiquity. Inv: You are carrying A brass lantern (providing light) Feedback: The trap door is closed. Template probs: open OBJ: 0.63, down: 0.087, take OBJ: 0.06, put down OBJ: 0.045, push OBJ: 0.04, west: 0.027, turn on OBJ: 0.02, close OBJ: 0.018, throw OBJ at OBJ: 0.018, put OBJ in OBJ: 0.015 Object probs: window: 0.657, small: 0.342 Interactive objects: elvish, strange, trophy, brass, case, lettering, antiquity, sword, gothic, west, trap, door, wooden, lantern, light, all Action: open sword Score: 15 --- Obs: Desc: Living Room You are in the living room. There is a doorway to the east, a wooden door with strange gothic lettering to the west, which appears to be nailed shut, a trophy case, and a closed trap door at your feet. Above the trophy case hangs an elvish sword of great antiquity. Inv: You are carrying A brass lantern (providing light) Feedback: You must tell me how to do that to a sword. Template probs: open OBJ: 0.625, down: 0.075, take OBJ: 0.053, push OBJ: 0.05, put down OBJ: 0.049, west: 0.028, turn on OBJ: 0.021, throw OBJ at OBJ: 0.02, close OBJ: 0.019, put OBJ in OBJ: 0.018 Object probs: window: 0.659, small: 0.34 Interactive objects: elvish, strange, trophy, brass, case, lettering, antiquity, sword, gothic, west, door, trap, wooden, lantern, light, all Action: open case Score: 15 --- Obs: Desc: Living Room You are in the living room. There is a doorway to the east, a wooden door with strange gothic lettering to the west, which appears to be nailed shut, a trophy case, and a closed trap door at your feet. Above the trophy case hangs an elvish sword of great antiquity. Inv: You are carrying A brass lantern (providing light) Feedback: Opened. Template probs: open OBJ: 0.61, down: 0.081, take OBJ: 0.056, push OBJ: 0.053, put down OBJ: 0.049, west: 0.029, turn on OBJ: 0.023, throw OBJ at OBJ: 0.02, put OBJ in OBJ: 0.019, close OBJ: 0.019 Object probs: window: 0.659, small: 0.341 Interactive objects: elvish, strange, trophy, brass, case, lettering, antiquity, sword, gothic, west, door, trap, wooden, lantern, light, all Action: open door Score: 15 --- Obs: Desc: Living Room You are in the living room. There is a doorway to the east, a wooden door with strange gothic lettering to the west, which appears to be nailed shut, a trophy case, and a closed trap door at your feet. Above the trophy case hangs an elvish sword of great antiquity. Inv: You are carrying A brass Obs: Desc: Living Room You are in the living room. There is a doorway to the east, a wooden door with strange gothic lettering to the west, which appears to be nailed shut, a trophy case, and a closed trap door at your feet. Above the trophy case hangs an elvish sword of great antiquity. Inv: You are carrying A brass lantern (providing light) Feedback: Which door do you mean, the wooden door or the trap door? 
lantern (providing light) Feedback: Which door do you mean, the wooden door or the trap door? Template probs: open OBJ: 0.552, down: 0.085, take OBJ: 0.062, put down OBJ: 0.06, push OBJ: 0.055, west: 0.04, turn on OBJ: 0.026, throw OBJ at OBJ: 0.026, put OBJ in OBJ: 0.024, close OBJ: 0.021 Object probs: window: 0.66, small: 0.33 Interactive objects: elvish, strange, trophy, brass, case, lettering, antiquity, sword, gothic, west, trap, door, wooden, lantern, light, all Action: open all Score: 15 --- Obs: Desc: Living Room You are in the living room. There is a doorway to the east, a wooden door with strange gothic lettering to the west, which appears to be nailed shut, a trophy case, and a closed trap door at your feet. Above the trophy case hangs an elvish sword of great antiquity. Inv: You are carrying A brass lantern (providing light) Feedback: You cant use multiple direct objects with "open". Template probs: open OBJ: 0.529, down: 0.086, put down OBJ: 0.065, take OBJ: 0.063, push OBJ: 0.059, west: 0.044, throw OBJ at OBJ: 0.028, turn on OBJ: 0.028, put OBJ in OBJ: 0.026, close OBJ: 0.021 Object probs: window: 0.661, small: 0.339 Interactive objects: elvish, strange, trophy, brass, case, lettering, antiquity, sword, gothic, west, door, trap, wooden, lantern, light, all 18 Published as a conference paper at ICLR 2020 Action: down Score: 15 --- Obs: Desc: Living Room You are in the living room. There is a doorway to the east, a wooden door with strange gothic lettering to the west, which appears to be nailed shut, a trophy case, and a closed trap door at your feet. Above the trophy case hangs an elvish sword of great antiquity. Inv: You are carrying A brass lantern (providing light) Feedback: The trap door is closed. Template probs: open OBJ: 0.569, down: 0.089, take OBJ: 0.063, put down OBJ: 0.059, push OBJ: 0.048, west: 0.036, turn on OBJ: 0.024, throw OBJ at OBJ: 0.024, put OBJ in OBJ: 0.021, close OBJ: 0.021 Object probs: window: 0.658, small: 0.341 Interactive objects: elvish, strange, trophy, brass, case, lettering, antiquity, sword, gothic, west, trap, door, wooden, lantern, light, all Action: shake gothic Score: 15 --- Obs: Desc: Living Room You are in the living room. There is a doorway to the east, a wooden door with strange gothic lettering to the west, which appears to be nailed shut, a trophy case, and a closed trap door at your feet. Above the trophy case hangs an elvish sword of great antiquity. Inv: You are carrying A brass lantern (providing light) Feedback: You dont have the wooden door. Template probs: open OBJ: 0.62, push OBJ: 0.085, take OBJ: 0.055, down: 0.053, west: 0.031, put down OBJ: 0.026, turn on OBJ: 0.025, throw OBJ at OBJ: 0.02, put OBJ in OBJ: 0.019, close OBJ: 0.014 Object probs: window: 0.668, small: 0.332 Interactive objects: elvish, strange, trophy, brass, case, lettering, antiquity, sword, gothic, west, door, trap, wooden, lantern, light, all Action: open west Score: 15 --- Obs: Desc: Living Room You are in the living room. There is a doorway to the east, a wooden door with strange gothic lettering to the west, which appears to be nailed shut, a trophy case, and a closed trap door at your feet. Above the trophy case hangs an elvish sword of great antiquity. Inv: You are carrying A brass lantern (providing light) Feedback: The door cannot be opened. 
Template probs: open OBJ: 0.648, push OBJ: 0.089, down: 0.053, take OBJ: 0.042, west: 0.025, turn on OBJ: 0.025, put down OBJ: 0.024, throw OBJ at OBJ: 0.021, put OBJ in OBJ: 0.02, close OBJ: 0.013 Object probs: window: 0.665, small: 0.335 Interactive objects: elvish, strange, trophy, brass, case, lettering, antiquity, sword, gothic, west, door, trap, wooden, lantern, light, all Action: open wooden Score: 15 --- Obs: Desc: Living Room You are in the living room. There is a doorway to the east, a wooden door with strange gothic lettering to the west, which appears to be nailed shut, a trophy case, and a closed trap door at your feet. Above the trophy case hangs an elvish sword of great antiquity. Inv: You are carrying A brass lantern (providing light) Feedback: The door cannot be opened. Template probs: open OBJ: 0.637, push OBJ: 0.081, down: 0.058, take OBJ: 0.05, put down OBJ: 0.024, west: 0.024, turn on OBJ: 0.024, close OBJ: 0.019, throw OBJ at OBJ: 0.017, put OBJ in OBJ: 0.016 Object probs: window: 0.671, small: 0.329 Interactive objects: elvish, strange, trophy, brass, case, lettering, antiquity, sword, gothic, west, door, trap, wooden, lantern, light, all Action: push sword Score: 15 --- Obs: Desc: Living Room You are in the living room. There is a doorway to the east, a wooden door with strange gothic lettering to the west, which appears to be nailed shut, a trophy case, and a closed trap door at your feet. Above the trophy case hangs an elvish sword of great antiquity. Inv: You are carrying A brass lantern (providing light) Feedback: Pushing the sword doesnt seem to work. Template probs: open OBJ: 0.591, push OBJ: 0.085, down: 0.073, take OBJ: 0.061, turn on OBJ: 0.025, put down OBJ: 0.024, west: 0.024, close OBJ: 0.02, throw OBJ at OBJ: 0.019, put OBJ in OBJ: 0.017 Object probs: window: 0.674, small: 0.326 Interactive objects: elvish, strange, trophy, brass, case, lettering, antiquity, sword, gothic, west, door, trap, wooden, lantern, light, all Action: open strang Score: 15 --- Obs: Desc: Living Room You are in the living room. There is a doorway to the east, a wooden door with strange gothic lettering to the west, which appears to be nailed shut, a trophy case, and a closed trap door at your feet. Above the trophy case hangs an elvish sword of great antiquity. Inv: You are carrying A brass lantern (providing light) Feedback: The door cannot be opened. Template probs: open OBJ: 0.614, push OBJ: 0.085, down: 0.065, take OBJ: 0.059, west: 0.024, turn on OBJ: 0.024, put down OBJ: 0.021, close OBJ: 0.017, throw OBJ at OBJ: 0.017, put OBJ in OBJ: 0.016 Object probs: window: 0.676, small: 0.323 Interactive objects: elvish, strange, trophy, brass, case, lettering, antiquity, sword, gothic, west, door, trap, wooden, lantern, light, all Action: down Score: 15 --- Obs: Desc: Living Room You are in the living room. There is a doorway to the east, a wooden door with strange gothic lettering to the west, which appears to be nailed shut, a trophy case, and a closed trap door at your feet. Above the trophy case hangs an elvish sword of great antiquity. Inv: You are carrying A brass lantern (providing light) Feedback: The trap door is closed. 
Template probs: open OBJ: 0.665, down: 0.068, push OBJ: 0.052, take OBJ: 0.044, put down OBJ: 0.036, west: 0.026, turn on OBJ: 0.02, throw OBJ at OBJ: 0.018, put OBJ in OBJ: 0.017, close OBJ: 0.015 Object probs: window: 0.661, small: 0.339 Interactive objects: elvish, strange, trophy, brass, case, lettering, antiquity, sword, gothic, west, trap, door, wooden, lantern, light, all Action: open trap Score: 15 --- Obs: Desc: Living Room You are in the living room. There is a doorway to the east, a wooden door with strange gothic lettering to the west, which appears to be nailed shut, a trophy case, and a rug lying beside an open trap door. Above the trophy case hangs an elvish sword of great antiquity. Inv: You are carrying A brass lantern (providing light) Feedback: The door reluctantly opens to reveal a rickety staircase descending into darkness. 19 Published as a conference paper at ICLR 2020 Template probs: down: 0.876, open OBJ: 0.046, take OBJ: 0.018, east: 0.013, close OBJ: 0.009, put down OBJ: 0.007, north: 0.006 Object probs: window: 0.604, small: 0.387, kitche: 0.006, trap: 0.003 Interactive objects: elvish, strange, trophy, brass, rug, staircase, case, lettering, antiquity, sword, gothic , west, door, trap, wooden, lantern, light, all Action: down Score: 15 --- Obs: Desc: It is pitch black. You are likely to be eaten by a grue. Inv: You are carrying A brass lantern Feedback: You have moved into a dark place. It is pitch black. You are likely to be eaten by a grue. Template probs: take OBJ: 0.141, turn on OBJ: 0.14, open OBJ: 0.122, down: 0.117, close OBJ: 0.073, put down OBJ: 0.069, west: 0.065, push OBJ: 0.056, throw OBJ at OBJ: 0.054, south: 0.02 Object probs: lanter: 0.076, letter: 0.071, window: 0.061, door: 0.059, small: 0.052, gothic: 0.049, sword: 0.049, rug: 0.037, trophy: 0.035, west: 0.031 Interactive objects: brass, dark, lantern, black, all, grue Action: down Score: 40 --- Obs: Desc: It is pitch black. You are likely to be eaten by a grue. Inv: You are carrying A brass lantern Feedback: The chasm probably leads straight to the infernal regions. Template probs: turn on OBJ: 0.167, take OBJ: 0.132, open OBJ: 0.125, down: 0.124, close OBJ: 0.076, push OBJ: 0.067, put down OBJ: 0.066, throw OBJ at OBJ: 0.056, west: 0.05, south: 0.024 Object probs: lanter: 0.077, letter: 0.071, window: 0.06, door: 0.059, small: 0.051, gothic: 0.05, sword: 0.049, rug: 0.037, trophy: 0.035, west: 0.031 Interactive objects: brass, chasm, lantern, black, all, grue Action: turn on lanter Score: 40 --- Obs: Desc: Cellar You are in a dark and damp cellar with a narrow passageway leading north, and a crawlway to the south. On the west is the bottom of a steep metal ramp which is unclimbable. There is a suspiciouslooking individual, holding a large bag, leaning against one wall. He is armed with a deadly stiletto. Inv: You are carrying A brass lantern (providing light) Feedback: You try to ascend the ramp, but it is impossible, and you slide back down. 
Template probs: down: 0.246, open OBJ: 0.209, take OBJ: 0.122, close OBJ: 0.1, push OBJ: 0.082, turn on OBJ: 0.068, put down OBJ: 0.033, south: 0.029, west: 0.019, throw OBJ at OBJ: 0.016 Object probs: west: 0.08, small: 0.08, window: 0.073, door: 0.067, rug: 0.057, letter: 0.047, wooden: 0.045, gothic: 0.041, strang: 0.035, lanter: 0.034 Interactive objects: light, ramp, large, brass, stiletto, crawlway, dark, narrow, lantern, bag, suspicious, metal, all, wall, passageway Action: take dark Score: 40 --- Obs: Desc: Cellar You are in a dark and damp cellar with a narrow passageway leading north, and a crawlway to the south. On the west is the bottom of a steep metal ramp which is unclimbable. There is a suspiciouslooking individual, holding a large bag, leaning against one wall. He is armed with a deadly stiletto. Inv: You are carrying A brass lantern (providing light) Feedback: You cant be serious. Template probs: down: 0.289, open OBJ: 0.201, take OBJ: 0.103, close OBJ: 0.091, turn on OBJ: 0.066, push OBJ: 0.065, put down OBJ: 0.043, throw OBJ at OBJ: 0.022, south: 0.021, west: 0.018 Object probs: west: 0.081, small: 0.076, window: 0.069, door: 0.068, rug: 0.057, letter: 0.048, wooden: 0.045, gothic: 0.042, strang: 0.036, lanter: 0.035 Interactive objects: light, ramp, large, brass, stiletto, crawlway, dark, narrow, lantern, bag, suspicious, metal, all, wall, passageway Action: put out passag Score: 40 --- Obs: Desc: Cellar You are in a dark and damp cellar with a narrow passageway leading north, and a crawlway to the south. On the west is the bottom of a steep metal ramp which is unclimbable. There is a suspiciouslooking individual, holding a large bag, leaning against one wall. He is armed with a deadly stiletto. The thief, finding nothing of value, left disgusted. Inv: You are carrying A brass lantern ( providing light) The thief, finding nothing of value, left disgusted. Feedback: Pushing the chute isnt notably helpful. Someone carrying a large bag is casually leaning against one of the walls here. He does not speak, but it is clear from his aspect that the bag will be taken only over his dead body. Template probs: open OBJ: 0.211, take OBJ: 0.205, push OBJ: 0.188, south: 0.132, west: 0.105, close OBJ: 0.057, turn on OBJ: 0.026, go around OBJ: 0.02, east: 0.017, down: 0.016 Object probs: small: 0.112, window: 0.103, west: 0.075, door: 0.063, rug: 0.054, wooden: 0.044, letter: 0.04, gothic: 0.037, strang: 0.032, kitche: 0.031 Interactive objects: light, ramp, large, brass, stiletto, crawlway, walls, dark, narrow, thief, lantern, chute , bag, suspicious, metal, all, wall, passageway Action: west Score: 40 --- Obs: Desc: Cellar You are in a dark and damp cellar with a narrow passageway leading north, and a crawlway to the south. On the west is the bottom of a steep metal ramp which is unclimbable. There is a suspiciouslooking individual, holding a large bag, leaning against one wall. He is armed with a deadly stiletto. A quick thrust pinks your left arm, and blood starts to trickle down. Inv: You are carrying A brass lantern (providing light) The thief, finding nothing of value, left disgusted. Feedback: You dont have the way. 
Template probs: take OBJ: 0.246, open OBJ: 0.211, push OBJ: 0.19, south: 0.091, west: 0.082, close OBJ: 0.059, turn on OBJ: 0.041, down: 0.026, go around OBJ: 0.017, east: 0.012 Object probs: small: 0.097, window: 0.087, west: 0.078, door: 0.065, rug: 0.056, wooden: 0.045, letter: 0.043, gothic: 0.038, strang: 0.034, lanter: 0.032 Interactive objects: light, ramp, large, brass, stiletto, crawlway, dark, narrow, lantern, bag, suspicious, metal, all, wall, passageway Action: south Score: 40 --- Obs: Desc: East of Chasm You are on the east edge of a chasm, the bottom of which cannot be seen. A narrow passage goes north, and the path you are on continues to the east. Inv: You are carrying A brass lantern (providing light) Feedback: The brass lantern is now on. East of Chasm You are on the east edge of a chasm, the bottom of which cannot be seen. A narrow passage goes north, and the path you are on continues to the east. 20 Published as a conference paper at ICLR 2020 Template probs: put down OBJ: 0.132, take OBJ: 0.116, east: 0.1, open OBJ: 0.093, close OBJ: 0.09, turn on OBJ : 0.07, throw OBJ at OBJ: 0.061, west: 0.059, push OBJ: 0.057, down: 0.042 Object probs: window: 0.074, small: 0.057, lanter: 0.046, sword: 0.041, gothic: 0.034, wooden: 0.031, letter: 0.03, bottle: 0.03, door: 0.028, large: 0.028 Interactive objects: light, brass, chasm, path, narrow, lantern, passage, all Action: east Score: 40 --- Obs: Desc: Gallery This is an art gallery. Most of the paintings have been stolen by vandals with exceptional taste. The vandals left through either the north or west exits. Fortunately, there is still one chance for you to be a vandal, for on the far wall is a painting of unparalleled beauty. Inv: You are emptyhanded. Feedback: Gallery This is an art gallery. Most of the paintings have been stolen by vandals with exceptional taste. The vandals left through either the north or west exits. Fortunately, there is still one chance for you to be a vandal, for on the far wall is a painting of unparalleled beauty. Template probs: open OBJ: 0.42, west: 0.377, take OBJ: 0.063, blow out OBJ: 0.039, east: 0.026, push OBJ: 0.02, down: 0.016, go around OBJ: 0.01, north: 0.008, turn on OBJ: 0.006 Object probs: window: 0.701, small: 0.299 Interactive objects: art, painting, wall, all, paintings Action: open painti Score: 40 --- Obs: Desc: Gallery This is an art gallery. Most of the paintings have been stolen by vandals with exceptional taste. The vandals left through either the north or west exits. Fortunately, there is still one chance for you to be a vandal, for on the far wall is a painting of unparalleled beauty. Inv: You are emptyhanded. Feedback: You must tell me how to do that to a painting. Template probs: open OBJ: 0.41, west: 0.33, take OBJ: 0.106, push OBJ: 0.033, blow out OBJ: 0.032, down: 0.023, turn on OBJ: 0.015, east: 0.014, go around OBJ: 0.01, south: 0.006 Object probs: window: 0.701, small: 0.299 Interactive objects: art, painting, wall, all, paintings Action: take painti Score: 40 --- Obs: Desc: Gallery This is an art gallery. Most of the paintings have been stolen by vandals with exceptional taste. The vandals left through either the north or west exits. Inv: You are carrying A painting Feedback: Taken. 
Template probs: open OBJ: 0.492, push OBJ: 0.296, take OBJ: 0.036, west: 0.033, turn on OBJ: 0.032, throw OBJ at OBJ: 0.021, down: 0.015, close OBJ: 0.013, put down OBJ: 0.011, put OBJ in OBJ: 0.01 Object probs: window: 0.698, small: 0.302 Interactive objects: art, painting, all, paintings Action: open art Score: 44 --- # D.1 PENTARI TRANSCRIPT TmplPred: examine OBJ 0.157, put OBJ down 0.129, take OBJ 0.072, sit on top of OBJ 0.067, listen to OBJ 0.063 ObjtPred: dirty 0.339, dagger 0.280, sword 0.144, chest 0.092, all 0.071 Act: put dagger down, Rew 0, Score 0, Done False, Value 0.339 Obs: A beautiful day in Bostwin! You are awakened by a ray of sunlight peeking through a window shade and rise refreshed after a restful night’s sleep. You’ve given your Company a weekend pass to get some R&R after your last grueling mission. Wasting precious little time your entire company took off for the city proper to pack in as much drinking, carousing, brawling, wenching and other relaxing pursuits as possible in 48 hours. In the fields of battle your men are an elite unit of trained professionals. Off duty they’re pretty much like every other bunch of soldiers you’ve ever served with. A weekend off and what to do? After the challenges of the past year when you were first promoted and assumed command of Charlie Company you’ve nearly forgotton what free time is and what you’re supposed to do with it. Pentari An Interactive Prequel To First Light Copyright 1998 (C) Howard A. Sherman. All Rights Reserved. Type ’Info’ For Information About This Game or ’Help’ for Instructions. Release 6 / Serial number 030206 / Inform v6.21 Library 6/10 Your Quarters Rank certainly does have its privileges. When you were a lowly lieutenant you had to share cramped quarters with a fellow lieutenant at your previous assignment. Commanding a company of your own affords you the luxury of private, spacious quarters just outside the barracks themselves. As a further abuse of your power your chest and bunk are not standard issue items from the Pentari millitary supply center. You can see a postcard here. TmplPred: north 0.746, put OBJ down 0.127, take OBJ 0.126, examine OBJ 0.001, south 0.000 ObjtPred: postcard 0.817, sword 0.180, dagger 0.001, wall 0.001, elf 0.000 Act: put postcard down, Rew 0, Score 0, Done False, Value 0.634 Obs: The postcard is already here. TmplPred: north 0.785, put OBJ down 0.108, take OBJ 0.105, examine OBJ 0.002, south 0.000 ObjtPred: postcard 0.620, sword 0.361, dagger 0.006, all 0.003, emerald 0.003 Act: north, Rew 0, Score 0, Done False, Value 0.633 Obs: Charlie Company Compound Your company compound consists of three large buildings bordering a large, open Obs: Charlie Company Compound Your company compound consists of three large buildings bordering a large, open area where company formations precede the orders of the day. To the west the Company’s supply building can be seen while directly ahead of you lies your company’s barracks which seem quieter than a crypt. area where company formations precede the orders of the day. To the west the Company’s supply building can be seen while directly ahead of you lies your company’s barracks which seem quieter than a crypt. Off to the east you hear the distant shouts of another Company drilling. Poor souls. 
TmplPred: north 0.915, put OBJ down 0.043, south 0.020, take OBJ 0.010, examine OBJ 0.005 ObjtPred: sword 0.982, postcard 0.014, dagger 0.002, all 0.001, enemy 0.000 Act: north, Rew 0, Score 0, Done False, Value 1.284 Obs: Charlie Company Barracks If the Army of Bostwin were to publish a new edition of the soldier’s handbook, a picture of this barracks should be included as the example other companies should follow. The floors are polished to an almost blinding shine while each of the bunks lining both sides of this building are perfectly made. Every locker next to each bunk is closed and locked. Not a single scrap of paper, not one article of clothing, nay not even a dust particle looks out of place. The powerful smell of high quantities of pine cleaner nearly burns your nostrils. Damn, you run a good comapny. You can see a Pentarian Transporter here. TmplPred: enter 0.974, get in OBJ 0.022, south 0.004, put OBJ down 0.001, get up 0.000 ObjtPred: transport 0.989, sword 0.011, legends 0.001, postcard 0.000, tapestrie 0.000 Act: enter, Rew 0, Score 0, Done False, Value 2.722 Obs: The Charlie Company Barracks (in the Pentarian Transporter) TmplPred: city 0.974, get up 0.023, put OBJ down 0.003, take OBJ 0.000, south 0.000 ObjtPred: sword 0.996, postcard 0.002, transport 0.001, blade 0.000, dirty 0.000 21 Published as a conference paper at ICLR 2020 Act: city, Rew 5, Score 5, Done False, Value 5.665 Obs: Your vision swims as your sight of the barracks blurs into a myriad of chaotic colors. You feel a moment of disorientation... As you begin to materialize your eyes grow wide as you realize you’re not in front of the Bank of Bostwin, where the transporter usually takes you in the city. You’re not even outdoors, you’re indoors! What the hell is going on here...? Entrance Hall Your footsteps echo loudly as you walk through this room taking in the vast expanse of its size. The walls rise majestically upward and outward to form a huge room which, curiously, is empty. You can’t help but wonder why such a large room is needed for the straight forward task of accepting guests and visitors. To the east a wide open arch leads to what must be a library while to the north you hardly help but notice a huge, semi circular marble staircase leading up to parts unknown. [Your score has just gone up by five points.] TmplPred: east 0.783, put OBJ down 0.092, examine OBJ 0.076, north 0.026, take OBJ 0.016 ObjtPred: sword 0.630, dagger 0.169, emerald 0.097, all 0.051, east 0.033 Act: east, Rew 0, Score 5, Done False, Value 1.153 Obs: Library This is a vast chamber lined floor to ceiling with bookcases filled to capacity with books. You haven’t seen such a collection of knowledge since your academy days. A cursory glance at some bookshelves reveal several titles on various topics of magic, history, philosophy, science and a hundred other topics. A few large tables with chairs are in the center of the room inviting anyone to make use of these impressive facilities. The wizard Morden is here staring at you, the smallest of smiles on his face. You can also see a copy of The Legends and Lore of Pentari here. TmplPred: covert 0.975, examine OBJ 0.008, put OBJ down 0.007, push OBJ to OBJ 0.005, take OBJ 0.002 ObjtPred: legends 0.700, sword 0.176, floor 0.062, dagger 0.029, all 0.010 Act: covert, Rew 0, Score 5, Done False, Value 2.394 Obs: Spots in all the colors of the rainbow dance in front of your eyes as you feel yourself displaced... 
You find yourself in the deep recesses of Vamvevmew’s castle and instantly a wave of fear and depression hit you as hard as any enemy ever has. Your hand shoots right for your sword in its scabbard on reflex and you grip the hilt so hard your knuckles whiten. Castle You are at the center of an intersection which runs east to west as well as north to south. You can see The Dark Elf here. The Dark Elf walks around as if looking for something. TmplPred: south 0.936, west 0.017, put OBJ down 0.015, north 0.010, east 0.008 ObjtPred: sword 0.643, dagger 0.127, dirty 0.084, enemy 0.052, seal 0.043 Act: south, Rew 0, Score 5, Done False, Value 5.257 Obs: Castle Entrance This large open area stands just inside the castle’s gate and would be the place all visitors would first go before being allowed entrance to the castle proper. A guard station, long abandoned, is here. A corridor stretches into the castle itself to the north. You can see a Magical Seal here. TmplPred: hit OBJ 0.947, put OBJ down 0.027, north 0.014, examine OBJ 0.005, south 0.005 ObjtPred: seal 0.999, ward 0.000, sword 0.000, postcard 0.000, dagger 0.000 Act: hit seal, Rew 10, Score 15, Done False, Value 10.024 Obs: Your fist smashes the ward, shattering it into a cloud of shimmering dust. [Your score has just gone up by ten points.] TmplPred: north 0.802, put OBJ down 0.122, examine OBJ 0.075, take OBJ 0.001, south 0.000 ObjtPred: sword 0.850, postcard 0.061, blade 0.051, dirty 0.017, north 0.011 Act: north, Rew 0, Score 15, Done False, Value 0.450 Obs: Castle You can see The Dark Elf here. The Dark Elf walks around as if looking for something. TmplPred: north 0.904, west 0.029, put OBJ down 0.024, south 0.019, examine OBJ 0.017 ObjtPred: sword 0.762, dagger 0.167, emerald 0.034, all 0.017, postcard 0.009 Act: north, Rew 0, Score 15, Done False, Value 1.362 Obs: Main Hall This once majestic room was where visitors would come to relax and meet with the formal lord of the castle in a somewhat informal atmosphere. Several large comfortable couches are scattered about, dusty and altogether squalid. Many large tapestries still hang on the walls but are horribly faded from age. Large open archways lead east and west while a huge fireplace dominates the center of the room against the northern wall. TmplPred: west 0.938, north 0.045, put OBJ down 0.011, examine OBJ 0.003, south 0.001 ObjtPred: sword 0.735, tapestrie 0.110, east 0.052, all 0.033, wall 0.024 Act: west, Rew 0, Score 15, Done False, Value 2.757 Obs: Armory Many death dealing weapons of every type were stored here. Several tall racks probably held spears while shorter ones mounted against the wall stored various kinds of swords. Other wall mounts, also empty, give you no idea what sort of weapons may have been held by them. A large archway north is partially blocked by collapsed stones and rubble. You can see a jewel encrusted dagger here. TmplPred: take OBJ 0.978, put OBJ down 0.009, examine OBJ 0.007, east 0.003, north 0.002 ObjtPred: all 0.887, dirty 0.034, dagger 0.031, other 0.028, emerald 0.017 Act: take all, Rew 5, Score 20, Done False, Value 5.732 Obs: jewel encrusted dagger: Taken. [Your score has just gone up by five points.] TmplPred: east 0.924, examine OBJ 0.035, put OBJ down 0.029, north 0.007, take OBJ 0.003 ObjtPred: sword 0.357, other 0.278, dagger 0.111, all 0.108, postcard 0.036 Act: east, Rew 0, Score 20, Done False, Value 1.100 Obs: Main Hall You can see The Dark Elf here. The Dark Elf walks around as if looking for something. 
TmplPred: north 0.928, put OBJ down 0.034, west 0.020, examine OBJ 0.011, south 0.004 ObjtPred: dagger 0.412, all 0.141, wall 0.128, sword 0.112, tapestrie 0.083 Act: north, Rew 0, Score 20, Done False, Value 2.549 Obs: Main Hall, by the Fireplace You are at the northern end of the main hall taking in the panorama of what was once a majestic area where perhaps treaties were signed, strategies laid and plans foiled. You can see a Large Fireplace here. TmplPred: enter 0.991, put OBJ down 0.003, examine OBJ 0.002, south 0.002, sit on top of OBJ 0.001 ObjtPred: fireplace 0.637, all 0.122, dagger 0.103, sword 0.062, floor 0.020 Act: enter, Rew 0, Score 20, Done False, Value 5.182 Obs: The Main Hall, by the Fireplace (in the Large Fireplace) You can see a dirty scroll here. TmplPred: take OBJ 0.914, get up 0.059, put OBJ down 0.019, examine OBJ 0.007, city 0.001 ObjtPred: dirty 0.938, all 0.051, dagger 0.008, sword 0.001, emerald 0.001 Act: take all, Rew 10, Score 30, Done False, Value 10.353 Obs: dirty scroll: Taken. [Your score has just gone up by ten points.] TmplPred: get up 0.884, examine OBJ 0.046, fwoosh 0.036, put OBJ down 0.027, take OBJ 0.002 ObjtPred: dirty 0.535, sword 0.186, dagger 0.180, all 0.086, emerald 0.004 Act: get up, Rew 0, Score 30, Done False, Value 0.578 Obs: Main Hall, by the Fireplace You can see a Large Fireplace here. TmplPred: south 0.938, enter 0.029, examine OBJ 0.014, put OBJ down 0.014, take OBJ 0.002 ObjtPred: dirty 0.305, sword 0.230, all 0.182, dagger 0.154, fireplace 0.099 Act: south, Rew 0, Score 30, Done False, Value 1.176 Obs: Main Hall TmplPred: west 0.859, north 0.066, south 0.016, take OBJ 0.011, put OBJ down 0.009 ObjtPred: sword 0.435, dagger 0.253, dirty 0.168, all 0.103, east 0.009 Act: west, Rew 0, Score 30, Done False, Value 2.503 22 Published as a conference paper at ICLR 2020 Obs: Armory TmplPred: north 0.984, put OBJ down 0.008, examine OBJ 0.003, east 0.002, take OBJ 0.001 ObjtPred: dagger 0.387, dirty 0.283, all 0.200, wall 0.050, sword 0.043 Act: north, Rew 0, Score 30, Done False, Value 5.938 Obs: Treasury In times past the vast wealth of the Duke would be stored here. Alas, with the castle taken and the Duke gone so with him went his riches. As you remember your history lessons, you recall the genealogy of the Dukes of Bostwin and their love of collecting treasures. You could only imagine what vast treasures were contained in this room long ago. A finely crafted oaken chest is here. TmplPred: fwoosh 0.972, south 0.010, north 0.006, put OBJ down 0.004, examine OBJ 0.003 ObjtPred: dagger 0.680, dirty 0.097, sword 0.082, chest 0.080, all 0.043 Act: fwoosh, Rew 10, Score 40, Done False, Value 12.325 Obs: You hear a roar of flames as a fireball materializes out of nowhere and floods the entire room in a burst of flames! The intensity of the fireball blasts the chest open and miraculously, except for a slight discomfort of warmth, you are otherwise uneffected. [Your score has just gone up by ten points.] TmplPred: take OBJ 0.822, put on OBJ 0.090, put OBJ in OBJ 0.022, close OBJ 0.019, south 0.015 ObjtPred: jewel 0.904, dagger 0.039, emerald 0.012, small 0.008, sword 0.007 Act: take jewel, Rew 5, Score 45, Done False, Value 5.912 Obs: You feel a surge of power flow through you as you pick up the small emerald...Taken. [Your score has just gone up by five points.] 
TmplPred: south 0.754, close OBJ 0.089, put OBJ in OBJ 0.066, take OBJ 0.027, put OBJ down 0.024 ObjtPred: dagger 0.441, emerald 0.228, sword 0.186, chest 0.077, floor 0.025 Act: south, Rew 0, Score 45, Done False, Value 1.267 Obs: Armory TmplPred: east 0.755, examine OBJ 0.096, put OBJ down 0.066, west 0.043, south 0.019 ObjtPred: dagger 0.385, emerald 0.313, floor 0.135, sword 0.118, all 0.031 Act: put dagger down, Rew 0, Score 45, Done False, Value 1.391 Obs: Dropped. The Dark Elf walks in. TmplPred: enter 0.319, take OBJ 0.294, examine OBJ 0.272, put OBJ down 0.019, west 0.013 ObjtPred: vamvevmew 0.409, dagger 0.182, sword 0.115, emerald 0.079, floor 0.061 Act: push vamvevmew to dagger, Rew 0, Score 45, Done False, Value 3.886 Obs: I don’t suppose The Dark Elf would care for that. The Dark Elf senses your aura of power and realizes you have the emerald he’s been looking for! He bashes you on the back of your head with the hilt of his sword and steals the small emerald from you! Your vision swims and you can’t quite your bearings... You hear alot of shouting nearby between Vamvevmew and a familiar sounding voice followed by an explosion as the hair stands up on your head! You draw your sword and attempt to get your bearings but you feel displaced... Your vision struggles for clarity as you attempt to look through the smoke and the dust to behold two figures locked in sorceral combat! You can make out Morden mumbling something under his breath as he gestures towards The Dark Elf and a bolt of lightning hits Vamvevmew squarely in the chest! Howling in rage and pain, the Dark One returns the volley with a fire strike which lights up a previously invisible defensive shield around Morden! Essentially unhurt, Morden looks weakened from this engagement... You look on in bewilderment but know that you must do something! Castle, Upper Level This large room has taken on some new charasterics recently... now the walls are covered with blast marks of unknown origin from floor to ceiling. Smoke wafts throughout and it’s about 10 degrees warmer in here. A box made of solid platinum is sitting on the floor here. You can also see a small emerald, The Dark Elf and Morden here. TmplPred: hit OBJ 0.966, undo OBJ 0.010, push OBJ to OBJ 0.004, west 0.004, examine OBJ 0.004 ObjtPred: vamvevmew 0.958, emerald 0.026, dagger 0.005, small 0.003, sword 0.002 Act: hit vamvevmew, Rew 10, Score 55, Done False, Value 9.970 Obs: Wounded by the Wizard Morden and distracted by the mortal combat he is engaged in, you are able to come up behind Vamvevmew and run him through with your sword. A look of surprise crosses his face as he looks down and realizes he is mortally wounded. He gives you a black look and attempts to reach for your throat but doesn’t quite make it, crumpling to the floor he breathes his last and dies. [Your score has just gone up by ten points.] TmplPred: undo OBJ 0.205, examine OBJ 0.181, put OBJ on OBJ 0.176, put OBJ down 0.166, push OBJ to OBJ 0.151 ObjtPred: dagger 0.239, floor 0.181, emerald 0.165, box 0.158, all 0.131 Act: undo dagger, Rew 0, Score 55, Done False, Value -0.014 Obs: You can’t see any such thing. TmplPred: undo OBJ 0.207, put OBJ on OBJ 0.174, examine OBJ 0.173, push OBJ to OBJ 0.165, put OBJ down 0.152 ObjtPred: dagger 0.224, floor 0.199, box 0.169, emerald 0.167, sword 0.120 Act: undo box, Rew 0, Score 55, Done False, Value -0.027 Obs: You open the platinum box, revealing The Empirical Emerald. 
TmplPred: put OBJ on OBJ 0.174, examine OBJ 0.160, put OBJ down 0.151, push OBJ to OBJ 0.147, put OBJ in OBJ 0.134 ObjtPred: dagger 0.240, floor 0.196, emerald 0.166, box 0.164, sword 0.117 Act: put box down, Rew 0, Score 55, Done False, Value -0.028 Obs: The platinum box is already here. TmplPred: put OBJ down 0.171, examine OBJ 0.167, push OBJ to OBJ 0.150, put OBJ on OBJ 0.142, take OBJ 0.119 ObjtPred: dagger 0.229, floor 0.199, box 0.166, emerald 0.165, sword 0.119 Act: close dagger, Rew 0, Score 55, Done False, Value -0.017 Obs: You can’t see any such thing. TmplPred: examine OBJ 0.159, put OBJ down 0.158, put OBJ on OBJ 0.149, push OBJ to OBJ 0.147, take OBJ 0.129 ObjtPred: dagger 0.240, floor 0.178, emerald 0.163, box 0.157, all 0.129 Act: take dagger, Rew 0, Score 55, Done False, Value -0.031 Obs: You can’t see any such thing. TmplPred: push OBJ to OBJ 0.156, examine OBJ 0.150, put OBJ on OBJ 0.148, put OBJ down 0.147, take OBJ 0.134 ObjtPred: dagger 0.237, floor 0.183, emerald 0.164, box 0.159, sword 0.126 Act: put dagger down, Rew 0, Score 55, Done False, Value -0.032 Obs: You can’t see any such thing. TmplPred: put OBJ on OBJ 0.152, push OBJ to OBJ 0.150, put OBJ down 0.148, examine OBJ 0.148, put OBJ in OBJ 0.139 ObjtPred: dagger 0.236, floor 0.189, emerald 0.164, box 0.159, sword 0.124 Act: put box on floor, Rew 0, Score 55, Done False, Value -0.022 Obs: The platinum box is already here. TmplPred: examine OBJ 0.161, push OBJ to OBJ 0.161, put OBJ down 0.160, take OBJ 0.136, put OBJ on OBJ 0.134 ObjtPred: dagger 0.233, floor 0.186, box 0.167, emerald 0.158, sword 0.125 Act: take dagger, Rew 0, Score 55, Done False, Value -0.026 Obs: You can’t see any such thing. TmplPred: push OBJ to OBJ 0.170, examine OBJ 0.164, put OBJ down 0.159, put OBJ on OBJ 0.143, take OBJ 0.139 ObjtPred: dagger 0.235, floor 0.181, box 0.165, emerald 0.158, all 0.127 Act: put floor down, Rew 0, Score 55, Done False, Value -0.034 Obs: What do you want to put the floor in? TmplPred: put OBJ down 0.164, push OBJ to OBJ 0.164, examine OBJ 0.160, put OBJ on OBJ 0.138, take OBJ 0.130 23 Published as a conference paper at ICLR 2020 ObjtPred: dagger 0.234, floor 0.186, box 0.165, emerald 0.158, sword 0.126 Act: put dagger in all, Rew 0, Score 55, Done False, Value -0.021 Obs: You can’t see any such thing. TmplPred: push OBJ to OBJ 0.171, put OBJ on OBJ 0.154, examine OBJ 0.153, put OBJ down 0.149, take OBJ 0.132 ObjtPred: dagger 0.234, floor 0.187, box 0.166, emerald 0.159, sword 0.125 Act: put dagger down, Rew 0, Score 55, Done False, Value -0.029 Obs: You can’t see any such thing. TmplPred: push OBJ to OBJ 0.169, examine OBJ 0.165, put OBJ down 0.160, put OBJ on OBJ 0.152, take OBJ 0.134 ObjtPred: dagger 0.233, floor 0.184, box 0.165, emerald 0.158, sword 0.127 Act: put sword on dagger, Rew 0, Score 55, Done False, Value -0.016 Obs: You can’t see any such thing. TmplPred: push OBJ to OBJ 0.188, examine OBJ 0.167, put OBJ on OBJ 0.162, put OBJ down 0.157, take OBJ 0.141 ObjtPred: dagger 0.227, floor 0.194, box 0.171, emerald 0.160, sword 0.123 Act: put floor on dagger, Rew 0, Score 55, Done True, Value -0.025 Obs: You can’t see any such thing. 24
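The transcripts above pair a distribution over action templates (e.g. `put OBJ in OBJ`) with a distribution over candidate objects, and each logged `Action`/`Act` line appears to be a template whose OBJ slots have been filled with predicted objects. The sketch below illustrates that decoding step under that reading of the logs; `decode_action` and the toy probability tables are hypothetical stand-ins, not the agent's actual implementation or outputs, and the real agent may sample from the distributions rather than taking the argmax.

```python
# Minimal sketch of how a transcript line such as
#   Template probs: open OBJ: 0.54, ...   Object probs: window: 0.7, ...
#   Action: open window
# could be turned into a concrete command: take the highest-probability
# template and fill each OBJ slot with the next most likely object.
# The probabilities below are illustrative only, not taken from the logs.

def decode_action(template_probs, object_probs):
    # Pick the most likely template string, e.g. "open OBJ".
    template = max(template_probs, key=template_probs.get)
    # Rank candidate objects by probability, highest first.
    objects = sorted(object_probs, key=object_probs.get, reverse=True)
    action, obj_idx = [], 0
    for token in template.split():
        if token == "OBJ":
            action.append(objects[obj_idx])  # fill the next OBJ slot
            obj_idx += 1
        else:
            action.append(token)
    return " ".join(action)

template_probs = {"open OBJ": 0.54, "west": 0.24, "put OBJ in OBJ": 0.12}
object_probs = {"window": 0.70, "small": 0.20, "mailbox": 0.10}
print(decode_action(template_probs, object_probs))              # -> "open window"
print(decode_action({"put OBJ in OBJ": 1.0}, object_probs))     # -> "put window in small"
```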
{ "id": "1808.10113" }
2001.08361
Scaling Laws for Neural Language Models
We study empirical scaling laws for language model performance on the cross-entropy loss. The loss scales as a power-law with model size, dataset size, and the amount of compute used for training, with some trends spanning more than seven orders of magnitude. Other architectural details such as network width or depth have minimal effects within a wide range. Simple equations govern the dependence of overfitting on model/dataset size and the dependence of training speed on model size. These relationships allow us to determine the optimal allocation of a fixed compute budget. Larger models are significantly more sample-efficient, such that optimally compute-efficient training involves training very large models on a relatively modest amount of data and stopping significantly before convergence.
http://arxiv.org/pdf/2001.08361
Jared Kaplan, Sam McCandlish, Tom Henighan, Tom B. Brown, Benjamin Chess, Rewon Child, Scott Gray, Alec Radford, Jeffrey Wu, Dario Amodei
cs.LG, stat.ML
19 pages, 15 figures
null
cs.LG
20200123
20200123
0 2 0 2 n a J 3 2 ] G L . s c [ 1 v 1 6 3 8 0 . 1 0 0 2 : v i X r a # Scaling Laws for Neural Language Models Jared Kaplan ∗ Sam McCandlish∗ Johns Hopkins University, OpenAI OpenAI [email protected] [email protected] Tom Henighan Tom B. Brown Benjamin Chess Rewon Child OpenAI OpenAI OpenAI OpenAI [email protected] [email protected] [email protected] Scott Gray Alec Radford Jeffrey Wu Dario Amodei OpenAI OpenAI OpenAI OpenAI [email protected] [email protected] [email protected] # Abstract We study empirical scaling laws for language model performance on the cross-entropy loss. The loss scales as a power-law with model size, dataset size, and the amount of compute used for training, with some trends spanning more than seven orders of magnitude. Other architectural details such as network width or depth have minimal effects within a wide range. Simple equations govern the dependence of overfitting on model/dataset size and the dependence of training speed on model size. These relationships allow us to determine the optimal allocation of a fixed compute budget. Larger models are significantly more sample- efficient, such that optimally compute-efficient training involves training very large models on a relatively modest amount of data and stopping significantly before convergence. # ∗Equal contribution. Contributions: Jared Kaplan and Sam McCandlish led the research. Tom Henighan contributed the LSTM ex- periments. Tom Brown, Rewon Child, and Scott Gray, and Alec Radford developed the optimized Transformer implementation. Jeff Wu, Benjamin Chess, and Alec Radford developed the text datasets. Dario Amodei provided guidance throughout the project. # Contents 1 Introduction 2 Background and Methods 3 Empirical Results and Basic Power Laws 4 Charting the Infinite Data Limit and Overfitting 5 Scaling Laws with Model Size and Training Time 6 Optimal Allocation of the Compute Budget 7 Related Work 8 Discussion Appendices A Summary of Power Laws B Empirical Model of Compute-Efficient Frontier C Caveats 2 6 7 10 12 14 18 18 20 20 20 22 D Supplemental Figures # 1 Introduction Language provides a natural domain for the study of artificial intelligence, as the vast majority of reason- ing tasks can be efficiently expressed and evaluated in language, and the world’s text provides a wealth of data for unsupervised learning via generative modeling. Deep learning has recently seen rapid progress in lan- guage modeling, with state of the art models [RNSS18, DCLT18, YDY+19, LOG+19, RSR+19] approaching human-level performance on many specific tasks [WPN+19], including the composition of coherent multi- paragraph prompted text samples [RWC+19]. One might expect language modeling performance to depend on model architecture, the size of neural models, the computing power used to train them, and the data available for this training process. In this work we will empirically investigate the dependence of language modeling loss on all of these factors, focusing on the Transformer architecture [VSP+17, LSP+18]. The high ceiling and low floor for performance on language tasks allows us to study trends over more than seven orders of magnitude in scale. Throughout we will observe precise power-law scalings for performance as a function of training time, con- text length, dataset size, model size, and compute budget. # 1.1 Summary Our key findings for Transformer language models are are as follows: 2Here we display predicted compute when using a sufficiently small batch size. 
See Figure 13 for comparison to the purely empirical data. 2 4.2 6 —— L=(D/5.4+1013)-9995 | 5.6 —— L=(N/8.8-1023)-9.976 3.9 48 - 4.0 a4 > F; 3.3 3.2 F 3 3.0 . 24 === L= (Cmin/2.3 - 108)~950 2 2.7 fo 10-7 10-8 10-3 10-1 108 108 10° 108 107 10° Compute Dataset Size Parameters PF-days, non-embedding tokens non-embedding Figure 1 Language modeling performance improves smoothly as we increase the model size, datasetset size, and amount of compute2 used for training. For optimal performance all three factors must be scaled up in tandem. Empirical performance has a power-law relationship with each individual factor when not bottlenecked by the other two. Performance depends strongly on scale, weakly on model shape: Model performance depends most strongly on scale, which consists of three factors: the number of model parameters N (excluding embed- dings), the size of the dataset D, and the amount of compute C used for training. Within reasonable limits, performance depends very weakly on other architectural hyperparameters such as depth vs. width. (Section 3) Smooth power laws: Performance has a power-law relationship with each of the three scale factors N, D, C when not bottlenecked by the other two, with trends spanning more than six orders of magnitude (see Figure 1). We observe no signs of deviation from these trends on the upper end, though performance must flatten out eventually before reaching zero loss. (Section 3) Universality of overfitting: Performance improves predictably as long as we scale up N and D in tandem, but enters a regime of diminishing returns if either N or D is held fixed while the other increases. The performance penalty depends predictably on the ratio N 0.74/D, meaning that every time we increase the model size 8x, we only need to increase the data by roughly 5x to avoid a penalty. (Section 4) Universality of training: Training curves follow predictable power-laws whose parameters are roughly independent of the model size. By extrapolating the early part of a training curve, we can roughly predict the loss that would be achieved if we trained for much longer. (Section 5) Transfer improves with test performance: When we evaluate models on text with a different distribution than they were trained on, the results are strongly correlated to those on the training validation set with a roughly constant offset in the loss – in other words, transfer to a different distribution incurs a constant penalty but otherwise improves roughly in line with performance on the training set. (Section 3.2.2) Sample efficiency: Large models are more sample-efficient than small models, reaching the same level of performance with fewer optimization steps (Figure 2) and using fewer data points (Figure 4). Convergence is inefficient: When working within a fixed compute budget C but without any other restric- tions on the model size N or available data D, we attain optimal performance by training very large models and stopping significantly short of convergence (see Figure 3). Maximally compute-efficient training would therefore be far more sample efficient than one might expect based on training small models to convergence, with data requirements growing very slowly as D ∼ C 0.27 with training compute. (Section 6) Optimal batch size: The ideal batch size for training these models is roughly a power of the loss only, and continues to be determinable by measuring the gradient noise scale [MKAT18]; it is roughly 1-2 million tokens at convergence for the largest models we can train. 
(Section 5.1)

Taken together, these results show that language modeling performance improves smoothly and predictably as we appropriately scale up model size, data, and compute. We expect that larger language models will perform better and be more sample efficient than current models.

[Figure 2 panels: test loss vs. tokens processed and vs. compute (PF-days); line color indicates the number of parameters. Panel annotations: larger models require fewer samples to reach the same performance; the optimal model size grows smoothly with the loss target and compute budget; compute-efficient training stops far short of convergence.]

Figure 2 We show a series of language model training runs, with models ranging in size from 10^3 to 10^9 parameters (excluding embeddings).

[Figure 3 panels: allocation of a billion-fold increase in compute (x-axis: compute in PF-days). Panel annotations: minimum serial steps increases negligibly; data requirements grow relatively slowly; optimal model size increases very quickly.]

Figure 3 As more compute becomes available, we can choose how much to allocate towards training larger models, using larger batches, and training for more steps. We illustrate this for a billion-fold increase in compute. For optimally compute-efficient training, most of the increase should go towards increased model size. A relatively small increase in data is needed to avoid reuse. Of the increase in data, most can be used to increase parallelism through larger batch sizes, with only a very small increase in serial training time required.

# 1.2 Summary of Scaling Laws

The test loss of a Transformer trained to autoregressively model language can be predicted using a power-law when performance is limited by only either the number of non-embedding parameters N, the dataset size D, or the optimally allocated compute budget C_min (see Figure 1):

1. For models with a limited number of parameters, trained to convergence on sufficiently large datasets:

   L(N) = (N_c / N)^{α_N};  α_N ∼ 0.076,  N_c ∼ 8.8 × 10^13 (non-embedding parameters)   (1.1)

2. For large models trained with a limited dataset with early stopping:

   L(D) = (D_c / D)^{α_D};  α_D ∼ 0.095,  D_c ∼ 5.4 × 10^13 (tokens)   (1.2)

3. When training with a limited amount of compute, a sufficiently large dataset, an optimally-sized model, and a sufficiently small batch size (making optimal3 use of compute):

   L(C_min) = (C_c^min / C_min)^{α_C^min};  α_C^min ∼ 0.050,  C_c^min ∼ 3.1 × 10^8 (PF-days)   (1.3)

3We also observe an empirical power-law trend with the training compute C (Figure 1) while training at fixed batch size, but it is the trend with C_min that should be used to make predictions. They are related by equation (5.5).

[Figure 4 panels: loss vs. model and dataset size (x-axis: tokens in dataset) and loss vs. model size and training steps (x-axis: estimated S_min), for parameter counts from 393.2K to 708M.]

Figure 4 Left: The early-stopped test loss L(N, D) varies predictably with the dataset size D and model size N according to Equation (1.5). Right: After an initial transient period, learning curves for all model sizes N can be fit with Equation (1.6), which is parameterized in terms of S_min, the number of steps when training at large batch size (details in Section 5.1).

These relations hold across eight orders of magnitude in C_min, six orders of magnitude in N, and over two orders of magnitude in D.
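As a concrete illustration of Equations (1.1)-(1.3), the short Python sketch below simply evaluates the three limiting fits at a few scales using the constants quoted above. It is an illustrative reading of the reported fits, not code from the paper, and the loss values it prints are only as accurate as those rounded constants.

```python
# Minimal sketch: evaluate the three limiting scaling laws (Eqs. 1.1-1.3)
# with the fitted constants quoted in the text. Purely illustrative.

def loss_vs_params(n, alpha_n=0.076, n_c=8.8e13):
    """L(N): loss for a model with n non-embedding parameters, data-unlimited."""
    return (n_c / n) ** alpha_n

def loss_vs_data(d, alpha_d=0.095, d_c=5.4e13):
    """L(D): loss for a large model early-stopped on d tokens."""
    return (d_c / d) ** alpha_d

def loss_vs_compute(c_min_pf_days, alpha_c=0.050, c_c=3.1e8):
    """L(C_min): loss for compute-optimal training with c_min PF-days."""
    return (c_c / c_min_pf_days) ** alpha_c

if __name__ == "__main__":
    for n in (1e6, 1e8, 1e10):
        print(f"N = {n:.0e} params   ->  L(N) ~ {loss_vs_params(n):.2f} nats/token")
    for d in (1e8, 1e10):
        print(f"D = {d:.0e} tokens   ->  L(D) ~ {loss_vs_data(d):.2f} nats/token")
    for c in (1e-3, 1e0, 1e3):
        print(f"C_min = {c:.0e} PF-days -> L(C_min) ~ {loss_vs_compute(c):.2f} nats/token")
```

Evaluating the fits this way makes the diminishing returns explicit: each factor of 10 in parameters lowers the predicted loss by only about 16% (10^0.076 ≈ 1.19).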
These power laws depend very weakly on model shape and other Transformer hyperparameters (depth, width, number of self-attention heads), with specific numerical values associated with the WebText2 training set [RWC+19]. The power laws α_N, α_D, α_C^min specify the degree of performance improvement expected as we scale up N, D, or C_min; for example, doubling the number of parameters yields a loss that is smaller by a factor 2^{-α_N} = 0.95. The precise numerical values of N_c, C_c^min, and D_c depend on the vocabulary size and tokenization and hence do not have a fundamental meaning.

The critical batch size, which determines the speed/efficiency tradeoff for data parallelism ([MKAT18]), also roughly obeys a power law in L:

   B_crit(L) = B_* / L^{1/α_B},  B_* ∼ 2 · 10^8 tokens,  α_B ∼ 0.21   (1.4)

Equations (1.1) and (1.2) together suggest that as we increase the model size, we should increase the dataset size sublinearly according to D ∝ N^{α_N/α_D} ∼ N^{0.74}. In fact, we find that there is a single equation combining (1.1) and (1.2) that governs the simultaneous dependence on N and D and governs the degree of overfitting:

   L(N, D) = [ (N_c / N)^{α_N/α_D} + D_c / D ]^{α_D}   (1.5)

with fits pictured on the left in Figure 4. We conjecture that this functional form may also parameterize the trained log-likelihood for other generative modeling tasks.

When training a given model for a finite number of parameter update steps S in the infinite data limit, after an initial transient period, the learning curves can be accurately fit by (see the right of Figure 4)

   L(N, S) = (N_c / N)^{α_N} + (S_c / S_min(S))^{α_S}   (1.6)

where S_c ≈ 2.1 × 10^3 and α_S ≈ 0.76, and S_min(S) is the minimum possible number of optimization steps (parameter updates) estimated using Equation (5.4).

When training within a fixed compute budget C, but with no other constraints, Equation (1.6) leads to the prediction that the optimal model size N, optimal batch size B, optimal number of steps S, and dataset size D should grow as

   N ∝ C_min^{α_C^min/α_N},  B ∝ C_min^{α_C^min/α_B},  S ∝ C_min^{α_C^min/α_S},  D = B · S   (1.7)

with

   α_C^min = 1 / (1/α_S + 1/α_B + 1/α_N)   (1.8)

which closely matches the empirically optimal result N ∝ C_min^{0.73}. As the computational budget C increases, it should be spent primarily on larger models, without dramatic increases in training time or dataset size (see Figure 3). This also implies that as models grow larger, they become increasingly sample efficient. In practice, researchers typically train smaller models for longer than would
Training at the critical batch size provides a roughly optimal compromise between time and compute efficiency. • Cmin – an estimate of the minimum amount of non-embedding compute to reach a given value of the loss. This is the training compute that would be used if the model were trained at a batch size much less than the critical batch size. • Smin – an estimate of the minimal number of training steps needed to reach a given value of the loss. This is also the number of training steps that would be used if the model were trained at a batch size much greater than the critical batch size. • αX – power-law exponents for the scaling of the loss as L(X) ∝ 1/X αX where X can be any of N, D, C, S, B, C min. # 2 Background and Methods We train language models on WebText2, an extended version of the WebText [RWC+19] dataset, tokenized using byte-pair encoding [SHB15] with a vocabulary size nvocab = 50257. We optimize the autoregres- sive log-likelihood (i.e. cross-entropy loss) averaged over a 1024-token context, which is also our principal performance metric. We record the loss on the WebText2 test distribution and on a selection of other text distributions. We primarily train decoder-only [LSP+18, RNSS18] Transformer [VSP+17] models, though we also train LSTM models and Universal Transformers [DGV+18] for comparison. # 2.1 Parameter and Compute Scaling of Transformers We parameterize the Transformer architecture using hyperparameters nlayer (number of layers), dmodel (di- mension of the residual stream), dff (dimension of the intermediate feed-forward layer), dattn (dimension of the attention output), and nheads (number of attention heads per layer). We include nctx tokens in the input context, with nctx = 1024 except where otherwise noted. We use N to denote the model size, which we define as the number of non-embedding parameters N ≈ 2dmodelnlayer (2dattn + dff ) = 12nlayerd2 model with the standard dattn = dff /4 = dmodel (2.1) where we have excluded biases and other sub-leading terms. Our models also have nvocabdmodel parameters in an embedding matrix, and use nctxdmodel parameters for positional embeddings, but we do not include these when discussing the ‘model size’ N ; we will see that this produces significantly cleaner scaling laws. Evaluating a forward pass of the Transformer involves roughly Cforward ≈ 2N + 2nlayernctxdmodel add-multiply operations, where the factor of two comes from the multiply-accumulate operation used in matrix multiplication. A more detailed per-operation parameter and compute count is included in Table 1. 6 Operation Parameters FLOPs per Token Embed (nvocab + nctx) dmodel 4dmodel Attention: QKV nlayerdmodel3dattn 2nlayerdmodel3dattn Attention: Mask — 2nlayernctxdattn Attention: Project nlayerdattndmodel 2nlayerdattndembd Feedforward nlayer2dmodeldff 2nlayer2dmodeldff De-embed — 2dmodelnvocab Total (Non-Embedding) N = 2dmodelnlayer (2dattn + dff ) Cforward = 2N + 2nlayernctxdattn Table 1 Parameter counts and compute (forward pass) estimates for a Transformer model. Sub-leading terms such as nonlinearities, biases, and layer normalization are omitted. For contexts and models with dyodel > Netx/12, the context-dependent computational cost per token is a relatively small fraction of the total compute. Since we primarily study models where dmode1 > Netx/12, we do not include context-dependent terms in our training compute estimate. 
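The parameter and compute accounting in Table 1 is straightforward to reproduce. The sketch below implements Equation (2.1) and the forward-pass estimate C_forward ≈ 2N + 2 n_layer n_ctx d_attn under the standard setting d_attn = d_ff / 4 = d_model; it is an illustrative re-derivation rather than code released with the paper, and the example 48-layer, d_model = 1600 shape is used only for illustration.

```python
# Sketch of the non-embedding parameter count (Eq. 2.1) and the forward-pass
# FLOPs-per-token estimate from Table 1, assuming d_attn = d_ff / 4 = d_model.
# Illustrative only; biases and other sub-leading terms are omitted, as in the text.

def non_embedding_params(n_layer, d_model):
    """N ~= 2 * d_model * n_layer * (2 * d_attn + d_ff) = 12 * n_layer * d_model**2."""
    d_attn, d_ff = d_model, 4 * d_model
    return 2 * d_model * n_layer * (2 * d_attn + d_ff)

def forward_flops_per_token(n_layer, d_model, n_ctx=1024):
    """C_forward ~= 2 * N + 2 * n_layer * n_ctx * d_attn."""
    n = non_embedding_params(n_layer, d_model)
    return 2 * n + 2 * n_layer * n_ctx * d_model

if __name__ == "__main__":
    n_layer, d_model = 48, 1600          # example shape, roughly a 1.5B-parameter model
    n = non_embedding_params(n_layer, d_model)
    c_fwd = forward_flops_per_token(n_layer, d_model)
    ctx_share = 1.0 - 2.0 * n / c_fwd    # fraction of forward compute that is context-dependent
    print(f"N ~ {n:.3g} non-embedding parameters")
    print(f"forward pass ~ {c_fwd:.3g} FLOPs per token ({ctx_share:.1%} context-dependent)")
```

For this shape the context-dependent term is only a few percent of the forward compute, which is the regime (d_model > n_ctx / 12) in which the paper drops it.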
Accounting for the backwards pass (approximately twice the compute as the forwards pass), we then define the estimated non-embedding compute as C ≈ 6N floating point operations per training token.

# 2.2 Training Procedures

Unless otherwise noted, we train models with the Adam optimizer [KB14] for a fixed 2.5 × 10^5 steps with a batch size of 512 sequences of 1024 tokens. Due to memory constraints, our largest models (more than 1B parameters) were trained with Adafactor [SS18]. We experimented with a variety of learning rates and schedules, as discussed in Appendix D.6. We found that results at convergence were largely independent of learning rate schedule. Unless otherwise noted, all training runs included in our data used a learning rate schedule with a 3000 step linear warmup followed by a cosine decay to zero.

# 2.3 Datasets

We train our models on an extended version of the WebText dataset described in [RWC+19]. The original WebText dataset was a web scrape of outbound links from Reddit through December 2017 which received at least 3 karma. In the second version, WebText2, we added outbound Reddit links from the period of January to October 2018, also with a minimum of 3 karma. The karma threshold served as a heuristic for whether people found the link interesting or useful. The text of the new links was extracted with the Newspaper3k python library. In total, the dataset consists of 20.3M documents containing 96 GB of text and 1.62 × 10^10 words (as defined by wc). We then apply the reversible tokenizer described in [RWC+19], which yields 2.29 × 10^10 tokens. We reserve 6.6 × 10^8 of these tokens for use as a test set, and we also test on similarly-prepared samples of Books Corpus [ZKZ+15], Common Crawl [Fou], English Wikipedia, and a collection of publicly-available Internet Books.

# 3 Empirical Results and Basic Power Laws

To characterize language model scaling we train a wide variety of models, varying a number of factors including:

• Model size (ranging in size from 768 to 1.5 billion non-embedding parameters)
• Dataset size (ranging from 22 million to 23 billion tokens)
• Shape (including depth, width, attention heads, and feed-forward dimension)
• Context length (1024 for most runs, though we also experiment with shorter contexts)
• Batch size (2^19 for most runs, but we also vary it to measure the critical batch size)

[Figure 5 panels: loss increase vs. feed-forward ratio (d_ff / d_model), attention head dimension (d_model / n_head), and aspect ratio (d_model / n_layer), for models of roughly 25M to 1.5B parameters. Panel annotations: a wide range of architectures achieve similar performance; 22% additional compute compensates for 1% loss increase.]

Figure 5 Performance depends very mildly on model shape when the total number of non-embedding parameters N is held fixed. The loss varies only a few percent over a wide range of shapes. Small differences in parameter counts are compensated for by using the fit to L(N) as a baseline. Aspect ratio in particular can vary by a factor of 40 while only slightly impacting performance; an (n_layer, d_model) = (6, 4288) model reaches a loss within 3% of the (48, 1600) model used in [RWC+19].
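Combining C ≈ 6N FLOPs per training token with the fixed recipe of Section 2.2 (2.5 × 10^5 steps at 512 sequences of 1024 tokens) gives a quick estimate of total training compute in PF-days, using the conversion 1 PF-day = 8.64 × 10^19 FLOPs from Section 1.3. The sketch below is a back-of-the-envelope illustration under those assumptions, not an official compute accounting.

```python
# Back-of-the-envelope sketch: total non-embedding training compute C ~= 6 * N * B * S,
# converted to PF-days (1 PF-day = 1e15 * 24 * 3600 = 8.64e19 FLOPs, Section 1.3).
# Batch size and step count follow the fixed recipe of Section 2.2; illustrative only.

PF_DAY_FLOPS = 1e15 * 24 * 3600          # 8.64e19 FLOPs

def training_compute_pf_days(n_params, steps=2.5e5, batch_tokens=512 * 1024):
    """C = 6 * N * B * S, with the batch size B measured in tokens per step."""
    return 6.0 * n_params * batch_tokens * steps / PF_DAY_FLOPS

if __name__ == "__main__":
    for n in (1e6, 1e8, 1.5e9):
        print(f"N = {n:8.1e} params  ->  C ~ {training_compute_pf_days(n):9.3g} PF-days")
```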
[Figure 6 panels: test loss vs. parameters (with embedding, left) and vs. parameters (non-embedding, right), for models with 1, 2, 3, 6, and >6 layers.]

Figure 6 Left: When we include embedding parameters, performance appears to depend strongly on the number of layers in addition to the number of parameters. Right: When we exclude embedding parameters, the performance of models with different depths converges to a single trend. Only models with fewer than 2 layers or with extreme depth-to-width ratios deviate significantly from the trend.

In this section we will display data along with empirically-motivated fits, deferring theoretical analysis to later sections.

# 3.1 Approximate Transformer Shape and Hyperparameter Independence

Transformer performance depends very weakly on the shape parameters n_layer, n_heads, and d_ff when we hold the total non-embedding parameter count N fixed. To establish these results we trained models with fixed size while varying a single hyperparameter. This was simplest for the case of n_heads. When varying n_layer, we simultaneously varied d_model while keeping N ≈ 12 n_layer d_model^2 fixed. Similarly, to vary d_ff at fixed model size we also simultaneously varied the d_model parameter, as required by the parameter counts in Table 1. Independence of n_layer would follow if deeper Transformers effectively behave as ensembles of shallower models, as has been suggested for ResNets [VWB16]. The results are shown in Figure 5.

# 3.2 Performance with Non-Embedding Parameter Count N

In Figure 6 we display the performance of a wide variety of models, ranging from small models with shape (n_layer, d_model) = (2, 128) through billion-parameter models, ranging in shape from (6, 4288) through (207, 768). Here we have trained to near convergence on the full WebText2 dataset and observe no overfitting (except possibly for the very largest models). As shown in Figure 1, we find a steady trend with non-embedding parameter count N, which can be fit to the first term of Equation (1.5), so that

   L(N) = (N_c / N)^{α_N}   (3.1)

Figure 7 [Panel annotations: Transformers asymptotically outperform LSTMs due to improved use of long contexts; the LSTM plateaus after fewer than 100 tokens, while the Transformer improves through the whole context.]

To observe these trends it is crucial to study performance as a function of N; if we instead use the total parameter count (including the embedding parameters) the trend is somewhat obscured (see Figure 6). This suggests that the embedding matrix can be made smaller without impacting performance, as has been seen in recent work [LCG+19]. Although these models have been trained on the WebText2 dataset, their test loss on a variety of other datasets is also a power-law in N with nearly identical power, as shown in Figure 8.

# 3.2.1 Comparing to LSTMs and Universal Transformers

In Figure 7 we compare LSTM and Transformer performance as a function of non-embedding parameter count N. The LSTMs were trained with the same dataset and context length. We see from these figures that the LSTMs perform as well as Transformers for tokens appearing early in the context, but cannot match the Transformer performance for later tokens. We present power-law relationships between performance and context position in Appendix D.5, where increasingly large powers for larger models suggest improved ability to quickly recognize patterns.
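Power-law trends such as Equation (3.1) are straight lines in log-log coordinates, so they can be recovered by ordinary linear regression on the logarithms of the measured quantities. The snippet below demonstrates this on synthetic data generated from the quoted fit; it assumes numpy is available and is not the fitting procedure the authors actually used, which is not specified in detail here.

```python
import numpy as np

# Sketch: recover the exponent and scale of a power law L(N) = (N_c / N)**alpha_N
# by linear regression in log-log space. Synthetic data stands in for the paper's
# measured (non-embedding parameter count, test loss) pairs; illustrative only.

rng = np.random.default_rng(0)
alpha_true, n_c_true = 0.076, 8.8e13

n = np.logspace(5, 9, 20)                              # model sizes from 1e5 to 1e9
loss = (n_c_true / n) ** alpha_true
loss *= np.exp(rng.normal(scale=0.01, size=n.size))    # small multiplicative noise

# log L = alpha * log N_c - alpha * log N, a straight line in (log N, log L).
slope, intercept = np.polyfit(np.log(n), np.log(loss), deg=1)
alpha_fit = -slope
n_c_fit = np.exp(intercept / alpha_fit)

print(f"alpha_N ~ {alpha_fit:.3f}   (generated with {alpha_true})")
print(f"N_c     ~ {n_c_fit:.2e} (generated with {n_c_true:.2e})")
```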
We also compare the performance of standard Transformers to recurrent Transformers [DGV+18] in Figure 17 in the appendix. These models re-use parameters, and so perform slightly better as a function of N , at the cost of additional compute per-parameter. # 3.2.2 Generalization Among Data Distributions We have also tested our models on a set of additional text data distributions. The test loss on these datasets as a function of model size is shown in Figure 8; in all cases the models were trained only on the WebText2 dataset. We see that the loss on these other data distributions improves smoothly with model size, in direct parallel with the improvement on WebText2. We find that generalization depends almost exclusively on the in-distribution validation loss, and does not depend on the duration of training or proximity to convergence. We also observe no dependence on model depth (see Appendix D.8). # 3.3 Performance with Dataset Size and Compute We display empirical trends for the test loss as a function of dataset size D (in tokens) and training compute C in Figure 1. For the trend with D we trained a model with (nlayer, nembd) = (36, 1280) on fixed subsets of the WebText2 dataset. We stopped training once the test loss ceased to decrease. We see that the resulting test losses can be fit with simple power-law D.\°? L(D) = (— 3.2 (r= (2) (2) in the dataset size. The data and fit appear in Figure 1. The total amount of non-embedding compute used during training can be estimated as C = 6N BS, where B is the batch size, S is the number of parameter updates, and the factor of 6 accounts for the forward and backward passes. Thus for a given value of C we can scan over all models with various N to find the model 9 7 5.0 = —e WebText2 (Test) ¢ wSy, ee ==» Books during training 6 —e— Internet Books 94.5 ae TS ~~~ Wikipedia during training —* Books 3 She @ Books at convergence 5 -e— Wikipedia ‘8 4.0 Rae © Wikipedia at convergence —e— Common Crawl a “Sey, 2 = A35 weg a7 FS ‘ 5 3.0 * 3 8 B25 4 10# = 105 108 = 107 108 =~ 109 5.0 45 4.0 3.5 3.0 2.5 Parameters (non-embedding) Test Loss on Training Distribution a g ° 44 E & Figure 8 Left: Generalization performance to other data distributions improves smoothly with model size, with only a small and very slowly growing offset from the WebText2 training distribution. Right: Gener- alization performance depends only on training distribution performance, and not on the phase of training. We compare generalization of converged models (points) to that of a single large model (dashed curves) as it trains. with the best performance on step S = C 6BS . Note that in these results the batch size B remains fixed for all models, which means that these empirical results are not truly optimal. We will account for this in later sections using an adjusted Cmin to produce cleaner trends. The result appears as the heavy black line on the left-hand plot in Figure 1. It can be fit with C.\% L(C) = | = 3.3 ©=($) G3) The figure also includes images of individual learning curves to clarify when individual models are optimal. We will study the optimal allocation of compute more closely later on. The data strongly suggests that sample efficiency improves with model size, and we also illustrate this directly in Figure 19 in the appendix. # 4 Charting the Infinite Data Limit and Overfitting In Section 3 we found a number of basic scaling laws for language modeling performance. 
Here we will study the performance of a model of size N trained on a dataset with D tokens while varying N and D simultaneously. We will empirically demonstrate that the optimally trained test loss accords with the scaling law of Equation (1.5). This provides guidance on how much data we would need to train models of increasing size while keeping overfitting under control. # 4.1 Proposed L(N, D) Equation We have chosen the parameterization (1.5) (repeated here for convenience): us.py=|(%)" +2 an using three principles: 1. Changes in vocabulary size or tokenization are expected to rescale the loss by an overall factor. The parameterization of L(N, D) (and all models of the loss) must naturally allow for such a rescaling. 2. Fixing D and sending N → ∞, the overall loss should approach L(D). Conversely, fixing N and sending D → ∞ the loss must approach L(N ). 3. L(N, D) should be analytic at D = ∞, so that it has a series expansion in 1/D with integer powers. Theoretical support for this principle is significantly weaker than for the first two. Our choice of L(N, D) satisfies the first requirement because we can rescale Nc, Dc with changes in the vocabulary. This also implies that the values of Nc, Dc have no fundamental meaning. 10 Data Size Bottleneck Overfitting 0.5 45 Data Size 0.4 Data 4.0 © 2M a © 2M © 43M \ ° 43M 935 © 86M gos © 86M ° 172M U e © 344M 02 © 3.0 © 688M a © © 148 a © 143 © 22.08 0.1 © 2.5 ; ; 0.0 10° 107 108 10° 10-4 10-3 10-2 10-1 Params (non-embed) Ne!) 2 3 3 & Figure 9 The early-stopped test loss L(N, D) depends predictably on the dataset size D and model size N according to Equation (1.5). Left: For large D, performance is a straight power law in N . For a smaller fixed D, performance stops improving as N increases and the model begins to overfit. (The reverse is also true, αN αD /D, as predicted in see Figure 4.) Right: The extent of overfitting depends predominantly on the ratio N equation (4.3). The line is our fit to that equation. Since we stop training early when the test loss ceases to improve and optimize all models in the same way, we expect that larger models should always perform better than smaller models. But with fixed finite D, we also do not expect any model to be capable of approaching the best possible loss (ie the entropy of text). Similarly, a model with fixed size will be capacity-limited. These considerations motivate our second principle. Note that knowledge of L(N ) at infinite D and L(D) at infinite N fully determines all the parameters in L(N, D). The third principle is more speculative. There is a simple and general reason one might expect overfitting to scale ∝ 1/D at very large D. Overfitting should be related to the variance or the signal-to-noise ratio of the dataset [AS17], and this scales as 1/D. This expectation should hold for any smooth loss function, since we expect to be able to expand the loss about the D → ∞ limit. However, this argument assumes that 1/D corrections dominate over other sources of variance, such as the finite batch size and other limits on the efficacy of optimization. Without empirical confirmation, we would not be very confident of its applicability. Our third principle explains the asymmetry between the roles of N and D in Equation (1.5). Very similar symmetric expressions4 are possible, but they would not have a 1/D expansion with integer powers, and would require the introduction of an additional parameter. 
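As a concrete reading of the ansatz, the sketch below implements L(N, D) from Equation (4.1) and checks that a sufficiently large dataset recovers the pure L(N) power law. The numeric constants are the joint-fit values reported in Section 4.2 (Table 2) and are used here purely for illustration; this is not the authors' fitting code.

```python
# Sketch of the L(N, D) ansatz (Eq. 4.1 / 1.5):
#   L(N, D) = [ (N_c / N)**(alpha_N / alpha_D) + D_c / D ]**alpha_D
# Constants are the joint-fit values reported in Section 4.2 (Table 2).

ALPHA_N, ALPHA_D = 0.076, 0.103
N_C, D_C = 6.4e13, 1.8e13

def loss_n_d(n, d):
    return ((N_C / n) ** (ALPHA_N / ALPHA_D) + D_C / d) ** ALPHA_D

if __name__ == "__main__":
    n = 1e8
    # Sending D -> infinity recovers the pure L(N) power law; the data-limited
    # regime shows up as an additive correction that scales like 1/D.
    print(f"L(N=1e8, D=1e9 tokens)  = {loss_n_d(n, 1e9):.3f}")
    print(f"L(N=1e8, D=1e12 tokens) = {loss_n_d(n, 1e12):.3f}")
    print(f"L(N=1e8, D -> infinity) = {(N_C / n) ** ALPHA_N:.3f}")
```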
In any case, we will see that our equation for L(N, D) fits the data well, which is the most important justifi- cation for our L(N, D) ansatz. # 4.2 Results We regularize all our models with 10% dropout, and by tracking test loss and stopping once it is no longer decreasing. The results are displayed in Figure 9, including a fit to the four parameters αN , αD, Nc, Dc in Equation (1.5): Parameter αN αD Nc Dc Value 0.076 0.103 6.4 × 1013 1.8 × 1013 Table 2 Fits to L(N, D) We obtain an excellent fit, with the exception of the runs where the dataset has been reduced by a factor of 1024, to about 2 × 107 tokens. With such a small dataset, an epoch consists of only 40 parameter updates. Perhaps such a tiny dataset represents a different regime for language modeling, as overfitting happens very early in training (see Figure 16). Also note that the parameters differ very slightly from those obtained in Section 3, as here we are fitting the full L(N, D) rather than just L(N, ∞) or L(∞, D). To chart the borderlands of the infinite data limit, we can directly study the extent of overfitting. For all but the largest models, we see no sign of overfitting when training with the full 22B token WebText2 dataset, so we can take it as representative of D = ∞. Thus we can compare finite D to the infinite data limit by For example, one might have used L(N, D) = [(42)°% + (22)°?] e but this does not have a 1/D expansion. 11 # Size 172M 344M 688M 22.08 Critical Batch Size vs. Performance on - a ™ 108 ° & g w 105 a = : Bios ee —e— Empirical Bait, N= 3M Z —s— Empirical Berit, N= 85M £ a= Bert = 2.1 x 108 tokens -L~48 a Noise Scale Measurement © 103 10! 6x 10° 4x10° 3x10° WebText2 Train Loss Figure 10 The critical batch size Bcrit follows a power law in the loss as performance increase, and does not depend directly on the model size. We find that the critical batch size approximately doubles for every 13% decrease in loss. Bcrit is measured empirically from the data shown in Figure 18, but it is also roughly predicted by the gradient noise scale, as in [MKAT18]. defining δL(N, D) ≡ L(N, D) L(N, ∞) − 1 (4.2) and studying it as a function of N, D. In fact, we see empirically that δL depends only a specific combination of N and D, as shown in Figure 16. This follows from the scaling law of Equation (1.5), which implies ay ap N\s> D. 6L (: + (x) 2) -1 (4.3) Note that at large D this formula also has a series expansion in powers of 1/D. We estimate that the variation in the loss with different random seeds is roughly 0.02, which means that to avoid overfitting when training to within that threshold of convergence we require D2 (5x 10°) N°” (4.4) With this relation, models smaller than 109 parameters can be trained with minimal overfitting on the 22B token WebText2 dataset, but our largest models will encounter some mild overfitting. More generally, this relation shows that dataset size may grow sub-linearly in model size while avoiding overfitting. Note however that this does not typically represent maximally compute-efficient training. We should also emphasize that we have not optimized regularization (eg the dropout probability) while varying dataset and model size. # 5 Scaling Laws with Model Size and Training Time In this section we will demonstrate that a simple scaling law provides a good description for the loss as a function of model size N and training time. 
First we will explain how to use the results of [MKAT18] to define a universal training step Smin, which accounts for the fact that most of our models have not been trained at an optimal batch size. Then we will demonstrate that we can fit the model size and training time dependence of the loss using Equation (1.6). Later we will use these results to predict the optimal allocation of training compute between model size and training time, and then confirm that prediction. # 5.1 Adjustment for Training at Bcrit(L) A simple empirical theory for the batch size dependence of training was developed in [MKAT18] (see also [SLA+18, ZLN+19]). It was argued that there is a critical batch size Bcrit for training; for B up to Bcrit the batch size can be increased with very minimal degradation in compute-efficiency, whereas for B > Bcrit increases in B result in diminishing returns. It was also argued that the gradient noise scale provides a simple 12 prediction for B,,;,, and that neither depends directly on model size except through the value of the loss that has been attained. These results can be used to predict how training time and compute will vary with the batch size. To utilize both training time and compute as effectively as possible, it is best to train with a batch size B + Beit. Training at B >> Bit minimizes the number of training steps, while B < Beri, minimizes the use of compute. More specifically, it was demonstrated that for a wide variety of neural network tasks, the number of training steps S and the number of data examples processed E = BS satisfy the simple relation Ss E (= 7 1) (a. 7 1) GO» when training to any fixed value of the loss L. Here Smin is the minimum number of steps necessary to reach L, while Emin is the minimum number of data examples that must be processed. We demonstrate the relation (5.1) for Transformers in Figure 18 in the appendix. This relation defines the critical batch size Emin Smin Bcrit(L) ≡ (5.2) which is a function of the target value of the loss. Training at the critical batch size makes a roughly optimal time/compute tradeoff, requiring 2Smin training steps and processing E = 2Emin data examples. In Figure 10 we have plotted the critical batch size and gradient noise scale5 as a function of training loss for two different models. We see that Bcrit(L) is independent of model size, and only depends on the loss L. So the predictions of [MKAT18] continue to hold for Transformer language models. The critical batch size can be fit with a power-law in the loss Bcrit(L) ≈ B∗ L1/αB (5.3) where B∗ ≈ 2 × 108 and αB ≈ 0.21. We have chosen this parameterization for Bcrit(L) because as the loss approaches its minimum value Lmin, the gradient noise scale is expected to diverge, and we expect Bcrit to track this noise scale. We do not know Lmin, as we see no sign that our models are approaching it, but Lmin > 0 since the entropy of natural language is non-zero. Since apparently Lmin is much smaller than the values of L we have achieved, we used a parameterization where Bcrit diverges as L → 0. We will use B.,it(L) to estimate the relation between the number of training steps S' while training at batch size B = 2!9 tokens and the number of training steps while training at B >> B.,i,. This is simply Ss Sinin(S) = T+ Ban(L)/B (minimum steps, at B >> Berit) (5.4) for any given target value L for the loss. This also defines a critical value of the compute needed to train to L with a model of size N if we were to train at B < Beyi,(L). 
This is Cc Cwnin(C) = 1+ B/Bai(L) (minimum compute, at B < Beit) (5.5) where C = 6N BS estimates the (non-embedding) compute used at batch size B. # 5.2 Results for L(N, Smin) and Performance with Model Size and Compute Now we will use Smin defined in Equation (5.4) to obtain a simple and universal fit for the dependence of the loss on model size and training time in the infinite data limit. We will fit the stable, Adam-optimized training runs using Equation (1.6), repeated here for convenience: an a as L(N, Simin) = (*) + (= ) (5.6) for the loss. We include all training steps after the warmup period of the learning rate schedule, and find a fit to the data with the parameters: 5Although the critical batch size roughly matches the gradient noise scale, we are using a direct measurements of Bcrit from Figures 18 and 10 for all our later analyses. 13 Performance vs Compute Budget 8 7 10° 6 10-1 ge g -2 Sy w?B % ay 10- é 3 10-4 10-5 2 5 104 106 108 Parameters (non-embedding) Performance vs Steps 105 g g a % FA é 104 5 108 107 108 109 Parameters (non-embedding) Performance vs Compute Budget Performance vs Steps 8 7 10° 6 105 10-1 ge g g -2 w?B g ay % 10- é 3 10-4 104 10-5 2 5 5 104 106 108 108 107 108 109 Parameters (non-embedding) Parameters (non-embedding) Figure 11 When we hold either total compute or number of training steps fixed, performance follows L(N, S) from Equation (5.6). Each value of compute budget has an associated optimal model size that maximizes performance. Mediocre fits at small S are unsurprising, as the power-law equation for the learning curves breaks down very early in training. Parameter αN αS Nc Sc Value 0.077 0.76 6.5 × 1013 2.1 × 103 # Table 3 Fits to L(N, S) With these parameters, we obtain the learning curve fits in Figure 4. Though the fits are imperfect, we believe they are quite compelling given the simplicity of Equation (5.6). The data and fits can be visualized in a different and more interesting way, as shown in Figure 11. There we study the test loss as a function of model size while fixing either the total non-embedding compute C used in training, or the number of steps S. For the fits we use Equation (5.5) and (5.4) along with the parameters above and Equation (5.6). The power-law dependence of the loss on Smin reflects the interplay of optimizer dynamics and the loss landscape. Since the fits are best late in training, when the loss may be approximately quadratic, the power- law should provide information about the spectrum of the Hessian of the loss. Its universality suggests that the Hessian eigenvalue density is roughly independent of model size. # 5.3 Lower Bound on Early Stopping Step The results for L(N, Smin) can be used to derive a lower-bound (and rough estimate) of the step at which early stopping should occur when training is data limited. It is motivated by the idea that finite and infinite D learning curves for a given model will be very similar until we reach Smin ≈ Sstop. Thus overfitting should be proportional to the correction from simply ending training at Sstop. This will underestimate Sstop, because in reality the test loss will decrease more slowly when we have a finite D, and therefore we will require more training steps to reach the optimal test loss at finite D. This line of reasoning leads to the inequality Se [L(N, D) — L(N, co)]'/“S (5.7) Sstop(N, D) where L(N, oo) is the converged loss, evaluated with infinite available data. 
This inequality and its com- parison to the empirical data is displayed in Figure [T6]in the appendix. In that figure, the values of Sstop and L(N, D) are empirical (though Sstop is adjusted to mimic training at B > Bei), while L(N, 00) is computed from the fit to L(V, D) evaluated at D = ov. # 6 Optimal Allocation of the Compute Budget We displayed the empirical trend of performance as a function of the computation used during training in the top-right of Figure 1. However, this result involved training at a fixed batch size B, whereas we know 14 10! » fo) Smaller models require more steps to train, while larger models require fewer w uw we o Models between 0.6x and 2.2x the optimal size can be trained with a 20% larger compute budget 10° N o Excess Steps (S/Serricient) Excess Compute (C/Cesticient) N or 15 Our framework does not ~~~ an 1.0 capture early training dynamics ‘| 10° 10! 10° 10! Deviation from Optimal Model (N/Netricient) Deviation from Optimal Model (N/Negficient) Figure 12 Left: Given a fixed compute budget, a particular model size is optimal, though somewhat larger or smaller models can be trained with minimal additional compute. Right: Models larger than the compute- efficient size require fewer steps to train, allowing for potentially faster training if sufficient additional paral- lelism is possible. Note that this equation should not be trusted for very large models, as it is only valid in the power-law region of the learning curve, after initial transient effects. ~ -=-= L= (Cmin/2.3 108) -0.05° === L= (C/2.0-107)-°57 Test Loss 2 F 1o-8 10-6 10-4 10-2 10° Compute (PF-days), non-embedding Figure 13 When adjusting performance to simulate training far below the critical batch size, we find a somewhat altered power law for L(Cmin) when compared with the fully empirical results. The conspicuous lump at 10−5 PF-days marks the transition from 1-layer to 2-layer networks; we exclude 1-layer networks in the power-law fits. It is the L(Cmin) trend that we expect to provide a reliable extrapolation for larger compute. that in fact we could train more efficiently6 by training at the batch size Bcrit discussed in Section 5.1. Large and small values of the loss could have been achieved with fewer samples or fewer steps, respectively, and correcting for this inefficiency by standardizing to the critical batch size results in cleaner and more predictable trends. In this section we will adjust for this oversight. More importantly, we will use the results of Section 5 to determine the optimal allocation of compute between model size N and the quantity of data processed during training, namely 2BcritSmin. We will determine this allocation both empirically and theoretically, by using the equation for L(N, Smin), and we will demonstrate that these methods agree. # 6.1 Optimal Performance and Allocations Let us first study the loss as a function of the optimally allocated compute from Equation (5.5). The result is plotted in Figure 13, along with a power-law fit. We see that as compared to the compute plot of Figure 1, the new fit with Cmin is somewhat improved. Given L(Cmin), it is natural to ask for the optimal model size N (Cmin) that provides the minimal loss with a given quantity of training compute. The optimal model size is shown in Figure 14. We observe that N (Cmin) 6One might ask why we did not simply train at Bcrit in the first place. The reason is that it depends not only on the model but also on the target value of the loss we wish to achieve, and so is a moving target. 
15 —e— Smin (adjusted) 150004 ~--- Spin = (5.4 +103) C293 —e— S (fixed-batch) =--- N=(1.3-10°) «C873 min ~--- N=(1,6-109) C88 min 107 eB R| oI Zo ® | ) ae o 8 ete 2.10000 = 105 ne a g an £ ow “| z a a 5000 3 103 a £ - a - * “ 0 10-7 10-5 10-8 10-1 10-7 10-5 10-3 10-1 Compute (PF-days), non-embedding Compute (PF-days), excluding embeddings Figure 14 Left: Each value of the compute budget Cmin has an associated optimal model size N . Optimal model size grows very rapidly with Cmin, increasing by 5x for each 10x increase in compute. The number of data examples processed makes up the remainder of the increase, growing relatively modestly by only 2x. Right: The batch-adjusted number of optimization steps also grows very slowly, if at all, meaning that most of the growth in data examples processed can be used for increased batch sizes. can be fit very well with a power-law N (Cmin) ∝ (Cmin)0.73. (6.1) In Figure 12, we show the effect of training models of sub-optimal sizes (see Appendix B.4). By definition Cmin ≡ 6N BcritS, and so we can use N (Cmin) to extract further results. In particular, since prior fits show B ∝ L−4.8 and L ∝ C −0.05 min . This leads us to conclude that the optimal number of steps will only grow very slowly with compute, as Smin ∝ (Cmin)0.03, (6.2) matching the empirical results in Figure 14. In fact the measured exponent is sufficiently small that our results may even be consistent with an exponent of zero. Thus we conclude that as we scale up language modeling with an optimal allocation of computation, we should predominantly increase the model size N , while simultaneously scaling up the batch size via B ∝ Bcrit with negligible increase in the number of serial steps. Since compute-efficient training uses relatively few optimization steps, additional work on speeding up early training dynamics may be warranted. # 6.2 Predictions from L(N, Smin) The results for L(Cmin) and the allocations can be predicted from the L(N, Smin) equation obtained in Section 5. Given our equation for L(N, Smin), we can substitute Smin = Cmin 6N B and then find the minimum of the loss as a function of N , while fixing the training compute. We carry out this procedure in detail in Appendix B, where we also provide some additional predictions. For the loss as a function of training compute, we predict that U(Cnin) = (F ) (63) Crnin where αmin C ≡ 1 1/αS + 1/αB + 1/αN ≈ 0.054 (6.4) in excellent agreement with the exponent of Figure 13. We also predict that N (Cmin) ∝ (Cmin)αmin C /αN ≈ (Cmin)0.71 (6.5) which also matches the scaling of Figure 14 to within a few percent. Our scaling laws provide a predictive framework for the performance of language modeling. 16 Poa uw =-=- L(Cmin) ~ — L(D(C)) Pe 2 a oO Test Loss w i) The intersection point is sensitive to | the precise power-law parameters 10-8 10-5 10-? 10! 10+ 107 Compute (PF-days), non-embedding 1.5 Figure 15 Far beyond the model sizes we study empirically, we find a contradiction between our equations for L(Cmin) and L(D) due to the slow growth of data needed for compute-efficient training. The intersection marks the point before which we expect our predictions to break down. The location of this point is highly sensitive to the precise exponents from our power-law fits. # 6.3 Contradictions and a Conjecture We observe no signs of deviation from straight power-law trends at large values of compute, data, or model size. Our trends must eventually level off, though, since natural language has non-zero entropy. 
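Before examining that tension further, note that the allocation exponents of Section 6.2 follow from simple arithmetic on the fitted α values. The sketch below reproduces Equation (6.4) and the implied exponents for N, B, and S_min; because it uses the rounded α values quoted in the text, it only approximately matches the paper's quoted 0.054 and 0.71, and it is a back-of-the-envelope check rather than the paper's own analysis code.

```python
# Sketch: compute-optimal allocation exponents implied by the fitted power laws
# (Section 6.2). Uses the rounded alpha values quoted in the text, so the results
# only approximately reproduce the paper's quoted values.

ALPHA_N, ALPHA_B, ALPHA_S = 0.076, 0.21, 0.76

alpha_c_min = 1.0 / (1.0 / ALPHA_S + 1.0 / ALPHA_B + 1.0 / ALPHA_N)   # Eq. (6.4)

print(f"alpha_C^min ~ {alpha_c_min:.3f}")                    # paper quotes ~0.054 (Eq. 6.4)
print(f"N     scales as C_min^{alpha_c_min / ALPHA_N:.2f}")  # paper: ~0.71 predicted, ~0.73 empirical
print(f"B     scales as C_min^{alpha_c_min / ALPHA_B:.2f}")  # most of the extra data parallelism
print(f"S_min scales as C_min^{alpha_c_min / ALPHA_S:.2f}")  # predicted; measured exponent (~0.03) is even smaller
```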
Indeed, the trends for compute-efficient training described in this section already contain an apparent contra- diction. At scales several orders of magnitude above those documented here, the performance predicted by the L(Cmin) scaling law decreases below what should be possible given the slow growth in training data with compute. This implies that our scaling laws must break down before this point, but we conjecture that the intersection point has a deeper meaning: it provides an estimate of the point at which Transformer language models reach maximal performance. Since the amount of data used by compute-efficient training grows slowly with the compute budget, the performance predicted by L(Cmin) eventually hits a lower bound set by the L(D) power law (see Figure 15). Let us work this out in more detail. To keep overfitting under control, the results of Section 4 imply that we should scale the dataset size as D ∝ N 0.74 ∝ C 0.54 min (6.6) where we have used the compute-efficient N (Cmin) from Figure 14. Let us compare this to the data requirements of compute-efficient training. If we train at the critical batch size (i.e. C = 2Cmin) and never re-use data during training, we find that data usage grows with compute as — _2Cnin 6N(Cmin) D(Cwnin) & (4 x 101° tokens) (C\inin/PF-Day)°?6 (6.7) This is the maximum rate at which the dataset size can productively grow with compute, since it means that we are only training for a single epoch. But it grows the dataset much more slowly than in Equation (6.6). It appears to imply that compute-efficient training will eventually run into a problem with overfitting, even if the training process never re-uses any data! According to Figure 1, we expect that when we are bottlenecked by the dataset size (ie by overfitting), the loss should scale as L(D) ∝ D−0.095. This implies that the loss would scale with compute as L(D(Cmin)) ∝ C −0.03 once we are data-limited. Once again, we have a contradiction, as this will eventually intersect with min our prediction for L(Cmin) from Figure 13, where we found a scaling L(Cmin) ∝ C −0.050 The intersection point of L(D(Cmin)) and L(Cmin) occurs at # min C ∗ ∼ 104 PF-Days N ∗ ∼ 1012 parameters, D∗ ∼ 1012 tokens, L∗ ∼ 1.7 nats/token though the numerical values are highly uncertain, varying by an order or magnitude in either direction de- pending on the precise values of the exponents from the power-law fits. The most obvious interpretation is that our scaling laws break down at or before we reach this point, which is still many orders of magnitude away in both compute and model size. 17 (6.8) One might also conjecture that this intersection point has a deeper meaning. If we cannot increase the model size beyond N ∗ without qualitatively different data requirements, perhaps this means that once we reach C ∗ min and N ∗, we have extracted all of the reliable information available in natural language data. In this interpretation, L∗ would provide a rough estimate for the entropy-per-token7 of natural language. In this scenario, we would expect the loss trend to level off at or before L∗. We can guess at the functional form of L(Cmin) as it levels off by considering a version of our training dataset with added noise. For example, we could append a random string of tokens to each context shown to the model to artificially boost the loss by a constant additive factor. 
Then, the distance from the noise floor L − Lnoise would be a more meaningful performance metric, with even a small decrease in this distance potentially representing a significant boost in qualitative performance. Since the artificial noise would affect all of our trends equally, the critical point of 6.8 would not change (aside from the absolute value of L∗), and may be meaningful even if it occurs after the leveling off. # 7 Related Work Power laws can arise from a wide variety of sources [THK18]. Power-law scalings with model and dataset size in density estimation [Was06] and in random forest models [Bia12] may be connected with our results. These models suggest that power-law exponents may have a very rough interpretation as the inverse of the number of relevant features in the data. Some early [BB01, Goo01] work found power-law scalings between performance and dataset size. More recent work [HNA+17, HAD19] also investigated scaling between model size and data size; their work is perhaps the closest to ours in the literature8. Note, however, that [HNA+17] found super-linear scaling of dataset size with model size, whereas we find a sub-linear scaling. There are some parallels between our findings on optimal allocation of compute and [Kom19], including power-law learning curves. EfficientNets [TL19] also appear to obey an approximate power-law relation between accuracy and model size. Very recent work [RRBS19b] studies scaling with both dataset size and model size for a variety of datasets, and fits an ansatz similar to ours. EfficientNet [TL19] advocates scaling depth and width exponentially (with different coefficients) for optimal performance of image models, resulting in a power-law scaling of width as a function of depth. We find that for language models this power should be roughly one when scaling up (as width/depth should remain fixed). But more importantly, we find that the precise architectural hyperparameters are unimportant compared to the overall scale of the language model. In [VWB16] it was argued that deep models can function as ensembles of shallower models, which could potentially explain this finding. Earlier work [ZK16] has compared width and depth, and found that wide ResNets can outperform deep ResNets on image classification. Some studies fix computation per data example, which tends to scale in proportion to the number of model parameters, whereas we investigate scaling with both model size and the quantity of training computation. Various works [AS17, BHMM18] have investigated generalization in highly overparameterized models, find- ing a “jamming transition” [GJS+19] when the model size reaches the dataset size (this may require training many orders of magnitude beyond typical practice, and in particular does not use early stopping). We do not observe such a transition, and find that the necessary training data scales sublinearly in the model size. Expansions in the model size, particularly at large width [JGH18, LXS+19], may provide a useful framework for thinking about some of our scaling relations. Our results on optimization, such as the shape of learning curves, can likely be explained using a noisy quadratic model, which can provide quite accurate predictions [ZLN+19] in realistic settings. Making this connection quantitative will require a characterization of the Hessian spectrum [Pap18, GKX19, GARD18]. 
# 8 Discussion We have observed consistent scalings of language model log-likelihood loss with non-embedding parameter count N , dataset size D, and optimized training computation Cmin, as encapsulated in Equations (1.5) and (1.6). Conversely, we find very weak dependence on many architectural and optimization hyperparameters. Since scalings with N, D, Cmin are power-laws, there are diminishing returns with increasing scale. 7Defining words using the wc utility, the WebText2 dataset has 1.4 tokens per word and 4.3 characters per token. 8After this work was completed, [RRBS19a] also appeared, which makes similar predictions for the dependence of loss on both model and dataset size. 18 We were able to precisely model the dependence of the loss on N and D, and alternatively on N and S, when these parameters are varied simultaneously. We used these relations to derive the compute scaling, magnitude of overfitting, early stopping step, and data requirements when training large language models. So our scaling relations go beyond mere observation to provide a predictive framework. One might interpret these relations as analogues of the ideal gas law, which relates the macroscopic properties of a gas in a universal way, independent of most of the details of its microscopic consituents. It is natural to conjecture that the scaling relations will apply to other generative modeling tasks with a maximum likelihood loss, and perhaps in other settings as well. To this purpose, it will be interesting to test these relations on other domains, such as images, audio, and video models, and perhaps also for random network distillation. At this point we do not know which of our results depend on the structure of natural language data, and which are universal. It would also be exciting to find a theoretical framework from which the scaling relations can be derived: a ‘statistical mechanics’ underlying the ‘thermodynamics’ we have observed. Such a theory might make it possible to derive other more precise predictions, and provide a systematic understanding of the limitations of the scaling laws. In the domain of natural language, it will be important to investigate whether continued improvement on the loss translates into improvement on relevant language tasks. Smooth quantitative change can mask major qualitative improvements: “more is different”. For example, the smooth aggregate growth of the economy provides no indication of the specific technological developments that underwrite it. Similarly, the smooth improvements in language model loss may hide seemingly qualitative changes in capability. Our results strongly suggest that larger models will continue to perform better, and will also be much more sample efficient than has been previously appreciated. Big models may be more important than big data. In this context, further investigation into model parallelism is warranted. Deep models can be trained using pipelining [HCC+18], which splits parameters depth-wise between devices, but eventually requires increased batch sizes as more devices are used. Wide networks on the other hand are more amenable to parallelization [SCP+18], since large layers can be split between multiple workers with less serial dependency. Sparsity [CGRS19, GRK17] or branching (e.g. [KSH12]) may allow for even faster training of large networks through increased model parallelism. 
And using methods like [WRH17, WYL19], which grow networks as they train, it might be possible to remain on the compute-efficient frontier for an entire training run.

# Acknowledgements

We would like to thank Shan Carter, Paul Christiano, Jack Clark, Ajeya Cotra, Ethan Dyer, Jason Eisner, Danny Hernandez, Jacob Hilton, Brice Menard, Chris Olah, and Ilya Sutskever for discussions and for feedback on drafts of this work.

# Appendices

# A Summary of Power Laws

For easier reference, we provide a summary below of the key trends described throughout the paper.

| Parameters | Data | Compute | Batch Size | Equation |
|---|---|---|---|---|
| N | ∞ | ∞ | Fixed | L(N) = (Nc/N)^αN |
| ∞ | D | Early Stop | Fixed | L(D) = (Dc/D)^αD |
| Optimal | ∞ | C | Fixed | L(C) = (Cc/C)^αC (naive) |
| Nopt | Dopt | Cmin | B ≪ Bcrit | L(Cmin) = (Cc^min/Cmin)^(αC^min) |
| N | D | Early Stop | Fixed | L(N, D) = [(Nc/N)^(αN/αD) + Dc/D]^αD |
| N | ∞ | S steps | B | L(N, S) = (Nc/N)^αN + (Sc/Smin(S, B))^αS |

Table 4

The empirical fitted values for these trends are:

| Power Law | Scale (tokenization-dependent) |
|---|---|
| αN = 0.076 | Nc = 8.8 × 10^13 params (non-embed) |
| αD = 0.095 | Dc = 5.4 × 10^13 tokens |
| αC = 0.057 | Cc = 1.6 × 10^7 PF-days |
| αC^min = 0.050 | Cc^min = 3.1 × 10^8 PF-days |
| αB = 0.21 | B* = 2.1 × 10^8 tokens |
| αS = 0.76 | Sc = 2.1 × 10^3 steps |

Table 5

The optimal parameters for compute-efficient training are given by:

| Compute-Efficient Value | Power Law | Scale |
|---|---|---|
| Nopt = Ne · Cmin^pN | pN = 0.73 | Ne = 1.3 × 10^9 params |
| B < Bcrit = B*/L^(1/αB) = Be · Cmin^pB | pB = 0.24 | Be = 2.0 × 10^6 tokens |
| Smin = Se · Cmin^pS (lower bound) | pS = 0.03 | Se = 5.4 × 10^3 steps |
| Dopt = De · Cmin^pD (1 epoch) | pD = 0.27 | De = 2 × 10^10 tokens |

Table 6

# B Empirical Model of Compute-Efficient Frontier

Throughout this appendix all values of C, S, and αC are adjusted for training at the critical batch size Bcrit. We have left off the 'adj' label to avoid cluttering the notation.

# B.1 Defining Equations

The power-law fit to the learning curves implies a simple prescription for compute-efficient training. In this appendix, we will derive the optimal performance, model size, and number of training steps as a function of the compute budget. We start with Equation (1.6), repeated here for convenience:

L(N, S) = (Nc/N)^αN + (Sc/S)^αS .   (B.1)

Here, S represents the number of parameter updates when training at the critical batch size [MKAT18], which was defined in Equation (5.2)⁹:

B(L) = B* / L^(1/αB) .   (B.2)

We would like to determine optimal training parameters for a fixed compute budget, so we replace S = C/(6NB(L)), where C is the number of FLOPs used in the training run:

L(N, C) = (Nc/N)^αN + (6 B* Sc N / (L^(1/αB) C))^αS .   (B.3)

Now, we set ∂N L|C = 0 to find the condition for optimality:

0 = ∂L/∂N|C = −(αN/N)(Nc/N)^αN + (αS/N)(6 B* Sc N / (L^(1/αB) C))^αS (1 − (N/(αB L)) ∂L/∂N|C)

=⇒ (αN/αS)(Nc/N)^αN = (6 B* Sc N / (L^(1/αB) C))^αS .   (B.4)

Equations (B.3) and (B.4) together determine the compute-efficient frontier.

# B.2 Efficient Training

Now we assemble the implications of (B.3) and (B.4). First, note that inserting (B.4) into (B.3) yields

L(Neff(C), C) = (1 + αN/αS) L(Neff, ∞) ,   (B.5)

which implies that for compute-efficient training, we should train to a fixed percentage αN/αS ≈ 10% above the converged loss. Next, let's determine how the optimal loss depends on the compute budget. Eliminating N yields a power-law dependence of performance on compute:

L(C) = (Cc/C)^αC ,   (B.6)

where we defined

αC = 1/(1/αS + 1/αB + 1/αN) ≈ 0.052   (B.7)

and

Cc = 6 Nc B* Sc (1 + αN/αS)^(1/αS + 1/αN) (αS/αN)^(1/αS) .   (B.8)

Similarly, we can eliminate L to find N(C):

N(C)/Nc = (C/Cc)^(αC/αN) (1 + αN/αS)^(1/αN)   (B.9)

and

S(C) = (Cc/(6 Nc B*)) (1 + αN/αS)^(−1/αN) (C/Cc)^(αC/αS) .   (B.10)

⁹ There is a slight ambiguity here: we can imagine training either at a constant batch size B(Ltarget), or we could instead train at a variable batch size B̃(L), where B̃ is the instantaneous critical batch size (as opposed to B, which is the averaged version). These two prescriptions result in the same number of steps, so we can ignore this subtlety (see [MKAT18]).
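To make the summary above easier to use, here is a minimal Python sketch that evaluates the fitted trends from Tables 4 and 5. The constants are the Table 5 values; the function names and the example model and dataset sizes at the bottom are illustrative choices, not from any released code.

```python
# A minimal sketch (not from any released code) that evaluates the fitted
# scaling trends summarized in Tables 4-5. Constants are the Table 5 values.

ALPHA_N, ALPHA_D, ALPHA_S = 0.076, 0.095, 0.76
N_C = 8.8e13   # non-embedding parameters
D_C = 5.4e13   # tokens
S_C = 2.1e3    # steps (measured at the critical batch size)

def loss_vs_params(n):
    """L(N): converged loss for a model with n non-embedding parameters."""
    return (N_C / n) ** ALPHA_N

def loss_vs_data(d):
    """L(D): loss for a large model trained with early stopping on d tokens."""
    return (D_C / d) ** ALPHA_D

def loss_vs_params_and_data(n, d):
    """L(N, D): simultaneous dependence, capturing the onset of overfitting."""
    return ((N_C / n) ** (ALPHA_N / ALPHA_D) + D_C / d) ** ALPHA_D

def loss_vs_params_and_steps(n, s_min):
    """L(N, S): Equation (B.1), with s_min the steps at the critical batch size."""
    return (N_C / n) ** ALPHA_N + (S_C / s_min) ** ALPHA_S

if __name__ == "__main__":
    # Illustrative query: a 1.5e9-parameter model trained on 2.2e10 tokens.
    print(round(loss_vs_params(1.5e9), 3))
    print(round(loss_vs_params_and_data(1.5e9, 2.2e10), 3))
```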
# B.3 Comparison to Inefficient Training

Typically, researchers train models until they appear to be close to convergence. In this section, we compare the efficient training procedure described above to this more typical setup. We define the convergence factor f as the percent deviation from the converged loss:

L(N, C) = (1 + f) L(N, ∞) .   (B.11)

For compute-efficient training we have f = αN/αS ≈ 10% from the previous section, but researchers typically use a much smaller value. Here, we choose f′ = 2% as an estimate. For a fixed value of the loss, we predict:

Nf / Nf′ = ((1 + f) / (1 + f′))^(1/αN) ≈ 2.7   (B.12)

Sf / Sf′ = ((1 + 1/f) / (1 + 1/f′))^(1/αS) ≈ 0.13   (B.13)

Cf / Cf′ = (Nf Sf) / (Nf′ Sf′) ≈ 0.35   (B.14)

So that compute-efficient training uses 7.7x fewer parameter updates, 2.7x more parameters, and 65% less compute to reach the same loss.

# B.4 Suboptimal Model Sizes

We can solve (B.1) to find an expression for the amount of compute needed to reach a given value of the loss L with a model of size N:

C(N, L) = (6 B* Sc N / L^(1/αB)) (L − (Nc/N)^αN)^(−1/αS) .   (B.15)

Using (B.6) and (B.9), we can eliminate L in favor of Neff(L), the model size which reaches L most efficiently. From there, we find an expression for the excess compute needed as a consequence of using a suboptimal model size:

C(N, Neff) / C(Neff, Neff) = (N/Neff) [1 + (αS/αN)(1 − (Neff/N)^αN)]^(−1/αS) .   (B.16)

The result is shown in Figure X. Models between 0.6x and 2.2x the optimal size can be used with only a 20% increase in compute budget. Using a smaller model is useful when accounting for the cost of inference. A larger model can be trained to the same level of performance in fewer steps, allowing for more parallelism and faster training if sufficient hardware is available (see Figure Y):

S(N, Neff) / S(Neff, Neff) = [1 + (αS/αN)(1 − (Neff/N)^αN)]^(−1/αS) .   (B.17)

A 2.2x larger model requires 45% fewer steps at a cost of 20% more training compute. Note that this equation should not be trusted for very large models, as it is only valid in the power-law region of the learning curve after initial transient effects.
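As a quick numerical check of the suboptimal-model-size relations above, the following sketch evaluates Equations (B.16) and (B.17) as reconstructed here, assuming the Table 5 exponents αN = 0.076 and αS = 0.76. The function names and the size ratios scanned in the example are illustrative; the 2.2x case reproduces the 20%-more-compute and 45%-fewer-steps figures quoted in the text.

```python
# Sketch of the excess-compute and relative-step-count ratios in Equations
# (B.16)-(B.17) as reconstructed above; all function names are illustrative.

ALPHA_N, ALPHA_S = 0.076, 0.76

def _bracket(size_ratio):
    # size_ratio = N / N_eff, the model size relative to the compute-efficient size.
    return 1.0 + (ALPHA_S / ALPHA_N) * (1.0 - size_ratio ** (-ALPHA_N))

def excess_compute(size_ratio):
    """C(N, N_eff) / C(N_eff, N_eff) for N = size_ratio * N_eff (Eq. B.16)."""
    return size_ratio * _bracket(size_ratio) ** (-1.0 / ALPHA_S)

def relative_steps(size_ratio):
    """S(N, N_eff) / S(N_eff, N_eff) for N = size_ratio * N_eff (Eq. B.17)."""
    return _bracket(size_ratio) ** (-1.0 / ALPHA_S)

if __name__ == "__main__":
    for r in (0.6, 1.0, 2.2):
        print(f"N = {r:>3}x N_eff: compute x{excess_compute(r):.2f}, "
              f"serial steps x{relative_steps(r):.2f}")
    # The 2.2x case gives roughly 1.2x the compute and 0.55x the serial steps.
```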
# C Caveats

In this section we list some potential caveats to our analysis.

• At present we do not have a solid theoretical understanding for any of our proposed scaling laws. The scaling relations with model size and compute are especially mysterious. It may be possible to understand scaling at very large D holding model size fixed [AS17], and also the shape of learning curves late in training, by modeling the loss with a noisy quadratic. But the scaling with D at very large model size still remains mysterious. Without a theory or a systematic understanding of the corrections to our scaling laws, it's difficult to determine in what circumstances they can be trusted.

• We are not especially confident in the prediction of Bcrit(L) for values of the loss far outside the range we have explored. Changes in Bcrit could have a significant impact on trade-offs between data parallelism and the number of serial training steps required, which would have a major impact on training time.

• We did not thoroughly investigate the small data regime, and our fits for L(N, D) were poor for the smallest values of D (where an epoch corresponded to only 40 steps). Furthermore, we did not experiment with regularization and data augmentation. Improvements in these could alter our results, quantitatively or qualitatively.

• We used the estimated training compute C ≈ 6NBS, which did not include contributions proportional to nctx. So our scalings with compute may be confounded in practice in the regime of very large nctx, specifically where nctx ≳ 12 dmodel.

• We tuned learning rates, and we experimented with learning rate schedules. But we may have neglected to tune some hyperparameter (e.g. initialization scale or momentum) that has an important effect on scaling.

• The optimal choice of learning rate is sensitive to the target loss. When training close to convergence, it may be necessary to use a smaller learning rate to avoid divergences. But when conducting a short training run (e.g. due to compute limitations), it may be possible to use a larger learning rate. We did not experiment with higher learning rates for training runs that did not proceed to convergence.

# D Supplemental Figures

# D.1 Early Stopping and Test vs Train

In Section 5.3 we described the result shown in Figure 16, which provides a prediction for a lower bound on the early stopping step. We also show the train and test loss for a given model size when training on different sized datasets.

Figure 16 Left: We characterize the step on which early stopping occurs, as a function of the extent of overfitting. The red line indicates a lower bound for early stopping that is derived in Section 5.3. Right: We display train and test loss for a series of 300M parameter models trained on different sized dataset subsamples. The test loss typically follows that of a run done with unrestricted data until diverging. Note that the degree of overfitting (as compared to the infinite data limit) is significantly overestimated by Ltest − Ltrain (denoted by a black bar for each run).

# D.2 Universal Transformers

We compare the performance of standard Transformers to recurrent Transformers [DGV+18] in Figure 17. These models re-use parameters, and so perform slightly better as a function of N, but slightly worse as a function of compute C. We include several different possibilities for parameter re-use.

Figure 17 We compare recurrent Transformers [DGV+18], which re-use parameters, to standard Transformers. Recurrent Transformers perform slightly better when comparing models with equal parameter count, but slightly worse when accounting for reuse and comparing per FLOP.

# D.3 Batch Size

We measure the critical batch size using the data displayed in Figure 18. This made it possible to estimate Bcrit(L) in Figure 10.
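The critical batch size fits referenced in Section D.3 reduce, at the level of the global fits, to Equation (B.2) with the Table 5 constants. Here is a minimal sketch; the helper name and the example losses are illustrative, and the conversion to sequences assumes the 1024-token context used throughout.

```python
# Minimal sketch of the critical batch size relation B_crit(L) = B* / L^(1/alpha_B)
# (Equation B.2) with the Table 5 fits; names and example losses are illustrative.

B_STAR = 2.1e8    # tokens
ALPHA_B = 0.21

def critical_batch_size(loss):
    """Approximate critical batch size, in tokens, at a given test loss."""
    return B_STAR / loss ** (1.0 / ALPHA_B)

if __name__ == "__main__":
    for loss in (4.0, 3.0, 2.5):
        tokens = critical_batch_size(loss)
        # Conversion to sequences assumes a 1024-token context.
        print(f"L = {loss}: B_crit ~ {tokens:.2e} tokens (~{tokens / 1024:.0f} sequences)")
```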
Batch Size Scan - 3M Params; Batch Size Scan - 85M Params

Figure 18 These figures demonstrate fits to Equation (5.1) for a large number of values of the loss L, and for two different Transformer model sizes. These fits were used to measure Bcrit(L) for Figure 10.

# D.4 Sample Efficiency vs Model Size

It is easy to see from Figure 2 that larger models train faster, and are therefore more sample efficient. We provide another way of looking at this phenomenon in Figure 19, which shows when different models reach various fixed values of the loss.

Figure 19 The number of minimum serial steps needed to reach any fixed value of the test loss decreases precipitously with model size. Sample efficiency (shown here for training far below the critical batch size) improves greatly as well, improving by a factor of almost 100 when comparing the smallest possible model to a very large one.

Per-token Loss (774M Params)

Figure 20 This figure provides information about the performance per token as a function of model size and training time. Left: Loss per token as a function of its position T in the 1024-token context. Loss scales predictably as a power-law in T. Right: Test loss per token as a function of training step.

Figure 21 In addition to the averaged loss, individual tokens within the 1024-token context also improve smoothly as model size increases. Training runs with shorter context nctx = 8 (dashed lines) perform better on early tokens, since they can allocate all of their capacity to them.

# D.5 Context Dependence

The trends for loss as a function of model size are displayed for different tokens in the context in Figure 21. We see that models trained on nctx = 1024 show steady improvement with model size on all but the first token.

Fixing model size, it appears that the loss scales as a power-law as a function of position T in the context, see Figure 20. This may be a consequence of underlying power-law correlations in language [EP94, ACDE12, LT16], or a more general feature of the model architecture and optimization.
It provides some suggestion for the potential benefits (or lack thereof) from training on larger contexts. Not only do larger models converge to better performance at T = 1024, but they also improve more quickly at early tokens, suggesting that larger models are more efficient at detecting patterns with less contextual information. In the right-hand plot we show how per-token performance varies for a fixed model as a function of the training step. The model begins by learning short-range information, and only learns longer-range correlations later in training.

We have also included models trained with a tiny context nctx = 8 in order to compare with our longer context models. Even modestly sized models trained on nctx = 8 can dominate our largest nctx = 1024 models on very early tokens. This also suggests that further improvements should be possible with much larger models trained on large contexts.

# D.6 Learning Rate Schedules and Error Analysis

We experimented with a variety of learning rates and schedules. A host of schedules and resulting test performances for a small language model are plotted in Figure 22. We conclude that the choice of learning rate schedule is mostly irrelevant, as long as the total summed learning rate is sufficiently large, and the schedule includes a warmup period and a final decay to near-vanishing learning rate. Variations among schedules appear to be statistical noise, and provide a rough gauge for the scale of variation between different training runs. Experiments on larger models suggest that the variation in the final test loss between different random seeds is roughly constant in magnitude for different model sizes.

Figure 22 We test a variety of learning rate schedules including cosine decay, linear decay, as well as other faster/slower decay schedules on a 3 million parameter model, shown on the left. For these experiments we do not decay to zero, since we find that this tends to give a fixed improvement close to the end of training. We find that, as long as the learning rate is not too small and does not decay too quickly, performance does not depend strongly on learning rate. Run-to-run variation is at the level of 0.05 in the loss, so averaging multiple runs is necessary to validate performance changes smaller than this level.

L = (N/8.8·10^13)^(-0.076); L = -0.25 log(N/7.1·10^12)

Figure 23 The trend for performance as a function of parameter count, L(N), is fit better by a power law than by other functions such as a logarithm at a qualitative level.

We found that larger models require a smaller learning rate to prevent divergence, while smaller models can tolerate a larger learning rate. To implement this, the following rule of thumb was used for most runs:

LR(N) ≈ 0.003239 − 0.0001395 log(N)   (D.1)

We expect that this formula could be improved. There may be a dependence on network width, likely set by the initialization scale. The formula also breaks down for N > 10^10 parameters. Nevertheless, we found that it works sufficiently well for the models we considered.
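A tiny sketch of the rule of thumb in Equation (D.1) follows. It assumes the logarithm is the natural log, since the base is not stated here; the clamp at zero is a defensive addition for very large N, where the text notes the formula breaks down.

```python
# Sketch of the learning-rate rule of thumb in Equation (D.1). The natural-log
# interpretation and the zero clamp are assumptions added for this illustration.
import math

def lr_rule_of_thumb(n_params):
    """Approximate peak learning rate as a function of non-embedding parameters."""
    return max(0.003239 - 0.0001395 * math.log(n_params), 0.0)

if __name__ == "__main__":
    for n in (3e6, 1e8, 1e9):
        print(f"N = {n:.0e}: LR ~ {lr_rule_of_thumb(n):.2e}")
```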
# D.7 Fit Details and Power Law Quality We experimented with a number of functional forms for the fits to L(N ), L(C), and L(D); the power-law fits were qualitatively much more accurate than other functions such as logarithms (see Figure 23). For L(C), we do not include small models with only 1 layer in the fit, as the transition from 1 to 2 layers causes a noticable lump in the data. For L(N ) we also do not include very small models with only 1 layer in the fit, and we exclude the largest models that have not trained fully to convergence. Fit parameters change marginally if we do include them, and the trend extrapolates well in both directions regardless. # D.8 Generalization and Architecture In figure 24 we show that generalization to other data distributions does not depend on network depth when we hold the total parameter count fixed. It seems to depend only on the performance on the training distribution. 26 (D.1) 2.8 a 27 —® Wikipedia S —®- Books 4 2.6 —e- Internet Books a —e Common Crawl 25 Se —e— WebText2 (Train) —e- WebText2 (Test) a Se 2.3 See 10" 10? Depth Figure 24 We show evaluations on a series of datasets for models with approximately 1.5 Billion param- eters. We observe no effect of depth on generalization; generalization performance depends primarily on training distribution performance. The 12-layer model overfit the Internet Books dataset and we show the early-stopped performance; we have not seen this surprising result in other experiments. # List of Figures 1 Summary of simple power laws. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2 Illustration of sample efficiency and compute efficiency. . . . . . . . . . . . . . . . . . . . . 3 How to scale up model size, batch size, and serial steps . . . . . . . . . . . . . . . . . . . . 4 Performance when varying model and data size, or model and training steps, simultaneously 5 Weak dependence of performance on hyperparameter tuning . . . . . . . . . . . . . . . . . 6 Comparison of performance trend when including or excluding embeddings . . . . . . . . . 7 LSTM and Transformer performance comparison . . . . . . . . . . . . . . . . . . . . . . . 8 Generalization to other test datasets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9 Universality of overfitting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10 Critical batch size . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11 Performance versus compute budget or number of parameter updates . . . . . . . . . . . . . 12 Training on suboptimal models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13 Comparison between empirical and adjusted compute trends . . . . . . . . . . . . . . . . . 14 Optimal model size and serial number of steps versus compute budget . . . . . . . . . . . . 15 Contradiction between compute and data trends . . . . . . . . . . . . . . . . . . . . . . . . 16 Early stopping lower bound and training curves for overfit models . . . . . . . . . . . . . . 17 Universal transformers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18 Batch size scans . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19 Another look at sample efficiency . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20 Power-law dependence of performance on position in context . . . . . . . . . . . . . . . . . 21 Performance at different context positions versus model size . . . . . . . . . . . . . . . . . 
22 Learning rate schedule scan . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23 Comparison of Power-Law and Logarithmic Fits . . . . . . . . . . . . . . . . . . . . . . . 24 Generalization versus depth . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3 4 4 5 8 8 9 10 11 12 14 15 15 16 17 23 24 24 24 25 25 26 26 27 27 # List of Tables 1 Parameter and compute counts for Transformer . . . . . . . . . . . . . . . . . . . . . . . . 2 Fits to L(N, D) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3 Fits to L(N, S) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4 Key trend equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5 Key parameters to trend fits . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6 Trends for compute-efficient training . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7 11 14 20 20 20 # References [ACDE12] Eduardo G Altmann, Giampaolo Cristadoro, and Mirko Degli Esposti. On the origin of long- range correlations in texts. Proceedings of the National Academy of Sciences, 109(29):11582– 11587, 2012. 25 Madhu S. Advani and Andrew M. Saxe. High-dimensional dynamics of generalization error in neural networks. arXiv, 2017, 1710.03667. 11, 18, 22 Michele Banko and Eric Brill. Scaling to very very large corpora for natural language disam- biguation. In Proceedings of the 39th annual meeting on association for computational linguis- tics, pages 26–33. Association for Computational Linguistics, 2001. 18 [BHMM18] Mikhail Belkin, Daniel Hsu, Siyuan Ma, and Soumik Mandal. Reconciling modern machine learning and the bias-variance trade-off. arXiv, 2018, 1812.11118. 18 GÊrard Biau. Analysis of a random forests model. Journal of Machine Learning Research, 13(Apr):1063–1095, 2012. 18 [CGRS19] Rewon Child, Scott Gray, Alec Radford, and Ilya Sutskever. Generating long sequences with sparse transformers. CoRR, abs/1904.10509, 2019, 1904.10509. URL http://arxiv.org/ abs/1904.10509. 19 Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding, 2018, arXiv:1810.04805. 2 [DGV+18] Mostafa Dehghani, Stephan Gouws, Oriol Vinyals, Jakob Uszkoreit, and Lukasz Kaiser. Uni- versal transformers. CoRR, abs/1807.03819, 2018, 1807.03819. URL http://arxiv.org/ abs/1807.03819. 6, 9, 23, 24 Werner Ebeling and Thorsten Pöschel. Entropy and long-range correlations in literary english. EPL (Europhysics Letters), 26(4):241, 1994. 25 The Common Crawl Foundation. Common crawl. URL http://commoncrawl.org. 7 [GARD18] Guy Gur-Ari, Daniel A. Roberts, and Ethan Dyer. Gradient descent happens in a tiny subspace. 2018, arXiv:1812.04754. 18 [GJS+19] Mario Geiger, Arthur Jacot, Stefano Spigler, Franck Gabriel, Levent Sagun, Stéphane d’Ascoli, Giulio Biroli, Clément Hongler, and Matthieu Wyart. Scaling description of generalization with number of parameters in deep learning. arXiv, 2019, 1901.01608. 18 Behrooz Ghorbani, Shankar Krishnan, and Ying Xiao. An investigation into neural net op- timization via hessian eigenvalue density. CoRR, abs/1901.10159, 2019, 1901.10159. URL http://arxiv.org/abs/1901.10159. 18 Joshua Goodman. A bit of progress in language modeling. CoRR, cs.CL/0108005, 2001. URL http://arxiv.org/abs/cs.CL/0108005. 18 Scott Gray, Alec Radford, and Diederik P Kingma. Gpu kernels for block-sparse weights. ope- nai.com, 2017. 
19 Joel Hestness, Newsha Ardalani, and Gregory Diamos. Beyond human-level accuracy: Compu- tational challenges in deep learning. In Proceedings of the 24th Symposium on Principles and Practice of Parallel Programming, PPoPP ’19, pages 1–14, New York, NY, USA, 2019. ACM. doi:10.1145/3293883.3295710. 18 28 [HCC+18] Yanping Huang, Yonglong Cheng, Dehao Chen, HyoukJoong Lee, Jiquan Ngiam, Quoc V. Le, and Zhifeng Chen. Gpipe: Efficient training of giant neural networks using pipeline parallelism. CoRR, abs/1811.06965, 2018, 1811.06965. URL http://arxiv.org/abs/1811.06965. 19 [HNA+17] Joel Hestness, Sharan Narang, Newsha Ardalani, Gregory Diamos, Heewoo Jun, Hassan Kia- ninejad, Md. Mostofa Ali Patwary, Yang Yang, and Yanqi Zhou. Deep learning scaling is pre- dictable, empirically, 2017, 1712.00409. 18 Arthur Jacot, Franck Gabriel, and Clément Hongler. Neural tangent kernel: Convergence and generalization in neural networks. In Advances in neural information processing systems, pages 8571–8580, 2018. 18 Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization, 2014, 1412.6980. 7 Aran Komatsuzaki. One epoch is all you need, 2019, arXiv:1906.06669. 18 Imagenet classification with deep Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. convolutional neural networks. In Proceedings of the 25th International Conference on Neural Information Processing Systems - Volume 1, NIPS’12, pages 1097–1105, USA, 2012. Curran Associates Inc. URL http://dl.acm.org/citation.cfm?id=2999134.2999257. 19 [LCG+19] Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. Albert: A lite bert for self-supervised learning of language representations, 2019, 1909.11942. 9 [LOG+19] Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. Roberta: A robustly optimized BERT pretrain- ing approach. CoRR, abs/1907.11692, 2019, 1907.11692. URL http://arxiv.org/abs/ 1907.11692. 2 # [LSPt 18] [LT 16] Peter J. Liu, Mohammad Saleh, Etienne Pot, Ben Goodrich, Ryan Sepassi, Lukasz Kaiser, and Noam Shazeer. Generating wikipedia by summarizing long sequences. arXiv:1801.10198 [cs], 2018, 1801.10198. URL http://arxiv.org/abs/1801.10198. 2, 6 Henry W Lin and Max Tegmark. Criticality in formal languages and statistical physics. arXiv preprint arXiv:1606.06737, 2016. 25 Jaehoon Lee, Lechao Xiao, Samuel S. Schoenholz, Yasaman Bahri, Roman Novak, Jascha Sohl- Dickstein, and Jeffrey Pennington. Wide neural networks of any depth evolve as linear models under gradient descent, 2019, arXiv:1902.06720. 18 [MKAT18] Sam McCandlish, Jared Kaplan, Dario Amodei, and OpenAI Dota Team. An empirical model of large-batch training, 2018, arXiv:1812.06162. 3, 5, 6, 12, 13, 21 Vardan Papyan. The full spectrum of deep net hessians at scale: Dynamics with sample size. CoRR, abs/1811.07062, 2018, 1811.07062. URL http://arxiv.org/abs/1811.07062. 18 Improving language understanding by generative pre-training. URL https://s3-us-west-2. amazonaws. com/openai- assets/research-covers/languageunsupervised/language understanding paper. pdf, 2018. 2, 6 [RRBS19a] Jonathan S. Rosenfeld, Amir Rosenfeld, Yonatan Belinkov, and Nir Shavit. A constructive prediction of the generalization error across scales, 2019, 1909.12673. 18 [RRBS19b] Jonathan S. Rosenfeld, Amir Rosenfeld, Yonatan Belinkov, and Nir Shavit. A constructive prediction of the generalization error across scales, 2019, arXiv:1909.12673. 
18 [RSR+19] Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. Exploring the limits of transfer learning with a unified text-to-text transformer, 2019, arXiv:1910.10683. 2 [RWC+19] Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. Language models are unsupervised multitask learners. openai.com, 2019. 2, 5, 6, 7, 8 [SCP+18] Noam Shazeer, Youlong Cheng, Niki Parmar, Dustin Tran, Ashish Vaswani, Penporn Koanan- takool, Peter Hawkins, HyoukJoong Lee, Mingsheng Hong, Cliff Young, Ryan Sepassi, and Blake Hechtman. Mesh-tensorflow: Deep learning for supercomputers, 2018, 1811.02084. 19 Rico Sennrich, Barry Haddow, and Alexandra Birch. Neural machine translation of rare words with subword units. CoRR, 2015, 1508.07909. 6 29 [SS18] [THK18] [TL19] [VSP+17] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Ł ukasz Kaiser, and Illia Polosukhin. Attention is all you need. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Advances in Neural Information Processing Systems 30, pages 5998–6008. Curran Associates, Inc., 2017. URL http://papers.nips.cc/paper/7181-attention-is-all-you-need.pdf. 2, 6 [VWB16] Andreas Veit, Michael Wilber, and Serge Belongie. Residual networks behave like ensembles [Was06] of relatively shallow networks, 2016, arXiv:1605.06431. 8, 18 Larry Wasserman. All of nonparametric statistics. Springer Science & Business Media, 2006. 18 [WPN+19] Alex Wang, Yada Pruksachatkun, Nikita Nangia, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R. Bowman. Superglue: A stickier benchmark for general-purpose language understanding systems, 2019, 1905.00537. 2 [WRH17] Yu-Xiong Wang, Deva Ramanan, and Martial Hebert. Growing a brain: Fine-tuning by in- creasing model capacity. 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Jul 2017. doi:10.1109/cvpr.2017.323. 19 [WYL19] Wei Wen, Feng Yan, and Hai Li. Autogrow: Automatic layer growing in deep convolutional networks, 2019, 1906.02909. 19 [YDY+19] Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, and Quoc V. Xlnet: Generalized autoregressive pretraining for language understanding, 2019, Le. arXiv:1906.08237. 2 Sergey Zagoruyko and Nikos Komodakis. Wide residual networks. Procedings of the British Machine Vision Conference 2016, 2016. doi:10.5244/c.30.87. 18 [ZK16] [ZKZ+15] Yukun Zhu, Ryan Kiros, Rich Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Tor- ralba, and Sanja Fidler. Aligning books and movies: Towards story-like visual explanations by watching movies and reading books. 2015 IEEE International Conference on Computer Vision (ICCV), Dec 2015. doi:10.1109/iccv.2015.11. 7 [ZLN+19] Guodong Zhang, Lala Li, Zachary Nado, James Martens, Sushant Sachdeva, George E. Dahl, Christopher J. Shallue, and Roger B. Grosse. Which algorithmic choices matter at which batch sizes? insights from a noisy quadratic model. CoRR, abs/1907.04164, 2019, 1907.04164. URL http://arxiv.org/abs/1907.04164. 12, 18 30
{ "id": "1902.06720" }
2001.08210
Multilingual Denoising Pre-training for Neural Machine Translation
This paper demonstrates that multilingual denoising pre-training produces significant performance gains across a wide variety of machine translation (MT) tasks. We present mBART -- a sequence-to-sequence denoising auto-encoder pre-trained on large-scale monolingual corpora in many languages using the BART objective. mBART is one of the first methods for pre-training a complete sequence-to-sequence model by denoising full texts in multiple languages, while previous approaches have focused only on the encoder, decoder, or reconstructing parts of the text. Pre-training a complete model allows it to be directly fine tuned for supervised (both sentence-level and document-level) and unsupervised machine translation, with no task-specific modifications. We demonstrate that adding mBART initialization produces performance gains in all but the highest-resource settings, including up to 12 BLEU points for low resource MT and over 5 BLEU points for many document-level and unsupervised models. We also show it also enables new types of transfer to language pairs with no bi-text or that were not in the pre-training corpus, and present extensive analysis of which factors contribute the most to effective pre-training.
http://arxiv.org/pdf/2001.08210
Yinhan Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan Ghazvininejad, Mike Lewis, Luke Zettlemoyer
cs.CL
Work in progress
null
cs.CL
20200122
20200123
0 2 0 2 n a J 3 2 ] L C . s c [ 2 v 0 1 2 8 0 . 1 0 0 2 : v i X r a # Multilingual Denoising Pre-training for Neural Machine Translation Yinhan Liu*, Jiatao Gu*, Naman Goyal*, Xian Li, Sergey Edunov Marjan Ghazvininejad, Mike Lewis, Luke Zettlemoyer Facebook AI Research {yinhanliu,jgu,naman,xianl,edunov ghazvini,mikelewis,lsz} @fb.com # Abstract This paper demonstrates that multilingual denoising pre-training produces significant performance gains across a wide variety of machine translation (MT) tasks. We present mBART – a sequence-to-sequence denois- ing auto-encoder pre-trained on large-scale monolingual corpora in many languages us- ing the BART objective (Lewis et al., 2019). mBART is the first method for pre-training a complete sequence-to-sequence model by denoising full texts in multiple languages, while previous approaches have focused only on the encoder, decoder, or reconstruct- ing parts of the text. Pre-training a complete model allows it to be directly fine tuned for supervised (both sentence-level and document-level) and unsupervised machine translation, with no task-specific modifica- tions. We demonstrate that adding mBART initialization produces performance gains in all but the highest-resource settings, includ- ing up to 12 BLEU points for low resource MT and over 5 BLEU points for many document-level and unsupervised models. We also show it also enables new types of transfer to language pairs with no bi-text or that were not in the pre-training corpus, and present extensive analysis of which factors contribute the most to effective pre-training. # Introduction Despite its wide adoption for other NLP tasks (De- vlin et al., 2019; Liu et al., 2019; Yang et al., 2019; Lewis et al., 2019; Raffel et al., 2019), self- supervised pretraining is not yet common prac- tice in machine translation (MT). Existing MT approaches only pre-train parts of the model, in- cluding the encoder (Lample and Conneau, 2019) and the decoder (Edunov et al., 2019), or use pre- training objectives that only reconstruct parts of text (Song et al., 2019), or only focus on English corpora (Lewis et al., 2019; Raffel et al., 2019). In this paper, we show that significant performance gains are possible by pre-training a complete au- toregressive model with an objective that noises and reconstructs full texts across many languages. In this work, we present mBART – a multilin- gual sequence-to-sequence (Seq2Seq) denoising auto-encoder. mBART is trained by applying the BART (Lewis et al., 2019) to large-scale mono- lingual corpora across many languages. The input texts are noised by masking phrases and permut- ing sentences, and a single Transformer (Vaswani et al., 2017) model is learned to recover the texts. Different from other pre-training approaches for MT (Lample and Conneau, 2019; Song et al., 2019), mBART pre-trains a complete autoregres- sive Seq2Seq model. mBART is trained once for all languages, providing a set of parameters that can be fine-tuned for any of the language pairs in both supervised and unsupervised settings, with- out any task-specific or language-specific modifi- cations or initialization schemes. this simple approach works remarkably well. We first focus on existing MT benchmarks. For supervised sentence-level MT, mBART initialization leads to significant gains (up to 12 BLEU points) across low/medium-resource pairs (<10M bi-text pairs), without sacrificing performance in high-resource settings. 
These results further improve with back-translation (BT), setting a new state-of-the-art on WMT16 English-Romanian and the FloRes test sets. For document-level MT, our document-level pre-training improves results by up to 5.5 BLEU points. For the unsupervised case, we see consistent gains and produce the first non-degenerate results for less related language pairs (e.g., 9.5 BLEU gain on Nepali-English). Previous pre-training schemes have only considered subsets of these tasks, but we compare performance where possible and demonstrate that mBART consistently performs the best.

* Equal contribution.

We also show that mBART enables new types of transfer across language pairs. For example, fine-tuning on bi-text in one language pair (e.g., Korean-English) creates a model that can translate from all other languages in the monolingual pre-training set (e.g., Italian-English), with no further training. We also show that languages not in pre-training corpora can benefit from mBART, strongly suggesting that the initialization is at least partially language universal. Finally, we present a detailed analysis of which factors contribute the most to effective pre-training, including the number of languages and their overall similarity.

# 2 Multilingual Denoising Pre-training

We use a large-scale common crawl (CC) corpus (§2.1) to pre-train BART models (§2.2). Our experiments in the later sections involve finetuning a range of models pre-trained on different subsets of the CC languages (§2.3).

# 2.1 Data: CC25 corpus

Datasets We pre-train on a subset of 25 languages – CC25 – extracted from the Common Crawl (CC) (Wenzek et al., 2019; Conneau et al., 2019)¹. CC25 includes languages from different families and with varied amounts of text (Table 1). Following Lample and Conneau (2019), we re-balanced the corpus by up/down-sampling text from each language i with a ratio λi:

λi = (1/pi) · pi^α / Σj pj^α ,   (1)

where pi is the percentage of each language in CC-25. We use the smoothing parameter α = 0.7.

Pre-processing We tokenize with a sentence-piece model (SPM, Kudo and Richardson, 2018) learned on the full CC data that includes 250,000 subword tokens. While not all of these languages are used for pre-training, this tokenization supports fine-tuning on additional languages. We do not apply additional preprocessing, such as true-casing or normalizing punctuation/characters.

# 2.2 Model: mBART

Our models follow the BART (Lewis et al., 2019) sequence-to-sequence pre-training scheme, as reviewed in this section. While BART was only pre-trained for English, we systematically study the effects of pre-training on different sets of languages.

¹ https://commoncrawl.org

| Code | Language | Tokens (M) | Size (GB) |
|---|---|---|---|
| En | English | 55608 | 300.8 |
| Ru | Russian | 23408 | 278.0 |
| Vi | Vietnamese | 24757 | 137.3 |
| Ja | Japanese | 530 (*) | 69.3 |
| De | German | 10297 | 66.6 |
| Ro | Romanian | 10354 | 61.4 |
| Fr | French | 9780 | 56.8 |
| Fi | Finnish | 6730 | 54.3 |
| Ko | Korean | 5644 | 54.2 |
| Es | Spanish | 9374 | 53.3 |
| Zh | Chinese (Sim) | 259 (*) | 46.9 |
| It | Italian | 4983 | 30.2 |
| Nl | Dutch | 5025 | 29.3 |
| Ar | Arabic | 2869 | 28.0 |
| Tr | Turkish | 2736 | 20.9 |
| Hi | Hindi | 1715 | 20.2 |
| Cs | Czech | 2498 | 16.3 |
| Lt | Lithuanian | 1835 | 13.7 |
| Lv | Latvian | 1198 | 8.8 |
| Kk | Kazakh | 476 | 6.4 |
| Et | Estonian | 843 | 6.1 |
| Ne | Nepali | 237 | 3.8 |
| Si | Sinhala | 243 | 3.6 |
| Gu | Gujarati | 140 | 1.9 |
| My | Burmese | 56 | 1.6 |

Table 1: Languages and Statistics of the CC25 Corpus. A list of 25 languages ranked by monolingual corpus size. Throughout this paper, we replace the language names with their ISO codes for simplicity.
(*) Chinese and Japanese corpus are not segmented, so the tokens counts here are sentences counts Architecture We use a standard sequence-to- sequence Transformer architecture (Vaswani et al., 2017), with 12 layers of encoder and 12 layers of decoder with model dimension of 1024 on 16 heads (∼ 680M parameters). We include an addi- tional layer-normalization layer on top of both the encoder and decoder, which we found stabilized training at FP16 precision. Learning Our training data covers x languages: D = {Dj,...,Dx} where each D; is a collection of monolingual documents in language i. We (1) assume access to a noising function g, defined be- low, that corrupts text, and (2) train the model to predict the original text X given g(X). More for- mally, we aim to maximize Lo: = Lθ = log P (X|g(X); θ) , Di∈D X∈Di (2) where X is an instance in language i and the dis- tribution P is defined by the Seq2Seq model. Who am | ? </s> Where did | come from ? </s> <En> Where did from? </s> Who _|__</s><En> <En> Who am|I? </s> Where did | come from ? </s> 2h Le B, </s> te BAB. </s> <a> a TRB. </> tM _</s> <Ja> <da> tN Ue & . </s> if WA. </s> WIEBE? </s> <a> =], =) Sent-MT A “Ne Doc-MT <Ja> fh (S88? </s> Who am 1? </s> <En> Well then . </s> See you tomorrow .</s> <En> rn 20. Ue B. </s> Bh WA . </s><Ja> —_<En> Well then . </s> See you tomorrow .</s> Multilingual Denoising Pre-Training (mBART) # Fine-tuning on Machine Translation Figure 1: Framework for our Multilingual Denoising Pre-training (left) and fine-tuning on downstream MT tasks (right), where we use (1) sentence permutation (2) word-span masking as the injected noise. A special language id token is added at both the encoder and decoder. One multilingual pre-trained model is used for all tasks. Noise function Following Lewis et al. (2019), we use two types of noise in g. We first remove spans of text and replace them with a mask to- ken. We mask 35% of the words in each instance by random sampling a span length according to a Poisson distribution (λ = 3.5). We also permute the order of sentences within each instance. The decoder input is the original text with one posi- tion offset. A language id symbol <LID> is used as the initial token to predict the sentence. It is also possible to use other noise types, such as those in Lample et al. (2018c), but we leave the exploration of the optimal noising strategy to future work. # 2.3 Pre-trained Models To better measure the effects of different levels of multilinguality during pre-training, we built a range of models as follows: • mBART25 We pre-train a model on all 25 lan- guages, using the setting described in §2.2. • mBART06 To explore the effect of pre-training on related languages, we pretrain a model on a subset of six European languages: Ro, It, Cs, Fr, Es and En. For a fair comparison, we use ∼ 1/4 of the mBART25 batch size, which allows our model to have the same number of updates per language during pre-training. Instance format For each instance of a batch, we sample a language id symbol <LID>, and we pack as many consecutive sentences as pos- sible sampled from the corresponding corpus of <LID>, until either it hits the document boundary or reaches the 512 max token length. Sentences in the instance are separated by the end of sen- tence (</S>) token. Then, we append the selected <LID> token to represent the end of this instance. Pre-training at “multi-sentence” level enables us to work on both sentence and document translation. 
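For concreteness, here is a rough Python sketch of the two noising operations described above (span masking with Poisson(λ = 3.5) span lengths targeting roughly 35% of the words, plus sentence permutation), together with the </s>/<LID> instance formatting. This is an illustrative simplification, not the authors' implementation: it operates on whitespace tokens rather than sentencepiece subwords, and all function names are made up.

```python
# Rough sketch of the mBART-style noising described above: span masking with
# Poisson(3.5) span lengths covering ~35% of the words, sentence permutation,
# then </s> separators and a trailing language id. Illustrative only.
import math
import random

MASK_TOKEN, SPAN_LAMBDA, MASK_RATIO = "<mask>", 3.5, 0.35

def sample_span_length(lam=SPAN_LAMBDA):
    # Knuth's Poisson sampler, used here only to avoid external dependencies.
    threshold, k, p = math.exp(-lam), 0, 1.0
    while p > threshold:
        k += 1
        p *= random.random()
    return k - 1

def mask_spans(tokens):
    """Replace roughly MASK_RATIO of the tokens, one <mask> per removed span."""
    out, budget, i = [], int(len(tokens) * MASK_RATIO), 0
    while i < len(tokens):
        if budget > 0 and random.random() < MASK_RATIO:
            span = max(1, min(sample_span_length(), budget))
            out.append(MASK_TOKEN)
            i += span
            budget -= span
        else:
            out.append(tokens[i])
            i += 1
    return out

def noise_instance(sentences, lang_id="<En>"):
    """Permute the sentences of one instance, mask spans, and add </s> / <LID>."""
    shuffled = sentences[:]
    random.shuffle(shuffled)
    noised = [" ".join(mask_spans(s.split())) for s in shuffled]
    return " </s> ".join(noised) + " </s> " + lang_id

print(noise_instance(["Who am I ?", "Where did I come from ?"]))
```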
• mBART02 We pre-train bilingual models, us- ing English and one other language for four language pairs: En-De, En-Ro, En-It. We use a batch size of ∼ 1/12 of that in the mBART25. • BART-En/Ro To help establish baseline per- formance levels, we also train monolingual BART models on the same En and Ro corpus only. Optimization Our full model (including 25 lan- guages) is trained on 256 Nvidia V100 GPUs (32GB) for 500K steps. The total batch size is around 128K tokens per GPU, matching BART (Lewis et al., 2019) configuration. We use the Adam optimizer (« = le—6, 82 = 0.98) and linear learning rate decay scheduling. The total training time was approximately 2.5 weeks. We started the training with dropout 0.1 and reduced it to 0.05 at 250K steps and 0 at 400K steps. All ex- periments are done with Fairseq (Ott et al., 2019). • Random As additional baselines, we will also include a comparison with a model randomly initialized without pre-training for each trans- lation task. Since the sizes of different down- stream datasets vary, we always grid-search the hyper-parameters (architecture, dropout, etc.) to find the best non-pretrained configuration. All models use the same vocabulary (§2.1). Not all tokens will frequently occur in all pre-training corpora, but later experiments show that this large vocabulary can improve generalization in multilin- gual settings even for unseen languages. En-Gu Data Source WMT19 Languages En-Vi IWSLT15 133K Direction ← → ← → ← → ← → ← → ← → En-Kk WMT19 91K En-Tr WMT17 207K En-Ja IWSLT17 223K En-Ko IWSLT17 230K Size 10K Random 0.0 0.3 mBART25 0.0 0.1 0.8 7.4 0.2 2.5 23.6 36.1 24.8 35.4 12.2 22.5 9.5 17.8 10.4 19.1 12.3 19.4 15.3 24.6 16.3 22.6 Languages Data Source Size En-It IWSLT17 250K Direction ← → ← → ← → ← → ← → ← → En-Nl IWSLT17 237K En-Ar IWSLT17 250K En-My WAT19 259K En-Ne FLoRes 564K En-Ro WMT16 608K Random 34.6 43.3 mBART25 29.3 34.8 27.5 37.6 16.9 21.6 31.7 39.8 28.0 34.0 23.3 28.3 34.9 36.9 7.6 14.5 4.3 7.4 34.0 37.8 34.3 37.7 Languages Data Source Size En-Et WMT18 1.94M Direction ← → ← → ← → ← → ← → ← → En-Si FLoRes 647K En-Hi ITTB 1.56M En-Lt WMT19 2.11M En-Fi WMT17 2.66M En-Lv WMT17 4.50M Random 7.2 13.7 mBART25 1.2 3.3 10.9 23.5 14.2 20.8 22.6 27.8 17.9 21.4 18.1 22.4 12.1 15.3 21.8 28.5 20.2 22.4 15.6 19.3 12.9 15.9 Table 2: Low/Medium Resource Machine Translation Pre-training consistently improves over a randomly ini- tialized baseline, with particularly large gains on low resource language pairs (e.g. Vi-En). Languages Size Cs Fr 11M 15M 25M 28M 29M 41M Es Zh De Ru Random 16.5 18.0 mBART25 33.2 34.0 35.0 33.3 30.9 30.5 31.5 31.3 41.4 41.0 and En-My from WAT19 (Ding et al., 2018, 2019). We divide the datasets into three categories – low resource (<1M sentence pairs), medium resource (>1M and <10M), and high resource (>10M). Table 3: High Resource Machine Translation where all the datasets are from their latest WMT competitions. We only evaluate our models on En-X translation. # 3 Sentence-level Machine Translation This section shows that mBART pre-training pro- vides consistent performance gains in low to medium resource sentence-level MT settings, in- cluding bi-text only and with back translation, and outperforms other existing pre-training schemes (§3.2). We also present a detailed analysis to un- derstand better which factors contribute the most to these gains (§3.3), and show that pre-training can even improve performance for languages not present in the pre-training data at all (§3.4). 
Fine-tuning & Decoding We fine-tune our mul- tilingual pre-trained models on a single pair of bi- text data, feeding the source language into the en- coder and decoding the target language. As shown in Figure 1, we load the pre-trained weights and train the MT model on bi-texts with teacher forc- ing. For all directions, we train with 0.3 dropout, 0.2 label smoothing, 2500 warm-up steps, 3e−5 maximum learning rate. We use a maximum of 40K training updates for all low and medium re- source pairs and 100K for high resource pairs. The final models are selected based on validation like- lihood. For decoding, we use beam-search with beam size 5 for all directions. The final results are reported in BLEU (Papineni et al., 2002) with language-specific settings, see appendix A. # 3.1 Experimental Settings # 3.2 Main Results Datasets We gather 24 pairs of publicly avail- able parallel corpora that cover all the languages in CC25 (Table 1). Most pairs are from previous WMT (Gu, Kk, Tr, Ro, Et, Lt, Fi, Lv, Cs, Es, Zh, De, Ru, Fr ↔ En) and IWSLT (Vi, Ja, Ko, Nl, Ar, It ↔ En) competitions. We also use FLo- Res pairs (Guzmán et al., 2019, En-Ne and En- Si), En-Hi from IITB (Kunchukuttan et al., 2017), As shown in Table 2, initializing with the pre- trained mBART25 weights shows gains on all the low and medium resource pairs when compared with randomly initialized baselines. We observe gains of 12+ BLEU on low resource pairs such as En-Vi, En-Tr, and noisily aligned pairs like En-Hi. Fine-tuning fails in extremely low-resource setting such as En-Gu, which only have roughly 10k ex- Ne-En Si-En 10 96 10 20 86 8 Ds | F Le . 6 —= a 124-7 5 Dae 4 _ - -@- Random 12 —@- Random | 10 Random 43 —t— mBART25 76" 4 mBART25 o 4 mBART25 Ce —4- mBART25 4 ‘ 0 0 1 2 0 1 2 0 1 2 0 1 2 +BT iterations +BT iterations +BT iterations +BT iterations 2 # 5 a a Figure 2: Pre-training + Back Translation on FLoRes with two iterations of BT. Pre-training Model Data Random None XLM (2019) MASS (2019) BART (2019) XLM-R (2019) CC100 En Ro En Ro En BART-En BART-Ro mBART02 mBART25 En Ro En Ro CC25 Fine-tuning En→Ro Ro→En 34.3 34.0 - - - 35.6 35.6 - - 35.8 36.0 37.6 38.5 37.7 35.8 36.8 38.5 37.8 +BT 36.8 38.5 39.1 38.0 - 37.4 38.1 39.9 38.8 outperforms all the other pre-trained models, both with and without BT augmentation. We also show comparisons with the conventional BART model trained on the same En and Ro data only. Both have improvements over baselines, while worse than mBART results, indicating pre-training in a multilingual setting is essential. Moreover, com- bining BT leads to additional gains, resulting in a new state-of-the-art for Ro-En translation. # 3.3 Analysis Table 4: Comparison with Other Pre-training Ap- proaches on WMT16 Ro-En. We also present additional analysis, to better quan- tify when our pre-training helps. amples for tuning. In these settings, unsupervised translation is more appropriate, see §5.2. For high resource cases (Table 3), we do not observe consistent gains, and pre-training slightly hurts performance when >25M parallel sentence are available. When a significant amount of bi-text data is given, we suspect that supervised training washes out the pre-trained weights completely. How many languages should you pre-train on? We investigate when it is helpful for pre-training to include languages other than the targeted lan- guage pair that will be used during fine tuning. Ta- ble 5 shows performance on four X-En pairs. 
Pre- training on more languages helps most when the target language monolingual data is limited (e.g. En-My, the size of My is around 0.5% of En). + Back Translation Back-translation (BT, Sen- nrich et al., 2016b) is a standard approach to aug- ment bi-text with target side monolingual data. We combine our pre-training with BT and test it on low resource language pairs – En-Si and En-Ne – using the FLoRes dataset (Guzmán et al., 2019). For a fair comparison, we use the same mono- lingual data as (Guzmán et al., 2019) to gener- ate BT data. Figure 2 shows that initializing the model with our mBART25 pre-trained parameters improves BLEU scores at each iteration of back translation, resulting in new state-of-the-art results in all four translation directions. v.s. Other Pre-training Approaches We also compare our pre-trained models with recent self- supervised pre-training methods, as shown in Ta- ble 4. We consider En-Ro translation, the only pair with established results. Our mBART model In contrast, when monolingual data is plenti- ful (De, Ro), pre-training on multiple languages slightly hurts the final results (<1 BLEU). In these cases, additional languages may reduce the ca- pacity available for each test language. Addition- ally, the fact that mBART06 performs similar to mBART02 on Ro-En suggests that pre-training with similar languages is particularly helpful. How many pre-training steps are needed? We plot Ro-En BLEU score v.s. Pre-training steps in Figure 3, where we take the saved checkpoints (ev- ery 25K steps) and apply the same fine-tuning pro- cess described in §3.1. Without any pre-training, our model overfits and performs much worse than the baseline. However, after just 25K steps (5% of training), both models outperform the best base- line. The models keep improving by over 3 BLEU for the rest of steps and have not fully con- verged after 500K steps. mBART25 is consistently Languages De Ro It My En Size/GB 66.6 61.4 30.2 1.6 300.8 mBART02 mBART06 mBART25 31.3 - 30.5 38.5 38.5 37.7 39.7 39.3 39.8 36.5 - 36.9 Table 5: Pretraining Languages on En-X translation. The size refers to the size of monolingual data for X. The size of En is shown as reference. All the pretrained models were controlled to see the same number of En- glish instances during training. Models En-My ← → Training Cost GPU hours Random (2019) + BT mBART02 + BT 23.3 32.0 29.1 34.9 34.9 37.7 37.8 39.2 5 5 + 300 + 350 300∼3000 + 40 - Table 6: Comparison with Back-Translation on My-En translation using same mono-lingual data. We also esti- mate the computational costs for both pre-training and back-translation based on Nvidia V100 GPUs. slightly worse than mBART02. How does the size of bitexts inference the gain from pre-training? Tables 2 and 3 show that pre-training consistently improves for low and medium resource language pairs. To verify this trend, we plot performance for differing sized sub- sets of the En-De dataset. More precisely, we take the full En-De corpus (28M pairs) and randomly sample 10K, 50K, 100K, 500K, 1M, 5M, 10M datasets. We compare performance without pre- training to the mBART02 results, as shown in Fig- ure 4. The pre-trained model is able to achieve over 20 BLEU with only 10K training examples, while the baseline system scores 0. Unsurpris- ingly, increasing the size of bi-text corpus im- proves both models. Our pre-trained model con- sistently outperforms the baseline models, but the gap reduces with increasing amounts of bi-text, es- pecially after 10M sentence pairs. 
This result con- firms our observation in §3.2 that our pre-training does not help translation in high-resource pairs. Is pre-training complementary to BT? Fig- ure 2 presents that our pre-trained models can be combined with iterative back-translation (BT) on additional data, however, it is still not a fair comparison. Table 6 shows the results when using w S —e— mBART25 —#- mBARTO2 coe Random Finetuning BLEU w o w a w & 0 100 200 300 400 500 pretraining steps (K) Figure 3: Fine-tuning curves for Ro-En along with Pre-training steps. Both mBART25 and mBART02 outperform the best baseline system after 25K steps. 31.3309 25 Finetuning BLEU —e— Random —#- mBARTO2 10* 10° 10° 107 Bi-text Size (# of sentence pairs) Figure 4: Fine-tuning curves for En-De along with size of bitext. The x-axis is on a log scale. same monolingual data where we use 79M En and 29M My sentences following Chen et al. (2019). With the same amount of monolingual corpus, mBART pre-training achieves the same perfor- mance on En→My as BT, while still 3 BLEU worse on My→En. We suspect BT benefits from bigger monolingual data (En). Moreover, combin- ing mBART02 model with BT, we see further gains even with same monolingual data. Besides, we also provide estimated training costs where BT has a longer pipeline involving training a baseline system (5h), translating monolingual data (300h) and formal training (350h). Instead, most of train- ing costs of mBART lies in the pre-training part and can be easily adjusted to be more efficient. # 3.4 Generalization to Languages NOT in Pre-training In this section, we show that mBART can im- prove performance even with fine tuning for lan- guages that did not appear in the pre-training cor- pora, suggesting that the pre-training has language universal aspects, especially within the parameters learned at the Transformer layers. Monolingual Nl-En En-Nl Ar-En En-Ar Nl-De De-Nl Random None 34.6 (-8.7) 29.3 (-5.5) 27.5 (-10.1) 16.9 (-4.7) 21.3 (-6.4) 20.9 (-5.2) mBART02 En Ro mBART06 En Ro Cs It Fr Es mBART25 All 41.4 (-2.9) 43.1 (-0.2) 43.3 34.5 (-0.3) 34.6 (-0.2) 34.8 34.9 (-2.7) 37.3 (-0.3) 37.6 21.2 (-0.4) 21.1 (-0.5) 21.6 26.1 (-1.6) 26.4 (-1.3) 27.7 25.4 (-0.7) 25.3 (-0.8) 26.1 Table 7: Generalization to Unseen Languages Language transfer results, fine-tuning on language-pairs without pre-training on them. mBART25 uses all languages during pre-training, while other settings contain at least one unseen language pair. For each model, we also show the gap to mBART25 results. Experimental Settings We analyze the results of three pairs: Nl-En, Ar-En and De-Nl using the pre-trained mBART25, mBART06 and mBART02 (EnRo) models. During pre-training, mBART06 and EnRo Bilingual do not contain Arabic (Ar), German (De) or Dutch (Nl) data, but all languages are in mBART25. Both De and Nl are European languages and are related to En, Ro and other the languages in mBART06 pre-training data. Datasets # Docs # Insts # Sents WMT19 En-De TED15 Zh-En 77K 1.7K 171K 6.5K 3.7M 0.2M Table 8: Statistics for the Document-level Corpus of WMT19 En-De and TED15 Zh-En. # of instances is the # of training examples in document model. # 4 Document-level Machine Translation Results mBART25 uses all languages during pre-training, but other settings contain at least one unseen language. We find large gains from pre- training on English-Romanian, even when trans- lating a distantly related unseen language (Arabic) and two unseen languages (German and Dutch). 
The best results are achieved when pre-training in- cludes both test languages, however pre-training on other languages is surprisingly competitive. We evaluate mBART on document-level machine translation tasks, where the goal is to translate seg- ments of text that contain more than one sentence (up to an entire document). During pre-training, we use document fragments of up to 512 tokens, allowing the models to learn dependencies be- tween sentences. We show that this pre-training significantly improves document-level translation. # 4.1 Experimental Settings Unseen Vocabularies Arabic is distantly related to the languages in mBART02 and mBART06, and its use of a disjoint character set means that it word embeddings will be largely untrained. However, we obtain similar improvements on Ar-En pairs to those on Nl-En. This result suggests that the pre- trained Transformer layers learn universal prop- erties of language that generalize well even with minimal lexical overlap. Datasets We evaluate performance on two com- mon document-level MT datasets: WMT19 En-De and TED15 Zh-En (statistics in Table 8). For En- De, we use the document data from WMT19 to train our model, without any additional sentence- level data; Zh-En dataset is from the IWSLT 2014 and 2015 evaluation campaigns (Cettolo et al., 2012, 2015). Following Miculicich et al. (2018), we use 2010-2013 TED as the test set. Unseen Source or Target Languages Table 7 shows different performance when the unseen lan- guages are on the source side, target side, or both sides. If both sides are unseen, the performance (in terms of difference from mBART25) is worse than where at least one language is seen dur- ing pre-training. Furthermore, although the En-X pairs perform similarly, mBART06 outperforms mBART02 by a margin on X-En pairs. Fine-tuning unseen languages on source side is more difficult, deserving more extensive future study. Pre-processing We use the same pre-processing as that in pre-training. For each block, sentences are separated by end of sentence symbols (</S>) and the entire instance is ended with the specific language id (<LID>). The numbers of segmented instances are also shown in Table 8 where on av- erage, every document is split into 2-4 instances. Fine-tuning & Decoding We use the same fine- tuning scheme as for sentence-level translation (§3.1), without using any task-specific techniques developed by previous work (Miculicich et al., Model Random mBART25 s-BLEU d-BLEU s-BLEU d-BLEU Model d-BLEU d-BLEU 34.5 × 35.9 7.7 36.4 37.1 38.0 38.5 Sent-MT Doc-MT 22.0 3.2 28.4 29.6 - 24.0 Table 9: Document-Level Machine Translation on En-De and Zh-En. (×) The randomly initialized Doc-MT model cannot produce translations aligned to the original sentences, so only document evaluation is possible. 2018; Li et al., 2019), such as constrained con- texts or restricted attention. For decoding, we sim- ply pack the source sentences into blocks, and translate each instance block autoregressively. The model does not know how many sentences to gen- erate in advance and decoding stops when <LID> is predicted. We use beam size 5 by default. Baselines & Evaluation We train 4 models: a document-level (Doc-) MT model (§4.1) and a corresponded sentence-level (Sent-) MT model (§3.1) as the baseline, both with and without pre- training. We use mBART25 as the common pre- trained model for En-De and Zh-En. 
Baselines & Evaluation We train four models: a document-level (Doc-) MT model (§4.1) and a corresponding sentence-level (Sent-) MT model (§3.1) as the baseline, both with and without pre-training. We use mBART25 as the common pre-trained model for En-De and Zh-En. For En-De, even though our mBART25 Doc-MT model decodes multiple sentences together, the translated sentences can be aligned to the source sentences, which allows us to evaluate BLEU scores both at the sentence level (s-BLEU) and at the document level (d-BLEU)2. For Zh-En, however, we cannot produce the same number of translated sentences as the reference due to alignment errors in the test data, so we only provide d-BLEU scores for this direction. We also compare our models with Hierarchical Attention Networks (HAN, Miculicich et al., 2018) on Zh-En, which is the state-of-the-art non-pretraining approach for document-level translation for this pair. It combines two layers of attention – first within and then across sentences.

2Standard BLEU scores match n-grams at the sentence level. We also consider a document-level variant where we match n-grams over the whole document, resulting in a slightly higher score.

# 4.2 Main Results

The main results for both En-De and Zh-En are presented in Table 9.

En-De    | Random s-BLEU | Random d-BLEU | mBART25 s-BLEU | mBART25 d-BLEU
Sent-MT  | 34.5          | 35.9          | 36.4           | 38.0
Doc-MT   | ×             | 7.7           | 37.1           | 38.5

Zh-En (d-BLEU) | Random | mBART25 | HAN
Sent-MT        | 22.0   | 28.4    | -
Doc-MT         | 3.2    | 29.6    | 24.0

Table 9: Document-level machine translation on En-De and Zh-En. (×) The randomly initialized Doc-MT model cannot produce translations aligned to the original sentences, so only document-level evaluation is possible.

Random v.s. Pre-trained The MT models initialized with pre-trained weights outperform randomly initialized models by large margins, for both sentence-level and document-level training. Our mBART25 models (both Sent-MT and Doc-MT) also outperform HAN (Miculicich et al., 2018)3, despite the fact that they are not customized for document-level MT in any way.

3d-BLEU is recomputed from the provided system output.

Sent-MT v.s. Doc-MT For both datasets (En-De, Zh-En), the mBART25 Doc-MT models outperform their sentence-level counterparts by a margin, which is the complete opposite for models without pre-training. For both datasets, the randomly initialized Doc-MT models fail to work, resulting in much worse results than the sentence-level models. Such large performance gaps indicate that pre-training is critical for document-level performance. It is in general difficult to collect high-quality document-level data in large quantities, suggesting that pre-training may be a strong strategy for future work. We also include a sampled example in Appendix B.
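The following sketch illustrates the difference between the two metrics used above: s-BLEU matches n-grams within aligned sentences, while d-BLEU first concatenates the sentences of each document so that n-grams can also match across sentence boundaries (footnote 2). It relies on the sacreBLEU Python API (Post, 2018); the toy data is made up for illustration.

```python
import sacrebleu  # pip install sacrebleu

def s_bleu(hyp_sents, ref_sents):
    """Sentence-aligned corpus BLEU: n-grams are matched within each sentence."""
    return sacrebleu.corpus_bleu(hyp_sents, [ref_sents]).score

def d_bleu(hyp_docs, ref_docs):
    """Document-level BLEU: the sentences of each document are concatenated first,
    so n-grams may also match across sentence boundaries (footnote 2)."""
    hyps = [" ".join(doc) for doc in hyp_docs]
    refs = [" ".join(doc) for doc in ref_docs]
    return sacrebleu.corpus_bleu(hyps, [refs]).score

# Toy example: one document with two sentences.
hyp_docs = [["the cat sat on the mat .", "it was happy ."]]
ref_docs = [["the cat sat on the mat .", "it was very happy ."]]
print(s_bleu([s for d in hyp_docs for s in d], [s for d in ref_docs for s in d]))
print(d_bleu(hyp_docs, ref_docs))
```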
# 5 Unsupervised Machine Translation

In addition to supervised machine translation, we also evaluate our model on tasks where no bi-text is available for the target language pair. We define three types of unsupervised translation:

1. No bi-text of any kind is given. A common solution is to learn from back-translation (BT) (Artetxe et al., 2017; Lample et al., 2018c). We show that mBART provides a simple and effective initialization scheme for these methods.

2. No bi-text for the target pair is available, but the target languages both appear in bi-text corpora for other language pairs. Previous work has shown that zero-shot transfer is possible via massively multilingual MT (Johnson et al., 2017; Gu et al., 2019) or distillation through pivoting (Chen et al., 2017). We limit our focus to building MT models for single language pairs, and leave multilingual pre-training for multilingual MT to future work.

3. No bi-text for the target pair is available, but there is bi-text for translating from some other language into the target language. This is a new evaluation regime, where we will show that mBART supports effective transfer, even if the source language has no bi-text of any form.

In this section, we demonstrate the effectiveness of multilingual pre-training in unsupervised machine translation via (1) back-translation (§5.1) and (3) language transfer (§5.2). An illustration of both approaches is presented in Figure 5.

Figure 5: Illustrated frameworks for unsupervised machine translation via (a) back-translation and (b) language transfer, where Ne-En is used as an example. For both cases, we initialize from multilingual pre-training (e.g., mBART25).

# 5.1 Unsupervised Machine Translation via Back-Translation

Datasets We evaluate our pre-trained models on both similar (En-De, En-Ro) and dissimilar pairs (En-Ne, En-Si), which are determined by measuring the subword units that are shared between the source and target languages. We use the same test sets as the supervised benchmarks (§3.1), and directly use the pre-training data (CC25) for back-translation to avoid introducing new information.

Learning Following the same procedure described in Lample et al. (2018c) and Lample and Conneau (2019), we first initialize the translation model with the pre-trained weights, and then learn to predict the monolingual sentences conditioned on source sentences generated by on-the-fly back-translation (BT). Lample and Conneau (2019) only pre-train an encoder, so they perform additional de-noising training to learn a seq2seq model – a step which is unnecessary for mBART's pre-trained seq2seq model. However, we do constrain mBART to generating only tokens in the target language4 for the first 1000 steps of on-the-fly BT, to avoid it simply copying the source text.

4We mask out the output probability of predicting tokens which appear less than 1% in the target monolingual corpus.

Results Table 10 shows the unsupervised translation results compared with non-pretrained models, as well as models with existing pre-training methods. Our models achieve large gains over non-pretrained models for all directions, and outperform XLM significantly for dissimilar pairs (En-Ne, En-Si), where the existing approaches completely fail. For similar pairs, our model also performs well against XLM and MASS, with the best numbers for En-X pairs.

Model       | En-De ← | En-De → | En-Ro ← | En-Ro → | En-Ne ← | En-Ne → | En-Si ← | En-Si →
Random      | 21.0    | 17.2    | 19.4    | 21.2    | 0.0     | 0.0     | 0.0     | 0.0
XLM (2019)  | 34.3    | 26.4    | 31.8    | 33.3    | 0.5     | 0.1     | 0.1     | 0.1
MASS (2019) | 35.2    | 28.3    | 33.1    | 35.2    | -       | -       | -       | -
mBART       | 34.0    | 29.8    | 30.5    | 35.0    | 10.0    | 4.4     | 8.2     | 3.9

Table 10: Unsupervised MT via Back-Translation. En-De and En-Ro are initialized by mBART02, while En-Ne and En-Si are initialized by mBART25. Our models are trained on the monolingual data used in pre-training.
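The target-language constraint described in the Learning paragraph (footnote 4) can be realized as a mask over the decoder's output logits during the first 1000 on-the-fly BT steps. The sketch below, in plain PyTorch, is one possible reading of that constraint: the interpretation of "appear less than 1%" as sentence frequency, the function names and the toy values are assumptions, not the authors' fairseq implementation.

```python
import torch
from collections import Counter
from typing import Iterable, List

def build_allowed_token_mask(
    corpus_token_ids: Iterable[List[int]],  # tokenized target-language monolingual sentences
    vocab_size: int,
    min_sentence_fraction: float = 0.01,    # "appear less than 1%" read as sentence frequency
) -> torch.Tensor:
    """Boolean mask: True for tokens frequent enough in the target monolingual corpus."""
    sent_freq, n_sents = Counter(), 0
    for ids in corpus_token_ids:
        n_sents += 1
        sent_freq.update(set(ids))
    mask = torch.zeros(vocab_size, dtype=torch.bool)
    for tok, freq in sent_freq.items():
        if freq / n_sents >= min_sentence_fraction:
            mask[tok] = True
    return mask

def constrain_logits(logits: torch.Tensor, allowed: torch.Tensor) -> torch.Tensor:
    """Mask out disallowed tokens before beam search / sampling, so that the model
    cannot simply copy source-language text in the early BT steps."""
    return logits.masked_fill(~allowed, float("-inf"))

# Toy usage: vocabulary of 10 tokens, two "monolingual" sentences.
mask = build_allowed_token_mask([[1, 2, 3], [2, 3, 4]], vocab_size=10, min_sentence_fraction=0.5)
print(constrain_logits(torch.zeros(10), mask))
```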
# 5.2 Unsupervised Machine Translation via Language Transfer

The second case of unsupervised machine translation assumes the target language appears in a bi-text corpus with some other source language.

Datasets We only consider X→En translation, and choose the bitexts of 12 language pairs from §3.1, covering Indic languages (Ne, Hi, Si, Gu), European languages (Ro, It, Cs, Nl), East Asian languages (Zh, Ja, Ko) and Arabic (Ar).

Results As illustrated in Figure 5 (b), we take the pre-trained mBART25 model, fine-tune it on each language pair, and then directly apply the resulting models to the remaining pairs, as shown in Table 11. We also present the direct fine-tuning performance (§3) on the diagonal for reference. We can always obtain reasonable transfer scores for all pairs across the different fine-tuned models, except from Gu-En, where the supervised model completely fails (0.3 BLEU). In some cases, we can achieve similar (Cs-En) or even much better (Ne-En, Gu-En) results compared to the supervised results. As a comparison, we also apply the same procedure to randomly initialized models without pre-training, which always end up with ≈ 0 BLEU. This indicates that multilingual pre-training is essential and produces universal representations across languages, so that once the model learns to translate one language to En, it learns to translate all languages with similar representations. We also present three examples of language transfer between Zh, Ja and Ko in Appendix B.

Testing language (rows) vs. fine-tuning language and its domain (columns):

Test | Zh (News) | Ja (TED) | Ko (TED) | Cs (News) | Ro (News) | Nl (TED) | It (TED) | Ar (TED) | Hi (News) | Ne (Wiki) | Si (Wiki) | Gu (Wiki)
Zh   | 23.7 | 8.8  | 9.2  | 2.8  | 7.8  | 7.0  | 6.8  | 6.2  | 7.2  | 4.2  | 5.9  | 0.0
Ja   | 9.9  | 19.1 | 12.2 | 0.9  | 4.8  | 6.4  | 5.1  | 5.6  | 4.7  | 4.2  | 6.5  | 0.0
Ko   | 5.8  | 16.9 | 24.6 | 5.7  | 8.5  | 9.5  | 9.1  | 8.7  | 9.6  | 8.8  | 11.1 | 0.0
Cs   | 9.3  | 15.1 | 17.2 | 21.6 | 19.5 | 17.0 | 16.7 | 16.9 | 13.2 | 15.1 | 16.4 | 0.0
Ro   | 16.2 | 18.7 | 17.9 | 23.0 | 37.8 | 22.3 | 21.6 | 22.6 | 16.4 | 18.5 | 22.1 | 0.0
Nl   | 14.4 | 30.4 | 32.3 | 21.2 | 27.0 | 43.3 | 34.1 | 31.0 | 24.6 | 23.3 | 27.3 | 0.0
It   | 16.9 | 25.8 | 27.8 | 17.1 | 23.4 | 30.2 | 39.8 | 30.6 | 20.1 | 18.5 | 23.2 | 0.0
Ar   | 5.8  | 15.5 | 12.8 | 12.7 | 12.0 | 14.7 | 14.7 | 37.6 | 11.6 | 13.0 | 16.7 | 0.0
Hi   | 3.2  | 10.1 | 9.9  | 5.8  | 6.7  | 6.1  | 5.0  | 7.6  | 23.5 | 14.5 | 13.0 | 0.0
Ne   | 2.1  | 6.7  | 6.5  | 5.0  | 4.3  | 3.0  | 2.2  | 5.2  | 17.9 | 14.5 | 10.8 | 0.0
Si   | 5.0  | 5.7  | 3.8  | 3.8  | 1.3  | 0.9  | 0.5  | 3.5  | 8.1  | 8.9  | 13.7 | 0.0
Gu   | 8.2  | 8.5  | 4.7  | 5.4  | 3.5  | 2.1  | 0.0  | 6.2  | 13.8 | 13.5 | 12.8 | 0.3

Table 11: Unsupervised MT via Language Transfer on X-En translations. The model fine-tuned on one language pair is directly tested on another; the diagonal shows the direct fine-tuning results. In the original table, gray marks the direct fine-tuning results, light gray marks language transfer within similar language groups, and the highest transfer score for each pair is bolded.

Pair  | BT   | Transfer (from) | Combined
Ro→En | 30.5 | 23.0 (Cs-En)    | 33.9
Ne→En | 10.0 | 18.9 (Hi-En)    | 22.1
Zh→En | 11.3 | 9.2 (Ko-En)     | 15.0
Nl→En | 28.5 | 34.1 (It-En)    | 35.4

Table 12: Back-Translation v.s. Language Transfer for unsupervised MT. We present the best transfer scores together with the pairs they are transferred from.

When is language transfer useful? Table 11 also shows mixed results for each pair. First, for most pairs, language transfer works better when fine-tuning is also conducted in the same language family, especially between Indic languages (Hi, Ne, Gu). However, significant vocabulary sharing is not required for effective transfer. For instance, Zh-En and It-En achieve the best transfer learning results on Ko-En and Ar-En, respectively. However, the vocabulary overlap (even character overlap) between Zh and Ko, and between It and Ar, is low.

w/ Back-Translation We also present a comparison of unsupervised MT with back-translation (BT) v.s. language transfer on 4 pairs in Table 12. The results are also mixed. If high-quality bi-text from similar languages exists, or when translating between dissimilar pairs, language transfer is able to beat the conventional methods based on BT. Furthermore, we also show promising results for combining these two techniques. In such cases, we start from the best transferred model and apply (iterative) BT on the same monolingual corpus used in pre-training. Table 12 presents the results with 1 iteration of BT. For all pairs, we see improvements from combining both techniques.
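The combination described in the "w/ Back-Translation" paragraph can be summarized as the following high-level sketch. All helper functions and model objects are hypothetical placeholders rather than a real API, a single multilingual seq2seq model is assumed to handle both translation directions, and the sketch compresses iterative BT into a single illustrative round.

```python
# Hypothetical sketch of "transfer, then back-translate" (not the released code).

def transfer_then_bt(mbart25, bitext_x_en, mono_en, mono_y, finetune, translate):
    # 1. Language transfer: fine-tune mBART25 on a related pair X-En and use the
    #    resulting model directly on the unseen pair Y-En (as in Table 11).
    model = finetune(mbart25, bitext_x_en)

    # 2. One BT round on the same monolingual data used in pre-training:
    #    translate En monolingual text into Y to build synthetic pairs for Y->En,
    #    and Y monolingual text into En to build synthetic pairs for En->Y.
    synth_y_en = [(translate(model, e, tgt="Y"), e) for e in mono_en]
    synth_en_y = [(translate(model, y, tgt="En"), y) for y in mono_y]

    # 3. Fine-tune on the synthetic bi-text; repeating steps 2-3 yields iterative BT.
    return finetune(model, synth_y_en + synth_en_y)
```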
# 6 Related Work Pre-training for Text Generation This work inherits from the recent success brought by self- supervised pre-training for NLP applications (Pe- ters et al., 2018; Radford et al., 2018; Devlin et al., 2019; Yang et al., 2019; Liu et al., 2019), espe- cially for text generation tasks (Radford et al., 2019; Song et al., 2019; Dong et al., 2019; Raf- fel et al., 2019; Lewis et al., 2019) where dif- ferent self-supervised objectives are designed for training big neural models on enormous unlabeled text corpora The pre-trained models are usually used as the initialization for fine-tuning variant downstream tasks such as controllable language modeling (Shirish Keskar et al., 2019), machine translation (Song et al., 2019), summarization (Liu and Lapata, 2019) and dialogue generation (Zhang et al., 2019). In contrast to most prior work, we focus on a deep exploration of applying denoising pre-training for various translation applications. Multilinguality in NLP tasks This work is also related to the continual trend of multilingual lan- guage learning, including aligning multilingual word embeddings (Mikolov et al., 2013; Chen and Cardie, 2018; Lample et al., 2018b) into universal space, and learning cross-lingual models (Wada and Iwata, 2018; Lample and Conneau, 2019; Conneau et al., 2019) to exploit shared represen- tations across languages. For machine translation, the most relevant field translation (Firat et al., 2016; is multilingual Viégas et al., 2016; Aharoni et al., 2019; Arivazha- gan et al., 2019) where the ultimate goal is to jointly train one translation model that translates multiple language directions at the same time, and shares representations to improve the translation performance on low-resource languages (Gu et al., 2018). In this paper, we mainly focus on multilin- gualism in the pre-training stage and fine-tune the learned model in the standard bi-lingual scenario. Compared to multilingual translation, we do not require parallel data across multiple languages but the targeted direction, which potentially improves the scalability to low-resource languages and spe- cific domains. Moreover, multilingual pre-training is unlikely to suffer the interference problems be- tween dissimilar languages, which is typical for regular multilingual translation models. Document Translation As one of the key appli- cations, this work also links to previous efforts for incorporating document-level contexts into neu- ral machine translation (Wang et al., 2017; Jean et al., 2017; Tiedemann and Scherrer, 2017; Mi- culicich et al., 2018; Tu et al., 2018). Li et al. (2019) is the most relevant work which also uti- lized pre-trained encoder (BERT) for handling longer context. However, none of these works had shown positive results on pure Seq2Seq models at document-level, which involved task-specific techniques, and usually only worked on sentence- level translation with a constrained range of con- text. To the extent of our knowledge, our mul- tilingual pre-trained model is the first-of-its-kind work that shows improved results on document- level translation with standard Seq2Seq learning. Unsupervised Translation This work also sum- marizes the previous efforts of learning to translate between languages without a direct parallel cor- pus, and re-defines them as unsupervised machine translation with three categories where in this work, we only focus on applications to the first and the third kinds (§5). When no parallel corpus of any kind is available, Artetxe et al. 
(2017); Lample et al. (2018a,c) proposed to jointly learn denois- ing auto-encoder and back-translation from both directions, which, however, required good initial- ization and only worked well on similar language pairs; Wu et al. (2019a) replaced back-translation with retrieved similar sentences from target mono- lingual data; Wu et al. (2019b) solves the problem by mining sentences from Wikipedia and use them as weakly supervised translation pairs. Similar to Lample and Conneau (2019); Song et al. (2019), we follow the first approach and treat our pre- trained model as the initialization step. Besides, we investigate unsupervised translation using lan- guage transfer, which is similar to Pourdamghani et al. (2019) where the authors generate transla- tionese of the source language and train a sys- tem on high-resource languages to correct these intermediate utterances. It is also closely related to Conneau et al. (2018); Artetxe et al. (2019) for cross-lingual representation learning. # 7 Conclusion We demonstrate that multilingual de-noising pre- training is able to significantly improve both su- pervised and unsupervised machine translation at both the sentence level and document level. We analyze when and how pre-training is most effec- tive and can be combined with other approaches such as back-translation. Our results also show the transfer learning ability of the learned representa- tions from multilingual pre-training. In future work, we will scale-up the current pre- training to more languages, e.g., an mBART100 model. The size of our model makes it expensive to deploy in production – future work will explore pre-training more efficient models. # 8 Acknowledgements We thank Marc’Aurelio Ranzato, Guillaume Lam- ple, Alexis Conneau, and Michael Auli for shar- ing their expertise on low-resource and unsuper- vised machine translation, Peng-Jen Chen, Jiajun Shen for details about FloRes and WAT datasets. We also thank our colleagues at FAIR and FAIAR for valuable feedback. # References Roee Aharoni, Melvin Johnson, and Orhan Firat. 2019. Massively multilingual neural machine In Proceedings of the 2019 Con- translation. ference of the North American Chapter of the Association for Computational Linguistics: Hu- man Language Technologies, Volume 1 (Long and Short Papers), pages 3874–3884, Min- neapolis, Minnesota. Association for Computa- tional Linguistics. Naveen Arivazhagan, Ankur Bapna, Orhan Fi- rat, Dmitry Lepikhin, Melvin Johnson, Maxim Krikun, Mia Xu Chen, Yuan Cao, George Foster, Colin Cherry, Wolfgang Macherey, Zhifeng Chen, and Yonghui Wu. 2019. Mas- sively multilingual neural machine translation in the wild: Findings and challenges. CoRR, abs/1907.05019. Mikel Artetxe, Gorka Labaka, Eneko Agirre, Unsupervised arXiv preprint and Kyunghyun Cho. 2017. neural machine translation. arXiv:1710.11041. Mikel Artetxe, Sebastian Ruder, and Dani Yo- gatama. 2019. On the cross-lingual transferabil- ity of monolingual representations. Mauro Cettolo, Christian Girardi, and Marcello Federico. 2012. Wit3: Web inventory of tran- In Conference of scribed and translated talks. European Association for Machine Translation, pages 261–268. Mauro Cettolo, Niehues Jan, Stüker Sebastian, Luisa Bentivogli, Roldano Cattoni, and Mar- cello Federico. 2015. The iwslt 2015 evalua- In International Workshop on tion campaign. Spoken Language Translation. Peng-Jen Chen, Jiajun Shen, Matt Le, Vishrav Chaudhary, Ahmed El-Kishky, Guillaume Wen- zek, Myle Ott, and Marc’Aurelio Ranzato. 2019. 
Facebook ai’s wat19 myanmar-english arXiv preprint translation task submission. arXiv:1910.06848. Xilun Chen and Claire Cardie. 2018. Unsuper- In Pro- vised multilingual word embeddings. ceedings of the 2018 Conference on Empiri- cal Methods in Natural Language Processing, pages 261–270, Brussels, Belgium. Association for Computational Linguistics. Yun Chen, Yang Liu, Yong Cheng, and Victor OK Li. 2017. A teacher-student framework for zero-resource neural machine translation. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Vol- ume 1: Long Papers), pages 1925–1935. Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wen- zek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Unsupervised cross-lingual represen- arXiv preprint tation learning at scale. arXiv:1911.02116. Alexis Conneau, Ruty Rinott, Guillaume Lample, Adina Williams, Samuel R. Bowman, Holger Schwenk, and Veselin Stoyanov. 2018. Xnli: Evaluating cross-lingual sentence representa- In Proceedings of the 2018 Conference tions. on Empirical Methods in Natural Language Processing. Association for Computational Lin- guistics. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In North American Association for Computational Linguistics (NAACL). Chenchen Ding, Hnin Thu Zar Aye, Win Pa Pa, Khin Thandar Nwet, Khin Mar Soe, Masao Utiyama, and Eiichiro Sumita. 2019. Towards Burmese (Myanmar) morphological analysis: Syllable-based tokenization and part-of-speech ACM Transactions on Asian and tagging. Low-Resource Language Information Process- ing (TALLIP), 19(1):5. Chenchen Ding, Masao Utiyama, and Eiichiro Sumita. 2018. NOVA: A feasible and flexi- ble annotation system for joint tokenization and part-of-speech tagging. ACM Transactions on Asian and Low-Resource Language Informa- tion Processing (TALLIP), 18(2):17. Li Dong, Nan Yang, Wenhui Wang, Furu Wei, Xiaodong Liu, Yu Wang, Jianfeng Gao, Ming Zhou, and Hsiao-Wuen Hon. 2019. Unified lan- guage model pre-training for natural language understanding and generation. arXiv preprint arXiv:1905.03197. Sergey Edunov, Alexei Baevski, and Michael Auli. 2019. Pre-trained language model representa- tions for language generation. arXiv preprint arXiv:1903.09722. Orhan Firat, Kyunghyun Cho, and Yoshua Bengio. 2016. Multi-way, multilingual neural machine translation with a shared attention mechanism. In NAACL. Jiatao Gu, Hany Hassan, Jacob Devlin, and Vic- tor O.K. Li. 2018. Universal neural machine translation for extremely low resource lan- guages. In Proceedings of the 2018 Conference of the North American Chapter of the Asso- ciation for Computational Linguistics: Human Language Technologies, Volume 1 (Long Pa- pers), pages 344–354, New Orleans, Louisiana. Association for Computational Linguistics. Jiatao Gu, Yong Wang, Kyunghyun Cho, and Vic- Improved zero-shot neural tor OK Li. 2019. machine translation via ignoring spurious cor- relations. arXiv preprint arXiv:1906.01181. Francisco Guzmán, Peng-Jen Chen, Myle Ott, Juan Pino, Guillaume Lample, Philipp Koehn, Vishrav Chaudhary, and Marc’Aurelio Ran- zato. 2019. The FLORES evaluation datasets for low-resource machine translation: Nepali– In Proceedings English and Sinhala–English. 
of the 2019 Conference on Empirical Meth- ods in Natural Language Processing and the 9th International Joint Conference on Natu- ral Language Processing (EMNLP-IJCNLP), pages 6097–6110, Hong Kong, China. Associ- ation for Computational Linguistics. Sébastien Jean, Stanislas Lauly, Orhan Firat, and Kyunghyun Cho. 2017. Does neural machine translation benefit from larger context? CoRR, abs/1704.05135. Melvin Johnson, Mike Schuster, Quoc V Le, Maxim Krikun, Yonghui Wu, Zhifeng Chen, Nikhil Thorat, Fernanda Viégas, Martin Wat- tenberg, Greg Corrado, et al. 2017. Google’s multilingual neural machine translation system: Enabling zero-shot translation. Transactions of the Association for Computational Linguistics, 5:339–351. Taku Kudo and John Richardson. 2018. Senten- cePiece: A simple and language independent subword tokenizer and detokenizer for neural In Proceedings of the 2018 text processing. Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 66–71, Brussels, Belgium. Association for Computational Linguistics. Anoop Kunchukuttan, Pratik Mehta, and Pushpak Bhattacharyya. 2017. The IIT bombay english- hindi parallel corpus. CoRR, abs/1710.02855. Guillaume Lample and Alexis Conneau. 2019. language model pretraining. Cross-lingual arXiv preprint arXiv:1901.07291. Guillaume Lample, Alexis Conneau, Ludovic De- noyer, and Marc’Aurelio Ranzato. 2018a. Un- supervised machine translation using monolin- gual corpora only. In International Conference on Learning Representations. Conneau, Marc’Aurelio Ranzato, Ludovic Denoyer, and Hervé Jégou. 2018b. Word transla- In International tion without parallel data. Conference on Learning Representations. Guillaume Lample, Myle Ott, Alexis Conneau, Ludovic Denoyer, and Marc’Aurelio Ranzato. Phrase-based & neural unsuper- 2018c. arXiv preprint vised machine translation. arXiv:1804.07755. Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2019. Bart: Denoising sequence-to-sequence language generation, pre-training for natural translation, and comprehension. arXiv preprint arXiv:1910.13461. Liangyou Li, Xin Jiang, and Qun Liu. 2019. Pretrained language models for document-level arXiv preprint neural machine translation. arXiv:1911.03110. Yang Liu and Mirella Lapata. 2019. Text sum- arXiv marization with pretrained encoders. preprint arXiv:1908.08345. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly opti- mized bert pretraining approach. arXiv preprint arXiv:1907.11692. Lesly Miculicich, Dhananjay Ram, Nikolaos Pap- pas, and James Henderson. 2018. Document- level neural machine translation with hierarchi- In Proceedings of the cal attention networks. 2018 Conference on Empirical Methods in Nat- ural Language Processing, pages 2947–2954, Brussels, Belgium. Association for Computa- tional Linguistics. Tomas Mikolov, Quoc V. Le, and Ilya Sutskever. 2013. Exploiting similarities among languages for machine translation. CoRR, abs/1309.4168. Myle Ott, Sergey Edunov, Alexei Baevski, An- gela Fan, Sam Gross, Nathan Ng, David Grang- ier, and Michael Auli. 2019. FAIRSEQ: A fast, In extensible toolkit for sequence modeling. North American Association for Computational Linguistics (NAACL): System Demonstrations. Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: a method for au- tomatic evaluation of machine translation. 
In Proceedings of the 40th annual meeting on as- sociation for computational linguistics, pages 311–318. Association for Computational Lin- guistics. Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextu- In North Ameri- alized word representations. can Association for Computational Linguistics (NAACL). Matt Post. 2018. A call for clarity in reporting BLEU scores. In Proceedings of the Third Con- ference on Machine Translation: Research Pa- pers, pages 186–191, Belgium, Brussels. Asso- ciation for Computational Linguistics. Nima Pourdamghani, Nada Aldarrab, Marjan Ghazvininejad, Kevin Knight, and Jonathan May. 2019. Translating translationese: A two- step approach to unsupervised machine transla- tion. In ACL. Alec Radford, Karthik Narasimhan, Time Sali- mans, and Ilya Sutskever. 2018. Improving lan- guage understanding with unsupervised learn- ing. Technical report, OpenAI. Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. Technical report, OpenAI. Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. 2019. Exploring the limits of transfer learning with a unified text-to-text transformer. arXiv preprint arXiv:1910.10683. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016a. Edinburgh neural machine trans- In Proceedings of lation systems for wmt 16. the First Conference on Machine Translation: Volume 2, Shared Task Papers, pages 371–376. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016b. Improving neural machine trans- In Pro- lation models with monolingual data. ceedings of the 54th Annual Meeting of the As- sociation for Computational Linguistics (Vol- ume 1: Long Papers), pages 86–96, Berlin, Ger- many. Association for Computational Linguis- tics. Nitish Shirish Keskar, Bryan McCann, Lav R Varshney, Caiming Xiong, and Richard Socher. 2019. Ctrl: A conditional transformer lan- guage model for controllable generation. arXiv preprint arXiv:1909.05858. Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, and Tie-Yan Liu. 2019. MASS: Masked sequence to sequence pre-training for language genera- tion. In International Conference on Machine Learning (ICML). Jörg Tiedemann and Yves Scherrer. 2017. Neu- ral machine translation with extended context. In Proceedings of the Third Workshop on Dis- course in Machine Translation, pages 82–92, Copenhagen, Denmark. Association for Com- putational Linguistics. Zhaopeng Tu, Yang Liu, Shuming Shi, and Tong Zhang. 2018. Learning to remember translation history with a continuous cache. Transactions of the Association for Computational Linguis- tics, 6:407–420. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. At- tention is all you need. In Advances in neural information processing systems. Fernanda Viégas, Greg Corrado, Jeffrey Dean, Macduff Hughes, Martin Wattenberg, Maxim Krikun, Melvin Johnson, Mike Schuster, Nikhil Thorat, Quoc V Le, et al. 2016. Google’s multi- lingual neural machine translation system: En- abling zero-shot translation. Takashi Wada and Tomoharu Iwata. 2018. Un- supervised cross-lingual word embedding by multilingual neural language models. CoRR, abs/1809.02306. Longyue Wang, Zhaopeng Tu, Andy Way, and Qun Liu. 2017. Exploiting cross-sentence con- In Pro- text for neural machine translation. 
ceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2826–2831, Copenhagen, Denmark. Association for Computational Linguistics.

Guillaume Wenzek, Marie-Anne Lachaux, Alexis Conneau, Vishrav Chaudhary, Francisco Guzman, Armand Joulin, and Edouard Grave. 2019. CCNet: Extracting high quality monolingual datasets from web crawl data. arXiv preprint arXiv:1911.00359.

Jiawei Wu, Xin Wang, and William Yang Wang. 2019a. Extract and edit: An alternative to back-translation for unsupervised neural machine translation. arXiv preprint arXiv:1904.02331.

Lijun Wu, Jinhua Zhu, Di He, Fei Gao, Xu Tan, Tao Qin, and Tie-Yan Liu. 2019b. Machine translation with weakly paired bilingual documents.

Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, and Quoc V Le. 2019. XLNet: Generalized autoregressive pretraining for language understanding. arXiv preprint arXiv:1906.08237.

Yizhe Zhang, Siqi Sun, Michel Galley, Yen-Chun Chen, Chris Brockett, Xiang Gao, Jianfeng Gao, Jingjing Liu, and Bill Dolan. 2019. DialoGPT: Large-scale generative pre-training for conversational response generation.

# A Evaluation Details

For all our tasks, we use BLEU scores (Papineni et al., 2002) as the automatic metric to evaluate translation performance. We compute the BLEU scores over tokenized text for both the system outputs and the references, applying language-specific tokenization to the translations. Note that, since we work directly on raw texts, we automatically obtain de-tokenized output after recovering the sentence-piece subwords. Following the literature, the language-wise tokenization is as follows:

• Gu, Ne, Si, Hi: We use the Indic NLP Library5 to tokenize the Indic language outputs.
• Ja: We use KyTea6 to segment Japanese texts.
• Ko: We use Mecab-Ko7 and its default dictionary to segment the Korean texts.
• Ar: We apply the QCRI Arabic Normalizer8 over the Arabic texts.
• My: We use the official segmentation tool provided by Ding et al. (2019) for Burmese.
• Ro: Following Sennrich et al. (2016a), we apply Moses tokenization and special normalization for Romanian texts9.
• Zh: We use the official sacreBLEU (Post, 2018)10 Chinese tokenizer (-tok zh).

For the other languages that are not listed above, we compute BLEU scores with sacreBLEU using its default tokenization.

5https://anoopkunchukuttan.github.io/indic_nlp_library/
6http://www.phontron.com/kytea/
7http://konlpy.org/en/v0.3.0/install/
8http://alt.qcri.org/tools/arabic-normalizer/
9https://github.com/rsennrich/wmt16-script
10https://github.com/mjpost/sacreBLEU
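The per-language evaluation recipe above can be wired together roughly as follows. Only the sacreBLEU calls reflect a real API; the *_tokenize helpers are trivial stand-ins for the external tools linked in the footnotes, and the `tokenize="zh"` argument may need to be adapted to the installed sacreBLEU version.

```python
import sacrebleu

# Trivial stand-ins for the external tokenizers linked in the footnotes above.
def indic_nlp_tokenize(text, lang): return text   # Indic NLP Library
def kytea_tokenize(text): return text             # KyTea
def mecab_ko_tokenize(text): return text          # Mecab-ko
def qcri_normalize(text): return text             # QCRI Arabic Normalizer

def tokenize_for_bleu(text: str, lang: str) -> str:
    """Dispatch to the language-specific tokenizer applied before computing BLEU."""
    if lang in {"gu", "ne", "si", "hi"}:
        return indic_nlp_tokenize(text, lang)
    if lang == "ja":
        return kytea_tokenize(text)
    if lang == "ko":
        return mecab_ko_tokenize(text)
    if lang == "ar":
        return qcri_normalize(text)
    return text  # otherwise rely on sacreBLEU's default tokenizer

def bleu(hyps, refs, lang):
    if lang == "zh":
        # sacreBLEU ships its own Chinese tokenizer (-tok zh).
        return sacrebleu.corpus_bleu(hyps, [refs], tokenize="zh").score
    hyps = [tokenize_for_bleu(h, lang) for h in hyps]
    refs = [tokenize_for_bleu(r, lang) for r in refs]
    return sacrebleu.corpus_bleu(hyps, [refs]).score

print(bleu(["a toy hypothesis ."], ["a toy reference ."], lang="ro"))
```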
# B Translation Examples

[Figure 6 content: a held-out TED15 Zh-En test document together with the outputs of the randomly initialized Sent-MT and Doc-MT models, the mBART25 Sent-MT and Doc-MT models, and the reference translation (rows SOURCE, Random SENT-MT, Random DOC-MT, mBART25 SENT-MT, mBART25 DOC-MT, TARGET). The source text and the full system outputs are rendered as an image in the original paper and are omitted here; see the caption below.]
Figure 6: An example of document-level translation from mBART25 Sent-MT and Doc-MT, held out from the test set of TED15 Zh-En. The Doc-MT system produces a much more fluent and coherent translation, which is closer to the reference translation. For instance, the Doc-MT model produces several "And"s to connect sentences so that the text reads better, while the Sent-MT model does not have global knowledge and produces sentences independently. Moreover, both systems produce much better translations than the models without pre-training, where the non-pretrained Doc-MT model completely fails to produce readable translation output.

[Figure 7 content: three example sentences with Zh, Ja and Ko sources, their reference translations, and the outputs of mBART25 models fine-tuned on Ja-En, Ko-En and Zh-En (the supervised direction differs per example). The source-language text is rendered as an image in the original paper and is omitted here; see the caption below.]
Figure 7: Examples of unsupervised MT via language transfer between Ja, Ko, Zh → En. We mark the supervised settings in red. All three languages have quite different character sets (Ja and Zh share part of the Chinese characters) and syntactic structures. However, they are still culturally and historically correlated, which we assume can be captured through pre-training. In all cases, if we fine-tune the mBART25 model on any one pair, the resulting model directly translates well on the other two pairs without seeing any corresponding parallel sentences. We also see failure cases. For instance (the 3rd example), only the supervised model translates "자석" into "magnets" correctly, while the Ja-En and Zh-En models guess with irrelevant words "cushions" and "jellyfish", respectively. Also, in the 2nd example, the Ko-En model fails to translate "developed" and copies the source tokens. We suspect this is because the pre-training stage biases the output distribution.
{ "id": "1904.02331" }
2001.07676
Exploiting Cloze Questions for Few Shot Text Classification and Natural Language Inference
Some NLP tasks can be solved in a fully unsupervised fashion by providing a pretrained language model with "task descriptions" in natural language (e.g., Radford et al., 2019). While this approach underperforms its supervised counterpart, we show in this work that the two ideas can be combined: We introduce Pattern-Exploiting Training (PET), a semi-supervised training procedure that reformulates input examples as cloze-style phrases to help language models understand a given task. These phrases are then used to assign soft labels to a large set of unlabeled examples. Finally, standard supervised training is performed on the resulting training set. For several tasks and languages, PET outperforms supervised training and strong semi-supervised approaches in low-resource settings by a large margin.
http://arxiv.org/pdf/2001.07676
Timo Schick, Hinrich Schütze
cs.CL
Accepted at EACL2021
null
cs.CL
20200121
20210125
# Exploiting Cloze Questions for Few Shot Text Classification and Natural Language Inference

Timo Schick1,2 Hinrich Schütze1
1 Center for Information and Language Processing, LMU Munich, Germany
2 Sulzer GmbH, Munich, Germany
[email protected]

# Abstract

Some NLP tasks can be solved in a fully unsupervised fashion by providing a pretrained language model with "task descriptions" in natural language (e.g., Radford et al., 2019). While this approach underperforms its supervised counterpart, we show in this work that the two ideas can be combined: We introduce Pattern-Exploiting Training (PET), a semi-supervised training procedure that reformulates input examples as cloze-style phrases to help language models understand a given task. These phrases are then used to assign soft labels to a large set of unlabeled examples. Finally, standard supervised training is performed on the resulting training set. For several tasks and languages, PET outperforms supervised training and strong semi-supervised approaches in low-resource settings by a large margin.1

1Our implementation is publicly available at https://github.com/timoschick/pet.

# 1 Introduction

Learning from examples is the predominant approach for many NLP tasks: A model is trained on a set of labeled examples from which it then generalizes to unseen data. Due to the vast number of languages, domains and tasks and the cost of annotating data, it is common in real-world uses of NLP to have only a small number of labeled examples, making few-shot learning a highly important research area. Unfortunately, applying standard supervised learning to small training sets often performs poorly; many problems are difficult to grasp from just looking at a few examples. For instance, assume we are given the following pieces of text:

• T1: This was the best pizza I've ever had.
• T2: You can get better sushi for half the price.
• T3: Pizza was average. Not worth the price.

Furthermore, imagine we are told that the labels of T1 and T2 are l and l′, respectively, and we are asked to infer the correct label for T3. Based only on these examples, this is impossible because plausible justifications can be found for both l and l′. However, if we know that the underlying task is to identify whether the text says anything about prices, we can easily assign l′ to T3. This illustrates that solving a task from only a few examples becomes much easier when we also have a task description, i.e., a textual explanation that helps us understand what the task is about.

Figure 1: PET for sentiment classification. (1) A number of patterns encoding some form of task description are created to convert training examples to cloze questions; for each pattern, a pretrained language model is finetuned. (2) The ensemble of trained models annotates unlabeled data. (3) A classifier is trained on the resulting soft-labeled dataset.

With the rise of pretrained language models (PLMs) such as GPT (Radford et al., 2018), BERT (Devlin et al., 2019) and RoBERTa (Liu et al., 2019), the idea of providing task descriptions has become feasible for neural architectures: We can simply append such descriptions in natural language to an input and let the PLM predict continuations that solve the task (Radford et al., 2019; Puri and Catanzaro, 2019).
So far, this idea has mostly been considered in zero-shot scenarios where no training data is available at all. In this work, we show that providing task de- scriptions can successfully be combined with stan- dard supervised learning in few-shot settings: We introduce Pattern-Exploiting Training (PET), a semi-supervised training procedure that uses natu- ral language patterns to reformulate input examples into cloze-style phrases. As illustrated in Figure 1, PET works in three steps: First, for each pattern a separate PLM is finetuned on a small training set T . The ensemble of all models is then used to annotate a large unlabeled dataset D with soft labels. Finally, a standard classifier is trained on the soft-labeled dataset. We also devise iPET, an iterative variant of PET in which this process is repeated with increasing training set sizes. On a diverse set of tasks in multiple languages, we show that given a small to medium number of labeled examples, PET and iPET substantially outperform unsupervised approaches, supervised training and strong semi-supervised baselines. # 2 Related Work Radford et al. (2019) provide hints in the form of natural language patterns for zero-shot learning of challenging tasks such as reading comprehension and question answering (QA). This idea has been applied to unsupervised text classification (Puri and Catanzaro, 2019), commonsense knowledge mining (Davison et al., 2019) and argumentative re- lation classification (Opitz, 2019). Srivastava et al. (2018) use task descriptions for zero-shot classifi- cation but require a semantic parser. For relation extraction, Bouraoui et al. (2020) automatically identify patterns that express given relations. Mc- Cann et al. (2018) rephrase several tasks as QA problems. Raffel et al. (2020) frame various prob- lems as language modeling tasks, but their patterns only loosely resemble natural language and are un- suitable for few-shot learning.2 Another recent line of work uses cloze-style phrases to probe the knowledge that PLMs acquire during pretraining; this includes probing for factual 2For example, they convert inputs (a, b) for recognizing textual entailment (RTE) to “rte sentence1: a sentence2: b”, and the PLM is asked to predict strings like “not entailment”. and commonsense knowledge (Trinh and Le, 2018; Petroni et al., 2019; Wang et al., 2019; Sakaguchi et al., 2020), linguistic capabilities (Ettinger, 2020; Kassner and Sch¨utze, 2020), understanding of rare words (Schick and Sch¨utze, 2020), and ability to perform symbolic reasoning (Talmor et al., 2019). Jiang et al. (2020) consider the problem of finding the best pattern to express a given task. Other approaches for few-shot learning in NLP include exploiting examples from related tasks (Yu et al., 2018; Gu et al., 2018; Dou et al., 2019; Qian and Yu, 2019; Yin et al., 2019) and using data aug- mentation (Xie et al., 2020; Chen et al., 2020); the latter commonly relies on back-translation (Sen- nrich et al., 2016), requiring large amounts of paral- lel data. Approaches using textual class descriptors typically assume that abundant examples are avail- able for a subset of classes (e.g., Romera-Paredes and Torr, 2015; Veeranna et al., 2016; Ye et al., 2020). In contrast, our approach requires no addi- tional labeled data and provides an intuitive inter- face to leverage task-specific human knowledge. 
The idea behind iPET – training multiple generations of models on data labeled by previous generations – bears resemblance to self-training and bootstrapping approaches for word sense disambiguation (Yarowsky, 1995), relation extraction (Brin, 1999; Agichtein and Gravano, 2000; Batista et al., 2015), parsing (McClosky et al., 2006; Reichart and Rappoport, 2007; Huang and Harper, 2009), machine translation (Hoang et al., 2018), and sequence generation (He et al., 2020).

# 3 Pattern-Exploiting Training

Let M be a masked language model with vocabulary V and mask token ___ ∈ V , and let L be a set of labels for our target classification task A. We write an input for task A as a sequence of phrases x = (s1, . . . , sk) with si ∈ V ∗; for example, k = 2 if A is textual inference (two input sentences). We define a pattern to be a function P that takes x as input and outputs a phrase or sentence P(x) ∈ V ∗ that contains exactly one mask token, i.e., its output can be viewed as a cloze question. Furthermore, we define a verbalizer as an injective function v : L → V that maps each label to a word from M's vocabulary. We refer to (P, v) as a pattern-verbalizer pair (PVP).

Using a PVP (P, v) enables us to solve task A as follows: Given an input x, we apply P to obtain an input representation P(x), which is then processed by M to determine the label y ∈ L for which v(y) is the most likely substitute for the mask. For example, consider the task of identifying whether two sentences a and b contradict each other (label y0) or agree with each other (y1). For this task, we may choose the pattern P(a, b) = a? ___, b. combined with a verbalizer v that maps y0 to "Yes" and y1 to "No". Given an example input pair x = (Mia likes pie, Mia hates pie), the task now changes from having to assign a label without inherent meaning to answering whether the most likely choice for the masked position in P(x) = Mia likes pie? ___, Mia hates pie. is "Yes" or "No".

# 3.1 PVP Training and Inference

Let p = (P, v) be a PVP. We assume access to a small training set T and a (typically much larger) set of unlabeled examples D. For each sequence z ∈ V ∗ that contains exactly one mask token and each w ∈ V , we denote with M(w | z) the unnormalized score that the language model assigns to w at the masked position. Given some input x, we define the score for label l ∈ L as

sp(l | x) = M(v(l) | P(x))

and obtain a probability distribution over labels using softmax:

qp(l | x) = exp(sp(l | x)) / Σl′∈L exp(sp(l′ | x))

We use the cross-entropy between qp(l | x) and the true (one-hot) distribution of training example (x, l) – summed over all (x, l) ∈ T – as the loss for finetuning M for p.

# 3.2 Auxiliary Language Modeling

In our application scenario, only a few training examples are available and catastrophic forgetting can occur. As a PLM finetuned for some PVP is still a language model at its core, we address this by using language modeling as an auxiliary task. With LCE denoting cross-entropy loss and LMLM language modeling loss, we compute the final loss as

L = (1 − α) · LCE + α · LMLM

This idea was recently applied by Chronopoulou et al. (2019) in a data-rich scenario. As LMLM is typically much larger than LCE, in preliminary experiments we found a small value of α = 10−4 to consistently give good results, so we use it in all our experiments. To obtain sentences for language modeling, we use the unlabeled set D. However, we do not train directly on each x ∈ D, but rather on P(x), where we never ask the language model to predict anything for the masked slot.
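To make the PVP machinery concrete, the sketch below scores the two labels of the example above with a masked language model, following the definitions of sp and qp. It uses the Hugging Face transformers interface as a tooling assumption (the paper's own implementation is in the repository linked in footnote 1), and the label names and the Yes/No assignment in this verbalizer are chosen for illustration only; verbalizers must map each label to a single token of the model's vocabulary.

```python
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-large")
model = AutoModelForMaskedLM.from_pretrained("roberta-large")

def pattern(a: str, b: str) -> str:
    # P(a, b) = a? ___, b.  (the example pattern from the text)
    return f"{a}? {tokenizer.mask_token}, {b}"

# Illustrative verbalizer (single-token words, with RoBERTa's leading-space BPE).
verbalizer = {"agree": " Yes", "contradict": " No"}

def q_p(a: str, b: str) -> dict:
    """q_p(l | x): softmax over the verbalizer tokens at the masked position."""
    inputs = tokenizer(pattern(a, b), return_tensors="pt")
    mask_pos = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero().item()
    with torch.no_grad():
        logits = model(**inputs).logits[0, mask_pos]  # unnormalized scores M(w | P(x))
    label_ids = {l: tokenizer.encode(w, add_special_tokens=False)[0]
                 for l, w in verbalizer.items()}
    scores = torch.stack([logits[i] for i in label_ids.values()])  # s_p(l | x)
    probs = torch.softmax(scores, dim=0)
    return dict(zip(label_ids.keys(), probs.tolist()))

print(q_p("Mia likes pie", "Mia hates pie"))
```

During training, the cross-entropy on qp would be combined with the masked-LM loss on P(x) using the (1 − α) / α weighting described above.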
# 3.3 Combining PVPs

A key challenge for our approach is that in the absence of a large development set, it is hard to identify which PVPs perform well. To address this, we use a strategy similar to knowledge distillation (Hinton et al., 2015). First, we define a set P of PVPs that intuitively make sense for a given task A. We then use these PVPs as follows:

(1) We finetune a separate language model Mp for each p ∈ P as described in Section 3.1. As T is small, this finetuning is cheap even for a large number of PVPs.

(2) We use the ensemble M = {Mp | p ∈ P} of finetuned models to annotate examples from D. We first combine the unnormalized class scores for each example x ∈ D as

sM(l | x) = (1/Z) Σ_{p∈P} w(p) · sp(l | x)

where Z = Σ_{p∈P} w(p) and the w(p) are weighting terms for the PVPs. We experiment with two different realizations of this weighting term: either we simply set w(p) = 1 for all p or we set w(p) to be the accuracy obtained using p on the training set before training. We refer to these two variants as uniform and weighted. Jiang et al. (2020) use a similar idea in a zero-shot setting. We transform the above scores into a probability distribution q using softmax. Following Hinton et al. (2015), we use a temperature of T = 2 to obtain a suitably soft distribution. All pairs (x, q) are collected in a (soft-labeled) training set TC.

(3) We finetune a PLM C with a standard sequence classification head on TC.

The finetuned model C then serves as our classifier for A. All steps described above are depicted in Figure 2; an example is shown in Figure 1.

Figure 2: Schematic representation of PET (1-3) and iPET (a-c). (1) The initial training set is used to finetune an ensemble of PLMs. (a) For each model, a random subset of other models generates a new training set by labeling examples from D. (b) A new generation of PET models is trained on these enlarged datasets. (c) The previous two steps are repeated k times, each time increasing the size of the generated training sets by a factor of d. (2) The final set of models is used to create a soft-labeled dataset TC. (3) A classifier C is trained on this dataset.
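As a rough illustration of steps (2)–(3) above, the following sketch builds the soft-labeled set TC from an ensemble of finetuned PVP models. It is a sketch only: `score_fn(model, pvp, x)` is assumed to return the unnormalized label scores s_p(l | x) of one finetuned model, and `models`, `pvps`, `unlabeled_D` and `train_accuracy` are placeholders.

```python
# Sketch of combining PVP models into soft labels for distillation (Section 3.3).
import torch

def ensemble_soft_label(x, models, pvps, weights, score_fn, temperature=2.0):
    """q(l | x): weighted average of s_p(l | x) over all PVPs, softened with T = 2."""
    z = sum(weights)
    total = sum(w * score_fn(model, pvp, x)
                for (model, pvp), w in zip(zip(models, pvps), weights))
    return torch.softmax(total / z / temperature, dim=-1)

# "uniform": w(p) = 1 for every PVP; "weighted": accuracy of p on T before training
weights = [1.0] * len(pvps)          # or [train_accuracy(p) for p in pvps]
soft_labeled_TC = [(x, ensemble_soft_label(x, models, pvps, weights, score_fn))
                   for x in unlabeled_D]
# A classifier C with a standard sequence classification head is then finetuned on
# soft_labeled_TC, e.g. with a cross-entropy loss against the soft distributions.
```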
# Iterative PET (iPET)

Distilling the knowledge of all individual models into a single classifier C means they cannot learn from each other. As some patterns perform (possibly much) worse than others, the training set TC for our final model may therefore contain many mislabeled examples. To compensate for this shortcoming, we devise iPET, an iterative variant of PET. The core idea of iPET is to train several generations of models on datasets of increasing size. To this end, we first enlarge the original dataset T by labeling selected examples from D using a random subset of trained PET models (Figure 2a). We then train a new generation of PET models on the enlarged dataset (b); this process is repeated several times (c).

More formally, let M^0 = {M^0_1, . . . , M^0_n} be the initial set of PET models finetuned on T , where each M^0_i is trained for some PVP pi. We train k generations of models M^1, . . . , M^k where M^j = {M^j_1, . . . , M^j_n} and each M^j_i is trained for pi on its own training set T^j_i. In each iteration, we multiply the training set size by a fixed constant d ∈ N while maintaining the label ratio of the original dataset. That is, with c_0(l) denoting the number of examples with label l in T , each T^j_i contains c_j(l) = d · c_{j−1}(l) examples with label l. This is achieved by generating each T^j_i as follows:

1. We obtain a subset N ⊂ M^{j−1} by randomly choosing λ · (n − 1) models from the previous generation, with λ ∈ (0, 1] being a hyperparameter.

2. Using this subset, we create a labeled dataset T_N = {(x, arg max_{l∈L} s_N(l | x)) | x ∈ D}.

3. For each l ∈ L, we obtain T_N(l) ⊂ T_N by randomly choosing c_j(l) − c_0(l) examples with label l from T_N. To avoid training future generations on mislabeled data, we prefer examples for which the ensemble of models is confident in its prediction. The underlying intuition is that even without calibration, examples for which labels are predicted with high confidence are typically more likely to be classified correctly (Guo et al., 2017). Therefore, when drawing from T_N, we set the probability of each (x, y) proportional to s_N(l | x).

4. We set T^j_i = T ∪ ⋃_{l∈L} T_N(l). As can easily be verified, this dataset contains c_j(l) examples for each l ∈ L.

After training k generations of PET models, we use M^k to create TC and train C as in basic PET.

With minor adjustments, iPET can even be used in a zero-shot setting. To this end, we define M^0 to be the set of untrained models and c_1(l) = 10/|L| for all l ∈ L so that M^1 is trained on 10 examples evenly distributed across all labels. As T_N may not contain enough examples for some label l, we create all T_N(l) by sampling from the 100 examples x ∈ D for which s_N(l | x) is the highest, even if l ≠ arg max_{l′∈L} s_N(l′ | x). For each subsequent generation, we proceed exactly as in basic iPET.
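A minimal sketch of the dataset growth in steps 1–4 above is given below. It is illustrative only: `label_counts` and `ensemble_probs` (assumed to return a dict mapping each label to its probability s_N(l | x) under the sampled sub-ensemble) are placeholders, and for brevity examples are drawn with replacement rather than as distinct examples.

```python
# Sketch of building T_i^j for one model of generation j in iPET (assumptions noted above).
import random

def next_training_set(T, D, prev_generation, ensemble_probs, label_counts, j, d=5, lam=0.25):
    n = len(prev_generation)
    subset = random.sample(prev_generation, max(1, round(lam * (n - 1))))    # step 1
    labeled = [(x, ensemble_probs(subset, x)) for x in D]                    # step 2: s_N(. | x)
    c0 = label_counts(T)                                                     # c_0(l)
    new_set = list(T)                                                        # step 4 starts from T
    for l, c0_l in c0.items():
        cj_l = (d ** j) * c0_l                                               # c_j(l) = d * c_{j-1}(l)
        pool = [(x, p[l]) for x, p in labeled if max(p, key=p.get) == l]     # T_N restricted to label l
        if not pool:
            continue
        examples, confidences = zip(*pool)
        drawn = random.choices(examples, weights=confidences, k=cj_l - c0_l) # step 3: prefer confident
        new_set += [(x, l) for x in drawn]
    return new_set
```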
# 4 Experiments

We evaluate PET on four English datasets: Yelp Reviews, AG's News, Yahoo Questions (Zhang et al., 2015) and MNLI (Williams et al., 2018). Additionally, we use x-stance (Vamvas and Sennrich, 2020) to investigate how well PET works for other languages. For all experiments on English, we use RoBERTa large (Liu et al., 2019) as language model; for x-stance, we use XLM-R (Conneau et al., 2020). We investigate the performance of PET and all baselines for different training set sizes; each model is trained three times using different seeds and average results are reported.

As we consider a few-shot setting, we assume no access to a large development set on which hyperparameters could be optimized. Our choice of hyperparameters is thus based on choices made in previous work and practical considerations. We use a learning rate of 1 · 10−5, a batch size of 16 and a maximum sequence length of 256. Unless otherwise specified, we always use the weighted variant of PET with auxiliary language modeling. For iPET, we set λ = 0.25 and d = 5; that is, we select 25% of all models to label examples for the next generation and quintuple the number of training examples in each iteration. We train new generations until each model was trained on at least 1000 examples, i.e., we set k = ⌈log_d(1000/|T|)⌉. As we always repeat training three times, the ensemble M (or M^0) for n PVPs contains 3n models. Further hyperparameters and detailed explanations for all our choices are given in Appendix B.

# 4.1 Patterns

We now describe the patterns and verbalizers used for all tasks. We use two vertical bars (||) to mark boundaries between text segments.3

3 The way different segments are handled depends on the model being used; they may e.g. be assigned different embeddings (Devlin et al., 2019) or separated by special tokens (Liu et al., 2019; Yang et al., 2019). For example, "a || b" is given to BERT as the input "[CLS] a [SEP] b [SEP]".

Yelp For the Yelp Reviews Full Star dataset (Zhang et al., 2015), the task is to estimate the rating that a customer gave to a restaurant on a 1- to 5-star scale based on their review's text. We define the following patterns for an input text a:

P2(a) = Just ___! || a
P3(a) = a. All in all, it was ___.
P4(a) = a || In summary, the restaurant is ___.

We define a single verbalizer v for all patterns as v(1) = terrible, v(2) = bad, v(3) = okay, v(4) = good, v(5) = great.

AG's News AG's News is a news classification dataset, where given a headline a and text body b, news have to be classified as belonging to one of the categories World (1), Sports (2), Business (3) or Science/Tech (4). For x = (a, b), we define the following patterns:

P1(x) = ___ : a b
P2(x) = a ( ___ ) b
P3(x) = ___ – a b
P4(x) = a b ( ___ )
P5(x) = ___ News: a b
P6(x) = [ Category: ___ ] a b

We use a verbalizer that maps 1–4 to "World", "Sports", "Business" and "Tech", respectively.

Yahoo Yahoo Questions (Zhang et al., 2015) is a text classification dataset. Given a question a and an answer b, one of ten possible categories has to be assigned. We use the same patterns as for AG's News, but we replace the word "News" in P5 with the word "Question". We define a verbalizer that maps categories 1–10 to "Society", "Science", "Health", "Education", "Computer", "Sports", "Business", "Entertainment", "Relationship" and "Politics".

MNLI The MNLI dataset (Williams et al., 2018) consists of text pairs x = (a, b). The task is to find out whether a implies b (0), a and b contradict each other (1) or neither (2). We define

P1(x) = "a"? ___, "b"
P2(x) = a? ___, b

and consider two different verbalizers v1 and v2:

v1(0) = Right, v1(1) = Wrong, v1(2) = Maybe
v2(0) = Yes, v2(1) = No, v2(2) = Maybe

Combining the two patterns with the two verbalizers results in a total of 4 PVPs.

X-Stance The x-stance dataset (Vamvas and Sennrich, 2020) is a multilingual stance detection dataset with German, French and Italian examples. Each example x = (a, b) consists of a question a concerning some political issue and a comment b; the task is to identify whether the writer of b supports the subject of the question (0) or not (1). We use two simple patterns

P1(x) = "a"? ___, "b"
P2(x) = a? ___, b

and define an English verbalizer vEn mapping 0 to "Yes" and 1 to "No" as well as a French (German) verbalizer vFr (vDe), replacing "Yes" and "No" with "Oui" and "Non" ("Ja" and "Nein"). We do not define an Italian verbalizer because x-stance does not contain any Italian training examples.
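Written out in code, a PVP is just a pattern function plus a label-to-word mapping. The sketch below spells out the Yelp PVPs from Section 4.1 above; it is illustrative only, with `MASK` standing for the model's mask token and `||` for a text-segment boundary.

```python
# The Yelp patterns and verbalizer above, expressed as plain Python (illustrative sketch).
MASK = "<mask>"        # RoBERTa's mask token; "||" marks a text-segment boundary
SEP = " || "

yelp_patterns = [
    lambda a: f"Just {MASK}!{SEP}{a}",
    lambda a: f"{a}. All in all, it was {MASK}.",
    lambda a: f"{a}{SEP}In summary, the restaurant is {MASK}.",
]

yelp_verbalizer = {1: "terrible", 2: "bad", 3: "okay", 4: "good", 5: "great"}

# Each (pattern, verbalizer) pair is one PVP; here all patterns share one verbalizer.
yelp_pvps = [(pattern, yelp_verbalizer) for pattern in yelp_patterns]

example = yelp_patterns[1]("The pasta was cold and the staff was rude")
# -> "The pasta was cold and the staff was rude. All in all, it was <mask>."
```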
# 4.2 Results

English Datasets Table 1 shows results for English text classification and language understanding tasks; we report mean accuracy and standard deviation for three training runs. Lines 1–2 (L1–L2) show unsupervised performance, i.e., individual PVPs without any training (similar to Radford et al., 2018; Puri and Catanzaro, 2019); we give both average results across all PVPs (avg) and results for the PVP that works best on the test set (max). The large difference between both rows highlights the importance of coping with the fact that without looking at the test set, we have no means of evaluating which PVPs perform well. Zero-shot iPET clearly outperforms the unsupervised baselines for all datasets (L3 vs L1); on AG's News, it even performs better than standard supervised training with 1000 examples (L3 vs L13). With just 10 training examples, standard supervised learning does not perform above chance (L4). In contrast, PET (L5) performs much better than the fully unsupervised baselines (L1–L2); training multiple generations using iPET (L6) gives consistent improvements. As we increase the training set size, the performance gains of PET and iPET become smaller, but for both 50 and 100 examples, PET continues to considerably outperform standard supervised training (L8 vs L7, L11 vs L10) with iPET (L9, L12) still giving consistent improvements. For |T| = 1000, PET has no advantage on AG's but still improves accuracy for all other tasks (L14 vs L13).4

| Line | Examples | Method | Yelp | AG's | Yahoo | MNLI (m/mm) |
|---|---|---|---|---|---|---|
| 1 | 0 | unsupervised (avg) | 33.8 ±9.6 | 69.5 ±7.2 | 44.0 ±9.1 | 39.1 ±4.3 / 39.8 ±5.1 |
| 2 | 0 | unsupervised (max) | 40.8 ±0.0 | 79.4 ±0.0 | 56.4 ±0.0 | 43.8 ±0.0 / 45.0 ±0.0 |
| 3 | 0 | iPET | 56.7 ±0.2 | 87.5 ±0.1 | 70.7 ±0.1 | 53.6 ±0.1 / 54.2 ±0.1 |
| 4 | 10 | supervised | 21.1 ±1.6 | 25.0 ±0.1 | 10.1 ±0.1 | 34.2 ±2.1 / 34.1 ±2.0 |
| 5 | 10 | PET | 52.9 ±0.1 | 87.5 ±0.0 | 63.8 ±0.2 | 41.8 ±0.1 / 41.5 ±0.2 |
| 6 | 10 | iPET | 57.6 ±0.0 | 89.3 ±0.1 | 70.7 ±0.1 | 43.2 ±0.0 / 45.7 ±0.1 |
| 7 | 50 | supervised | 44.8 ±2.7 | 82.1 ±2.5 | 52.5 ±3.1 | 45.6 ±1.8 / 47.6 ±2.4 |
| 8 | 50 | PET | 60.0 ±0.1 | 86.3 ±0.0 | 66.2 ±0.1 | 63.9 ±0.0 / 64.2 ±0.0 |
| 9 | 50 | iPET | 60.7 ±0.1 | 88.4 ±0.1 | 69.7 ±0.0 | 67.4 ±0.3 / 68.3 ±0.3 |
| 10 | 100 | supervised | 53.0 ±3.1 | 86.0 ±0.7 | 62.9 ±0.9 | 47.9 ±2.8 / 51.2 ±2.6 |
| 11 | 100 | PET | 61.9 ±0.0 | 88.3 ±0.1 | 69.2 ±0.0 | 74.7 ±0.3 / 75.9 ±0.4 |
| 12 | 100 | iPET | 62.9 ±0.0 | 89.6 ±0.1 | 71.2 ±0.1 | 78.4 ±0.7 / 78.6 ±0.5 |
| 13 | 1000 | supervised | 63.0 ±0.5 | 86.9 ±0.4 | 70.5 ±0.3 | 73.1 ±0.2 / 74.8 ±0.3 |
| 14 | 1000 | PET | 64.8 ±0.1 | 86.9 ±0.2 | 72.7 ±0.0 | 85.3 ±0.2 / 85.5 ±0.4 |

Table 1: Average accuracy and standard deviation for RoBERTa (large) on Yelp, AG's News, Yahoo and MNLI (m:matched/mm:mismatched) for five training set sizes |T|.

4 One of the three supervised MNLI runs for |T| = 1000 underfitted the training data and performed extremely poorly. This run is excluded in the reported score (73.1/74.8).

| Examples | Method | Yelp | AG's | Yahoo | MNLI |
|---|---|---|---|---|---|
| 10 | UDA | 27.3 | 72.6 | 36.7 | 34.7 |
| 10 | MixText | 20.4 | 81.1 | 20.6 | 32.9 |
| 10 | PET | 48.8 | 84.1 | 59.0 | 39.5 |
| 10 | iPET | 52.9 | 87.5 | 67.0 | 42.1 |
| 50 | UDA | 46.6 | 83.0 | 60.2 | 40.8 |
| 50 | MixText | 31.3 | 84.8 | 61.5 | 34.8 |
| 50 | PET | 55.3 | 86.4 | 63.3 | 55.1 |
| 50 | iPET | 56.7 | 87.3 | 66.4 | 56.3 |

Table 2: Comparison of PET with two state-of-the-art semi-supervised methods using RoBERTa (base)

| Examples | Method | De | Fr | It |
|---|---|---|---|---|
| 1000 | supervised | 43.3 | 49.5 | 41.0 |
| 1000 | PET | 66.4 | 68.7 | 64.7 |
| 2000 | supervised | 57.4 | 62.1 | 52.8 |
| 2000 | PET | 69.5 | 71.7 | 67.3 |
| 4000 | supervised | 63.2 | 66.7 | 58.7 |
| 4000 | PET | 71.7 | 74.0 | 69.5 |
| TDe, TFr | supervised | 76.6 | 76.0 | 71.0 |
| TDe, TFr | PET | 77.9 | 79.0 | 73.6 |
| TDe + TFr | sup. (*) | 76.8 | 76.7 | 70.2 |
| TDe + TFr | supervised | 77.6 | 79.1 | 75.9 |
| TDe + TFr | PET | 78.8 | 80.6 | 77.2 |

Table 3: Results on x-stance intra-target for XLM-R (base) trained on subsets of TDe and TFr and for joint training on all data (TDe + TFr). (*): Best results for mBERT reported in Vamvas and Sennrich (2020).

Comparison with SotA We compare PET to UDA (Xie et al., 2020) and MixText (Chen et al., 2020), two state-of-the-art methods for semi-supervised learning in NLP that rely on data augmentation. Whereas PET requires that a task can be expressed using patterns and that such patterns be found, UDA and MixText both use backtranslation (Sennrich et al., 2016) and thus require thousands of labeled examples for training a machine translation model. We use RoBERTa (base) for our comparison as MixText is specifically tailored towards a 12-layer Transformer (Vaswani et al., 2017).
Both Xie et al. (2020) and Chen et al. (2020) use large development sets to optimize the number of training steps. We instead try several values for both approaches directly on the test set and only report the best results obtained. Despite this, Table 2 shows that PET and iPET substantially outperform both methods across all tasks, clearly demonstrating the benefit of incorporating human knowledge in the form of PVPs.

X-Stance We evaluate PET on x-stance to investigate (i) whether it works for languages other than English and (ii) whether it also brings improvements when training sets have medium size. In contrast to Vamvas and Sennrich (2020), we do not perform any hyperparameter optimization on dev and use a shorter maximum sequence length (256 vs 512) to speed up training and evaluation.

To investigate whether PET brings benefits even when numerous examples are available, we consider training set sizes of 1000, 2000, and 4000; for each of these configurations, we separately finetune French and German models to allow for a more straightforward downsampling of the training data. Additionally, we train models on the entire French (|TFr| = 11 790) and German (|TDe| = 33 850) training sets. In this case we do not have any additional unlabeled data, so we simply set D = T. For the French models, we use vEn and vFr as verbalizers and for German vEn and vDe (Section 4.1). Finally, we also investigate the performance of a model trained jointly on French and German data (|TFr + TDe| = 45 640) using vEn, vFr and vDe.

Results are shown in Table 3; following Vamvas and Sennrich (2020), we report the macro-average of the F1 scores for labels 0 and 1, averaged over three runs. For Italian (column "It"), we report the average zero-shot cross-lingual performance of German and French models as there are no Italian training examples. Our results show that PET brings huge improvements across all languages even when training on much more than a thousand examples; it also considerably improves zero-shot cross-lingual performance.

# 5 Analysis

Combining PVPs We first investigate whether PET is able to cope with situations where some PVPs perform much worse than others. For |T| = 10, Table 4 compares the performance of PET to that of the best and worst performing patterns after finetuning; we also include results obtained using the ensemble of PET models corresponding to individual PVPs without knowledge distillation. Even after finetuning, the gap between the best and worst pattern is large, especially for Yelp. However, PET is not only able to compensate for this, but even improves accuracies over using only the best-performing pattern across all tasks. Distillation brings consistent improvements over the ensemble; additionally, it significantly reduces the size of the final classifier. We find no clear difference between the uniform and weighted variants of PET.

| Method | Yelp | AG's | Yahoo | MNLI |
|---|---|---|---|---|
| min | 39.6 | 82.1 | 50.2 | 36.4 |
| max | 52.4 | 85.0 | 63.6 | 40.2 |
| PET (no distillation) | 51.7 | 87.0 | 62.8 | 40.6 |
| PET uniform | 52.7 | 87.3 | 63.8 | 42.0 |
| PET weighted | 52.9 | 87.5 | 63.8 | 41.8 |

Table 4: Minimum (min) and maximum (max) accuracy of models based on individual PVPs as well as PET with and without knowledge distillation (|T| = 10).

Auxiliary Language Modeling We analyze the influence of the auxiliary language modeling task on PET's performance. Figure 3 shows performance improvements from adding the language modeling task for four training set sizes. We see that the auxiliary task is extremely valuable when training on just 10 examples. With more data, it becomes less important, sometimes even leading to worse performance. Only for MNLI, we find language modeling to consistently help.

Figure 3: Accuracy improvements for PET due to adding LMLM during training (accuracy improvement vs. training set size, for Yelp, AG's, Yahoo and MNLI).

Iterative PET To check whether iPET is able to improve models over multiple generations, Figure 4 shows the average performance of all generations of models in a zero-shot setting. Each additional iteration does indeed further improve the ensemble's performance. We did not investigate whether continuing this process for even more iterations gives further improvements.

Another natural question is whether similar results can be obtained with fewer iterations by increasing the training set size more aggressively. To answer this question, we skip generations 2 and 3 for AG's News and Yahoo and for both tasks directly let ensemble M1 annotate 10 · 5^4 examples for M4. As indicated in Figure 4 through dashed lines, this clearly leads to worse performance, highlighting the importance of only gradually increasing the training set size. We surmise that this is the case because annotating too many examples too early leads to a large percentage of mislabeled training examples.

Figure 4: Average accuracy for each generation of models with iPET in a zero-shot setting (accuracy vs. model generation M0–M4, for Yelp, AG's, Yahoo and MNLI). Accuracy on AG's News and Yahoo when skipping generation 2 and 3 is indicated through dashed lines.

In-Domain Pretraining Unlike our supervised baseline, PET makes use of the additional unlabeled dataset D. Thus, at least some of PET's performance gains over the supervised baseline may arise from this additional in-domain data.

To test this hypothesis, we simply further pretrain RoBERTa on in-domain data, a common technique for improving text classification accuracy (e.g., Howard and Ruder, 2018; Sun et al., 2019). As language model pretraining is expensive in terms of GPU usage, we do so only for the Yelp dataset. Figure 5 shows results of supervised learning and PET both with and without this in-domain pretraining. While pretraining does indeed improve accuracy for supervised training, the supervised model still clearly performs worse than PET, showing that the success of our method is not simply due to the usage of additional unlabeled data. Interestingly, in-domain pretraining is also helpful for PET, indicating that PET leverages unlabeled data in a way that is clearly different from standard masked language model pretraining.

Figure 5: Accuracy of supervised learning (sup.) and PET both with and without pretraining (PT) on Yelp (accuracy vs. training set size).

# 6 Conclusion

We have shown that providing task descriptions to pretrained language models can be combined with standard supervised training. Our proposed method, PET, consists of defining pairs of cloze question patterns and verbalizers that help leverage the knowledge contained within pretrained language models for downstream tasks. We finetune models for all pattern-verbalizer pairs and use them to create large annotated datasets on which standard classifiers can be trained.
When the initial amount of training data is limited, PET gives large improvements over standard supervised training and strong semi-supervised approaches. # Acknowledgments This work was funded by the European Research Council (ERC #740516). We would like to thank the anonymous reviewers for their helpful com- ments. # References Eugene Agichtein and Luis Gravano. 2000. Snowball: Extracting relations from large plain-text collections. In Proceedings of the Fifth ACM Conference on Dig- ital Libraries, DL ’00, page 85–94, New York, NY, USA. Association for Computing Machinery. David S. Batista, Bruno Martins, and M´ario J. Silva. 2015. Semi-supervised bootstrapping of relation- ship extractors with distributional semantics. In Pro- ceedings of the 2015 Conference on Empirical Meth- ods in Natural Language Processing, pages 499– 504, Lisbon, Portugal. Association for Computa- tional Linguistics. Zied Bouraoui, Jose Camacho-Collados, and Steven Inducing relational knowledge In Proceedings of the Thirty-Fourth Schockaert. 2020. from BERT. AAAI Conference on Artificial Intelligence. Sergey Brin. 1999. Extracting patterns and relations from the world wide web. In The World Wide Web and Databases, pages 172–183, Berlin, Heidelberg. Springer Berlin Heidelberg. Jiaao Chen, Zichao Yang, and Diyi Yang. 2020. Mix- Text: Linguistically-informed interpolation of hid- den space for semi-supervised text classification. In Proceedings of the 58th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 2147– 2157, Online. Association for Computational Lin- guistics. and Alexandros Potamianos. 2019. An embarrassingly simple approach for transfer learning from pre- In Proceedings of the trained language models. 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Hu- man Language Technologies, Volume 1 (Long and Short Papers), pages 2089–2095, Minneapolis, Min- nesota. Association for Computational Linguistics. Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzm´an, Edouard Grave, Myle Ott, Luke Zettle- moyer, and Veselin Stoyanov. 2020. Unsupervised cross-lingual representation learning at scale. In Proceedings of the 58th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 8440– 8451, Online. Association for Computational Lin- guistics. Joe Davison, Joshua Feldman, and Alexander Rush. 2019. Commonsense knowledge mining from pre- In Proceedings of the 2019 Con- trained models. ference on Empirical Methods in Natural Language Processing and the 9th International Joint Confer- ence on Natural Language Processing (EMNLP- IJCNLP), pages 1173–1178, Hong Kong, China. As- sociation for Computational Linguistics. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- In Proceedings of the 2019 Conference standing. of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Associ- ation for Computational Linguistics. Zi-Yi Dou, Keyi Yu, and Antonios Anastasopoulos. 2019. Investigating meta-learning algorithms for low-resource natural language understanding tasks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Lan- guage Processing (EMNLP-IJCNLP), pages 1192– 1197, Hong Kong, China. 
Association for Computa- tional Linguistics. Allyson Ettinger. 2020. What BERT is not: Lessons from a new suite of psycholinguistic diagnostics for language models. Transactions of the Association for Computational Linguistics, 8:34–48. Jiatao Gu, Yong Wang, Yun Chen, Victor O. K. Li, and Kyunghyun Cho. 2018. Meta-learning for low- In Proceed- resource neural machine translation. ings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3622–3631, Brussels, Belgium. Association for Computational Linguistics. Chuan Guo, Geoff Pleiss, Yu Sun, and Kilian Q. Weinberger. 2017. On calibration of modern neu- In Proceedings of the 34th Interna- ral networks. tional Conference on Machine Learning - Volume 70, ICML’17, page 1321–1330. JMLR.org. Junxian He, Jiatao Gu, Jiajun Shen, and Marc’Aurelio Ranzato. 2020. Revisiting self-training for neural In International Conference sequence generation. on Learning Representations. Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. 2015. Distilling the knowledge in a neural network. Com- puting Research Repository, arXiv:1503.02531. Vu Cong Duy Hoang, Philipp Koehn, Gholamreza Iterative back- Haffari, and Trevor Cohn. 2018. In Pro- translation for neural machine translation. ceedings of the 2nd Workshop on Neural Machine Translation and Generation, pages 18–24, Mel- bourne, Australia. Association for Computational Linguistics. Jeremy Howard and Sebastian Ruder. 2018. Universal language model fine-tuning for text classification. In Proceedings of the 56th Annual Meeting of the As- sociation for Computational Linguistics (Volume 1: Long Papers), pages 328–339, Melbourne, Australia. Association for Computational Linguistics. Self- training PCFG grammars with latent annotations across languages. In Proceedings of the 2009 Con- ference on Empirical Methods in Natural Language Processing, pages 832–841, Singapore. Association for Computational Linguistics. Zhengbao Jiang, Frank F. Xu, Jun Araki, and Graham Neubig. 2020. How can we know what language models know? Transactions of the Association for Computational Linguistics, 8:423–438. Nora Kassner and Hinrich Sch¨utze. 2020. Negated and misprimed probes for pretrained language models: Birds can talk, but cannot fly. In Proceedings of the 58th Annual Meeting of the Association for Compu- tational Linguistics, pages 7811–7818, Online. As- sociation for Computational Linguistics. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized BERT pre- training approach. Computing Research Repository, arXiv:1907.11692. Bryan McCann, Nitish Shirish Keskar, Caiming Xiong, and Richard Socher. 2018. The natural language de- cathlon: Multitask learning as question answering. Computing Research Repository, arXiv:1806.08730. David McClosky, Eugene Charniak, and Mark Johnson. In Pro- 2006. Effective self-training for parsing. ceedings of the Human Language Technology Con- ference of the NAACL, Main Conference, pages 152– 159, New York City, USA. Association for Compu- tational Linguistics. Juri Opitz. 2019. Argumentative relation classification as plausibility ranking. In Preliminary proceedings of the 15th Conference on Natural Language Pro- cessing (KONVENS 2019): Long Papers, pages 193– 202, Erlangen, Germany. German Society for Com- putational Linguistics & Language Technology. 
Adam Paszke, Sam Gross, Soumith Chintala, Gregory Chanan, Edward Yang, Zachary DeVito, Zeming Lin, Alban Desmaison, Luca Antiga, and Adam Lerer. 2017. Automatic differentiation in PyTorch. In NIPS Autodiff Workshop. Fabio Petroni, Tim Rockt¨aschel, Sebastian Riedel, Patrick Lewis, Anton Bakhtin, Yuxiang Wu, and Alexander Miller. 2019. Language models as knowl- edge bases? Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natu- ral Language Processing (EMNLP-IJCNLP). Zero-shot text classification with generative language models. Computing Research Repository, arXiv:1912.10165. Kun Qian and Zhou Yu. 2019. Domain adaptive dia- log generation via meta learning. In Proceedings of the 57th Annual Meeting of the Association for Com- putational Linguistics, pages 2639–2649, Florence, Italy. Association for Computational Linguistics. Alec Radford, Karthik Narasimhan, Tim Salimans, and Improving language under- Ilya Sutskever. 2018. standing by generative pre-training. Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. Techni- cal report. Colin Raffel, Noam Shazeer, Adam Roberts, Kather- ine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to- text transformer. Journal of Machine Learning Re- search, 21(140):1–67. Roi Reichart and Ari Rappoport. 2007. Self-training for enhancement and domain adaptation of statisti- In Proceed- cal parsers trained on small datasets. ings of the 45th Annual Meeting of the Association of Computational Linguistics, pages 616–623, Prague, Czech Republic. Association for Computational Lin- guistics. Bernardino Romera-Paredes and Philip Torr. 2015. An embarrassingly simple approach to zero-shot learn- ing. In International Conference on Machine Learn- ing, pages 2152–2161. Keisuke Sakaguchi, Ronan Le Bras, Chandra Bhagavat- ula, and Yejin Choi. 2020. WinoGrande: An adver- sarial winograd schema challenge at scale. In Pro- ceedings of the Thirty-Fourth AAAI Conference on Artificial Intelligence. Timo Schick and Hinrich Sch¨utze. 2020. Rare words: A major problem for contextualized embeddings and how to fix it by attentive mimicking. In Proceedings of the Thirty-Fourth AAAI Conference on Artificial Intelligence. Rico Sennrich, Barry Haddow, and Alexandra Birch. Improving neural machine translation mod- 2016. In Proceedings of the els with monolingual data. 54th Annual Meeting of the Association for Compu- tational Linguistics (Volume 1: Long Papers), pages 86–96, Berlin, Germany. Association for Computa- tional Linguistics. Shashank Srivastava, Igor Labutov, and Tom Mitchell. 2018. Zero-shot learning of classifiers from natu- In Proceedings of the ral language quantification. 56th Annual Meeting of the Association for Com- putational Linguistics (Volume 1: Long Papers), pages 306–316, Melbourne, Australia. Association for Computational Linguistics. Chi Sun, Xipeng Qiu, Yige Xu, and Xuanjing Huang. 2019. How to fine-tune BERT for text classification? In Chinese Computational Linguistics, pages 194– 206, Cham. Springer International Publishing. Alon Talmor, Yanai Elazar, Yoav Goldberg, and Jonathan Berant. 2019. oLMpics – on what lan- guage model pre-training captures. Computing Re- search Repository, arXiv:1912.13283. Trieu H. Trinh and Quoc V. Le. 2018. A simple method for commonsense reasoning. 
Computing Research Repository, arXiv:1806.02847. Jannis Vamvas and Rico Sennrich. 2020. X-stance: A multilingual multi-target dataset for stance detection. Computing Research Repository, arXiv:2003.08385. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Pro- cessing Systems 30, pages 5998–6008. Curran Asso- ciates, Inc. Sappadla Prateek Veeranna, Jinseok Nam, Eneldo Loza Mencıa, and Johannes F¨urnkranz. 2016. Using se- mantic similarity for multi-label zero-shot classifica- tion of text documents. In Proceeding of European Symposium on Artificial Neural Networks, Compu- tational Intelligence and Machine Learning. Bruges, Belgium: Elsevier, pages 423–428. Cunxiang Wang, Shuailong Liang, Yue Zhang, Xiao- nan Li, and Tian Gao. 2019. Does it make sense? And why? A pilot study for sense making and ex- planation. In Proceedings of the 57th Annual Meet- ing of the Association for Computational Linguistics, pages 4020–4026, Florence, Italy. Association for Computational Linguistics. Adina Williams, Nikita Nangia, and Samuel Bowman. 2018. A broad-coverage challenge corpus for sen- tence understanding through inference. In Proceed- ings of the 2018 Conference of the North American Chapter of the Association for Computational Lin- guistics: Human Language Technologies, Volume 1 (Long Papers), pages 1112–1122. Association for Computational Linguistics. Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pier- ric Cistac, Tim Rault, Remi Louf, Morgan Funtow- icz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Trans- formers: State-of-the-art natural language process- ing. In Proceedings of the 2020 Conference on Em- pirical Methods in Natural Language Processing: System Demonstrations, pages 38–45, Online. Asso- ciation for Computational Linguistics. Qizhe Xie, Zihang Dai, Eduard Hovy, Minh-Thang Lu- ong, and Quoc V. Le. 2020. Unsupervised data aug- mentation for consistency training. In Advances in Neural Information Processing Systems, volume 33. Curran Associates, Inc. Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Car- bonell, Russ R Salakhutdinov, and Quoc V Le. 2019. Xlnet: Generalized autoregressive pretraining for In Advances in Neural language understanding. Information Processing Systems, volume 32, pages 5753–5763. Curran Associates, Inc. David Yarowsky. 1995. Unsupervised word sense dis- In 33rd ambiguation rivaling supervised methods. Annual Meeting of the Association for Computa- tional Linguistics, pages 189–196, Cambridge, Mas- sachusetts, USA. Association for Computational Linguistics. Zhiquan Ye, Yuxia Geng, Jiaoyan Chen, Jingmin Chen, Xiaoxiao Xu, SuHang Zheng, Feng Wang, Jun Zhang, and Huajun Chen. 2020. Zero-shot text clas- In Proceed- sification via reinforced self-training. ings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 3014–3024, Online. Association for Computational Linguistics. Wenpeng Yin, Jamaal Hay, and Dan Roth. 2019. classification: text Benchmarking evaluation and entailment approach. Datasets, In Proceedings of the 2019 Conference on Empiri- cal Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3914–3923, Hong Kong, China. 
Association for Computational Linguistics. Mo Yu, Xiaoxiao Guo, Jinfeng Yi, Shiyu Chang, Saloni Potdar, Yu Cheng, Gerald Tesauro, Haoyu Wang, and Bowen Zhou. 2018. Diverse few-shot text clas- sification with multiple metrics. In Proceedings of the 2018 Conference of the North American Chap- ter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Pa- pers), pages 1206–1215, New Orleans, Louisiana. Association for Computational Linguistics. Xiang Zhang, Junbo Zhao, and Yann LeCun. 2015. Character-level convolutional networks for text clas- sification. In C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, and R. Garnett, editors, Advances in Neural Information Processing Systems 28, pages 649–657. Curran Associates, Inc. # A Implementation Our implementation of PET and iPET is based on the Transformers library (Wolf et al., 2020) and PyTorch (Paszke et al., 2017). # B Training Details Except for the in-domain pretraining experiment described in Section 5, all of our experiments were conducted using a single GPU with 11GB RAM (NVIDIA GeForce GTX 1080 Ti). # B.1 Hyperparameter Choices Relevant training hyperparameters for both individ- ual PET models and the final classifier C as well as our supervised baseline are listed in Table 5. All hyperparameters were selected based on the following considerations and experiments: Batch size / maximum length Both batch size and maximum sequence length (or block size) are chosen so that one batch fits into 11GB of GPU memory. As Devlin et al. (2019) and Liu et al. (2019) use larger batch sizes of 16–32, we accu- mulate gradients for 4 steps to obtain an effective batch size of 16. Learning rate We found a learning rate of 5e−5 (as used by Devlin et al. (2019)) to often result in unstable training for regular supervised learning with no accuracy improvements on the training set. We therefore use a lower learning rate of 1e−5, similar to Liu et al. (2019). Experiments with vari- ous learning rates can be found in Appendix D. Training steps As the number of training epochs recommended by Liu et al. (2019) in a data-rich scenario is in the range 2–10, we perform super- vised training for 250 training steps, corresponding to 4 epochs when training on 1000 examples. For individual PET models, we subdivide each batch into one labeled example from T to compute LCE and three unlabeled examples from D to compute LMLM. Accordingly, we multiply the number of total training steps by 4 (i.e., 1000), so that the number of times each labeled example is seen re- mains constant (16 · 250 = 4 · 1000). For the final PET classifier, we train for 5000 steps due to the in- creased training set size (depending on the task, the unlabeled set D contains at least 20 000 examples). Deviating from the above, we always perform train- ing for 3 epochs on x-stance to match the setup of Vamvas and Sennrich (2020) more closely. The effect of varying the number of training steps is further investigated in Appendix D. Temperature We choose a temperature of 2 when training the final classifier following Hinton et al. (2015). Auxiliary language modeling To find a suitable value of α for combining language modeling loss and cross-entropy loss, we first observed that in the early stages of training, the former is a few orders of magnitude higher than the latter for all tasks considered. 
We thus selected a range {1e−3, 1e−4, 1e−5} of reasonable choices for α and performed preliminary experiments on Yelp with 100 training examples to find the best value among these candidates. To this end, we split the training examples into a training set and a dev set using both a 90/10 split and a 50/50 split and took the value of α that maximizes average dev set ac- curacy. We adopt this value for all other tasks and training set sizes without further optimization. Models per ensemble As we always train three models per pattern, for both iPET and training the final classifier C, the ensemble M (or M0) for n PVPs contains 3n models. This ensures consis- tency as randomly choosing any of the three models for each PVP would result in high variance. In pre- liminary experiments, we found this to have only little impact on the final model’s performance. iPET dataset size For iPET, we quintuple the number of training examples after each iteration (d = 5) so that only a small number of generations is required to reach a sufficient amount of labeled data. We did not choose a higher value because we presume that this may cause training sets for early generations to contain a prohibitively large amount of mislabeled data. iPET dataset creation We create training sets for the next generation in iPET using 25% of the models in the current generation (λ = 0.25) be- cause we want the training sets for all models to be diverse while at the same time, a single model should not have too much influence. Others For all other hyperparameters listed in Table 5, we took the default settings of the Trans- formers library (Wolf et al., 2020). # B.2 Number of parameters As PET does not require any additional learnable parameters, the number of parameters for both PET and iPET is identical to the number of parame- ters in the underlying language model: 355M for RoBERTa (large) and 270M for XLM-R (base). # B.3 Average runtime Training a single PET classifier for 250 steps on one GPU took approximately 30 minutes; training for 1000 steps with auxiliary language modeling took 60 minutes. Depending on the task, labeling examples from D took 15–30 minutes per model. Training the final classifier C for 5000 steps on the soft-labeled dataset TC took 2 hours on average. # B.4 Comparison with SotA For comparing PET to UDA (Xie et al., 2020) and MixText (Chen et al., 2020), we reduce the number of unlabeled examples by half to speed up the re- quired backtranslation step. We use the backtransla- tion script provided by Chen et al. (2020) with their recommended hyperparameter values and use both Russian and German as intermediate languages. For MixText, we use the original implemen- tation5 and the default set of hyperparameters. Specifically, each batch consists of 4 labeled and 8 unlabeled examples, we use layers 7, 9 and 12 for mixing, we set T = 5, α = 16, and use a learning rate of 5 · 10−6 for RoBERTa and 5 · 10−4 for the final classification layer. We optimize the number of training steps for each task and dataset size in the range {1000, 2000, 3000, 4000, 5000}. For UDA, we use a PyTorch-based reimplemen- tation6. We use the same batch size as for MixText and the hyperparameter values recommended by Xie et al. (2020); we use an exponential schedule for training signal annealing and a learning rate of 2 · 10−5. We optimize the number of training steps for each task and dataset size in the range {500, 1000, 1500, . . . , 10000}. 
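As a small illustration of the schedule arithmetic described in B.1, the snippet below reproduces the step counts and the number of iPET generations; the function name and return format are ours and not part of the original implementation.

```python
# Sketch of the training-schedule arithmetic from Appendix B.1 (illustrative only).
import math

def pet_schedule(num_labeled, d=5):
    effective_batch = 4 * 4              # batch of 4 with 4 gradient-accumulation steps -> 16
    supervised_steps = 250               # roughly 4 epochs on 1000 examples
    pet_steps = 4 * supervised_steps     # each PET batch: 1 labeled (L_CE) + 3 unlabeled (L_MLM)
    generations = math.ceil(math.log(1000 / num_labeled, d))   # k = ceil(log_d(1000 / |T|))
    return effective_batch, supervised_steps, pet_steps, generations

print(pet_schedule(10))    # -> (16, 250, 1000, 3)
print(pet_schedule(100))   # -> (16, 250, 1000, 2)
```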
# In-Domain Pretraining For in-domain pretraining experiments described in Section 5, we use the language model finetun- ing script of the Transformers library (Wolf et al., 2020); all hyperparameters are listed in the last col- umn of Table 5. Pretraining was performed on a total of 3 NVIDIA GeForce GTX 1080 Ti GPUs. # 5https://github.com/GT-SALT/MixText 6https://github.com/SanghunYun/UDA_ pytorch # C Dataset Details For each task and number of examples t, we create the training set T by collecting the first t/|L| exam- ples per label from the original training set, where |L| is the number of labels for the task. Similarly, we construct the set D of unlabeled examples by selecting 10 000 examples per label and removing all labels. For evaluation, we use the official test set for all tasks except MNLI, for which we report results on the dev set; this is due to the limit of 2 submissions per 14 hours for the official MNLI test set. An overview of the number of test exam- ples and links to downloadable versions of all used datasets can be found in Table 6. Preprocessing In some of the datasets used, new- lines are indicated through the character sequence “ ”. As the vocabularies of RoBERTa and XLM-R do not feature a newline, we replace this sequence with a single space. We do not perform any other preprocessing, except shortening all examples to the maximum sequence length of 256 tokens. This is done using the longest first strategy implemented in the Transformers library. For PET, all input se- quences are truncated before applying patterns. Evaluation metrics For Yelp, AG’s News, Ya- hoo and MNLI, we use accuracy. For x-stance, we report macro-average of F1 scores using the evaluation script of Vamvas and Sennrich (2020). # D Hyperparameter Importance To analyze the importance of hyperparameter choices for PET’s performance gains over super- vised learning, we look at the influence of both the learning rate (LR) and the number of training steps on their test set accuracies. We try values of {1e−5, 2e−5, 5e−5} for the learning rate and {50, 100, 250, 500, 1000} for the number of training steps. As this results in 30 dif- ferent configurations for just one task and training set size, we only perform this analysis on Yelp with 100 examples, for which results can be seen in Fig- ure 6. For supervised learning, the configuration used throughout the paper (LR = 1e−5, 250 steps) turns out to perform best whereas for PET, training for fewer steps consistently performs even better. Importantly, PET clearly outperforms regular su- pervised training regardless of the chosen learning rate and number of training steps. PET −LM PET (En/Xs) C (En/Xs) sup. (En/Xs) In-Dom. PT 1e-5 1.0 256 250 – – 4 – – 0.01 1e-8 1e-4 – 4 1e-5 1.0 256 1000 / – 0.15 – / 3 1 3 – 0.01 1e-8 – – 4 1e-5 1.0 256 5000 / – – – / 3 4 – 2.0 0.01 1e-8 – – 4 1e-5 1.0 256 250 / – – – / 3 4 – – 0.01 1e-8 – 256 2 5e-5 1.0 – 50000 0.15 – 2 – – 0.0 Table 5: Hyperparameters for training individual PET models without auxiliary language modeling (PET−LM) and with language modeling (PET), the final PET classifier (C), regular supervised training (sup.) and in-domain pretraining (In-Dom. PT). Whenever different values are used for the English datasets (En) and x-stance (Xs), both values are given separated by a slash. (*): PET-specific hyperparameters Dataset Link Test Examples AG’s News MNLI (m / mm) X-Stance (De / Fr / It) Yahoo! 
| Dataset | Link | Test Examples |
|---|---|---|
| AG's News | http://goo.gl/JyCnZq | 7600 |
| MNLI (m / mm) | https://cims.nyu.edu/~sbowman/multinli/ | 10000 / 10000 |
| X-Stance (De / Fr / It) | https://github.com/ZurichNLP/xstance | 3479 / 1284 / 1173 |
| Yahoo! Answers | http://goo.gl/JyCnZq | 60000 |
| Yelp Review Full | http://goo.gl/JyCnZq | 50000 |

Table 6: Download links and number of test examples for all datasets

Figure 6: Performance of supervised learning and PET (weighted, without auxiliary language modeling) for various learning rates and training steps on Yelp with 100 training examples (one panel per learning rate: LR = 1e−5, 2e−5, 5e−5; accuracy vs. training steps in {50, 100, 250, 500, 1000}).

# E Automatic Verbalizer Search

Given a set of patterns P1, . . . , Pn, manually finding a verbalization v(l) for each l ∈ L that represents the meaning of l well and corresponds to a single token in V can be difficult. We therefore devise automatic verbalizer search (AVS), a procedure that automatically finds suitable verbalizers given a training set T and a language model M.

Assuming we already have a PVP p = (P, v), we can easily check whether some token t ∈ V is a good verbalization of l ∈ L. To this end, we define p[l ← t] = (P, v′), where v′ is identical to v, except that v′(l) = t. Intuitively, if t represents l well, then q_{p[l←t]}(l | x) (i.e., the probability M assigns to t given P(x)) should be high only for those examples (x, y) ∈ T where y = l. We thus define the score of t for l given p as

s_l(t | p) = (1/|T_l|) Σ_{(x,y)∈T_l} q_{p[l←t]}(l | x) − (1/|T \ T_l|) Σ_{(x,y)∈T\T_l} q_{p[l←t]}(l | x)

where T_l = {(x, y) ∈ T : y = l} is the set of all training examples with label l. While this allows us to easily compute the best verbalization for l as

t̂ = arg max_{t∈V} s_l(t | p) ,

it requires us to already know verbalizations v(l′) for all other labels l′.

AVS solves this problem as follows: We first assign random verbalizations to all labels and then repeatedly recompute the best verbalization for each label. As we do not want the resulting verbalizer to depend strongly on the initial random assignment, we simply consider multiple such assignments. Specifically, we define an initial probability distribution ρ0 where for all t ∈ V, l ∈ L, ρ0(t | l) = 1/|V| is the probability of choosing t as verbalization for l. For each l ∈ L, we then sample k verbalizers v1, . . . , vk using ρ0 to compute

s^k_l(t) = (1/(nk)) Σ_{i=1}^{n} Σ_{j=1}^{k} s_l(t | (P_i, v_j))

for all t ∈ V.7 These scores enable us to define a probability distribution ρ1 that more closely reflects a word's suitability as a verbalizer for a given label:

ρ1(t | l) = (1/Z) max(s^k_l(t), ε)

where Z = Σ_{t′∈V} max(s^k_l(t′), ε) and ε > 0 ensures that ρ1 is a proper probability distribution. We repeat this process to obtain a sequence of probability distributions ρ1, . . . , ρ_{imax}. Finally, we choose the m ∈ N most likely tokens according to ρ_{imax}(t | l) as verbalizers for each l. During training and inference, we compute the unnormalized score s_p(y | x) for each label by averaging over its m verbalizers.

7 s^k_l(t) jointly considers all patterns; in preliminary experiments, we found this to result in more robust verbalizers.

We analyze the performance of AVS for all tasks with |T| = 50 training examples and set k = 250, imax = 5 and m = 10, with a small ε > 0.8 To speed up the search, we additionally restrict our search space to tokens t ∈ V that contain at least two alphabetic characters. Of these tokens, we only keep the 10000 most frequent ones in D. Results are shown in Table 7. As can be seen, carefully handcrafted verbalizers perform much better than AVS; however, PET with AVS still considerably outperforms regular supervised training while eliminating the challenge of manually finding suitable verbalizers. Table 8 shows the most probable verbalizers found using AVS for the Yelp dataset. While most verbalizers for this dataset intuitively make sense, we found AVS to struggle with finding good verbalizers for three out of ten labels in the Yahoo dataset and for all MNLI labels.

| Method | Yelp | AG's | Yahoo | MNLI |
|---|---|---|---|---|
| supervised | 44.8 | 82.1 | 52.5 | 45.6 |
| PET | 60.0 | 86.3 | 66.2 | 63.9 |
| PET + AVS | 55.2 | 85.0 | 58.2 | 52.6 |

Table 7: Results for supervised learning, PET and PET with AVS (PET + AVS) after training on 50 examples

| y | Top Verbalizers |
|---|---|
| 1 | worthless, BAD, useless, appalling |
| 2 | worse, slow, frustrating, annoying |
| 3 | edible, mixed, cute, tasty, Okay |
| 4 | marvelous, loved, love, divine, fab |
| 5 | golden, magical, marvelous, perfection |

Table 8: Most probable verbalizers according to AVS for Yelp with 50 training examples

8 We tried values of k and imax in {250, 500, 1000} and {5, 10, 20}, respectively, but found the resulting verbalizers to be almost identical.
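A compact sketch of the AVS scores defined above is given below; `q_prob(pvp, x)` is an assumed helper returning the distribution q_{pvp}(· | x) over labels, and labels are assumed to be usable as dictionary keys and indices.

```python
# Sketch of the AVS scores s_l(t | p) and s_l^k(t) from Appendix E (assumptions noted above).
def avs_score(q_prob, pattern, verbalizer, t, l, T):
    """s_l(t | p): mean of q_{p[l<-t]}(l | x) on examples with label l,
    minus its mean on all other training examples."""
    v_swapped = dict(verbalizer)
    v_swapped[l] = t                                   # p[l <- t]
    q = lambda x: q_prob((pattern, v_swapped), x)[l]
    pos = [x for x, y in T if y == l]
    neg = [x for x, y in T if y != l]
    return sum(q(x) for x in pos) / len(pos) - sum(q(x) for x in neg) / len(neg)

def avs_combined_score(q_prob, patterns, sampled_verbalizers, t, l, T):
    """s_l^k(t): average of s_l(t | (P_i, v_j)) over all n patterns and k sampled verbalizers."""
    scores = [avs_score(q_prob, P, v, t, l, T)
              for P in patterns for v in sampled_verbalizers]
    return sum(scores) / len(scores)
```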
{ "id": "1503.02531" }
2001.05674
Shifted and Squeezed 8-bit Floating Point format for Low-Precision Training of Deep Neural Networks
Training with larger number of parameters while keeping fast iterations is an increasingly adopted strategy and trend for developing better performing Deep Neural Network (DNN) models. This necessitates increased memory footprint and computational requirements for training. Here we introduce a novel methodology for training deep neural networks using 8-bit floating point (FP8) numbers. Reduced bit precision allows for a larger effective memory and increased computational speed. We name this method Shifted and Squeezed FP8 (S2FP8). We show that, unlike previous 8-bit precision training methods, the proposed method works out-of-the-box for representative models: ResNet-50, Transformer and NCF. The method can maintain model accuracy without requiring fine-tuning loss scaling parameters or keeping certain layers in single precision. We introduce two learnable statistics of the DNN tensors - shifted and squeezed factors that are used to optimally adjust the range of the tensors in 8-bits, thus minimizing the loss in information due to quantization.
http://arxiv.org/pdf/2001.05674
Léopold Cambier, Anahita Bhiwandiwalla, Ting Gong, Mehran Nekuii, Oguz H Elibol, Hanlin Tang
cs.LG
null
null
cs.LG
20200116
20200116
0 2 0 2 n a J 6 1 ] G L . s c [ 1 v 4 7 6 5 0 . 1 0 0 2 : v i X r a Published as a conference paper at ICLR 2020 SHIFTED AND SQUEEZED 8-BIT FLOATING POINT FOR- MAT FOR LOW-PRECISION TRAINING OF DEEP NEU- RAL NETWORKS L´eopold Cambier1∗†, Anahita Bhiwandiwalla2†, Ting Gong2, Mehran Nekuii2, Oguz H Elibol2 and Hanlin Tang2 1ICME, Stanford University 2Intel AI Lab [email protected] {anahita.bhiwandiwalla,ting.gong}@intel.com {mehran.nekuii,oguz.h.elibol,hanlin.tang}@intel.com # ABSTRACT Training with larger number of parameters while keeping fast iterations is an in- creasingly adopted strategy and trend for developing better performing Deep Neu- ral Network (DNN) models. This necessitates increased memory footprint and computational requirements for training. Here we introduce a novel methodology for training deep neural networks using 8-bit floating point (FP8) numbers. Re- duced bit precision allows for a larger effective memory and increased computa- tional speed. We name this method Shifted and Squeezed FP8 (S2FP8). We show that, unlike previous 8-bit precision training methods, the proposed method works out-of-the-box for representative models: ResNet-50, Transformer and NCF. The method can maintain model accuracy without requiring fine-tuning loss scaling parameters or keeping certain layers in single precision. We introduce two learn- able statistics of the DNN tensors - shifted and squeezed factors that are used to optimally adjust the range of the tensors in 8-bits, thus minimizing the loss in information due to quantization. # INTRODUCTION Deep neural networks have achieved state-of-the-art performance on a wide variety of computer vision, audio, and natural language processing (NLP) tasks. This has resulted in an explosion of in- terest around techniques to reduce the memory footprint and energy consumption of neural network training and inference (Guo, 2018). Although there are a number of methods to address some of these issues for inference, the most effective method for training is using reduced precision numeri- cal formats. While 32-bit floating point (FP32) is the most common data format for neural network training, recent hardware have leveraged techniques that allow for training with 16-bit data formats (K¨oster et al., 2017; Micikevicius et al., 2018). However, 8-bit precision training remains an open challenge (Johnson, 2018; Kalamkar et al., 2019). Current FP8 training methodologies (Wang et al., 2018; Mellempudi et al., 2019) require either specialized chunk-based accumulation, stochastic rounding techniques, loss scaling or maintaining some layers of the network in higher precision. Tuning these knobs is non-intuitive and requires significant experimentation for each individual network. Accelerating the adoption of 8-bit data in training DNNs requires a hardware-friendly and out-of- the-box implementation of FP8. Due to the reduced number of mantissa bits, 8-bit multipliers are smaller and consume less power compared to higher bit representations. 
In this work we describe a novel 8-bit floating point (FP8) format - shifted and squeezed FP8 (S2FP8) - which has the following advantages compared to previously proposed 8-bit training methodologies: ∗Work performed during an internship at Intel †Equal contribution 1 Published as a conference paper at ICLR 2020 • S2FP8 eliminates the need for loss scaling, which requires significant tuning of the loss scale values and schedule for individual topologies • Leveraged by the forward and backward passes of model training, S2FP8 is effective in adjusting the range of gradients and also of activations and weights • S2FP8 does not require keeping the first and last layer in FP32 precision, which is needed for other approaches (Mellempudi et al., 2019), however maintains the master weights and accumulations inside the matrix multipliers in FP32 We demonstrate across image classification, translation, and recommendation models that S2FP8 outperforms previous 8-bit approaches, and reaches the accuracy of FP32 models without any addi- tional hyperparameter tuning. # 2 RELATED WORK The success of 32-bit floating point data type in training deep neural networks has increased interest in the feasibility of even lower precision training. The exponential demand for compute involved in training these deep neural networks has lead to multiple advancements in lower precision data types. Several studies have developed techniques such as loss scaling, stochastic rounding, and others to train effectively in 16-bit (Micikevicius et al., 2018; Das et al., 2018; Azim), along with associated hardware support (Markidis et al., 2018). Using 16-bit fixed point, (Gupta et al., 2015) showed that stochastic rounding techniques were crucial for model convergence even for simple convolutional neural networks. As noted in (Kalamkar et al., 2019), Google’s bfloat16 format has the same number of exponent bits as FP32, leading the success of that format without commonly requiring hardware intensive requirements such as stochastic rounding or other framework level techniques such as loss scaling. Although 8-bit formats have significant performance and memory advantages, convergence is es- pecially challenging due to loss of accuracy in the backpropogated gradient values. Wang et al. (2018) demonstrated training models with matrix multiplications and convolutions in FP8 but they use FP16 with chunk-based accumulations and stochastic rounding hardware. Mellempudi et al. (2019) also demonstrated success with FP8, accumulating in FP32 and using loss scaling techniques on ResNets, Transformer and GNMT networks. However, they too require the first and last layers of the model to be in FP32, and similar to (Banner et al., 2018) leverage Stochastic Rounding tech- niques to maintain model accuracy. Unlike S2FP8 proposed in this work, both of these FP8 training techniques emphasize the need for efficient loss scaling, rounding hardware and restriction on some layers being in higher precision. Zhou et al. (2016) quantized weights, activations and gradients of AlexNet (Krizhevsky et al., 2012) to 1, 2 and 6 bits respectively. But they also need to maintain the first and last convolution layers in full precision and stochastically quantize the gradients. Wu et al. (2018) demonstrate using integers for training LeNet-5 (LeCun et al., 1998) and AlexNet with 8-bits for activations, error and gradi- ents and 2-bits for weights. 
However, these approaches also required custom tuning such as novel initialization techniques and layer wise scaling instead of Batch Normalization and Softmax. These approaches lack generalizability to other models, requiring significant fine tuning. To the best of our knowledge, there does not exist an out-of-the-box solution using FP8 in training deep learning topologies without the need for tuned loss scaling techniques, requirements of cer- tain layers being in full precision along with efficient hardware rounding schemes like Stochastic Rounding. # 3 SHIFTED AND SQUEEZED 8-BIT FLOATING POINT FORMAT 3.1 CHALLENGES OF 8-BIT FLOATING POINT FORMAT The FP8 format, with 2 bits of mantissa and 5 bits of exponent (Mellempudi et al., 2019) is both nar- row (i.e., its dynamic range is very limited, from 2−16 to 216) and has lower accuracy (the machine epsilon is only 2−3). Figure A1 illustrates the range and accuracy of FP8. In contrast, FP32 ranges from 2−149 to 2128 with a machine-epsilon of 2−24 (Table A1). 2 Published as a conference paper at ICLR 2020 Figure 1: The distribution of tensor elements over the course of training for three tensors from the Transformer tiny model on the English-Vietnamese translation dataset. Blue bar indicates the representable range of FP8. Left: Many of the tensor elements fall outside of FP8’s representable range. Center: Few tensor elements fall outside of FP8’s representable range. Right: Initially, most elements are within FP8’s representable range, but after training, many fall outside of the representable range On the other hand, tensors involved in neural networks (weights, activations and gradients) are spread across varying scales. As illustrated in Figure 1, the tensor distributions change over the course of training, spanning different orders of magnitude. As a result, 8-bit training usually requires a combination of multiple techniques to capture the full dynamic range of values for model training. Some of these techniques include: • Loss scaling (Micikevicius et al., 2018) scales the loss L(w) by a constant λ before back- propagation . This makes the gradients artificially larger, allowing them to fit within the FP8 range. Gradients are then scaled down before being accumulated into the trainable weights as shown in Equation 6 • Stochastic rounding (Maxfield, 2006) alleviate quantization errors by capturing some of the information discarded when truncating to lower precision at the output of a GEMM operation Between these two techniques, loss scaling is more critical; once the magnitude of the gradients can no longer be represented in the FP8 range, training convergence will not be possible. However, loss scaling only modifies the gradients. Weights and activations can also (albeit admittedly less frequently) exceed the FP8’s representable range of [2−16, 216]. In those scenarios, convergence can also be affected. The issue with loss scaling is that it requires user interaction. Models have to be modified, and, more importantly, tedious empirical tuning is required to find the correct loss scaling schedule. While some networks can be trained with constant loss scaling, some, notably Transformers (Mellempudi et al., 2019), require dynamic “back-off” and improved loss scaling. This requires significant trial and error to tune the scaling schedule, slowing down wide adoption of low-precision numerical formats. 
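The range and precision figures quoted above (and tabulated in Table A1 of the appendix) follow directly from the sign/exponent/mantissa split of each format. The snippet below is a small illustrative sketch of ours (not code from the paper) that reproduces those numbers from the bit widths, using the paper's convention that the machine epsilon of a format with m mantissa bits is 2^−(m+1):

```python
# Derive range/precision of IEEE-style floating point formats from their
# exponent/mantissa bit widths; the values match Table A1 in the appendix.
def fp_params(exp_bits, man_bits):
    bias = 2 ** (exp_bits - 1) - 1
    return {
        "min_subnormal": 2.0 ** (1 - bias - man_bits),
        "min_normal": 2.0 ** (1 - bias),
        "max_normal": (2 - 2.0 ** -man_bits) * 2.0 ** bias,   # approximately 2**(bias + 1)
        "machine_eps": 2.0 ** -(man_bits + 1),
    }

for name, e, m in [("FP32", 8, 23), ("FP16", 5, 10), ("BF16", 8, 7), ("FP8", 5, 2)]:
    print(name, fp_params(e, m))

# FP8 (1/5/2): subnormals start at 2**-16, normals at 2**-14, max normal ~2**16,
# machine epsilon 2**-3 -- the narrow, low-accuracy format discussed in Section 3.1.
```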
3.2 SHIFTED AND SQUEEZED FP8

To alleviate these issues and make neural network training possible with no model modifications or hyperparameter tuning, we propose a new 8-bit floating point format. Consider a tensor X of size N, i.e., X = {Xi}, i = 1..N. Instead of directly encoding each Xi in FP8, we store X using N FP8 numbers {Yi}, i = 1..N, accompanied by two (squeeze and shift) factors α and β (the “statistics” — see Figure 2).

Figure 2: The S2FP8 format. A tensor X of N numbers is represented by α and β (two FP32 numbers, the “statistics”) and N FP8 numbers Y, related to X through Equation 1.

Figure 3: Impact of the Shifted and Squeezed transformation log2|Y| = α log2|X| + β. (a) Y, the usual FP8 distribution; (b) X, for α = 1 and β < 0; (c) X, for α < 1 and β = 0. α lets the distribution be as wide as necessary (though, with an associated loss of precision), and β lets us shift the distribution around any value.

For Xi ≠ 0, X and Y are then related through

log2(|Yi|) = α log2(|Xi|) + β ⇔ Yi = ±2^β |Xi|^α (1)

where the ± is chosen so that Xi and Yi have the same sign. This representation allows α and β to be chosen so that, together with the tensor Y, they capture most of the dynamic range of the tensor X. As we will see in Section 4, this is all that is necessary to train networks using 8-bit floating point numbers.

In order for Y to be a tensor suitable to be represented by FP8 numbers, we enforce that it has zero mean and a maximum value within the dynamic range of FP8 (e.g. 15):

(1/N′) Σ′_{i=1..N} log2(|Yi|) = 0  and  max′_{1≤i≤N} log2(|Yi|) = 15 (= log2(2^15)) (2)

where the ′ notation indicates that the sum and the max, respectively, ignore any i such that Yi = 0. Those equations ensure that the log2(|Yi|) values are distributed with zero mean and each is at most 15, which is ideal for an FP8 format. By inserting Equation 2 into Equation 1, and by denoting

µ = (1/N′) Σ′_{i=1..N} log2(|Xi|)  and  m = max′_{1≤i≤N} log2(|Xi|) (3)

we find

α = 15 / (m − µ),  β = −αµ. (4)

This new tensor format results in the training procedure (forward pass, backward pass, weight update) described in Figure 4. Forward and backward MatMuls use this new S2FP8 format. Master weights are kept in FP32 and updated using S2FP8 gradients. Accumulations inside the GEMM kernel are kept in full FP32 precision.

Figure 3 illustrates the impact of α and β. By having those two extra degrees of freedom for each tensor, the majority of the dynamic range of each tensor can now be captured, whether very small (β > 0), very large (β < 0), very narrow (α > 1) or very wide (α < 1).

Figure 4: Low precision training with S2FP8. T represents the truncation described in Equation 5, from FP32 to S2FP8. When using S2FP8 for training, the forward, backward and weight-gradient GEMMs only use S2FP8. The master weights are kept in FP32 and updated during the update step.
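To make Equations (1)–(4) concrete, here is a minimal NumPy sketch (our own illustration, not the paper's implementation; the function names are ours) that computes µ and m over the non-zero entries of a tensor, forms the squeeze and shift factors α and β, and applies the transformation of Equation (1):

```python
import numpy as np

def s2fp8_statistics(x, target_max=15.0):
    """Eq. (3)-(4): squeeze (alpha) and shift (beta) from the non-zero entries of x.

    Assumes the non-zero magnitudes are not all identical (so m > mu)."""
    log_x = np.log2(np.abs(x[x != 0]))
    mu, m = log_x.mean(), log_x.max()        # Eq. (3); the primes mean "ignore zeros"
    alpha = target_max / (m - mu)            # Eq. (4)
    beta = -alpha * mu                       # Eq. (4)
    return alpha, beta

def to_s2fp8_domain(x, alpha, beta):
    """Eq. (1): Y_i = sign(X_i) * 2**beta * |X_i|**alpha (zeros stay zero)."""
    y = np.zeros_like(x, dtype=np.float64)
    nz = x != 0
    y[nz] = np.sign(x[nz]) * 2.0 ** beta * np.abs(x[nz]) ** alpha
    return y

# A tensor whose magnitudes span far more than FP8's representable range:
x = np.random.randn(1024) * np.logspace(-12, 6, 1024)
alpha, beta = s2fp8_statistics(x)
y = to_s2fp8_domain(x, alpha, beta)
log_y = np.log2(np.abs(y[y != 0]))
print(alpha, beta, log_y.mean(), log_y.max())   # mean ~ 0, max ~ 15, as required by Eq. (2)
```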
# 3.3 LEARNING THE TENSOR DISTRIBUTION

One way to interpret α and β is to consider them as parameters of a distribution generating the tensor values log2(|Xi|). We can then say that, by continuously computing α and β, we are effectively learning the distribution of log2(|Xi|). Figure 5c shows the evolution of µ, m, α and β for a particular tensor of ResNet-20. We see that α and β converge to, approximately, 5 and 21, respectively. From Equation 1, we conclude that:

• since α > 1, X is expanded into Y, i.e., X is more narrow than what FP8 allows;

• since β > 0, X is right-shifted into Y, i.e., X is smaller than what FP8 allows.

At convergence, those α and β values represent the distribution of each converged tensor. Notice that all statistics stabilize in the last third of the training, where the learning rate is decreased, indicating the network is converging to its final state.

# 4 EXPERIMENTAL RESULTS

In this section, we compare S2FP8 training with baseline FP32 and FP8 training, with and without loss scaling, for: Residual Networks (He et al., 2016) of varying depths on the CIFAR-10 and ImageNet (Deng et al., 2009) datasets, Transformer (Vaswani et al., 2017) on the IWSLT’15 English-Vietnamese dataset (Luong & Manning, 2015), and Neural Collaborative Filtering (NCF) (He et al., 2017) on the MovieLens 1 Million dataset (Harper & Konstan, 2016).

For our experiments, we use the open source Tensorflow Models1 repository for ResNet and NCF, and Tensor2Tensor (Vaswani et al., 2018) for Transformer, with added S2FP8 data type simulation support using the methodology described in subsection 4.1. For a given model, we keep the hyperparameters consistent across the FP32, FP8 and S2FP8 evaluations.

4.1 SIMULATION METHODOLOGY

We simulated S2FP8 by inserting appropriate truncation functions throughout the network, before and after every convolution and matrix-matrix product operation, during both the forward and backward passes. The rest of the network is kept in FP32, and those truncations simulate the low-precision training described in subsection 3.2. The truncation function takes as input a tensor X, computes its magnitude mean and maximum, computes the appropriate α and β, and finally truncates X by computing

X_truncated = [2^{−β} truncate_FP8(2^β |X|^α)]^{1/α} (5)

where truncate_FP8 is a usual FP8 truncation function with RNE (round-to-nearest, with ties broken by rounding to even) rounding, which is easier to implement and most widely supported in hardware.

1 https://github.com/tensorflow/models

Figure 5: Evolution of the average and maximum magnitude, as well as α and β, for CIFAR-10 with ResNet-20. (a) Distribution of the magnitude log2(|X|) of the original tensor X before scaling using α and β. (b) Distribution of the magnitude log2(|Y|) of the shifted and squeezed tensor Y with |Yi| = 2^β |Xi|^α. (c) The computed statistics during training: the shift (β) and squeeze (α) factors, as well as the mean of the log values (µ) and the maximum log value (m). This illustrates how the network is actually implicitly learning the tensor distribution, by repeatedly computing α and β through µ and m.
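The simulated truncation of Equation (5) can be sketched as follows. This is our own simplified emulation, not the paper's TensorFlow code: the FP8 (1/5/2) grid and round-to-nearest-even are approximated in double precision, and subnormal/overflow handling is only schematic.

```python
import numpy as np

def truncate_fp8(v, exp_bits=5, man_bits=2):
    """Round magnitudes of v to an FP8 (1/5/2)-style grid with RNE.

    Simplified: exponents are clipped to the normal range and values above
    the largest normal are saturated; np.round breaks ties to even."""
    out = np.zeros_like(v, dtype=np.float64)
    nz = v != 0
    mag = np.abs(v[nz])
    bias = 2 ** (exp_bits - 1) - 1                     # 15 for 5 exponent bits
    e = np.clip(np.floor(np.log2(mag)), 1 - bias, bias)
    spacing = 2.0 ** (e - man_bits)                    # grid spacing within each binade
    q = np.round(mag / spacing) * spacing
    q = np.minimum(q, (2 - 2.0 ** -man_bits) * 2.0 ** bias)
    out[nz] = np.sign(v[nz]) * q
    return out

def s2fp8_truncate(x, alpha, beta):
    """Eq. (5): map x into the S2FP8 domain, round to FP8, and map back."""
    out = np.zeros_like(x, dtype=np.float64)
    nz = x != 0
    y = truncate_fp8(2.0 ** beta * np.abs(x[nz]) ** alpha)
    out[nz] = np.sign(x[nz]) * (2.0 ** -beta * y) ** (1.0 / alpha)
    return out

# Used together with the s2fp8_statistics sketch above:
# alpha, beta = s2fp8_statistics(x); x_low_precision = s2fp8_truncate(x, alpha, beta)
```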
4.2 RESIDUAL NETWORKS

We first present results with Residual Networks of varying depths on the CIFAR-10 image recognition dataset. We trained the model on 1 GPU using standard parameters: 250 epochs, batch size of 128, SGD with momentum of 0.9, and an initial learning rate of 0.1 decreased by a factor of 10 after epochs 100, 150 and 200.

Table 1 and Figure A2 present the results. We observe that S2FP8 reaches almost exactly the FP32 baseline, sometimes even improving over it. Out-of-the-box FP8 does not converge and has very poor accuracy. Finally, FP8 with a constant loss scaling of 100 (FP8+LS(100)) can reach the baseline. Both S2FP8 and FP8+LS(100) have similar performance, but S2FP8 can do so without any extra hyperparameters or tuning from the user's perspective.

CIFAR-10     FP32   S2FP8   ∆      FP8    FP8+LS(100)
ResNet-20    91.5   91.1     0.4   17.9   91.1
ResNet-34    92.5   92.0     0.5   13.5   92.0
ResNet-50    93.0   93.2    -0.2   11.5   92.9

Table 1: Validation accuracy (in %) for image recognition on CIFAR-10 with ResNet-20/34/50.

We also evaluate S2FP8 on the 1000-class ImageNet dataset. Here, we trained the network on 4 GPUs using standard parameters: 90 epochs, batch size of 256, SGD with momentum of 0.9, initial learning rate of 0.1 decreased by a factor of 10 after epochs 30, 60, 80 and 90. Table 2 and Figure 6 present the results. Again, we observe that S2FP8 gets very close to the FP32 baseline. Out-of-the-box FP8 quickly diverges and does not converge at all. For FP8 with loss scaling to converge, one has to not truncate the first and last layers, consistent with (Mellempudi et al., 2019), which we denote as Ex in Table 2 below. A loss scaling of 10,000 can then be used to reach the baseline (FP8+LS(10k)+Ex). Finally, stochastic rounding can be added, and it slightly improves the precision (FP8+LS(100k)+Ex+SR). However, both those cases are not out-of-the-box, as they require loss scaling tuning and some layers to be kept in full precision. S2FP8 does not suffer from that, thanks to its improved quantization: all layers can be truncated and no loss scaling is required.

Imagenet1k   FP32   S2FP8   ∆      FP8   FP8+LS(10k)+Ex   FP8+LS(100k)+Ex+SR
ResNet-18    70.3   69.6   -0.7    NaN   68.7             68.9
ResNet-50    76.2   75.2   -1.0    NaN   75.3             75.5

Table 2: Validation accuracy (in %) for image recognition on Imagenet1k with ResNet-18/50.

Figure 6: Comparing Top-1 accuracy and loss of S2FP8 with FP32 for ResNet-50 on Imagenet1k (panels: Top-1 accuracy, loss and L2 loss versus training step).

4.3 TRANSFORMER

We also tested S2FP8 on a small Transformer (Transformer Tiny) on the English-Vietnamese dataset. The model has 2 hidden layers of size 128 and a filter of size 512, and is trained using the Adam optimizer (Kingma & Ba, 2014). Table 3 and Figure 7 show the results, where we compare FP32, S2FP8 and FP8 with exponential loss scaling. We tried many loss scaling schedules (constant and exponential, with various initializations) and report the best result. As one can see, S2FP8 reaches the baseline with no hyperparameter tuning. FP8, on the other hand, does not, even after extensive loss scaling tuning. This shows the value of an out-of-the-box method for the user.

En-Vi              FP32   S2FP8   ∆     FP8   FP8+LS(exp)
Transformer tiny   25.3   25.3    0.0   NaN   21.3

Table 3: BLEU score (Papineni et al., 2002) (from 0 to 100) for the translation task on the English-Vietnamese dataset with Transformer tiny.

4.4 NEURAL COLLABORATIVE FILTERING

The Neural Collaborative Filtering (NCF) network comprises embeddings for users and items from the MovieLens dataset, which are then passed to a Multi-Layer Perceptron (MLP) network to learn the user-item interaction. Matrix-multiplication operations are the building blocks of such models. We compare S2FP8 with FP32 and FP8 without loss scaling. We simulate matrix multiplications and look-ups from the embeddings in S2FP8 and compare it to FP8 with RNE.
We trained the model on the MovieLens 1 Million dataset with the following standard parameters: 20 iterations, batch size of 1024 on 4 GPUs, 8 predictive factors, and a learning rate of 0.0005 using the Adam optimizer. Figure 8 and Table 4 show the results, where we compare FP32, S2FP8 and FP8 without loss scaling. This again shows that S2FP8 easily reaches the baseline out-of-the-box, without tuning of any sort. FP8 gets relatively close, but cannot reach the baseline.

Figure 7: Comparing BLEU score and loss of S2FP8 and FP32 for Transformer tiny on the En-Vi dataset.

Figure 8: Comparing Hit Ratio, NDCG and loss of S2FP8 and FP32 for NCF on MovieLens-1M.

MovieLens 1 Million   FP32    S2FP8   ∆       FP8
NCF                   0.666   0.663   0.003   0.633

Table 4: HR score for NCF on the MovieLens 1 Million dataset.

# 5 HARDWARE ASPECTS

S2FP8 is a new data type and requires its own circuitry to be implemented in a tensor processing engine. However, the added overhead is very minimal and affects neither data throughput nor compute speed. In order to convert FP32 tensors into S2FP8, two hardware (HW) components are needed. One is to calculate each tensor's statistics (Equation 3), which brings minimal HW complexity. To make compute operations even easier, these statistics could be stored in lower precision such as FP8/INT8. The other component is to adjust the exponent and mantissa of all those tensor elements by applying the squeeze (α) and shift (β) factors in Equation 4 before truncating them into their 8-bit placeholders. The shift could be done using simple element-wise add/subtract operations on the exponents, and the element-wise squeeze could be applied to the mantissa portions. Another consideration is within the tensor processing engine (e.g., a GEMM engine), which requires the α and β factors while doing the calculations. The FP32 result will be converted back to S2FP8 when needed (e.g., to store back in memory) as shown in Figure 4.

# 6 CONCLUSION

We introduce a novel 8-bit floating point data type (S2FP8) that gives competitive performance in comparison to state-of-the-art FP32 baselines over a range of representative networks. S2FP8 makes use of shifted and squeezed factors to shift and rescale the range of tensors prior to truncation. S2FP8 allows training of neural networks with an 8-bit format while eliminating the need for loss scaling tuning and hardware-complex rounding techniques. In addition, compared to existing FP8 implementations, we also eliminate the restriction of maintaining the first and last layers in FP32. Decreasing the number of bits enables larger models to fit on a single device and results in faster training. As part of future work, we plan to extend the use of S2FP8 to train additional DNN topologies and also simplify the squeeze and shift statistics from a hardware implementation point of view. We also plan to explore the use of reduced precision to store the statistics and the extendability of this approach to efficiently represent a broader suite of low precision formats like 8-bit POSIT (Gustafson & Yonemoto, 2017), 4-bit floating point and integer data types.
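As a purely software illustration of the exponent manipulation mentioned in Section 5 (our own sketch, not the paper's hardware design), applying the shift factor, i.e., multiplying a normal FP32 value by 2^k, amounts to adding k to its 8-bit biased exponent field:

```python
import struct

def mul_pow2_via_exponent(x, k):
    """Multiply a positive, normal FP32 value by 2**k by adding k to the
    biased exponent field (bits 23..30). Assumes the result stays within
    the normal range (no overflow to Inf, no underflow to subnormals)."""
    bits = struct.unpack("<I", struct.pack("<f", x))[0]
    exponent = (bits >> 23) & 0xFF
    assert 0 < exponent + k < 255, "result would leave the FP32 normal range"
    bits = (bits & ~(0xFF << 23)) | ((exponent + k) << 23)
    return struct.unpack("<f", struct.pack("<I", bits))[0]

print(mul_pow2_via_exponent(3.14, 4))    # ~50.24  (3.14 * 2**4)
print(mul_pow2_via_exponent(3.14, -3))   # ~0.3925 (3.14 * 2**-3)
```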
ACKNOWLEDGMENTS We would like to thank Naveen Mellempudi, Pratap Prasad, Prasanna Singamsetty and Cory Stephenson for insightful discussions. # REFERENCES Anwarul Azim. Low precision arithmetic operations in deep neural networks: An overview. Ron Banner, Itay Hubara, Elad Hoffer, and Daniel Soudry. Scalable methods for 8-bit training of neural networks. In Advances in Neural Information Processing Systems, pp. 5145–5153, 2018. Dipankar Das, Naveen Mellempudi, Dheevatsa Mudigere, Dhiraj Kalamkar, Sasikanth Avancha, Kunal Banerjee, Srinivas Sridharan, Karthik Vaidyanathan, Bharat Kaul, Evangelos Georganas, et al. Mixed precision training of convolutional neural networks using integer operations. arXiv preprint arXiv:1802.00930, 2018. Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hi- erarchical image database. In 2009 IEEE conference on computer vision and pattern recognition, pp. 248–255. Ieee, 2009. Yunhui Guo. A survey on methods and theories of quantized neural networks. CoRR, abs/1808.04752, 2018. URL http://arxiv.org/abs/1808.04752. Suyog Gupta, Ankur Agrawal, Kailash Gopalakrishnan, and Pritish Narayanan. Deep learning with limited numerical precision. In International Conference on Machine Learning, pp. 1737–1746, 2015. John L Gustafson and Isaac T Yonemoto. Beating floating point at its own game: Posit arithmetic. Supercomputing Frontiers and Innovations, 4(2):71–86, 2017. F Maxwell Harper and Joseph A Konstan. The movielens datasets: History and context. Acm transactions on interactive intelligent systems (tiis), 5(4):19, 2016. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Identity mappings in deep residual networks. In European conference on computer vision, pp. 630–645. Springer, 2016. Xiangnan He, Lizi Liao, Hanwang Zhang, Liqiang Nie, Xia Hu, and Tat-Seng Chua. Neural col- laborative filtering. In Proceedings of the 26th international conference on world wide web, pp. 173–182. International World Wide Web Conferences Steering Committee, 2017. Jeff Johnson. Rethinking floating point for deep learning. CoRR, abs/1811.01721, 2018. URL http://arxiv.org/abs/1811.01721. Dhiraj Kalamkar, Dheevatsa Mudigere, Naveen Mellempudi, Dipankar Das, Kunal Banerjee, Sasikanth Avancha, Dharma Teja Vooturi, Nataraj Jammalamadaka, Jianyu Huang, Hector Yuen, et al. A study of bfloat16 for deep learning training. arXiv preprint arXiv:1905.12322, 2019. Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014. Urs K¨oster, Tristan Webb, Xin Wang, Marcel Nassar, Arjun K Bansal, William Constable, Oguz Elibol, Scott Gray, Stewart Hall, Luke Hornof, et al. Flexpoint: An adaptive numerical format for efficient training of deep neural networks. In Advances in neural information processing systems, pp. 1742–1752, 2017. 9 Published as a conference paper at ICLR 2020 Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convo- lutional neural networks. In Advances in neural information processing systems, pp. 1097–1105, 2012. Yann LeCun, L´eon Bottou, Yoshua Bengio, Patrick Haffner, et al. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998. Minh-Thang Luong and Christopher D. Manning. Stanford neural machine translation systems for spoken language domain. In International Workshop on Spoken Language Translation, Da Nang, Vietnam, 2015. 
Stefano Markidis, Steven Wei Der Chien, Erwin Laure, Ivy Bo Peng, and Jeffrey S Vetter. Nvidia tensor core programmability, performance & precision. In 2018 IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW), pp. 522–531. IEEE, 2018. Clive Maxfield. An introduction to different rounding algorithms. Programmable Logic Design Line, pp. 1–15, 2006. Naveen Mellempudi, Sudarshan Srinivasan, Dipankar Das, and Bharat Kaul. Mixed precision train- ing with 8-bit floating point. arXiv preprint arXiv:1905.12334, 2019. Paulius Micikevicius, Sharan Narang, Jonah Alben, Gregory Diamos, Erich Elsen, David Garcia, Boris Ginsburg, Michael Houston, Oleksii Kuchaiev, Ganesh Venkatesh, and Hao Wu. Mixed In International Conference on Learning Representations, 2018. URL precision training. https://openreview.net/forum?id=r1gs9JgRZ. Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. Bleu: A method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting on Association for Computational Linguistics, ACL ’02, pp. 311–318, Stroudsburg, PA, USA, 2002. Association for Computational Linguistics. doi: 10.3115/1073083.1073135. URL https://doi.org/10. 3115/1073083.1073135. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. In Advances in neural information processing systems, pp. 5998–6008, 2017. Ashish Vaswani, Samy Bengio, Eugene Brevdo, Francois Chollet, Aidan N. Gomez, Stephan Gouws, Llion Jones, Łukasz Kaiser, Nal Kalchbrenner, Niki Parmar, Ryan Sepassi, Noam CoRR, Shazeer, and Jakob Uszkoreit. abs/1803.07416, 2018. URL http://arxiv.org/abs/1803.07416. Naigang Wang, Jungwook Choi, Daniel Brand, Chia-Yu Chen, and Kailash Gopalakrishnan. Train- ing deep neural networks with 8-bit floating point numbers. In Advances in neural information processing systems, pp. 7675–7684, 2018. Shuang Wu, Guoqi Li, Feng Chen, and Luping Shi. Training and inference with integers in deep neural networks. arXiv preprint arXiv:1802.04680, 2018. Shuchang Zhou, Yuxin Wu, Zekun Ni, Xinyu Zhou, He Wen, and Yuheng Zou. Dorefa-net: Train- ing low bitwidth convolutional neural networks with low bitwidth gradients. arXiv preprint arXiv:1606.06160, 2016. 10 Published as a conference paper at ICLR 2020 # A APPENDIX A.1 SUPPLEMENTARY TABLES AND FIGURES Format IEEE-FP32 IEEE-FP16 BF16 FP8 Bits 32 16 16 8 s/e/m Min sub- 1/8/23 1/5/10 1/8/7 1/5/2 normal 2−149 2−24 2−133 2−16 Min nor- mal 2−126 2−14 2−126 2−14 (Approx.) Max normal 2128 216 2128 216 Machine epsilon 2−24 2−11 2−8 2−3 Range 2277 240 2261 232 Table A1: Comparing several floating point formats. s/e/m indicates the number of sign (s), exponent (e) and mantissa (m) bits. Models ResNet-20 ResNet-50 ResNet-50 NCF Transformer- tiny Datasets CIFAR-10 CIFAR-10 ImageNet FP32 91.5 93.0 76.2 MovieLens1M 0.666 25.3 En-Vi BF16 91.7 93.2 76.5 0.653 25.6 FP8 17.9 11.5 NaN 0.633 NaN FP8+other recipes 91.1(Loss Scale=100) 92.9(Loss Scale=100) 75.3(Loss Scale=10K, FP32 for first and last layers) - 21.3(Loss Scale=Exp) 0.663 25.3 Table A2: Comparing FP32, BF16, vanilla FP8, FP8 with tuning and S2FP8 on the model ResNet(Top1-accuracy), NCF(Hit Ratio),Transformer-tiny(BLEU score). y t i s n e d s r e b m u N 4 3 2 1 −16 −8 0 log2(|X|) 8 16 Figure A1: The range and precision of FP8. Bar indicate the number density between each power of 2. 
Since FP8 has 2 mantissa bits, the density is 4 (except in the denormals), and the associated machine epsilon is 2^−3 = 1/8. The normal representable range goes from 2^−14 to (1 − 2^−3)·2^16, with denormals from 2^−16 to 2^−14.

A.2 SUPPLEMENTARY EQUATIONS

∂(λL)/∂w (w) = λ · ∂L/∂w (w) ⇒ w^(k+1) = w^(k) − α · (1/λ) · ∂(λL)/∂w (w^(k)). (6)

Figure A2: Convergence of ResNet-50 with the CIFAR-10 dataset (panels: Top-1 accuracy, loss and L2 loss versus training step, for FP32 and S2FP8).
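Equation 6 above simply says that scaling the loss by λ scales every gradient by λ, so dividing the scaled gradient by λ before the weight update recovers the unscaled SGD step (the point of loss scaling being that the intermediate, λ-times-larger gradients fit into FP8's range). A short numerical check on a scalar quadratic loss (our own sketch):

```python
# Check Eq. (6) on L(w) = 0.5 * w**2, whose gradient is dL/dw = w.
w, lr, lam = 3.0, 0.1, 1024.0
grad = w                   # dL/dw
scaled_grad = lam * w      # d(lam * L)/dw, the gradient that is actually backpropagated
assert abs((w - lr * grad) - (w - lr * (1.0 / lam) * scaled_grad)) < 1e-12
```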
{ "id": "1802.00930" }
2001.04413
Backward Feature Correction: How Deep Learning Performs Deep (Hierarchical) Learning
Deep learning is also known as hierarchical learning, where the learner _learns_ to represent a complicated target function by decomposing it into a sequence of simpler functions to reduce sample and time complexity. This paper formally analyzes how multi-layer neural networks can perform such hierarchical learning _efficiently_ and _automatically_ by SGD on the training objective. On the conceptual side, we present a theoretical characterizations of how certain types of deep (i.e. super-constant layer) neural networks can still be sample and time efficiently trained on some hierarchical tasks, when no existing algorithm (including layerwise training, kernel method, etc) is known to be efficient. We establish a new principle called "backward feature correction", where the errors in the lower-level features can be automatically corrected when training together with the higher-level layers. We believe this is a key behind how deep learning is performing deep (hierarchical) learning, as opposed to layerwise learning or simulating some non-hierarchical method. On the technical side, we show for every input dimension $d > 0$, there is a concept class of degree $\omega(1)$ multi-variate polynomials so that, using $\omega(1)$-layer neural networks as learners, SGD can learn any function from this class in $\mathsf{poly}(d)$ time to any $\frac{1}{\mathsf{poly}(d)}$ error, through learning to represent it as a composition of $\omega(1)$ layers of quadratic functions using "backward feature correction." In contrast, we do not know any other simpler algorithm (including layerwise training, applying kernel method sequentially, training a two-layer network, etc) that can learn this concept class in $\mathsf{poly}(d)$ time even to any $d^{-0.01}$ error. As a side result, we prove $d^{\omega(1)}$ lower bounds for several non-hierarchical learners, including any kernel methods.
http://arxiv.org/pdf/2001.04413
Zeyuan Allen-Zhu, Yuanzhi Li
cs.LG, cs.DS, cs.NE, math.OC, stat.ML
V2 adds more experiments, V3 polishes writing and improves experiments, V4 makes minor fixes to the figures, V5/V6 polish writing
null
cs.LG
20200113
20230707
3 2 0 2 # l u J 7 ] # G L . s c [ 6 v 3 1 4 4 0 . 1 0 0 2 : v i X r a # Backward Feature Correction: How Deep Learning Performs Deep (Hierarchical) Learning # Yuanzhi Li [email protected] Mohamed bin Zayed University of AI Zeyuan Allen-Zhu [email protected] Meta FAIR Labs Jan 13, 2020 (version 6)∗ # Abstract Deep learning is also known as hierarchical learning, where the learner learns to represent a complicated target function by decomposing it into a sequence of simpler functions to reduce sample and time complexity. This paper formally analyzes how multi-layer neural networks can perform such hierarchical learning efficiently and automatically by applying stochastic gradient descent (SGD) or its variants on the training objective. On the conceptual side, we present a theoretical characterizations of how certain types of deep (i.e. super-constantly many layers) neural networks can still be sample and time efficiently trained on some hierarchical learning tasks, when no existing algorithm (including layerwise training, kernel method, etc) is known to be efficient. We establish a new principle called “backward feature correction”, where the errors in the lower-level features can be automatically corrected when training together with the higher-level layers. We believe this is a key behind how deep learning is performing deep (hierarchical) learning, as opposed to layerwise learning or simulating some known non-hierarchical method. On the technical side, we show for every input dimension d > 0, there is a concept class of degree ω(1) multi-variate polynomials so that, using ω(1)-layer neural networks as learners, a variant of SGD can learn any function from this class in poly(d) time to any poly(d) error, through learning to represent it as a composition of ω(1) layers of quadratic functions using “backward feature correction”. In contrast, we do not know any other simpler algorithm (including layerwise training, applying kernel method sequentially, training a two-layer network, etc) that can learn this concept class in poly(d) time even to any d−0.01 error. As a side result, we prove dω(1) lower bounds for several non-hierarchical learners, including any kernel methods, neural tangent or neural compositional kernels. ∗V1 appears on this date, V2 adds more experiments, V3 polishes writing and improves experiments, V4 makes minor fixes to the figures, V5/V6 polish writing. V6 is accepted for presentation at the Conference on Learning Theory (COLT) 2023. We would like to thank, in chronological order, Sanjeev Arora, S ´ebastien Bubeck, James R. Lee, Edouard Oyallon, Elchanan Mossel, Ruosong Wang for many suggestions on this paper. The most recent presentations of this paper can be found at https://youtu.be/sd2o1PbqixI (by Z.A.) and at https://youtu.be/N8WIplddCuc (by Y.L.). Most of the work was done when Z.A. was at Microsoft Research Redmond. 1 # Introduction Deep learning is also known as hierarchical (feature) learning.!’ The term hierarchical learning can be defined as learning to represent the complex target function g(x) using a composition of much simpler functions: g(x) = hy(hrp_1(--: hi(x)---)). In deep learning, for example, each hg(-) is usually a linear operator followed with a simple element-wise non-linear function (called activation). Empirically, the training process of deep learning is done by stochastic gradient descent (SGD) or its variants. 
After training, one can verify that the complexity of the learned features (i.e., he(he_1(--+w-+++)) indeed increases as ¢ goes deeper— see |79, or Figure 1, It has also been dis- covered for a long time that hierarchical learning, in many applications, requires fewer training examples |18) when compared with non-hierarchical methods that learn g(x) in one shot. Pt | - Eo 7 nm input—— layer 1—---—+layer 3—---—+ layer 5—---—> layer 7—--—+ layer 9—--— layer 11 layer output «----layer 31<«—---—layer 29 «—---—layer 27 «—---—layer 25 «—--—layer 23 < | , § - é layer 17 | Figure 1: Illustration of the hierarchical learning process of ResNet-34 on CIFAR-10. Details see Section 8.1. Hierarchical learning from a theoretical perspective. Intuitively, hierarchical learning can significantly reduce the difficulty of learning a complicated target function in one shot to learning a sequence of much simpler functions in multiple steps. For example, instead of learning a degree 2L function from scratch, hierarchical learning can learn to represent it as a composition of L-quadratic functions, and thus learning one quadratic function at a time. Moreover, it is well-known that neural networks can indeed represent a wide range of complicated functions using the composition of much simpler layers. However, the main difficulty here is that being able to represent a complex target function in a hierarchical network does not necessarily guarantee efficient learning. For example, L layers of quadratic networks can efficiently represent all parity functions up to degree 2L; but in the deep L = ω(1) setting, it is unclear if one can learn parity functions over x ∈ {−1, 1}d with noisy labels via any efficient poly(d)-time algorithm [28], not to say via training neural networks.2 So, for what type of functions can we formally prove that deep neural networks can hierarchically learn them? And, how can deep learning perform hierarchical learning to greatly improve learning efficiency in these cases? Hierarchical learning and layerwise learning. Motivated by the large body of theory works for two-layer networks, a tentative approach to analyze hierarchical learning in deep learning is via 1Quoting Bengio [16], “deep learning methods aim at learning feature hierarchies with features from higher levels of the hierarchy formed by the composition of lower level features.” Quoting Goodfellow et al. [33] “the hierarchy of concepts allows the computer to learn complicated concepts by building them out of simpler ones.” 2Note, neural networks in practice are very robust to label noise. 1 forward feature learning backward feature correction epoch 80 epoch 60 epoch 100 epoch 200 1% layer features no longer improve 1 layer features improve again once through training only 1* layer we start to also train higher-level layers Figure 2: Convolutional features of the first layer in AlexNet. In the first 80 epochs, we train only the first layer freezing layers 2 ∼ 5; in the next 120 epochs, we train all the layers together (starting from the weights in epoch 80). Details in Section 8.2. For visualizations of deeper layers of ResNet, see Figure 3 and 12. Observation: In the first 80 epochs, when the first layer is trained until convergence, its features can already catch certain meaningful signals, but cannot get further improved. As soon as the 2nd through 5th layers are added to the training parameters, features of the first layer get improved again. layerwise training. 
Consider the example of using a multi-layer network with quadratic activation, to learn the following target function. g(x) = xy +203 +0.1 (wt +23 +23)? . (1.1) ~~ e._—_—_——_’ low-complexity signal high-complexity signal In this example, one may hope for first training a two-layer quadratic network to learn simple, quadratic features (x2 1, x2 2), and then training another two-layer quadratic network on top of the first one learns a quadratic function over (x2 2, x3). In this way, one can hope for never needing to learn a degree-4 polynomial in one shot, but simply learning two quadratic functions in two steps. Is hierarchical learning in deep learning really this simple? In fact, layerwise training is known to perform poorly in practical deep learning, see Figure 7. The main reason is that when we train lower-level layers, it might over-fit to higher-level features. Using the example of (1.1), if one uses a quadratic network to fit g(x), then the first-layer features may be trained too greedily and over-fit to high-complexity signals: for instance, the best quadratic network to fit g(x) may learn features (x1 + 2). Now, if we freeze the first layer and train a second layer quadratic network on top of it (and the input), this “error” of Our main message. On the conceptual level, we show (both theoretically and empirically) although lower-level layers in a neural network indeed tend to over-fit to higher complexity signals at the beginning of training, when training all the layers together— using simple variants of SGD— the presence of higher-level layers can eventually help reduce this type of over-fitting in lower- level layers. For example, in the above case the quality of lower-level features can improve from 1 when trained together with higher-level layers. (x1 + We call this backward feature correction. More generally, we identify two critical steps in the hierarchical learning process of a multi-layer network. • The forward feature learning step, where a higher-level layer can learn its features using the simple combinations of the learned features from lower-level layers. This is an analog of layerwise training, but a bit different (see discussions in [3]) since all the layers are still trained simultaneously. 2 —> acc 0.0% (random init) forward feature learning —> acc 58.6% (if only train < 13 layers) layer 13 per-channel feature backward feature correction / i i ! ! i 1 ! ‘ ‘ ‘ ‘ \ ‘ \ J 4» acc 67.0% (if train all layers together) Figure 3: Visualize backward feature correction using WRN-34-5 on ¢2 adversarial training. Details in Section 8.5 Observation: if only training lower-level layers of a neural network, the features over-fit to higher- complexity signals of the images; while if training all the layers together, the higher-complexity signals are learned on higher-level layers and shall be “subtracted” from the lower-level features. This explains why layerwise training is not a good choice, and the mathematical intuitions can be found in Section 1.2. • The backward feature correction step, where a lower-level layer can learn to further improve its feature quality with the help of the learned features in higher-level layers. We are not aware of this being recorded in the theory literature, and believe it is a most critical reason for why hierarchical learning goes beyond layerwise training in deep learning. We shall mathematically characterize this in Theorem 2. Remark. 
When all the layers of a neural network are trained together, the aforementioned two steps actually occur simultaneously. For interested readers, we also design experiments to separate them and visualize, see Figure 2, 3, and 12. On the theoretical side, we also give toy examples with mathematical intuitions in Section 1.2 to further explain the two steps. Our technical results. With the help of the discovered conceptual message, we show the following technical results. Let input dimension d be sufficiently large, there exist a non-trivial class of “well- conditioned” L-layer neural networks with L = ω(1) and quadratic activations3 so that: • Training such networks by a variant of SGD efficiently and hierarchically learns this concept class. Here, by “efficiently” we mean time/sample complexity is poly(d/ε) where ε is the generalization error; and by “hierarchically” we mean the network learns to represent the concept class by decomposing it into a composition of simple (i.e. quadratic) functions, via forward feature learning and backward feature correction, to significantly reduce sample/time complexity. • We are unaware of existing algorithm that can achieve the same result in polynomial time. For completeness, we prove super-polynomial lower bounds for shallow learning methods such as (1) kernel method, (2) regression over feature mappings, (3) two-layer networks with degree ≤ 2L activations, or (4) the previous three with any regularization. Although proving separation is 3It is easy to measure the network’s growing representation power in depth using quadratic activations [57]. As a separate note, quadratic networks perform as well as ReLU networks in practice (see Figure 4 on Page 5), significantly better than (even ReLU network’s) layerwise learning, and has cryptographic advantages [60]. 3 not our main message, 4 we still illustrate in Section 1.2 that neither do we believe layerwise training, or applying kernel method multiple (even ω(1) many) times can achieve poly-time. 5 To this extent, we have shown, at least for this class of L-layer networks with L = ω(1), deep learning can indeed perform efficient hierarchical learning when trained by a variant of SGD to learn functions not known to be learnable by “shallow learners” (including layerwise training which can be viewed as applying two-layer networks multiple times). Thus, we believe that hierarchical learning (especially with backward feature correction) is critical to learn this concept class. Difference from existing theory. Many prior and followup works have studied the theory of deep learning. We try to cover them all in Section 7 but summarize our main difference as follows. • Starting from Jacot et al. [42], there is a rich literature [3, 4, 6–8, 11, 12, 20, 21, 23, 25, 26, 32, 34, 39, 42, 48, 52, 62, 67, 76, 82, 83] that reduces multi-layer neural networks to kernel methods (e.g. neural tangent kernels, or NTKs). They approximate neural networks by linear models over (hierarchically defined) random features— which are not learned through training. They do not show the power of deep learning beyond kernel methods. • Many other theories [5, 13, 17, 19, 22, 30, 31, 44, 46, 47, 49–51, 64, 68, 69, 72, 74, 75, 77, 80, 81] focus on two-layer networks but they do not have the deep hierarchical structure. 
In particular, some have studied feature learning as a process [5, 22, 53], but still cannot cover how the features of the second layer can help backward correct the first layer; thus naively repeating them for multi-layer networks may only give rise to layerwise training. • Allen-Zhu et al. [6] shows that 3-layer neural networks can learn the so-called “second-order NTK,” which is not a linear model; however, second-order NTK is also learnable by doing a nuclear-norm constrained linear regression, which is still not truly hierarchical. • Allen-Zhu and Li [3] shows that 3-layer ResNet can learn a concept class otherwise not learnable by kernel methods (within the same level of sample complexity). We discuss more in Section 7, but most importantly, that concept class is learnable by applying kernel method twice. In sum, most prior works may have only studied a simpler but already non-trivial question: “can multi-layer neural networks efficiently learn simple functions that are also learnable by non- hierarchical models.” While the cited works shed great light on the learning process of neural networks, in the language of this paper, they cannot justify how deep learning performs deep hierarchical feature learning. Our work is motivated by this huge gap between theory and practice. (We also cite some works that study hierarchical learning in other contexts in Section 7.) Admittedly, with a more ambitious goal we have to sacrifice something. Notably, we study quadratic activations which are conventional in theory literature, but a few cited works above can handle ReLU. This may be still fine: in practice, deep learning with quadratic activations perform very closely to ReLU ones, significantly better than two-layer networks or neural kernel methods (see Figure 4), and much better than (even ReLU network’s) layerwise training (see Figure 7). Hence, our theoretical result may also serve as a provisional step towards understating the deep learning process in ReLU networks. In addition, as one shall see, we have slightly re-parameterized the network, added regularizers, and made minor changes to the SGD algorithm to obtain our final theoretical proof. All of such may not appear conventional; but this may not be too bad, as in 4Prior results such as [27, 70] separate the representation power of multi-layer networks from shallower learners (without efficient training guarantee), and concurrent results [22, 53] separate the power of two-layer neural networks from kernel methods with efficient training guarantees. However, proving separation is not the main message of this paper, and we focus on understanding how deep learning perform efficient hierarchical learning when L = ω(1). 
4 CIFAR-10 accuracy CIFAR-100 accuracy training time single model | ensemble | single model | ensemble Hierarchical WRN-16-10 96.27% 96.8% 80.28% 83.18% | 2.5 GPU hour (V100) learning WRN-22-10 96.59% 97.12% 81.43% 84.33% 3 GPU hour (V100) quadratic WRN-16-10 94.68% 95.65% 75.31% 79.01% 3 GPU hour (V100) quadratic WRN-22-10 95.08% 95.97% 75.65% 79.97% 3.5 GPU hour (V100) neural compositional kernel * 89.8% 89.8% 68.2% 68.2% ~1000 GPU hour with ZCA preprocessing neural tangent kernel (NTK) ** 81.40% 81.40% = - ~1000 GPU hour Kernel (+ random preprocessing) (88.36%) - - - ~1000 GPU hour methods neural Gaussian process kernel ** 82.20% 82.20% = - ~1000 GPU hour (+ random preprocessing) (88.92%) - - - ~1000 GPU hour finite-width NTK for WRN-10-10 72.33% 75.26% = - 20.5 GPU hour (TitanV) (+ ZCA preprocessing) (76.94%) (80.21%) - - 20.5 GPU hour (TitanV) Figure 4: Comparison between ReLU networks, quadratic networks, and several optimized kernel methods (* for [67] and ** for [54]). Details in Section 8.3. Take-away messages: Quadratic networks perform comparable to ReLU, and better and much faster than kernel methods. Finite-width NTK [8] accuracy is much worse than its counterparts in hierarchical learning, showing its insufficiency for understanding the ultimate power of neural networks. Note 1: Kernel methods usually cannot benefit from ensemble since it is typically strictly convex. Random preprocessing in principle may help if one runs it multiple times; but we expect the gain to be little. Ensemble helps on finite-width NTK (linear function over random feature mappings) because the feature space is re-randomized multiple times, so ensemble actually increases the number of features. Note 2: Our obtained accuracies using quadratic networks may be of independent interests: networks with quadratic activations have certain practical advantage especially in cryptographic applications [60]. practice, when training neural networks for a hard dataset, one also needs to develop tons of hacks to make the training work. # 1.1 Our Theorem We give an overview of our theoretical result. The learner networks we consider are DenseNets |38}: G(x) = Vio (ue, Ge(x)) ER where Go(z) =a € R¢, Gi(x) = o(x) — E[o(x)] € R4 Giz) = (yen MyjG;(7)) for €>2and JC {0,1,--- ,6—-Y (1.2) Here, o is the activation function and we pick o(z) = z? in this paper, M,z,;’s are weight matrices, and the final output G(x) € R is a weighted summation of the outputs of all the layers. The set J defines the connection graph. We can handle any connection graph with the only restriction being there is at least one “skip link.”® To illustrate the main idea, we focus here on a regression problem in the teacher-student setting, although our result applies to classification as well as the agnostic learning setting (where the target network may also have label error). In this teacher-student regression setting, the goal is to learn some unknown target function G*(a) in some concept class given samples (2, G*(x)) where x ~ D follows some distribution D. In this paper, we consider the target functions G*(x) € R coming from the same class as the learner network: G*(r) = Wis ag: (us,Ge(x)) ER where Gi(r) =x RY, Gi(x) = a(x) — E[o(x)] € R4 G*(r) = Wis ag: (us,Ge(x)) ER where Gi(r) =x RY, Gi(x) = a(x) — E[o(x)] € R4 Gh(«x) =o (Syex, W?,G5(0)) ER’ for €>2and WC {0,1,---,6—-Y (1.3) °In symbols, for every € > 3, we require (€— 1) € Je, (6-2) ¢ Je but j € J for some j < €—3. 
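The quadratic DenseNet of Eq. (1.2) can be written down directly. The sketch below is our own illustration, not code from the paper: the widths, connection sets J_ℓ and output weights u_ℓ, a_ℓ are placeholder choices, and we assume standard Gaussian inputs so that E[σ(x)] = 1 coordinate-wise. It only shows the forward pass, with each layer applying the element-wise square to a linear combination of earlier layers.

```python
import numpy as np

def quadratic_densenet_forward(x, Esigma, M, u, a, J):
    """Forward pass of the quadratic DenseNet learner of Eq. (1.2).

    G_0(x) = x, G_1(x) = sigma(x) - E[sigma(x)], and for l >= 2
    G_l(x) = sigma(sum_{j in J[l]} M[(l, j)] @ G_j(x)), with sigma(z) = z**2."""
    sigma = lambda z: z ** 2
    G = {0: x, 1: sigma(x) - Esigma}
    for l in range(2, max(J) + 1):
        s = sum(M[(l, j)] @ G[j] for j in J[l])   # S_l(x): degree 2^(l-1) in x
        G[l] = sigma(s)                           # G_l(x): degree 2^l in x
    return sum(a[l] * (u[l] @ G[l]) for l in J)   # weighted scalar output

# Placeholder instantiation: each layer connects to layer l-1 plus one skip link.
d, k = 8, 4
J = {2: [1, 0], 3: [2, 0], 4: [3, 1]}
dims = {0: d, 1: d, 2: k, 3: k, 4: k}
M = {(l, j): 0.1 * np.random.randn(k, dims[j]) for l, js in J.items() for j in js}
u = {l: np.ones(k) for l in J}
a = {2: 1.0, 3: 0.1, 4: 0.01}                     # decaying alpha_l ("information gap")
x = np.random.randn(d)
print(quadratic_densenet_forward(x, Esigma=np.ones(d), M=M, u=u, a=a, J=J))
```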
As comparisons, the vanilla feed-forward network corresponds to Je = {4 — 1}, while ResNet 36, (with skip connection) corresponds to Je = {€—1,¢— 3} with weight sharing (namely, Mee—1 = Me e_3). 5 100 Woon 70 $ TEST ACCURACY 2 2 sa ~ CIFAR-2 CIFAR-2.-CIFAR-10—CIFAR-10-—CIFAR-100_—CIFAR-100 ensemble ensemble top 10 top 10 ensemble m1 m2 3.64 5 m6 M7 MS MO M10 M11 M12 M13 M14 M15 M16 Figure 5: Justification of information gap on the CIFAR datasets for WRN-34-10 architecture. The 16 colors represent 16 different depths, and deeper layers have diminishing contributions to the classification accuracy. We discuss details in Section 3.2 and experiment details in Section 8.6. Since o(z) is degree 2-homogenous, without loss of generality we assume ||W7 lle = O(1), ux € {-1,1}* and let ap € Ryo be a scalar to control the contribution of the ¢-th layer. In the teacher-student setting, our main theorems can be sketched as follows: Theorem (sketched). For every input dimension d > 0 and every L = o(log log d), for certain concept class consisting of certain L-layer target networks defined in Eq. (1.3), over certain input distributions (such as standard Gaussian, certain mixture of Gaussians, etc.), we have: Within poly(d/ε) time/sample complexity, by a variant of SGD starting from random initializa- tion, the L-layer quadratic DenseNet can learn this concept class with any generalization error ε, using forward feature learning + backward feature correction. (See Theorem 1.) • As side result, we show any kernel method, any linear model over prescribed feature map- pings, or any two-layer neural networks with arbitrary degree-2L activations, require dΩ(2L) sample or time complexity, to achieve non-trivial generalization error such as ε = d−0.01. (See Section H.) Remark. As we shall formally introduce in Section 2, the concept class in our theorem— the class of target functions to be learned— comes from Eq. (1.3) with additional width requirement ky ~ a . . . : d'/2" and information gap requirement avy; < ag with ag = 1 and ay > aA The requirement L = o(loglogd) is very natural: a quadratic network even with constant condition number can output 22” and we need this to be at most poly(d) to prove any efficient training result. We refer the assumption ag41 < az as information gap. In a classification problem, it can be understood as “ag is the incremental accuracy improvement when using ¢-layer networks to fit the target comparing to (@—1)-layer ones.” We discuss more in Section 3.2, For example, in Figure 5, we see > 75% of the CIFAR-10 images can be classified correctly using a 2-layer network; but going from depth 7 to 8 only gives < 1% accuracy gain. Information gap was also pointed out in natural language processing applications |71|. We refer to |3, for empirical evidence that deep learning fails to perform hierarchical learning when information gap is removed. # 1.2 High-Level Intuitions In this subsection we included a “proof by example”; later with all the notations introduced, we have a 4-paged sketched proof in Section 6 which shall make this “proof by example” more concrete. Intuitively, learning a single quadratic function is easy, but our concept class consists of a sufficiently rich set of degree 24 = 2%“) polynomials over d dimensions. Using non-hierarchical learning methods, typical sample/time complexity is A282") = qeO— and we prove such lower bound for kernel (and some other) methods, even when all kg = 1. 
This is not surprising, since 6 kernel methods do not perform hierarchical learning so have to essentially “write down” all the monomials of degree 24—!, which suffers a lot in the sample complexity. Even if the learner performs kernel method O(1) times, since the target function has width ky = d°Q) for any constant é, this cannot avoid learning in one level a degree-w(1) polynomial that depends on d2Q) variables, resulting again in sample/time complexity de(), Now, the hope for training a quadratic DenseNet with poly(d) time, is because it may decompose a degree-2L polynomial into learning one quadratic function at a time. Easier said than done, let us provide intuition by considering an extremely simplified example: L = 3, d = 4, and G*(a) =21+2}+a((a} +23)? + (a +a4)?) for some a = o(1). (Recall L = 3 refers to having two trainable layers that we refer to as the second and third layers.) Forward feature learning: richer representation by over-parameterization. Since a < 1, one may hope for the second layer G2(x) to learn af and x$— which is quadratic over Gi (x) through some representation of its neurons; then feed this as input to the third layer. If so, the third layer G3(x) could learn a quadratic function over a},2$,a3,24 to fit the remainder a((af + x3)? + (a3 +.x4)’) in the objective. This logic has a critical flaw: • Instead of learning x4 2)2, 1 5 (2x2 1 + 2x2 5 (x2 2, the second layer may as well learn 1 1 + x4 1 − x2 2)2 and x3, x4 can produce (x4 1, x4 5 (2x2 2)2. 1 − x2 2; however, no quadratic function over 1 5 (x2 Indeed, 1 2)2, 1 2x2 needs to learn not only how to fit x4 2)2 + 1 2)2 = x4 1 + 2x2 1 − x2 5 (x2 5 (2x2 1 + 2 + x4)2. Therefore, the second layer 1 + x3)2 + (x4 1, x4 1 + x4 2 but also the “correct basis” x4 2 for the third layer. To achieve this goal, we let the learner network to use (quadratically-sized) over-parameterization with random initialization. Instead of having only two hidden neurons, we will let the network have m > 2 hidden neurons. We show a critical lemma that the neurons in the second layer of the network can learn a richer representation of the same function x4 2)2}m i=1 1 + In each hidden neuron, the coefficients αi, βi behave like i.i.d. Gaussians. βix2 1 + 2 + x4)2, so the algorithm can proceed. Note βix2 this is a completely different view comparing to prior works: here over-parameterization is not to make training easier in the current layer; instead, it enforces the network to learn a richer set of hidden features (to represent the same target function) that can be better used for higher layers. Backward feature correction: improvement in lower layers after learning higher layers. The second obstacle in this toy example is that the second layer might not even learn the function x4 1 + x4 2 exactly. It is possible to come up with a distribution where the best quadratic over G1(x) 4)2, which is only of magnitude α 3, x2 1, x2 (i.e., x2 close to the ideal function x4 4 cannot be corrected by over-parameterization. (More generally, this error in the lower-level features can propagate layer after layer, if one keeps performing forward feature learning without going back to correct them. This why we do not believe applying kernel method sequentially even ω(1) times can possibly learn our concept class in poly-time. We discuss more in Section 3.) Let us proceed to see how this over-fitting on the second layer can be corrected by learning the third layer together. 
Say the second layer has an “α-error” and feeds the over-fit features 7This additional error α is precisely because there is a higher-complexity signal of magnitude α in the target function, which cannot be fit using the current layer (since it exceeds degree 4 which is the maximum degree polynomial we can fit using only the second layer). 7 # input 1) layer 2 begins to learn iF 2) layer 3 begins to learn using features given by layer 2 3) layer 3 helps layer 2 correct its features by reducing over-fitting | layer 2 4) layer 3 improves since layer 2 now feeds in better features to layer 3 i 2] Ja ‘) 5) layer 4 begins to learn i vy 6) layer 4 helps layer 2 correct its features by reducing over-fitting 8 \ ~} layer 3 * 7) layer 3 improves since \ s} ls \7 a) layer 4 helps layer 3 correct its features by reducing over-fitting, and \ a ra b) layer 2 now feeds in better features to layer 3 ‘(layer 4 f° 8) layer 4 improves since layer 3 now feeds in better features Figure 6: Explain the hierarchical learning process in a 4-layer example. Back and blue arrows correspond to “forward feature learning” [3]; Red dashed arrows correspond to“backward feature correction”. Note: In our work, we do not explicitly train the network in this order, this “back and forth” learning process happens rather implicitly when we simply train all layers in the network together. xt +.ax3)’, (73 + ax4) to the third layer. The third layer can therefore use A’ = a((a7] + ax)? + x3)” + a((a3 + ax?)? + x4)? to fit the remainder term A = a((xf + x3)? + (x$ + x4)”) in G*(2). A very neat observation is that A’ is only of magnitude a? away from A. Therefore, when he second and third layers are trained together, this “a?-error” remainder A’ will be subtracted rom the training objective, so the second layer can learn up to accuracy a’, instead of a. In other words, the amount of over-fitting is now reduced from a to a?. We call this “backward eature correction.” (This is also consistent with what we discover on ReLU networks in real-life experiments, see Figure 3 where we visualize such “over-fitting.” ) In fact, this process a + a? + a? > --- keeps going and the second layer can feed better and oetter features to the third layer (forward learning), via the reduction of over-fitting from the third ayer (via backward correction). We can eventually learn G* to arbitrarily small error ¢ > 0. When here are more than two trainable layers, the process is slightly more involved, and we summarize his hierarchical learning process in Figure 6, ® Hierarchical learning in deep learning goes beyond layerwise training. Our results also shed lights on the following observation in practice: typically layerwise training (i.e. train layers one by one starting from lower levels)9 performs much worse than training all the layers together, see Figure 7. The fundamental reason is due to the missing piece of “backward feature correction.” From intuitions to theory. Although the intuitions do seem to generally apply in practice (see Figure 3 and many more experiments in the appendix), to actually prove them, we make modifications to the SGD algorithm and add regularizations. After the notations are introduced, in Section 6, we give a more detailed, 4-paged sketched proof to make this “proof by example” more concrete. 8Moreover, as a separate interest, according to our theorem, the improvement of lower-level features is mainly due to the “subtraction” of the higher-level signals. 
7 This additional error α arises precisely because there is a higher-complexity signal of magnitude α in the target function which cannot be fit using the current layer (it exceeds degree 4, the maximum degree polynomial that can be fit using only the second layer).

8 Moreover, as a separate point of interest, according to our theorem the improvement of the lower-level features is mainly due to the "subtraction" of the higher-level signals. This means that during training, most of the "backward" effort in a neural network comes from the "identity link". This is consistent with the empirical works [14, 63], where the authors observe that in ResNet the "backward" signal through the hidden weights can be detached during the training of multi-layer neural networks (keeping only the identity link) while still achieving comparable performance on standard data sets.

9 We refer to layerwise training as first training the 1st hidden layer while setting the other layers to zero, then training the 2nd layer while fixing the 1st layer and setting the others to zero, and so on. Such an algorithm is used in theoretical works such as [59]. There exist other works that use (deep) auxiliary networks to train the layers of a neural network one by one [15]; the authors of [15] also refer to their algorithm as layerwise training, but in our language such results perform hierarchical learning due to the existence of the auxiliary networks.

Figure 7 (plots omitted): Layerwise training vs. training all layers together. Panel (a): VGG19+BatchNorm (vgg-19 and its x2/x4 widened variants), where the accuracy at x-axis value S means only the first S convolutional layers are trained. Panel (b): WideResNet-34, where the accuracy at x-axis value S means only the first S convolutional blocks are trained. Details and more experiments are in Section 8.4. Take-away messages: in layerwise training, lower layers are trained too greedily and over-fit to higher-complexity signals (even at the second hidden layer), unlike in hierarchical learning (i.e., training all the layers together). Going deeper cannot increase accuracy anymore because the low-quality features of the lower layers are already fixed. For moderately wide (e.g., width 64) architectures, layerwise training stops improving test accuracy even after depth 3, without backward feature correction.

# 2 Target Network and Learner Network

Target network. We consider a target network defined as

G_0*(x) = x ∈ R^d,   G_1*(x) = σ(x) − E[σ(x)] ∈ R^d,   and for ℓ ≥ 2:   G_ℓ*(x) = σ( Σ_{j∈J_ℓ} W*_{ℓ,j} G_j*(x) ) ∈ R^{k_ℓ},

where the weight matrices satisfy W*_{ℓ,j} ∈ R^{k_ℓ × k_j} for every ℓ, j. Each index set J_ℓ is a subset of {0, 1, 2, …, ℓ−3} ∪ {ℓ−1}. We assume that (1) ℓ−1 ∈ J_ℓ (so there is a connection to the immediately preceding layer) and (2) for every ℓ ≥ 3, |J_ℓ| ≥ 2 (so there is at least one skip connection). We use the convention W*_{ℓ,j} = 0 if j ∉ J_ℓ.

Our concept class to be learned consists of functions G*: R^d → R written as a coordinate summation over the layers:^10

G*(x) = Σ_{ℓ=2}^L α_ℓ · Sum(G_ℓ*(x)) = Σ_{ℓ=2}^L α_ℓ Σ_{i∈[k_ℓ]} G*_{ℓ,i}(x),   where Sum(v) := Σ_i v_i,

and it satisfies α_2 = 1 and α_{ℓ+1} ≤ α_ℓ. We explain the meaningfulness and necessity of the information gap α_{ℓ+1} ≪ α_ℓ in Section 3.2.

It is convenient to define S_ℓ*(x) as the hidden features of the target network (so that G_ℓ*(x) = σ(S_ℓ*(x))):

S_0*(x) = G_0*(x) = x,   S_1*(x) = G_1*(x),   and for ℓ ≥ 2:   S_ℓ*(x) = Σ_{j∈J_ℓ} W*_{ℓ,j} G_j*(x),   G_ℓ*(x) = σ(S_ℓ*(x)).

Note that for ℓ ≥ 2, S_ℓ*(x) is of degree 2^{ℓ−1} and G_ℓ*(x) = σ(S_ℓ*(x)) is of degree 2^ℓ.

10 Our result trivially extends to the case when Sum(v) is replaced with Σ_i p_i v_i where p_i ∈ {±1} for half of the indices. We refrain from proving that version for notational simplicity.
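The target network defined above is simple to instantiate numerically. Below is a minimal numpy sketch; the dimensions, connection sets J_ℓ, Gaussian weights, and α_ℓ values are illustrative choices only (in particular, the random weights are not guaranteed to be well-conditioned in the sense required later).

```python
# A minimal numpy sketch of the target network G*(x) defined above, with
# sigma(z) = z^2 applied coordinate-wise.  All sizes below are illustrative.
import numpy as np

rng = np.random.default_rng(0)
d, L = 8, 4
k = {0: d, 1: d, 2: 6, 3: 5, 4: 4}              # k_0 = k_1 = d >= k_2 >= ...
J = {2: [1], 3: [0, 2], 4: [1, 3]}              # J_l subset of {0,..,l-3} u {l-1}
alpha = {2: 1.0, 3: 1e-2, 4: 1e-4}              # information gap: alpha_{l+1} << alpha_l
W = {(l, j): rng.standard_normal((k[l], k[j])) / np.sqrt(k[j])
     for l in J for j in J[l]}

sigma = lambda z: z ** 2
E_sigma = 1.0                                    # E[sigma(x_i)] = 1 for x ~ N(0, I)

def G_layers(x):
    G = {0: x, 1: sigma(x) - E_sigma}            # G_1(x) = sigma(x) - E[sigma(x)]
    for l in range(2, L + 1):
        S = sum(W[(l, j)] @ G[j] for j in J[l])  # S_l(x) = sum_j W*_{l,j} G_j(x)
        G[l] = sigma(S)                          # G_l(x) = sigma(S_l(x)), degree 2^l
    return G

def G_star(x):
    G = G_layers(x)
    return sum(alpha[l] * G[l].sum() for l in range(2, L + 1))   # sum_l alpha_l Sum(G_l)

print(G_star(rng.standard_normal(d)))
```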
Figure 8 (diagram omitted): learner network structure with distillation. Each hidden feature S_ℓ is built from the narrow trainable matrices K_{ℓ,j} (and the frozen random R_j), each F_ℓ applies σ to a sum of terms using the wide trainable matrices W_{ℓ,j}, and each W_ℓ is distilled into the corresponding K_ℓ.

Learner network. Our goal is to construct a learner network G of the same structure (with over-parameterization) to simulate G*:

G_ℓ(x) = σ( Σ_{j∈J_ℓ} M_{ℓ,j} G_j(x) ).

Here G_0(x) = x, G_1(x) = G_1*(x), and we choose M_{ℓ,0}, M_{ℓ,1} ∈ R^{binom(k_ℓ+1, 2) × d} and M_{ℓ,j} ∈ R^{binom(k_ℓ+1, 2) × binom(k_j+1, 2)} for every 2 ≤ j ≤ ℓ−1. In other words, the amount of over-parameterization is quadratic (i.e., from k_ℓ to binom(k_ℓ+1, 2)) per layer. We want to construct the weight matrices so that

G(x) = Σ_{ℓ=2}^L α_ℓ Sum(G_ℓ(x)) ≈ G*(x).

# 2.1 Learner Network Re-parameterization

In this paper, for the purpose of theoretically efficient training, we work with a re-parameterization of the learner network. We use the following function to fit the target G*(x): F(x) = Σ_{ℓ=2}^L α_ℓ · Sum(F_ℓ(x)), where the layers are defined as S_0(x) = G_0*(x), S_1(x) = G_1*(x),^11 and for ℓ ≥ 2:

S_ℓ(x) = Σ_{j∈J_ℓ: j≥2} K_{ℓ,j} σ(R_j S_j(x)) + Σ_{j∈{0,1}∩J_ℓ} K_{ℓ,j} S_j(x)  ∈ R^{k_ℓ}      (2.1)
F_ℓ(x) = σ( Σ_{j∈J_ℓ: j≥2} W_{ℓ,j} σ(R_j S_j(x)) + Σ_{j∈{0,1}∩J_ℓ} W_{ℓ,j} S_j(x) )  ∈ R^m   (2.2)

Above, we choose m to be polynomially large and let
• R_ℓ ∈ R^{binom(k_ℓ+1, 2) × k_ℓ} be randomly initialized for every layer ℓ and not changed during training; and
• W_{ℓ,j} ∈ R^{m × q} and K_{ℓ,j} ∈ R^{k_ℓ × q} be trainable for every ℓ and j ∈ J_ℓ, where the dimension q = binom(k_j+1, 2) for j ≥ 2 and q = d for j ∈ {0, 1}.

It is easy to verify that when R_ℓ^T R_ℓ = I and W_{ℓ,j} = K_{ℓ,j}, by defining M_{ℓ,j} = R_ℓ K_{ℓ,j} we have Sum(F_ℓ(x)) = Sum(G_ℓ(x)) and hence F(x) = G(x). We remark that the hidden dimension k_ℓ can also be learned during training; see Algorithm 1 in Section 4.^12

Why this re-parameterization. We work with the re-parameterization F(x) for efficient training purposes. It is convenient to think of S_ℓ(x) as the "hidden features" used by the learner network. Since S_ℓ(x) has the same dimension k_ℓ as S_ℓ*(x), our goal becomes to prove that the hidden features S_ℓ(x) and S_ℓ*(x) are close up to a unitary transformation (i.e., Theorem 2). To achieve this, we consider an over-parameterized F_ℓ(x) = σ(W⋯) and treat the pre-activation part (W⋯) ∈ R^m in (2.2) as "over-parameterized hidden features" over S_ℓ(x) ∈ R^{k_ℓ}, for some m ≫ k_ℓ. This over-parameterization is used to make training provably efficient, for a similar reason as in [6]. We impose regularizers to enforce K^T K ≈ W^T W, which in turn makes the hidden features S_ℓ(x) also learned accurately. This idea of using a larger W for training and a smaller K to learn W is reminiscent of knowledge distillation [37], and we illustrate it in Figure 8. In our sketched-proof Section 6, we give more details on this.

11 Recall G_1*(x) = σ(x) − E[σ(x)], and during training we only have access to the empirical estimate of E[σ(x)]; however, using poly(d/ε) samples the empirical estimate is sufficiently accurate. For cleanness we simply write the true expectation in S_1; the difference can easily be handled by a Lipschitz argument (see Section C.3).

12 From this definition it may seem that the learner needs to know {α_ℓ}_ℓ and {J_ℓ}_ℓ; as we point out in Section 4, performing a grid search over them is efficient in poly(d) time. This can be viewed as neural architecture search. As a consequence, in the agnostic setting our theorem can be understood as: "the learner network can fit the labeling function using the best G* from the concept class as well as the best choices of {α_ℓ}_ℓ and {J_ℓ}_ℓ."
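The re-parameterization (2.1) and (2.2) above can be written down directly. Here is a minimal numpy sketch, continuing the toy dimensions of the previous sketch; the width m, the initialization scale, and all sizes are arbitrary illustrative choices rather than the values required by the theorem.

```python
# Minimal numpy sketch of the re-parameterized learner (2.1)-(2.2).
import numpy as np

rng = np.random.default_rng(1)
d, L, m = 8, 4, 64
k = {0: d, 1: d, 2: 6, 3: 5, 4: 4}
J = {2: [1], 3: [0, 2], 4: [1, 3]}
alpha = {2: 1.0, 3: 1e-2, 4: 1e-4}
sigma = lambda z: z ** 2
comb2 = lambda n: n * (n + 1) // 2               # binom(n + 1, 2)
q = lambda j: d if j in (0, 1) else comb2(k[j])  # input width coming from block j

# Frozen random R_j, trainable wide W_{l,j} (m rows) and narrow K_{l,j} (k_l rows).
R = {j: rng.standard_normal((comb2(k[j]), k[j])) / np.sqrt(k[j]) for j in range(2, L + 1)}
W = {(l, j): 0.01 * rng.standard_normal((m, q(j)))    for l in J for j in J[l]}
K = {(l, j): 0.01 * rng.standard_normal((k[l], q(j))) for l in J for j in J[l]}

def forward(x):
    S = {0: x, 1: sigma(x) - 1.0}
    F = {}
    for l in range(2, L + 1):
        inputs = {j: (S[j] if j in (0, 1) else sigma(R[j] @ S[j])) for j in J[l]}
        S[l] = sum(K[(l, j)] @ inputs[j] for j in J[l])           # (2.1), in R^{k_l}
        F[l] = sigma(sum(W[(l, j)] @ inputs[j] for j in J[l]))    # (2.2), in R^m
    return sum(alpha[l] * F[l].sum() for l in range(2, L + 1)), S

value, hidden = forward(rng.standard_normal(d))
print(value, {l: hidden[l].shape for l in hidden})
```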
Truncated quadratic activation. To make our theory simpler, during training it is easier to work with an activation that has bounded derivatives on the entire space (recall |σ'(z)| = |z| is unbounded). We make the theoretical choice of a truncated quadratic activation σ̃(z) that is sufficiently close to σ(z). Accordingly, we write F̃(x), F̃_ℓ(x), S̃_ℓ(x) for F(x), F_ℓ(x), S_ℓ(x) whenever σ(·) is replaced with σ̃(·). (For completeness, the formal definition is included in Appendix A.1.) Our lemma (see Appendix C.1) ensures that F̃(x) ≈ F(x) and S̃_ℓ(x) ≈ S_ℓ(x). Thus, our final learned network F(x) still uses truly quadratic activations. In practice, people use batch/layer normalization to make sure activations stay bounded, but truncation is more theory-friendly.

Figure 9 (plot omitted): the truncated quadratic activation σ̃(z), which is identical to σ(z) = z^2 on [−B', B'] for some sufficiently large B' and has bounded derivative outside this interval.

Notation simplification. We concatenate the weight matrices used in the same layer ℓ as follows:

W_ℓ = (W_{ℓ,j})_{j∈J_ℓ},   K_ℓ = (K_{ℓ,j})_{j∈J_ℓ},   W_{ℓ,−} = (W_{ℓ,j})_{j∈J_ℓ, j≠ℓ−1},   K_{ℓ,−} = (K_{ℓ,j})_{j∈J_ℓ, j≠ℓ−1}.

# 2.2 Training Objective

We focus our notation on the regression problem in the realizable case; we introduce notation for the agnostic case and for classification in Section 3.2 when we need them.

As mentioned earlier, to perform knowledge distillation we add a regularizer to ensure W_ℓ^T W_ℓ ≈ K_ℓ^T K_ℓ, so that K_ℓ^T K_ℓ is a low-rank approximation of W_ℓ^T W_ℓ. (This also implies Sum(F̃_ℓ(x)) ≈ Sum(σ̃(S̃_ℓ(x))).) Specifically, we use the following training objective:

Õbj(x; W, K) = L̃oss(x; W, K) + Reg(W, K),

where the ℓ2 loss is L̃oss(x; W, K) = (G*(x) − F̃(x))^2 and

Reg(W, K) = Σ_{ℓ=2}^L λ_{3,ℓ} ‖K_{ℓ,ℓ−1}^T K_{ℓ,ℓ−1} − W_{ℓ,ℓ−1}^T W_{ℓ,ℓ−1}‖_F^2 + Σ_{ℓ=2}^L λ_{4,ℓ} ‖K_{ℓ,ℓ−1}^T K_{ℓ,−} − W_{ℓ,ℓ−1}^T W_{ℓ,−}‖_F^2 + Σ_{ℓ=2}^L λ_{5,ℓ} ‖K_ℓ^T K_ℓ − W_ℓ^T W_ℓ‖_F^2 + Σ_{ℓ=2}^L λ_{6,ℓ} ( ‖K_ℓ‖_F^2 + ‖W_ℓ‖_F^2 ).

For a given set Z consisting of N i.i.d. samples from the true distribution D, the training process minimizes the following objective (x ∼ Z denotes that x is uniformly sampled from the training set Z):

Õbj(Z; W, K) = E_{x∼Z} [ Õbj(x; W, K) ].      (2.3)

The regularizers we use are just (squared) Frobenius norms of the weight matrices, which are common in practice. The regularizers associated with λ_{3,ℓ}, λ_{4,ℓ}, λ_{5,ℓ} serve the knowledge-distillation purpose of keeping K close to W (they are simply zero when K_ℓ^T K_ℓ = W_ℓ^T W_ℓ). They play no role in backward feature correction, since layers ℓ and ℓ' for ℓ' ≠ ℓ are optimized independently in these regularizers; those corrections are done solely and automatically by SGD.

For the original, non-truncated quadratic activation network, we also denote Loss(x; W, K) = (G*(x) − F(x))^2 and Obj(x; W, K) = Loss(x; W, K) + Reg(W, K).

# 3 Statements of Main Result

We assume the input distribution x ∼ D satisfies regularity properties such as isotropy and hyper-contractivity. We defer the details to Section 5, while pointing out that not only the standard Gaussian but even some mixtures of non-spherical Gaussians satisfy these properties (see Proposition 5.1). For simplicity, the reader can think of D = N(0, I) in this section.

We consider a concept class consisting of target networks satisfying the following parameters:

1. (monotone) d ≥ k_2 ≥ k_3 ≥ ⋯ ≥ k_L.
2. (normalized) E_{x∼D}[Sum(G_ℓ*(x))] ≤ B_ℓ for some B_ℓ ≥ 1 for every ℓ, and B := max_ℓ {B_ℓ}.
3. (well-conditioned) the singular values of W*_{ℓ,j} are between 1/κ and κ for all ℓ and j ∈ J_ℓ.
Remark 3.1. Properties 1 and 3 are satisfied by many practical networks; in fact, many practical networks have weight matrices close to unitary, see [40]. As for property 2, although there may exist some worst-case W*_{ℓ,j}, at least when each W*_{ℓ,j} is of the form U_{ℓ,j} Σ V_{ℓ,j} for random orthonormal matrices U_{ℓ,j}, V_{ℓ,j}, it holds with probability at least 0.9999 that B_ℓ = 2^{O(L)} k_ℓ, for instance for standard Gaussian inputs; this is small since L = o(log log d).^13 Another view is that practical networks are equipped with batch/layer normalization, which ensures B_ℓ = O(k_ℓ).

Our results. In the main body of this paper we state a simple version of our main (positive) result, Theorem 1, which is already sufficiently interesting. In Appendix A we give a more general Theorem 1' that includes more parameter regimes. In this simple version, we assume there are absolute integer constants C > C_1 > 2 such that the concept class consists of target networks G*(x) satisfying the above three properties with parameters

κ ≤ 2^{C_1},   B_ℓ ≤ 2^{C_1} k_ℓ,   k_ℓ ≤ d^{1/C},   and an information gap α_{ℓ+1} ≤ α_ℓ / d^{C_1} for every ℓ ≥ 2

(the precise constants are as in Theorem 1'); furthermore, suppose that in the connection graph J_ℓ ∩ {2, 3, …, ℓ−C_1} = ∅, meaning that the skip connections do not go very deep unless they connect directly to the input.

Theorem 1 (special case of Theorem 1'). In the parameter regime defined above, for every sufficiently large d > 0, every L = o(log log d) and every ε ∈ (0, 1), consider any target network G*(x) satisfying the above parameters. Then, given N = poly(d/ε) i.i.d. samples x from D with corresponding labels G*(x), by applying Algorithm 1 (a variant of SGD) with over-parameterization m = poly(d/ε) and learning rate η = 1/poly(d/ε) over the training objective (2.3), with probability at least 0.99 we can find a learner network F in time poly(d/ε) such that:

E_{x∼D} (G*(x) − F(x))^2 ≤ ε^2   and   E_{x∼D} (G*(x) − F̃(x))^2 ≤ ε^2.

13 In fact, B_ℓ = 2^{O(L)} k_ℓ holds as long as E[(‖x‖_2^2 / d)^{2^L}] ≤ 2^{O(L)}. This can be derived using E_{W*} E_x[Sum(G_ℓ*(x))] = E_x E_{W*}[Sum(G_ℓ*(x))], so it suffices to consider a fixed x and use the randomness of W* to prove the claim.

We defer the detailed pseudocode^{14} of Algorithm 1 to Section 4 but make several remarks:

• Note that α_{ℓ+1} = α_ℓ · d^{−Θ(1)} implies α_L ≥ d^{−Θ(L)}, so α_L is not too small. Hence, to achieve, for instance, error ε ≪ α_L, the learning algorithm has to truly learn all the layers of G*(x), as opposed to, say, ignoring the last layer, which would already incur error α_L ≫ ε. (We choose this concept class so that learning all the layers is necessary.)

• The reason we focus on L = o(log log d) and well-conditioned target networks should be natural: since the target network is of degree 2^L, we wish to have κ^{2^L} ≤ poly(d), so that the output of the network is bounded by poly(d) for efficient learning.

The main conceptual and technical contribution of our paper is the "backward feature correction" process. To illustrate it, we highlight a critical lemma in our proof and state it as a theorem:

# Backward Feature Correction Theorem

Theorem 2 (highlight of Corollary E.3d).
In the setting of Theorem 1, during the training pro- cess, suppose the first €-layers of the learner network has achieved ¢ generalization error, or in symbols, E[(G*(2) — Nyce a¢Sum(Fy(x))) al <e, (3.1) then for every @ < 0, there is unitary matric Up € Re **e such that (we write ar41 = 0) [ab ||S$(e) — UeSe(@)|?| S 2a +22) | # E In other words, once we have trained the first ¢ layers well enough, for some lower-level layer t' <4, the “error in the learned features Sy (a) comparing to $7,(x)” is proportional to ag, 1. Recall ay is a decreasing sequence, thus Theorem 2 suggests that the lower-level features can actually get improved when we train higher-level layers together. Remark 3.2. Theorem 2 is not a “representation” theorem. There might be other networks F’ such that (3.1) is satisfied but Sp(a) is not close to $7,(«) at all. Theorem 2 implies during the training process, as long as we following carefully the training process of SGD, such “bad F” will be automatically avoided. We give more details in our intuition and sketched proof Section 6. Comparing to sequential kernel methods. Recall we have argued in Section 1.2 that our concept class is not likely to be efficiently learnable, if one applies kernel method O(1) times sequentially. Even if one applies kernel method for ω(1) rounds, this is similar to layerwise training and misses “backward feature correction.” As we pointed out using examples in Section 1.2, this is unlikely to learn the target function to good accuracy either. In fact, one may consider “sequential kernel” together with “backward feature correction”, but even this may not always work, since small generalization error does not necessarily imply sufficient accuracy on intermediate features if we do not follow the SGD training process (see Remark 3.2).15 14Algorithm 1. We made modifications on SGD to tradeoff for easier proofs. Two noticeable differences are as “Algorithm 1, We made modifications on SGD to tradeoff for easier proofs. Two noticeable differences are as follows. First, we start parameter training in the layer order— train W? first, then train W2, K2 together, then train Wo, Ko, W3 together, then train W2, Ko, W3, Ks together, etc. This is known as “layerwise pretraining” which performs no worse than “training all the layers together” and significantly better than “layerwise training.” Second, whenever Ky is added to training, we let it start from an SVD warm-start computed from Wy, (done only once for each Ky). Using SVD warm-start is a standard theory technique in non-convex literature (at least tracing back to (10), and it avoids the messier (and perhaps less interesting) proofs to deal with singularities in Ke. 15One may also want to connect this to [3]: according to Footnote 27, the analysis from [3] is analogous to doing “sequential kernel” for 2 rounds, but even if one wants to backward correct the features of the first hidden layer, its error remains to be α and cannot be improved to arbitrarily small. 13 Importance of Hierarchical Learning: To learn this concept class, to the best of our knowl- edge, • We do not know any other simple algorithm that can learn the target functions considered in this paper within the same efficiency, the only simple learning algorithm we are aware of is to train a neural network to perform hierarchical learning. 
• We present a setting where we can prove that training a neural network via a simple variant SGD can perform hierarchical learning to solve an underlying problem that is not known solvable by existing algorithms, such as applying kernel methods sequentially multiple times, tensor decomposition methods, sparse coding. Thus, neural network has a unique learning mechanism that is not simulating known (non-hierarchical) algorithms or their simple compositions. This can be viewed as an evidence of why practitioners choose to use neural network instead of other methods in modern machine learning. Agnostic learning. Our theorem also works in the agnostic setting, where the labeling function Y (2) satisfies E,.p(G* (x) — Y(x))? < OPT and |G*(x)—Y(x)| < poly(d) for some unknown G* (2). The SGD algorithm can learn a function F(x) with error at most (1+7)OPT +? for any constant 7 > 1 given iid. samples of {x,Y(x)}. Thus, the learner can compete with the performance of the best target network. We present the result in Appendix A.5 and state its special case below. Theorem 3 (special case of Theorem 3’). For every constant y > 0, in the same setting Theorem 1, given N = poly(d/e) t.i.d. samples Z from D and their corresponding labels {Y (x)}xez, by apply- ing Algorithm 1 (a variant of SGD) over the agnostic training objective E,.z (Y (x) — F(a))? + Reg(W,K), with probability > 0.99, it finds a learner network F in time poly(d/e) s.t. Ex∼D (F (x) − Y (x)))2 ≤ ε2 + (1 + γ)OPT . # 3.1 Backward Feature Correction: How deep? How much? How deep does it need for the neural network to perform backward feature correction? In our theoretical result, we studied an extreme case in which training the Z-th layer can even backward correct the learned weights on the first layer for L = w(1) (see Theorem 2). In practice, we demon- rate that backward feature correction may indeed need to be deep. For the 34-layer WideResNet architecture on CIFAR tasks, see Figure 10 on Page 15, we show that backward feature correction happens for at least 8 layers, meaning that if we first train all the < @ layers for some large ¢ (say = 21), the features in layer 0—8,0—7,--- ,@ still need to be (locally) improved in order to become 1e best features comparing to training all the layers together. This finding is consistent with |15) where the authors showed deeper “backward” during training leads to higher test accuracy. wn oS We also give a characterization on how much the features need to be backward corrected using theory and experiments. On the empirical side, we measure the changes given by backward feature correction in Figure 10 and 11. We detect that these changes are local: meaning although the lower layers need to change when training with higher layers together to obtain the highest accuracy, they do not change by much (the correlation of layer weights before and after backward correction is more than 0.9). In Figure 12, we also visualize the neurons at different layers, so that one can easily see backward feature correction is indeed a local correction process in practice. This is consistent with our theory. Theorem 2 shows at least for our concept class, backward feature correction is a local correction, meaning that the amount of feature change to the lower-level ayers (when trained together with higher-level layers) is only little-o(1) due to agy1 < ager. 
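The probing protocol behind these measurements ("train only the first ℓ blocks", then "fix the first ℓ−k blocks and train the rest") can be sketched in a few lines. The PyTorch sketch below is illustrative only: `build_wideresnet`, `blocks_of` and `train` are hypothetical helpers, and the actual experiments use the WideResNet-34 setup and schedules described in Section 8.5.

```python
# Minimal PyTorch sketch of the "fix the first blocks, train the rest" probe
# used to measure how deep backward feature correction (BFC) needs to go.
# `build_wideresnet`, `blocks_of` and `train` are hypothetical helpers.
import copy
import torch

def set_trainable(blocks, frozen_prefix_len):
    """Freeze the first `frozen_prefix_len` blocks, make everything else trainable."""
    for i, block in enumerate(blocks):
        for p in block.parameters():
            p.requires_grad_(i >= frozen_prefix_len)

def bfc_probe(build_wideresnet, blocks_of, train, ell, depth_of_correction):
    # 1) "train only <= ell": train the first ell blocks, keep the rest frozen at init.
    model = build_wideresnet()
    for i, block in enumerate(blocks_of(model)):
        for p in block.parameters():
            p.requires_grad_(i < ell)
    train(model)

    # 2) Fix the first (ell - depth_of_correction) blocks and train the rest;
    #    depth_of_correction = 0 means "no BFC", while larger values allow the
    #    top of the already-trained prefix to be corrected by the higher layers.
    probe = copy.deepcopy(model)
    set_trainable(blocks_of(probe), ell - depth_of_correction)
    train(probe)
    return probe
```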
Figure 10 (the two accuracy tables are omitted in this text version): CIFAR-100 accuracy difference on WideResNet-34-5 with vs. without backward feature correction (BFC). Each of the two tables reports, for ℓ = 1, 3, 5, …, 21, the test accuracy of: "train only ≤ ℓ" (no BFC); "fix ≤ ℓ, train the rest" (no BFC); "fix ≤ ℓ−2 / ℓ−4 / ℓ−6 / ℓ−8, train the rest" (BFC for 2/4/6/8 layers); and "train all the layers" (full BFC); together with the average weight correlations of "train ≤ ℓ" against random initialization and against "train all". Here "train ≤ ℓ" means training only the first ℓ convolutional layers, and the average weight correlation is the average of ⟨w_i, w_i'⟩ / (‖w_i‖ ‖w_i'‖), where w_i and w_i' are the neuron weight vectors before and after BFC. More experiments on CIFAR-10 and on adversarial training are in Section 8.5. Observations: (1) at least 8 layers of backward feature correction are necessary for obtaining the best accuracy; (2) BFC is indeed a local feature correction process, because neuron weights strongly correlate with those before BFC; and (3) the neural tangent kernel (NTK) approach is insufficient to explain neural network training, because neuron correlations with the random initialization are small.

Intuitively, the locality comes from the "information gap", which asserts that the lower layers of G* already fit a majority of the labels. When the lower layers of G are trained, their features will already be close to the "true" lower-level features of G*, so only a local correction is needed.^16 We believe that the need for only local backward feature corrections is one of the main reasons deep learning works in practice at performing efficient (deep) hierarchical learning. We refer to [3] for empirical evidence that deep learning fails to perform hierarchical learning when the information gap is removed and the correction becomes non-local, even in the teacher-student setting with a hierarchical target network exactly generating the labels. The main contribution of our theoretical result is to show that such local "backward feature correction" is done automatically when applying (a variant of) SGD to the training objective.
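The "average weight correlation" reported in Figure 10 above is simply the mean cosine similarity between corresponding neuron weight vectors of two checkpoints. A minimal sketch is given below; the layer-naming convention in the commented usage is an assumption, and any framework state dict of 2-D or 4-D weight tensors would work the same way.

```python
# Minimal sketch: average per-layer correlation <w_i, w_i'> / (|w_i| |w_i'|)
# between corresponding neuron weight vectors of two checkpoints.
import torch

def average_weight_correlation(state_before, state_after, name):
    w0 = state_before[name].flatten(1)   # one row per neuron (output channel)
    w1 = state_after[name].flatten(1)
    cos = torch.nn.functional.cosine_similarity(w0, w1, dim=1)
    return cos.mean().item()

# Usage sketch: compare every convolutional layer of two trained models, e.g.
# corr = {n: average_weight_correlation(sd_before, sd_after, n)
#         for n in sd_before if n.endswith('conv.weight')}
```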
Figure 11 (bar plot omitted): a more refined version of Figure 10, showing, for each comparison "train ≤ ℓ" vs. "train all" with ℓ ∈ {1, 3, …, 21}, the per-layer (per-block) average weight correlations of layers 1, 3, …, 21 between training with and without backward feature correction. Observation: BFC is a local correction, because neuron weights strongly correlate with those before BFC.

16 Recall that the purpose of such local correction is to fix over-fitting to higher-complexity signals.

Figure 12 (visualizations omitted): visualization of backward feature correction (per-neuron features at layers 13, 15 and 19) using WRN-34-5 under ℓ2 adversarial training, contrasting forward feature learning with backward feature correction and the resulting accuracies. Details are in Section 8.5. Observation: backward feature correction is a local correction, but it is necessary for the accuracy gain.

# 3.2 More on Information Gap and Classification Problem

We have made a gap assumption α_{ℓ+1} ≪ α_ℓ, which says that in the target function G*(x) higher levels contribute less to the output. This is typical for tasks such as image classification on CIFAR-10, where the first convolutional layer can already be used to classify more than 75% of the data and higher-level layers have diminishing contributions to the accuracy (see Figure 5). For such classification tasks, researchers do fight for even the final 0.1% performance gain by going to (much) larger networks, so those higher-level functions cannot be ignored.

Information gap: empirically. We point out that explicitly making higher levels of the network contribute less to the output has also been used empirically to improve the training of deep neural networks, such as very deep transformers [41, 55, 56].

To formally justify the information gap, it is beneficial to consider a classification problem. W.l.o.g. scale G*(x) so that Var_x[G*(x)] = 1, and consider a two-class labeling function Y(x_0, x):

Y(x_0, x) = sgn(x_0 + G*(x)) ∈ {−1, 1},

where x_0 ∼ N(−E_x[G*(x)], 1) is a Gaussian random variable independent of x. Here x_0 can be viewed either as a coordinate of the entire input (x_0, x) ∈ R^{d+1}, or more generally as a linear direction x_0 = w^T x̄ of the input x̄ ∈ R^{d+1}. For notational simplicity we focus on the former view.

Using probabilistic arguments, one can derive that, except for roughly an α_ℓ fraction of the inputs (x_0, x) ∼ D, the label Y(x_0, x) is fully determined by the target function G*(x) up to layer ℓ−1; in symbols,^{17}

Pr_{(x_0,x)∼D} [ Y(x_0, x) ≠ sgn( x_0 + Σ_{ℓ'≤ℓ−1} α_{ℓ'} Sum(G_{ℓ'}*(x)) ) ] ≲ α_ℓ.

In other words, for binary classification:

17 To be more precise, one can derive that with probability at least ~α_ℓ (up to a small polynomial factor) it holds that

x_0 + Σ_{ℓ'≤ℓ−1} α_{ℓ'} Sum(G_{ℓ'}*(x)) ∈ (−α_ℓ/poly(d), 0)   and   |Sum(G_ℓ*(x))| ≥ 1/poly(d).      (3.2)

Indeed, with probability at least 0.99 over x it holds that Σ_{ℓ'≤ℓ−1} α_{ℓ'} Sum(G_{ℓ'}*(x)) ≤ O(1), and with probability at least 0.99 over x it holds that Sum(G_ℓ*(x)) ≥ 1/poly(d) (using the well-conditioned properties from Section 5 with κ ≤ 2^{O(L)} and L = o(log log d)).
Then, using the property that xo is random Gaussian with variance 1 finishes the proof of (3.2), As a result, for at least ae/d? fraction of the data, the label function is affected by the ¢-th layer. One can do a similar argument to show that for at least 1 — ae/de fraction of the data, the label function is not affected by the ¢-th layer and beyond. # g<e-1 AsSum(G3(x)) 16 ae is (approximately) the increment in classification accuracy when we use an €-layer network comparing to (¢ — 1)-layer ones Therefore, information gap is equivalent to saying that harder data (which requires deeper networks to learn) are fewer in the training set, which can be very natural . For instance, around 70% images of the CIFAR-10 data can be classified correctly by merely looking at their rough colors and patterns using a one-hidden-layer network; the final < 1% accuracy gain requires much refined arguments such as whether there is a beak on the animal face which can only be detected using very deep networks. As another example, humans use much more training examples to learn counting, than to learn basic calculus, than to learn advanced calculus. For multi-class classification, information gap can be further relaxed. On CIFAR-100, a three- hidden layer network can already achieve 86.64% top-10 accuracy (see Figure 5 on Page 6), and the remaining layers only need to pick labels from these ten classes instead of the original 100 classes. In this classification regime, our Theorem 1 still applies as follows. Recall the cross entropy 1+e−yz where y ∈ {−1, 1} is the label and z ∈ R is the 1 (i.e., logistic loss) function CE(y, z) = − log prediction. In this regime, we can choose a training loss function Loss" (9,2; W,K) © CE(Y (xo, 2), v(a9 + F(x; W, K))) = log (1 + oY oa)v(a0+ Few K))) where the parameter v is around 1 ε is for proper normalization and the training objective is where the parameter v is around i is for proper normalization and the training objective is — xE ——~— xE Obj. (20,2; W,K) = Loss’ (9,2; W, K) + vReg(W, K) (3.3) We have the following corollary of Theorem 1: Theorem 4 (classification). In the same setting Theorem 1, and suppose additionally « > arin Given N = poly(d/e) i.i.d. samples Z from D and given their corresponding labels {Y (x0, 2) } (ao,x)eZ> — xE by applying a variant of SGD (Algorithm 1) over the training objective Obj (2; W,K), with prob- ability at least 0.99, we can find a learner network F in time poly(d/e) such that: Pr pt (0,2) # sgn(xo + F(x))] <e (x0,2)~ Intuitively, Theorem 4 is possible because under the choice v = 1/e, up to small multiplicative factors, “j-loss equals e2” becomes near identical to “cross-entropy loss equals <”. This is why we need to add a factor v in from of the regularizers in (3.3), We make this rigorous in Appendix G, 17 # Appendix I: Related Works, Experiments, Sketched Proofs We formally include the specifications of Algorithm 1 in Section 4. The requirements on the in- put distribution D is given in Section 5 (recall standard Gaussians and certain mixture of Gaussians are permitted). We give sketched proofs in Section 6, and discuss more related works in Section 7. We explain our experiment setups and give additional experiments in Section 8. # 4 Training Algorithm in each in- We describe our algorithm in Algorithm 1. It is almost the vanilla SGD algorithm: nermost iteration, it gets a random sample z ∼ D, computes (stochastic) gradient in (W, K), and moves in the negative gradient direction with step length η > 0. 
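In code, one innermost iteration is just a stochastic gradient step on the regularized objective. Below is a minimal autograd sketch; `F_truncated` and `Reg` are assumed stand-ins for F̃ and Reg(W, K) of Section 2.2, W and K are dictionaries of trainable tensors, and the sketch deliberately omits the staging, thresholds, SVD warm-start and Gaussian noise that the full Algorithm 1 adds on top of this step.

```python
# Minimal sketch of one innermost SGD iteration on Obj(x; W, K).
# `F_truncated(x, W, K)` and `Reg(W, K)` are assumed implementations of the
# truncated learner output and the distillation regularizer of Section 2.2;
# W and K are dicts of tensors created with requires_grad=True.
import torch

def sgd_step(x, y, W, K, eta, F_truncated, Reg):
    params = list(W.values()) + list(K.values())
    loss = (y - F_truncated(x, W, K)) ** 2 + Reg(W, K)
    grads = torch.autograd.grad(loss, params)
    with torch.no_grad():
        for p, g in zip(params, grads):
            p -= eta * g          # move in the negative gradient direction
    return loss.item()
```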
To make our analysis simpler, we made several minor modifications only for theory purpose on Algorithm 1 so that it may not appear immediate like the vanilla SGD at a first reading. e We added a target error ¢9 which is initially large, and when the empirical objective Obj falls elow $(€0)? we set €9 + €0/2. This lets us gradually decrease the weight decay factor Ao. e We divided Algorithm 1 into stages, where in each stage a deeper layer is added to the set of trainable variables. (When Obj falls below Thresy,,, we add We to the set; when it falls below Thresy.y, we add Ky to the set.) This is known as layerwise pre-training and we use it to simplify analysis. In practice, even when all the layers are trainable from the beginning, higher- evel layers will not learn high-complexity signals until lower-level ones are sufficiently trained. “Layerwise pre-training” yields almost identical performance to “having all the layers trainable from the beginning” (see Figure 7 and Section 8.4), and sometimes has advantage |43]. e When Ky is added to the set of trainable variables (which happens only once per layer £), we apply a low-rank SVD decomposition to obtain a warm-start for distilling Ky using Wy for theoretical purpose. This allows us to compute ke without knowing it in advance; it also 1elps avoid singularities in Ky which will make the analysis messier. This SVD warm-start is invoked only L times and is only for theoretical purpose. It serves little role in learning G*, and essentially all of the learning is done by SGD.18 We specify the choices of thresholds Thresg,, and Thresgy, and the choices of regularizer weights 3,0 A4,¢, \5,¢ in full in Appendix A, Below, we calculate their values in the special case of Theorem 1. 2 2 2 2 2 a Qa a a a £-1 £ 4 £ £ Thresy , = —— , Threspg =—— , Asy i, Are — , AXxe=— 4 (4.1) d3c1t dséc® dséc® d3ct d2ct As for the network width m, sample size N , and SGD learning rate η, in the special case Theorem 1 one can set N = poly(d/ε), m = poly(d/ε) and η = As mentioned above, our algorithm does not require knowing ky but learns it on the air. In Line 21 of Algorithm 1, we define rank,(M) as the number of singular values of M with value > b, 8For instance, after Ke is warmed up by SVD, the objective is still around a7 (because deeper layers are not trained yet). It still requires SGD to update each Ke in order to eventually decrease the objective to 7. 18 and use this to compute ky. Similarly, ag and the connection graph J can be learned as well, at the expense of complicating the algorithm; but grid searching suffices for theoretical purpose.!® Algorithm 1 A variant of SGD for DenseNet Algorithm 1 A variant of SGD for DenseNet Input: Data set Z of size N = |Z], network size m, learning rate 7 > 0, target error e. current target error €9 — B?; me 0; — A3ye, Ase, 5,0; A6,¢ — 03 [Reli,j — N(0, 1/(ke)?); 2: Ke, Wye <— 0 for every = 2,3,..., LD. 3: while e9 > ¢ do 4 5 an Ay: def while Obj “ Obj(Z;W, K) > 4 (eo)? do for (= 2,3,-.-,L do 6: if ne = 0 and Obj < Thresy a, then setup learning rate and weight decay 7: ne — 1, roe = ae © Be S maxf{ky : 7 € Aj > 2} 8: if A3¢ = 0 and Obj < Thresy.y_ then 9: set A3.0, A4e, Ase according to (4.1) 0 Ky < INITIAL-DISTILL?(W,); 1 end for 2: x «+ arandom sample from Z stochastic gradient descent (SGD) 3 for (= 2,3,---,L do __ 4: Ky «+ Ke — mVK,Obj(x; W,K). 5: W: + We — nVw,Obj (a; Ww, Kk) + noise © noise is any poly-small Gaussian noise; 6: end for © noise is for theory purpose to escape saddle points |29]. 
7: end while 8: €9 + €0/2 and Age + Ag e/4 for every ¢ = 2,3,..., L. 9: end while 20: return W and K, representing F(a; W, K). procedure INITIAL-DISTILL¢(W~) warm-up for Ky, called only once for each (=2,3,..., L 21: ke rank /(1942)(W/ ,Wee-1). 22: U,X, Ve ky-SVD(Wy ,Wee-1), 23: return Ky where Kj, = US"? and Key) = S1/?V. # 5 General Distributions Here we define the general distributional assumptions of our work. Given any degree-q homogenous L polynomial f(a) = SOyeqn ar Tjctny , define C,(f) = Denn 27 as the sum of squares of its coefficients. Input Distribution. We assume the input distribution D has the following property: 1. (isotropy). There is an absolute constant c6 > 0 such that for every w, we have that Ll (w, 2) 7] < cg||wl|3 and Eli. Sule)? < cg||w|3 (5.1) aT’ 1°Tt suffices to know ay up to a constant factor a since one can scale the weight matrices as if G* uses precisely a. Oe c . This increases By by at most 2? ‘so does not affect our result. Gird searching for a’, takes time O(log(1/e))” < poly(d/e). Moreover, searching the neural architecture (the connections J) takes time 20) poly(d). 19 2. (hyper-contractivity). There exists absolute constant c2 > 0 such that, for every integer q € [1,2], there exists value c4(q) > q such that, for every degree g polynomial f(z). 2 1/e4(q) Pr (|f(¢) — E[f(@)|| > Al < ea(q) 7 (verre) 2 1/e4(q) Pr (|f(¢) — E[f(@)|| > Al < ea(q) 7 (verre) (5.2) If D = N (0, I), we have c4(q) = O(q) (see Lemma I.2b). Note Eq. (5.2) implies there exists value c3(q) ≥ 1 such that, for every degree q polynomial f (x), for every integer p ≤ 6, [(f@)”"] <eME[U@y]" (5.3) avDl If D = N (0, I), we have c3(q) ≤ O((6q)!); and more generally we have c3(q) ≤ O(c4(q))c4(q). 3. (degree-preserving). For every integer q ∈ [1, 2L], there exists c1(q) ≥ 1 such that for every polynomial P (x) with max degree q, let Pq(x) be the polynomial consisting of only the degree-q part of P , the following holds Cx(Pq) ≤ c1(q) E x∼D P (x)2 (5.4) For D = N (0, I), such inequality holds with c1(q) ≤ q! (can be easily proved using Hermite polynomial expansion).20 Assumptions (isotropy) and (hyper-contractivity) are very common and they are satisfied for sub-gaussian distributions or even heavy-tailed distributions such as p(x) ∝ e−x0.1. Assumption (degree-preserving) says that data has certain variance along every degree q direction, which is also typical for distributions such like Gaussians or heavy-tailed distributions. We point out that it is possible to have a distribution to be a mixture of C-distributions satis- fying (5.4), where none of the individual distributions satisfies (5.4). For example, the distribution can be a mixture of d-distributions, the i-th distribution satisfies that xi = 0 and other coordinates are i.i.d. standard Gaussian. Thus, non of the individual distribution is degree-preserving, however, the mixture of them is as long as q ≤ d − 1. It is easy to check some simple distributions satisfy the following parameters. Proposition 5.1. Our distributional assumptions are satisfied for cg = O(1), c1(q) = O(q)4, ca(q) = O(q), ¢3(q) = q° when D = N(0, 57), where © has constant singular values (i.e., in between 2(1) and O(1)), it is also satisfied for a mizture of arbitrarily many D; = N(0, =?) ’s as long as each Xj; has constant singular values and for each j, the j-th row: ||[%4];\|2 has the same norm for every i. In the special case of the main theorem stated in Theorem 1, we work with the above parameters. 
In our full Theorem 1’, we shall make the dependency of those parameters transparent. # 6 Sketched Proof Our goal in this section is to make the high level intuitions in Section 1.2 concrete. In this sketched proof let us first ignore the difference between truncated activations and the true quadratic activa- tion. We explain at the end why we need to do truncation. Let us now make the intuition concrete. We plan to prove by induction, so let us assume for now that the regression error is ¢? and for every layer ¢’ < 0, the function Sy is already learned ?°We can also replace this degree-preserving assumption by directly assuming that the minimal singular value of Exzrv[(Sp * S7,) @ (SZ * $7)] defined in Lemma D.1 is large for ¢’ 4 £ (and the corresponding “symmetric version” is large for ¢’ = €), as well as E,~p|||$7||3] < B for every € > 2, > 0. 20 correct up to error ¢/ag < ¢/a. Let us now see what will happen if we continue to decrease the regression error to (€)? for some €< ¢. We want to show a b+. e Se; can be learned to error = (forward feature learning), a b+. e Se; can be learned to error = (forward e Sw can be backward corrected to error = e Sw can be backward corrected to error = for each ¢/ < ¢ (backward feature correction). Note that due to error between $7, and Sw for ¢ < ¢, when we use them to learn the (¢+ 1)-th layer, namely a741G7,, = ar410 (WF, , -0(Sz) +. ), we cannot learn it correct for any error better than ¢/ay x ae41. Fortunately, using information gap, we have €/ay X ag41 < €, so if we continue to decrease the regression loss to (€)?, we can at least “hope for” learning some ep Fes = aeyiGh 4 up to error € as long as € > e/a X ayy. (This implies S41 Â¥ St,1 up to error aa’) Moreover, if we have learned ay41GZ,, to error € and the regression error is (é)?, then the sum of the lower-order terms )oy <p av G7 is also of error € < €, so by induction the lower-level features also get improved. There are several major obstacles for implementing the above intuition, as we summarized blow. Function value v.s. coefficients. To actually implement the approach, we first notice that F4, is a polynomial of maximum degree 2+! however, it also has a lot of lower-degree monomials. Obviously, the monomials up to degree 2° can also be learned in lower layers such as Fy. As a result, it is ¢mpossible to derive Fp, © G7,, simply from F ~ G*. Using a concrete example, the learner network could instead learn Fe4i(“) © Gj,,(x) — F"(x) for some error function F"(x) of degree 2°, while satisfying Fe(a) + Gi(«) + 4 F"(2). degree 2°, while satisfying Fe(a) + Gi(«) + 4 F"(2). Our critical lemma (see Theorem 2 or Lemma E.1) proves that this cannot happen when we train the network using SGD. We prove it by first focusing on all the monomials in Fy, of degree 2£41,...,2°F!, which are not learnable at lower-level layers. One might hope to use this observation to show that it must be the case Foyi(x) ~ G* ¢+1(x), where the Fou contains all the monomials in Fy41 of degree 26 +4,...,2 and similarly for Coat. Unfortunately, this approach fails again. Even in the ideal case when we already have Fy41 Â¥ “a e’, it still does not imply Fo i Great +e’. One counterexample is the polynomial Viel dl ala? —1) where x; ~ N(0,1). This polynomial is e’-close to zero, however, its degree-2 5 EL terms Vari d°2") for learning the degree 2” target function, leading to an unsatisfying bound. when added up is actually Vde! >> e’. 
In worst case, such difference leads to complexity To correct this, as a first step, we count the monomial coefficients instead of the actual function value. The main observation is that, if the regression error is already (€)?, then?! e (Step 1). The top-degree (i.e., degree-2‘+!) coefficients of the monomials in Fy, is e’ close to ani , that of GZ. 4, in terms of £9-norm, for e’ = without sacrificing a dimension factor (and only sacrificing a factor that depends on the degree). Taking the above example, the 2 norm of the coefficients of ti is indeed ¢’, which does not grow with the dimension d. Symmetrization. Asa second step, one would like to show that Step 1 — namely, Fy1 is learned so that its coefficients of degree 241 monomials match 7. 4, implies We+1,¢ is close to Wi, 1 ¢ in some measure. Indeed, all of the top-degree (i.e., degree 24+!) monomials in Fz, come from o(We4100(ReSe)), where Sv consists of all the top-degree (i.e., degree-2‘!) monomials in S;. At the same time, inductive assumption says Sy is close to S7, so the coefficients of S¢ are also close to $7. In other words, we arrive at the following question: 21Concretely, this can be found in (E.7) in our proof of Lemma E.1. 21 If (1) the coefficients of Se(x), in ly-norm, are ¢'-close to that of Si(x), and (2) the coefficients of o(We4100(ReS:)), in l-norm, are <'-close to that of o(W%,, ,0(S?)), then, does it mean that Woe+i¢ is e'-close to Wi10 in some measure? The answer to this question is very delicate, due to the huge amount of “symmetricity” in a degree-4 polynomial. Note that both the following two quantities o(Wi4100(S?)) = || Wi e(1 @ 2)(S7 @ SF) o(We4100(ReSe)) = || Wes1c(Re ® Re) (Se @ S))|| 2 are degree-4 polynomials over Se are degree-4 polynomials over Se and Se respectively. In general, when « € R¢ and M,M’ € R®*®,, suppose (« @ 2) 'M(a @ 2) is e/-close to (# @ x)'M'(ax ® x) in terms of coefficients when we view them as degree 4 polynomials, this does not imply that M is close to M’ at all. Indeed, if we increase M (1,2),(3,4) by 10!° and decrease M(1.3),(2,4) by 10!°, then (x @ z)'M(a ® x) remains the same. One may consider a simple fix: define a symmetric version of tensor product— the “* product” in Definition B.2 — which makes sure x*a only has (4) dimensions, each corresponding to the {i, j}- th entry for i < j. This makes sure My, 9} 43,4) is the same entry as Myo 1} 44,33. Unfortunately, this simple fix does not resolve all the “symmetricity”: for instance, My) (3.4, and My 3) (9,44 are still difference entries. For reasons explained above, we cannot hope to derive We41, and W7, ; » are e'-close. However, they should still be close after “twice symmetrizing” their entries. For this purpose, we introduce a “twice symmetrization” operator Sym on matrices, and eventually derive that:?? ¢ (Step 2). Wesie and W7, W7, ,, are close under the following notation (for e’ ~ aa) ¢ (Step 2). Wesie and W7, ,, are close under the following notation (for e’ ~ aa) ) Sym ((Re* Re)! (Were) Were (Re* Re) © Sym ((L« 1! (Were) Wye (1*2)) £6 (6.1 We then use (6.1) to non-trivially derive that ¢(We+1,c0(RvS¢)) is close to o(W7, ; »o(S7)), since Se is close to S7 as we have assumed. This implies the monomials in Fy+1 of degree > a match that of GZ,,. It is a good start, but there are lower-degree terms to handle. match that of GZ,,. It is a good start, but there are lower-degree terms to handle. Low-degree terms. Without loss of generality, we assume the next highest degree is 26 + 2'-?. 
(It cannot be 2 + 261 since we assumed skip links.) Such degree monomials must either come from o(W7, ; ,7(S7))— which we have just shown it is close to o(We+1,0(ReSe))— or come from the cross term (S7 * Styl (Wiss 0)! W414 ¢-2( S72 * Si_) Using a similar analysis, we can first show that the learned function Fy,, matches in coefficients the top-degree (i.e., degree Qe + 2h?) monomials in the above cross term. Then, we wish to argue that the learned Wp+1,¢-2 is close to Wi 410-2 in some measure. the learned Wp+1,¢-2 is close to Wi 410-2 in some measure. In fact, this time the proof is much simpler: the matrix (W7 In fact, this time the proof is much simpler: the matrix (W7 ie) We +1,¢-2 18 not symmetric, and therefore we do not have the “twice symmetrization” issue as argued above. Therefore, we can directly conclude that the non-symmetrized closeness, or in symbols.?? 22The operator Sym(M) essentially averages out all the Mi,j,k,l entries when {i, j, k, l} comes from the same unordered set (see Definition B.3). The formal statement of (6.1) is in Eq. (E.9) of Appendix E.3. 23The formal statement of this can be found in (E.12). 22 # e (Step 3). We+1e-2 and W7,; _» are close in the following sense (for e’ ~ aa) ) (Rez * Re-2) | (Weyiy-2)! Werie (Re * Re) © (I#D)" (Wii 2)’ Wire (*D) (6.2) We can continue in this fashion for all the remaining degrees until degree 2° + 1. Moving from W to K: Part I. So far Steps 2&3 show that Wye,1,; and Wii are close in some measure. We hope to use this to show that the function S¢41 is close to S7 ‘,, and proceed the induction. However, if we use the matrix Wy, to define S41 (instead of introducing the notation Ke41), then Se, may have huge error compare to S7 ne Indeed, even in the ideal case that (We+1,2)' Weyie © (Wii) Wire +e’, this only guar- antees that Wei1~. * UW? sae + Ve! for some column orthonormal matrix U. This is because the inner dimension m of (Wei) Were is much larger than that the inner dimension kp+1 of Z. ers This Ve’ error can lie in the orthogonal complement of U. Z. ers This Ve’ error can lie in the orthogonal complement of U. To fix this issue, we need to “reduce” the dimension of Wy+1¢ back to ke,1 to reduce error. This is why we need to introduce the Ky; matrix of rank ky,,, and add a regularizer to ensure that Ki Kesie approximates (Wy+1,)' Wei. (This can be reminiscent of knowledge distillation used in practice |37|.) This knowledge distillation step decreases the error back to <’ < Vé', so now Ke41,¢ truly becomes e’ close to Wi ‘410 UP to column orthonormal transformation.?° We use this to proceed and conclude the closeness of S741. This is done in Section E.6. Moving from W to K: Part II. Now suppose the leading term (6.1) holds without the Sym operator (see Footnote 25 for how to get rid of it), and suppose the cross term (6.2) also holds. The former means “(Wy+1,2)' We+1,¢ is close to ( tao! Wie’ and the latter means “(We41e-2)' We41,¢ is close to (W4, ; »_2)' Wi, 1,”. These two together, still does not imply that “(We4ie—-2) | We4ie—2 is close to (Wii eo) Whyie 2”: since the error of We41,¢—2 can also lie on the orthogonal complement of We41,¢. This error can be arbitrary large when We41,¢ is not full rank. This means, the learner network can still make a lot of error on the +1 layer, even when it already learns all degree > 2 monomials correctly. To resolve this, we again need to use the regularizer to ensure closeness between Wy» ¢_2 to Ky _2. 
It “reduces” the error because by enforcing Wer+1¢-2 being close to Ke41,¢, it must be of low rank— thus the “arbitrary large error” from the orthogonal complement cannot exist. Thus, it is important that we keep Wy being close to the low rank counterpart Ke, and update them together gradually. Remark 6.1. If we have “weight sharing”, meaning forcing W412 = We+1,2, then we immediately have (Wes1e-2)' We4te-2 is close to (Wiyie-2)! Wisi ea: so we do not need to rely on “Wy+1 ¢-2 is close to Ke41,¢” and this can make the proof much simpler. To conclude, by introducing matrices Ky,, and enforcing the low-rank Ki, Ke to stay close to Wi West, we have distilled the knowledge from Wy; and can derive that 26 24Recall that without RIP-type of strong assumptions, such over-parameterization m is somewhat necessary for a neural network with quadratic activations to perform optimization without running into saddle points, and is also used in [6]. ?5Tn fact, things are still trickier than one would expect. To show “Ke+1,¢ close to W7+1,2;” one needs to first have “We+41,¢ close to W741,2”, but we do not have that due to the twice symmetrization issue from (6.1) Instead, our approach is to first use (6.2) to derive that there exists some matrix P satisfying “PKy41,¢ is close to PW?4,,,” and “P~!Ky41,¢-2 is close to PWi41.0-2”- Then, we plug this back to (6.1) to derive that P must be close to I. This is precisely why we need a skip connection. 26The formal statement can be found in (E.21). 23 e (Step 4). Up to unitary transformations, Ky,, is close to W7,, with error e’ ~ aa and this also implies $211 is close to rial with error e’ as desired. Empirical v.s. Population loss. We have given a sketched proof to our intuition focusing on the case when F' is in the population case (i.e., under the true distribution D), since properties such as degree preserving Property 5.4 is only true for the population loss. Indeed, if we only have poly(d) samples, the empirical distribution can not be degree-preserving at all for any of = w(1). One would like to get around it by showing that, when F' is close to G* only on the training data set Z, then the aforementioned closeness between 5S, and S7 still holds for the population case. This turns out to be a challenging task. One naive idea would be to show that E,~z (F(x) — G*())? is close to Epwp (F(a) — G*(x))* for any networks weights W,K. However, this cannot work at all. Since F(x) — G*(x) is a degree 2© polynomial, we know that for a fixed F, E,wz (F(x) — G*(x))? © Epvp (F(2) — G*(a))? + € N log(1/<)) 25 only holds with probability e~ , where |Z| = N. This implies, in order for it to hold for all possible W, K, we need at least N = Qa") many samples, which is too bad. We took an alternative approach. We truncated the learner network from F' to F using truncated quadratic activations (recall 2.2): if the intermediate value of some layers becomes larger than some parameter B’, then we truncate it to Q(B’). Using this operation, we can show that the function output of F is always bounded by a small value. Using this, one could show that E,.z (F (x) — G*(«))? ~ E,wp (F(z) - G*(2))? 4 é. But, why is F(a) necessarily close to F (x), especially on the training set Z? If some of the x € Z is too large, then (F (2) - F(2))? can be large as well. Fortunately, we show during the training process, the neural network actually has implicit self-regularization (as shown in Corollary E.3e): Sp(a)||? stay away from 2B for most of the « ~ D. This ensures that E.~p(F (x) — F(2))? 
is small in the population loss. the intermediate values such as This implicit regularization is elegantly maintained by SGD where the weight matrix does not move too much at each step, this is another place where we need gradual training instead of one-shot learning. Using this property we can conclude that G*(x))” small E x∼Z which allows us to interchangeably apply all the aforementioned arguments both on the empirical truncated loss and on the population loss. # 7 More on Related Works Historically, due to the extreme non-convexity, for theoretical studies, the hierarchical structure of a neural network is typically adisadvantage for efficient training. For example, multi-layer linear network [24, 35] has no advantage over linear functions in representation power, but it already creates huge obstacle for analyzing the training properties. With such difficulties, it is perhaps not surprising that existing theory in the efficient learning regime of neural networks, mostly study (a simpler but already non-trivial) question: “can multi- layer neural networks efficiently learn simple functions that are already learnable by non-hierarchical models.” Specifically, they either reduce multi-layer neural networks to non-hierarchical models such as kernel methods (a.k.a. neural kernels) or focus on two-layer networks which do not have the deep hierarchical structure. 24 Learning two-layer network [5, 13, 17, 19, 22, 30, 31, 44, 46, 47, 49–51, 64, 68, 69, 72, 74, 75, 77, 80, 81]. There is a rich history of works considering the learnability of neural networks trained by SGD. However, as we mentioned before, many of these works only focus on network with 2 layers or only one layer in the network is trained. Hence, the learning process is not hierarchical in the language of this paper. Note even those two-layer results that study feature learning as a process (such as [5, 22, 53]) do not cover how the features of second layer can help backward correct the first layer, not to say repeating them for multiple layers may only give rise to layerwise training as opposed to the full hierarchical learning. Neural tangent/compositional kernel [4, 7, 8, 11, 12, 20, 21, 23, 25, 26, 32, 34, 39, 42, 48, 52, 62, 67, 76, 82, 83]. There is a rich literature approximating the learning process of over-parameterized networks using the neural tangent kernel (NTK) approach, where the kernel is defined by the gradient of a neural network at random initialization [42]. Others also study neural compositional kernel through a random neural network [23, 67]. One should not confuse these hierarchically-defined kernels with hierarchical learning. As we pointed out, see also Bengio [16], hierarchical learning means that each layer learns a combination of previous learned layers. In these cited kernel methods, such combinations are prescribed by the random initialization and not learned during training. As our negative result shows, for certain learning tasks, hierarchical learning is superior than any kernel method, so the hierarchically-learned features are indeed superior than any (even hierarchically) prescribed features. (See also experiments in Figure 4.) Three-layer result [6]. This paper shows that 3-layer neural networks can learn the so-called “second-order NTK,” which is not a linear model; however, second-order NTK is also learnable by doing a nuclear-norm constrained linear regression over the feature mappings defined by the ini- tialization of a neural network. 
Thus, the underlying learning process is still not truly hierarchical. Three-layer ResNet result [3]. This paper shows that 3-layer ResNet can at least perform some weaker form of implicit hierarchical learning, with better sample or time complexity than any kernel method or linear regression over feature mappings. Our result is greatly inspired by [3], but with several major differences. First and foremost, the result [3] is only forward feature learning without backward feature correction. It is a weaker version of hierarchical learning. Second, the result [3] can also be achieved by non-hierarchical methods such as simply applying kernel method twice.27 Third, we prove in this paper a “poly vs. super-poly” running time separation, which is what one refers to as “efficient vs non-efficient” in traditional theoretical computer science. The result [3] is regarding “poly vs. bigger poly” in the standard regime with constant output dimension. 28 Fourth, as we illustrate in Section 6, the major technical difficulty of this paper comes from ?7Recall the target functions in (3) are of the form F(x) + a- G(F(«)) for a < 1, and they were proved learnable by 3-layer ResNet up to generalization error a? in 3.. Here is a simple alternative two-step kernel method to achieve this same result. First, learn some F’(x) that is a-close to F(x) using kernel method. Then, treat (x, F’(z)) as the input to learn two more functions F,G using kernel method, to ensure that F(x) + aG(F’(2)) is close to the target. This incurs a fixed generalization error of magnitude a”. Note in particular, both this two-step kernel method as well as the 3-layer ResNet analysis from (3) never guarantees to learn any function F(x) that is a* close to F(x), and therefore the “intermediate features” do not get improved. In other words, there is no backward feature correction. ?8The result |3) only works for a concept class whose functions contain merely networks with “number of hidden neurons = output dimension.” Putting into the case of this paper, the output dimension is 1, so the result 3, only supports networks with one hidden neuron, and gives no separation between neural networks and kernel methods. When the output dimension is O(1), they give separation between d and d° which is “poly vs bigger poly”. 25 showing how the hidden features are learned hierarchically. In contrast, the intermediate features in [3] are directly connected to the outputs so are not hidden.29 Fifth, without backward feature correction, the error incurred from lower layers in [3] cannot be improved through training (see Footnote 27), and thus their theory does not lead to arbitrarily small generalization error like we do. This also prevents [3] from going beyond L = 3 layers. Separation between multi-layer networks and shallower learners. Prior results such as [27, 70] separate the representation power of multi-layer networks from shallower learners (with- out efficient training guarantee), and concurrent results [22, 53] separate the power of two-layer neural networks from kernel methods with efficient training guarantees. As we emphasized, proving separation is not the main message of this paper, and we focus on studying how deep learning perform efficient hierarchical learning when L = ω(1). Other theoretical works on hierarchical learning [1, 9, 61]. There are other theoretical works to perform provable hierarchical learning. The cited works [9, 61] propose new, discrete learning algorithms to learn certain hierarchical representations. 
In contrast, the main goal of our work is to explore how deep learning (multi-layer neural networks) can perform hierarchical learning simply by applying SGD to the training objective, which is the most dominant hierarchical learning framework in practice nowadays. The follow-up work [1] studied learning "staircase" polynomials over the Boolean cube via layerwise training. Their setting does not require backward feature correction (because over the Boolean cube, monomials of lower degrees are orthogonal to those of higher degrees), so it may not capture the full power of hierarchical learning in practical deep learning (in which backward feature correction is necessary and layerwise training does not work well).

29 For experts familiar with [3]: they only proved that hierarchical learning happens when the output vector contains explicit information about the intermediate output. In symbols, their target network is y = F(x) + α · G(F(x)), so the output label y is a vector that has explicit information about the vector F(x) up to error α. In this paper, we show that the network can discover hidden feature vectors from the target function even if the output dimension is 1, such as y = u⊤F(x) + α · v⊤G(F(x)).

# 8 Details on Empirical Evaluations

Our experiments use the CIFAR-10 and CIFAR-100 datasets [45]. In one of our experiments, we also use what we call CIFAR-2, which re-groups the 10 classes of CIFAR-10 into two classes (bird, cat, deer, dog, horse vs. the rest) and is a binary classification task. We adopt standard data augmentations: random crops, random flips, and normalization; for adversarial training, we remove data normalization. For some of the experiments (to be mentioned later), we also adopt random Cutout augmentation [67] to obtain higher accuracy.

We note there is a distinction between the original ResNet [36] and the later, more popularized (pre-activation) ResNet [78]. We adopt the latter because it is the basic block of WideResNet (WRN) [78]. Recall ResNet-34 has 1 convolutional layer plus 15 basic blocks, each consisting of 2 convolutional layers. We have also implemented VGG19 and VGG13 in some of our experiments; they have 16 and 10 convolutional layers respectively.

All the training uses stochastic gradient descent (SGD) with momentum 0.9 and batch size 125, unless otherwise specified.

# 8.1 Feature Visualization on ResNet-34: Figure 1

We explain how Figure 1 is obtained. Throughout this paper, we adopt the simplest possible feature visualization scheme for ResNet: start from a random 32x32 image, then repeatedly take its gradient so as to maximize a given neuron in some layer. We perform gradient updates on the image for 2000 steps, with weight decay factor 0.003.

Note however, if the network is trained normally, then the above feature visualization process outputs images that appear like high-frequency noise (for reasons of this, see [5]). Therefore, in order to obtain Figure 1 we run adversarial training. The specific adversarial attacker that we used in the training is an ℓ2 PGD perturbation plus Gaussian noise, as suggested by [65]. That is, we randomly perturb the input twice, each time with Gaussian noise σ = 0.12 per coordinate, and then perform 4 steps of PGD attack with ℓ2 radius r = 0.5. We call this the ℓ2(0.5, 0.12) attacker for short.
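To make the attacker above concrete, the following is a minimal PyTorch-style sketch of the ℓ2(radius, σ) attacker: a Gaussian perturbation of the input followed by a few ℓ2-projected PGD steps. The helper names, the step-size choice, and the use of a cross-entropy attack loss are our own illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn.functional as F

def l2_gaussian_pgd(model, x, y, radius=0.5, sigma=0.12, steps=4, step_size=0.25):
    """Sketch of the l2(radius, sigma) attacker: Gaussian noise + a few l2 PGD steps.

    The paper perturbs the input twice with Gaussian noise (e.g., two random restarts);
    for brevity this sketch uses a single Gaussian draw. `step_size` is an assumption.
    """
    ones = [1] * (x.dim() - 1)
    delta = (sigma * torch.randn_like(x)).detach().requires_grad_(True)

    for _ in range(steps):
        loss = F.cross_entropy(model(x + delta), y)
        grad, = torch.autograd.grad(loss, delta)
        # Normalized (l2) gradient ascent step, per example.
        g_norm = grad.flatten(1).norm(dim=1).clamp_min(1e-12).view(-1, *ones)
        delta = delta + step_size * grad / g_norm
        # Project back onto the l2 ball of the given radius.
        d_norm = delta.flatten(1).norm(dim=1).clamp_min(1e-12).view(-1, *ones)
        delta = (delta * torch.clamp(radius / d_norm, max=1.0)).detach().requires_grad_(True)

    return (x + delta).detach()
```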
Recall ResNet-34 has 3 parts: the first part has 11 convolutional layers consisting of 16 channels each; the second part has 10 convolutional layers consisting of 32 channels each (but we plot 24 of them due to space limitation); the third part has 10 convolutional layers consisting of 64 channels each (but we plot 40 of them due to space limitation).

To be consistent with the theoretical results of this paper, to obtain Figure 1, we have modified ResNet-34 to make it more like DenseNet: the network output is now a linear function (AvgPool+FC) over all the 16 blocks (15 basic blocks plus the first convolutional layer). This modification will not change the final accuracy by much. Without this modification, the feature visualizations will be similar; but with this modification, we can additionally see the "incremental feature change" in each of the 3 parts of ResNet-34.

# 8.2 Toy Experiment on AlexNet: Figure 2

We explain how Figure 2 is obtained. Recall AlexNet has 5 convolutional layers with ReLU activation, connected sequentially. The output of AlexNet is a linear function over its 5th convolutional layer. To make AlexNet more connected to the language of this paper, we redefine its network output as a linear function over all five convolutional layers. We only train the weights of the convolutional layers and keep the weights of the linear layer unchanged.

We use fixed learning rate 0.01, momentum 0.9, batch size 128, and weight decay 0.0005. In the first 80 epochs, we freeze the (randomly initialized) weights of the 2nd through 5th convolutional layers and only train the weights of the first layer. In the next 120 epochs, we unfreeze those weights and train all 5 convolutional layers together. As one can see from Figure 2, in the first 80 epochs we have sufficiently trained the first layer (alone) so that its features do not move significantly anymore; however, as the 2nd through 5th layers become trained together, the features of the first layer get significantly improved.

# 8.3 Quad vs ReLU vs NTK: Figure 4

Recall Figure 4 compares the performance of ReLU networks, quadratic networks, and kernel methods. We use standard data augmentation plus Cutout augmentation in these experiments. Recall Cutout was also used in [67] for presenting the best accuracy of neural kernel methods, so this comparison is fair.

ReLU network. For the network WRN-L-10, we widen each layer of a depth-L ResNet by a factor of 10. We train for 140 epochs with weight decay 0.0005. We use initial learning rate 0.1 and decay it by a factor of 0.2 at epochs 80, 100 and 120. In the plots we present the best test accuracy out of 10 runs, as well as their ensemble accuracy.

Quadratic network. For the quadratic network WRN-L-10, we make slight modifications to the network to make it closer to the architecture used in our theorem and to make it more easily trainable. Specifically, we use the activation function σ(z) = z + 0.1z² instead of σ(z) = z² to make the training more stable. We swap the order of Activation and BatchNorm so that BN comes after the quadratic activation; this re-scaling also stabilizes training. Finally, consistent with our theory, we add a linear layer connecting the output of each layer to the final soft-max gate, so the final output is a linear combination of all the intermediate layers. We train the quadratic WRN-L-10 also for 140 epochs with weight decay 0.0005. We use initial learning rate 0.02 and decay it by a factor of 0.3 at epochs 80, 100 and 120. We also present the best test accuracy out of 10 runs and their ensemble accuracy.
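As a concrete illustration of these modifications, the following PyTorch-style sketch shows one convolutional block with the σ(z) = z + 0.1z² activation, BatchNorm applied after the activation, and a per-block AvgPool+FC head whose outputs are summed into the final logits. The module structure and names are our own illustrative assumptions; they are not the authors' code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class QuadAct(nn.Module):
    """sigma(z) = z + 0.1 * z**2, the stabilized quadratic activation."""
    def forward(self, z):
        return z + 0.1 * z * z

class QuadBlock(nn.Module):
    """Conv -> quadratic activation -> BatchNorm (BN after activation), plus an
    AvgPool+FC head feeding this block's contribution to the final logits."""
    def __init__(self, in_ch, out_ch, num_classes, stride=1):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, 3, stride=stride, padding=1, bias=False)
        self.act = QuadAct()
        self.bn = nn.BatchNorm2d(out_ch)
        self.head = nn.Linear(out_ch, num_classes)  # linear layer to the soft-max gate

    def forward(self, x):
        h = self.bn(self.act(self.conv(x)))
        logits = self.head(torch.flatten(F.adaptive_avg_pool2d(h, 1), 1))
        return h, logits

# Usage sketch: the network output is the sum of all per-block logits.
blocks = nn.ModuleList([QuadBlock(3, 16, 10), QuadBlock(16, 32, 10, stride=2)])
x = torch.randn(8, 3, 32, 32)
total_logits = 0
for blk in blocks:
    x, logits = blk(x)
    total_logits = total_logits + logits
```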
Finite-width NTK. We implemented a naive NTK version of the (ReLU) WRN-L-10 architecture on the CIFAR-10 dataset, and use iterative algorithms to train this (linear) NTK model. Per-epoch training is 10 times slower than standard WRN-L-10 because the 10-class outputs each require a different set of trainable parameters. We find Adam with learning rate 0.001 is best suited for training such tasks, but the convergence speed is rather slow. We use batch size 50 and zero weight decay, since the model does not overfit to the training set (thanks to data augmentation). We run the training for 200 epochs, with learning rate decay factor 0.2 at epochs 140 and 170. We run 10 single models using different random initializations (which correspond to 10 slightly different kernels) and report the best single-model accuracy; our ensemble accuracy is obtained by combining the outputs of the 10 models.

In our finite-width NTK experiments, we also try with and without ZCA data preprocessing for comparison: ZCA data preprocessing was known to achieve an accuracy gain in neural kernel methods [67], but we observe that in practice it does not help in training standard ReLU or quadratic networks.

We only run this finite-width NTK for WRN-10-10. Using for instance WRN-16-10 to obtain the same test accuracy, one has to run for much more than 200 epochs; due to resource limitations, we refrain from trying bigger architectures in this finite-width NTK experiment.

# 8.4 Layerwise vs Hierarchical Learning: Figure 7

Recall Figure 7 compares the accuracy difference between layerwise training and training all the layers together on VGG19 and ResNet-34 architectures. We also include in Figure 7 additional experiments on VGG13 and ResNet-22.

[Figure 13: Layerwise training vs. training all layers together (additional experiments to Figure 7). Panel (a): VGG13+BatchNorm (and its x2/x4 widened variants) on CIFAR-10 and CIFAR-100; accuracy at x-axis S indicates only the first S convolutional layers are trained, comparing layerwise against jointly trained models. Panel (b): WideResNet-22 (and its x4/x8/x16 widened variants) on CIFAR-10 and CIFAR-100; accuracy at x-axis S indicates only the first S convolutional blocks are trained, comparing layerwise against jointly trained models.]

In those experiments, we use standard data augmentation plus Cutout. When widening an architecture, we widen all the layers together by the specified factor. When performing "layerwise training", we adopt the same setup as Trinh [73]. During the ℓ-th phase, we freeze all the previous (ℓ−1) convolutional layers to their already-trained weights (along with batch norm), add an additional linear layer (AvgPool + FC) connecting the output of the ℓ-th layer to the final soft-max gate, and only train the ℓ-th convolutional layer (with batch norm) together with this additional linear layer. We train them for 120 epochs with initial learning rate 0.1 and decay it by 0.1 at epochs 80 and 100. We try both weight decay 0.0001 and 0.0005 and report the better accuracy for each phase ℓ (note this is needed for layerwise training, as a smaller weight decay is suitable for smaller ℓ). Once we move to the next phase ℓ+1, we discard this additional linear layer.30
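The following PyTorch-style sketch illustrates one phase ℓ of this layerwise training: all earlier layers are frozen, a temporary AvgPool+FC head is attached to layer ℓ, and only layer ℓ plus this head are optimized. The function and variable names, and the choice of `feat_dim`, are our own illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def train_phase(layers, ell, num_classes, feat_dim, loader, epochs=120):
    """Train only layer `ell` (0-indexed) plus a temporary AvgPool+FC head;
    layers 0..ell-1 are frozen at their already-trained weights."""
    for i, layer in enumerate(layers):
        for p in layer.parameters():
            p.requires_grad = (i == ell)
        if i < ell:
            layer.eval()  # keep frozen BatchNorm statistics fixed
    head = nn.Linear(feat_dim, num_classes)  # discarded after this phase
    params = list(layers[ell].parameters()) + list(head.parameters())
    opt = torch.optim.SGD(params, lr=0.1, momentum=0.9, weight_decay=5e-4)
    sched = torch.optim.lr_scheduler.MultiStepLR(opt, milestones=[80, 100], gamma=0.1)

    for _ in range(epochs):
        for x, y in loader:
            h = x
            with torch.no_grad():               # frozen earlier layers
                for layer in layers[:ell]:
                    h = layer(h)
            h = layers[ell](h)                  # the only trained layer
            feat = torch.flatten(F.adaptive_avg_pool2d(h, 1), 1)
            loss = F.cross_entropy(head(feat), y)
            opt.zero_grad()
            loss.backward()
            opt.step()
        sched.step()
    return layers  # the temporary head is dropped before the next phase
```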
For "training all layers together", to make our comparison even stronger, we adopt nearly the same training setup as "layerwise training", except that in the ℓ-th phase we do not freeze the previous ≤ ℓ−1 layers and instead train all the ≤ ℓ layers altogether. In this way, we use the first (ℓ−1) layers' pre-trained weights to continue training. The test accuracy obtained from this procedure is nearly identical to training the first ℓ layers altogether directly from random initialization.31

Finally, for the ResNet experiments, we regard each basic block (consisting of 2 convolutional layers) as a single "layer", so in each phase (except for the first phase) of layerwise training, we train a single block together with the additional linear layer.

# 8.5 Measure Backward Feature Correlation: Figures 3, 10, 11 and 12

Recall in Figure 3 and Figure 12 we visualize how layer features change before and after backward feature correction (BFC); in Figure 10 and Figure 11 we present how much accuracy gain is related to BFC, and how much and how deep BFC goes on the CIFAR-100 dataset. In this section, we also provide additional experiments showing how much and how deep BFC goes on (1) the CIFAR-10 dataset in Figure 14(a), (2) ℓ∞ adversarial training in Figure 14(b), and (3) ℓ2 adversarial training in Figure 14(c).

In all of these experiments we use the vanilla WRN-34-5 architecture [78] (thus without widening the first layer) and without introducing the "additional linear layer" of Section 8.4. We use initial learning rate 0.1 and weight decay 0.0005. For clean training we train for 120 epochs and decay the learning rate by 0.1 at epochs 80 and 100; for adversarial training we train for 100 epochs and decay the learning rate by 0.1 at epochs 70 and 85. For the case of ℓ ∈ {0, 1, 2, ..., 10}:

• we first train only the first ℓ blocks of WRN-34-5 (and thus 2ℓ + 1 convolutional layers), by zeroing out all the remaining deeper layers. We call this "train only ≤ ℓ";

• we freeze these 2ℓ + 1 layers and train only the deeper blocks (starting from random initialization), and call this "fix ≤ ℓ, train the rest";

• we also try to only freeze the ≤ ℓ−j blocks for j ∈ {1, 2, 3, 4} and train the remaining deeper blocks, and call this "fix ≤ ℓ−j, train the rest";

• we start from random initialization and train all the layers, but regularize the weights of the first ≤ ℓ blocks so that they stay close to those obtained from "train only ≤ ℓ",32 and we call this "train all the layers".

30 Our "additional linear layer" is represented by a 2-dimensional average pooling unit followed by a (trainable) fully-connected unit. "Discarding" this additional linear layer before moving to the next phase is also used in [15, 73].

31 Our adopted process is known as "layerwise pre-training" in some literature, and is also related to Algorithm 1 that we used in our theoretical analysis. We emphasize that "layerwise pre-training" should be considered as training all the layers together; they have the same performance.

32 In principle, one can tune this regularizer weight so as to maximize the neuron correlations without hurting the final accuracy. We did not do that, and simply trained using weights 0.0005 and 0.0007 and reported the better one.
[Figure 14: additional experiments comparing to Figure 10. Each panel tabulates, for ℓ = 1, 3, 5, ..., 21, the CIFAR-10 test accuracy of "train only ≤ ℓ", "fix ≤ ℓ, train the rest", "fix ≤ ℓ−2 / ℓ−4 / ℓ−6 / ℓ−8, train the rest", and "train all the layers", together with the average weight correlations of "train ≤ ℓ" against random initialization and against "train all"; columns with no BFC are contrasted with columns having BFC for 2/4/6/8 layers and full BFC. Panel (a): clean training on CIFAR-10. Panel (b): adversarial training on CIFAR-10 with ℓ∞ radius 6/255. Panel (c): adversarial training on CIFAR-10 with the ℓ2(0.5, 0.12) attacker.]
This explains how we obtained Figure 10, Figure 11 and Figure 14. We emphasize that, by comparing the accuracy difference between "train all the layers" and "fix ≤ ℓ−j, train the rest", one can immediately conclude how deep backward feature correction needs to go.

As for the feature visualizations in Figure 3 and Figure 12, we compare the last-layer visualizations of "train only ≤ ℓ" (or equivalently "fix ≤ ℓ, train the rest"), which has no backward feature correction from deeper layers, against that of "train all the layers", which is after backward feature correction from all the deeper layers.

For the adversarial attacker used in Figure 14(b), we used an ℓ∞ PGD attacker for 7 steps during training and 20 steps during testing; for the adversarial attacker used in Figure 14(c), we used the ℓ2(0.5, 0.12) attacker (see Section 8.1) for training and increased its number of PGD steps to 20 during testing.

# 8.6 Gap Assumption Verification: Figure 5

Recall in Figure 5 we have compared the accuracy performance of WRN-34-10 with various depths. In this experiment we have widened all the layers of the original ResNet-34 by a factor of 10, and we remove the deepest j basic blocks of the architecture for j ∈ {0, 1, 2, . . . , 15} in order to represent WRN-34-10 with various depths. We train each architecture for 120 epochs with weight decay 0.0005, and initial learning rate 0.1 with decay factor 0.1 at epochs 80 and 100. In the single-model experiments, we run the training 10 times and report the average accuracy over 8 of those runs, excluding the best and the worst; in the ensemble experiment, we use the average output of the 10 runs to perform classification.
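As a small illustration of this reporting protocol (a trimmed mean over 8 of the 10 single-model accuracies, and an ensemble formed by averaging the 10 models' outputs), here is a brief sketch; the accuracy values and tensors are hypothetical placeholders, not the paper's numbers.

```python
import numpy as np
import torch

# accs: test accuracies of the 10 independently trained single models (placeholder values).
accs = np.array([95.8, 95.6, 95.9, 95.7, 95.5, 96.0, 95.6, 95.8, 95.4, 95.9])
trimmed_mean = np.sort(accs)[1:-1].mean()  # average of 8 runs, dropping best and worst

# logits_per_model: outputs of the 10 runs on the test set (placeholder tensors),
# each of shape (num_test_examples, num_classes).
logits_per_model = [torch.randn(10000, 10) for _ in range(10)]
labels = torch.randint(0, 10, (10000,))

ensemble_logits = torch.stack(logits_per_model).mean(dim=0)  # average the outputs
ensemble_acc = (ensemble_logits.argmax(dim=1) == labels).float().mean().item()
```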
# Appendix II: Complete Proofs

We provide a clear roadmap of what is included in this appendix. Note that a full statement of our theorem and its high-level proof plan begin on the next page.

• Section A : In this section, we first state the general version of the main theorem, including the agnostic case in Section A.5.

• Section B : In this section, we introduce notations, including defining the symmetric tensor product ∗ and the twice-symmetrization operator Sym(M).

• Section C : In this section, we show useful properties of our loss function. To mention a few:

1. In Section C.1 we show the truncated version S̃_ℓ is close to S_ℓ in the population loss.

2. In Section C.3 we show S_ℓ is Lipschitz continuous in the population loss. We need this to show that when doing a gradient update step, the quantity E_{x∼D}[‖S_ℓ(x)‖²] does not move too much in the population loss. This is important for the self-regularization property we discussed in Section 6 to hold.

3. In Section C.4 we show the empirical truncated loss is Lipschitz w.r.t. K.

4. In Section C.5 we show the empirical truncated loss satisfies higher-order Lipschitz smoothness w.r.t. K and W. We need this to derive the time complexity of SGD.

5. In Section C.6 we show the empirical truncated loss is close to the population truncated loss. We need this together with Section C.1 to derive the final generalization bound.

• Section D : In this section, we prove the critical result about the "coefficient preserving" property of S_ℓ(x), as we discussed in Section 6. This is used to show that if the output of F is close to G* in population, then the high-degree coefficients must match, and thus W must be close to W* in some measure.

• Section E : In this section, we present our main technical lemma for hierarchical learning. It says that as long as the (population) objective is as small as ε², then the following properties hold, loosely speaking, for every layer ℓ:

1. (hierarchical learning): S_ℓ(x) is close to S*_ℓ(x) with error ≈ ε/α_ℓ, up to a unitary transformation.

2. (boundedness): each E[‖S_ℓ(x)‖²] is bounded. (This is needed for self-regularization.)

We emphasize that these properties are maintained gradually, in the sense that we need to start from a case where these properties are already approximately satisfied, and then we show that the network will self-regularize to improve these properties. It does not mean, for example in the "hierarchical learning" property above, that any network with loss smaller than ε² satisfies this property; we need to conclude it from the fact that this network is obtained via a (small-step) gradient update from an earlier network that has this property with loss ≤ 2ε.

• Section F : In this section, we use the main technical lemma to show that there is a descent direction of the training objective, as long as the objective value is not too small. Specifically, we show that there is a gradient update direction of K and a second-order Hessian update direction of W, which is guaranteed to decrease the objective. This means, in the non-convex optimization language, there are no second-order critical points, so one can apply SGD to sufficiently decrease the objective.

• Section G : We show how to extend our theorems to classification.

• Section H : This section contains our lower bounds.

# A Main Theorem and Proof Plan

Let us recall that d is the input dimension and x ∈ R^d is the input. We use L to denote the total number of layers in the network, and use k_ℓ to denote the width (number of neurons) of the hidden layer ℓ.
Throughout the appendix, we make the following conventions: e k = maxe{ke} and ke = max{kj : 7 € JeAj > 2}. e B=max;{By} and By = max{B; : j € Ie Aj = 2}. Our main theorem in its full generalization can be stated as follows. Theorem 1’ (general case of Theorem 1). There is absolute constant c0 ≥ 2 so that for any desired accuracy ε ∈ (0, 1), suppose the following gap assumption is satisfied ae L < — 9 (5-2) > (ca(2/) log(ab/e))*) - (n- e1(2) -e9(2/))*" I yee Oe+1 jae Then, there exist choices of parameters (i.e., regularizer weight, learning rate, over parameteriza- tion) so that using a(ea(24)) N ≥ d2 · logΩ(1) d δ + d log d ε6 · poly(B, k, κ) · c4(2L) log BkLκd δε samples. With probability at least 0.99 over the randomness of {Re}e, with probability at least 1—6 over the randomness of Z, in at most time complexity ld T < pol a TF Bo, (ca(2"))#O) logs =, = p(w Be, (ca(2”)) log we SGD converges to a point with Obj(Z; W,K) <c? Obj(D;W,K) <<? Obj(D; W,K) <2” Corollary A.1. In the typical setting when c3(q) < q2, ex(q) < O(q2), and ca(q) < O(@), Theorem 1’ simplifies to a L a d\° e ener” 0) ow ys Ce oy 20" I € O41 d we de a(2F) N>d@. log?) d +4 Bkrd 4 be poly(B,k, «) - (2 log a ld T < poly ( [] eB, 2" 108” 5, ‘) E £ Corollary A.2. In the special case Theorem 1, we have additional assumed δ = 0.01, L = o(log log d), κ ≤ 2CL c3(q) ≤ qO(q), c1(q) ≤ O(qq), and c4(q) ≤ O(q), simplifies Theorem 1’ to OOHL ae <dc’, N>poly(d/e), and T < poly(d/e) 33 # A.1 Truncated Quadratic Activation (for training) To make our analysis simpler, it would be easier to work with an activation function that has bounded derivatives in the entire space. For each layer ¢, we consider a “trun- cated, smooth” version of the square activation G(z) de- fined as follows. For some sufficiently large Bj (to be chosen later), let ~ o(z), if |z|< Bi, ” ~ lo = y _ for some BY = O((B' (2) { BY if |z| > 2Bt ¢ ((Bi)”) if |z|< Bi, _ if |z| > 2Bt [B/,2B)], . . ~ and in the range [B/,2B)], function (z) can be cho- sen as any monotone increasing function such that |oe(z)'|, |ee(z)"|, |@e(z)”| = O(B{) are bounded for every z. \ | [-omis in the limit identical to o(2) = 2? for some sufficiently large B} { { T T T i —B, 0 BY Figure 15: truncated quadratic activation Accordingly, we define the learner network with respect to the truncated activation as follows. Sola) = Giz), Sil) =Gi(x), Sex) = Dyeg joo Kes (R,5;(c)) + Dyetorynn Kes Sj(a) F(x) = DhgacSum(Fi(x)) , Fee) = 6 (Syeg.ys2 Wes (RiS)(@)) + Djeoryjnr, WesSi()) We also o instead of when its clear from We also use o instead of oj; when its clear from content. Remark A.3. The truncated F is for training purpose to ensure the network is Lipschitz smooth, so we can obtain simpler proofs. Our choice Bj makes sure when taking expectation over data, the difference between ¢;(z) and o(z) is negligible, see Appendix C.1. Thus, our final learned network F(a) is truly quadratic. In practice, people use regularizers such as batch/layer normalization to make sure activations stay bounded, but truncation is much simpler to analyze in theory. # A.2 Parameter Choices Definition A.4. In our analysis, let us introduce a few more notations. e With the following notation we can write poly(Ke) instead of poly(ke, L,«) whenever needed. Re = (ke: L- «)* and ty = (Be- ke L- ). # Re = (ke: L- «)* and ty = (Be- ke L- ). The next one is our final choice of the truncation parameter for p(x) at each layer &. 
Bi = poly(7) - Q(ca(2°) log(dL /e))#29 and By = max{ Bi : 7 € Je Aj > 2} • The following can simplify our notations. k= max{ke}, B= max{ Be}, K= max{fe}, T= max{re}, B= max{ By} • The following is our main “big polynomial factors” to carry around, and it satisfies col “ 20-26(5-£) and Tp = II) j=l De® (re: n™- (2!)* -e1(2") -ea(2")) Note it satisfies Te > (De)??(Ter1Ve42---Tz)®. The following is our gap assumption. Oe+1 < a ~ (Yes1)8Bess 34 Our thresholds 2 2 Qe_1 1 ag Threse , = (oes) , Thresgy = i (osm x) The following is our choice of the regularizer weights33 2 2 2 2 € ag ag ag 6,0 = = A3,0= » Me= zy ABC = 3 (Re)?? De: Te "(Di)" (De) BYE e The following is our amount of the over-parametrization m > poly(x, B’)/e? e The following is our final choice of the sample complexity : d dlog d q\ ©4(2")+2(1) N>@- log?) 5 + — - poly(7) (2%eu2") log ~) € E # A.3 Algorithm Description For Analysis Purpose For analysis purpose, it would be nice to divide our Algorithm 1 into stages for @ = 2,3,...,L. — 2 © Stage 04 begins with Obj(Z; W,K) < Thresy,, (oS) (De-1) . Our algorithm satisfies 7; = 0 for 7 > @ and A3; = Ady = A5, = 0 for 7 = &. In other words, only the matrices Wo2,..., We, Ko,..., Ke_; are training parameters and the rest of the matrices stay at zeros. Our analysis will ensure that applying (noisy) SGD one can decrease 2 to ; : 1 ag ° ta ; ic pee ~ > this objective to 7 ( De =) , and when this point is reached we move to stage ¢°. — 2 © & begins with Obj(Z;W,K) < Thresyy “ 4 (asta) — & begins with Obj(Z;W,K) < Thresyy “ 4 (asta) In this stage, our analysis will guarantee that Wi Wea is extremely close to a rank matrix, so we can apply k-SVD decomposition to get some warm-up choice of Ky satisfying fp Keg — Why Weallr being sufficiently small. Then, we set A3¢, 4,2, A5,¢ from Definition A.4, and our analysis will 2 ; : ‘ective j ases to ¢ ; ae ste v ensure that the objective increases to at most ( 7 atin) . We move to stage 0”. — 2 « £° begins with Obj(Z; W,K) < 4Threspy = (at) Our algorithm satisfies 7; = 0 for 7 > @ and A3; = Ady = A5,; = 0 for 7 > & In other words, only the matrices W2,..., Wo, Ko,..., Ky are training parameters and the rest of the matrices stay at zeros. Our analysis will ensure that applying (noisy) SGD one can decrease 2 this objective to (war) , SO we can move to stage (¢+ 1). (eo)4 (Re)? In Algorithm 1, we have in fact chosen Age = (eo)4 (Re)? the current “target error”, that is guaranteed to be within a factor of 2 comparing to the true ¢ (that comes from where €o is 33Tet us make a comment on Age = m= In Algorithm 1, we have in fact chosen Age = e= Obj(Z; W,K)). To make the notations simpler, we have ignored this constant factor 2. 35 # kg # A.4 Proof of Theorem 1’ We begin by noting that our truncated empirical objective Obj(Z ;W,K) is in fact lip-bounded, lip-Lipschitz continuous, lip-Lipschitz smooth, and lip-second-order smooth for some parameter lip = (%, BO - poly (2. (c4(24)) 42”) Joges(2") 54) that is sufficiently small (see Claim C.5). This parameter lip will eventually go into our running time, but not anywhere else. # ε2 Throughout this proof, we assume as if Ag is always set to be ae: where e? = Obj(Z; W,K) is the current objective value. (We can assume so because Algorithm 1 will iteratively shrink the target error €9 by a factor of 2.) Stage £°. Suppose we begin this stage with the promise that (guaranteed by the previous stage) 2 2_ Onye. . Ove (x)I2) <1; . 
P= ObI2 WK) < (HA) amd {BIIS@IBS of A) and Algorithm 1 will ensure that Wy, = 0 is now added to the trainable parameters. Our main difficulty is to prove (see Theorem F.10) that whenever (A.1) holds, for every small η1 > 0, there must exist some update direction (W(new), K(new)) satisfying © |KO") — Ke < m - poly(&), © Ep ||We™) — WII; < m - e Ep [Obj(Z; wire) K(new))] < m - poly(®), [Obj(Z; wire) K(new))] < Obj(zZ; W,K) — m (0.722 — 2az,4). Therefore, as long as e? > 4a? 41, by classical theory from optimization (see Fact I.11 for complete- ness), we know that 2 co 2 E ——, —— € either || VObj(Z; W, K) ||» > ——— or Arin (v?0bj(Z; W. K)) <-—_. (A.2) poly(*) poly(x) This means, the current point cannot be an (even approximate) second-order critical point. Invoking known results on stochastic non-conex optimization |29|, we know starting from this point, (noisy) SGD can decrease the objective. Note the objective will continue to decrease at least until <7 < 1 8a? yy, but we do not need to wait until the objective is this small, and whenever ¢ hits sD we can go into stage 0°. Remark A.5. In order to apply SGD to decrease the objective, we need to maintain that the bound- edness E,,.p[||$; (x) [3] < 7; in (A-1) always holds. This is ensured because of self-regularization: we proved that (1) whenever (A.1) holds it must satisfy a tighter bound E,~p|||$;(x)||3] < 2B; < 7;, and (2) the quantity E,~p[||$;(2)||3] satisfies a Lipschitz continuity statement (see Claim C.3). Specifically, if we move by in step length, then E,W p|||$;(x)||3] is affected by at most 7 - (IIc poly(7j,¢3(2"))). If we choose the step length of SGD to be smaller than this amount, then the quantity E,~p|||9;(2x)||3] self-regularizes. (This Lipschitz continuity factor also goes into the running time.) 2 Stage £°. Using e? < ; (ats) , we shall have a theorem to derive that®4 2 ly(K, poly (Ke) Why Wra - M| < Poyke ee-1 "ea FO (De)*T¢ # 34In the language of later sections, Corollary E.4a implies 2 1 + <=T T - QM Wie WreQes — Wr iW tall < Wot * 36 for some matrix M with rank ky and singular values between [s, k?L?]. Note that when connect- ing this back to Line 21 of Algorithm 1, we immediately know that the computed ky is correct. Therefore, applying kp-SVD decomposition on Wi 1 Wea on Line 23, one can derive a warm-up solution of Ky satisfying , ly(Ke) K],_;Kra — Why Weal < Pe I b0-14%04 00-1 calle > (Dy)*Ty . Note that, without loss of generality, we can assume ||Ky|| 7 < poly(«, L) < %/100 and Kip Kee — Wi Wee-ille < poly(Re) and ||K/ Ke — Wy Wel|z < poly(Re) (This can be done by left/right multiplying the SVD solution as the solution is not unique. Kip Kee — Wi Wee-ille < poly(Re) and ||K/ Ke — Wy Wel|z < poly(Re) (This can be done by left/right multiplying the SVD solution as the solution is not unique. Since we have chosen regularizer weights (see Definition A.4) 2 2 2 2 € Qa a a re ——, e d se = £ OO Re? De Fe OM (DTTP? (Dy) BF with the introduction of new trainable variables Ky, our objective has increased by at most (Re)? poly (xe) ~ ~ Xo + A30° Aa,e* pol As,e° pol 699 + 8L" CH pay, TAM PO y(Re) + As,¢- poly(Ke) e? a? a? a? 1 ae 2 S$ +54 tte St f__<- ~ 100 © Y7(De)4 — V7(De)® —” VF(De)? ~ 4 \(De)BVTe (Re)? poly (xe) Xo + A30° 699 + 8L" CH pay, e? a? a? S$ +54 tte St ~ 100 © Y7(De)4 — V7(De)® —” This means we can move to stage ¢Y. Stage £”. We begin this stage with the promise 2 ae ≤ 2 2. ae 2 e* = Obj(Z; W,K) < (ae) and { Sj (x) Ia] < } (A.3) ° (DVT pl Silla stp and our trainable parameters are W1,..., We, Ki,..., Ke. 
This time, we have another Theorem F.11 to guarantee that as long as (A.3) is satisfied, then (A.2) still holds (namely, it is not an approximate second-order critical point). Therefore, one can still apply standard (noisy) SGD to sufficiently de- crease the objective at least until <2 < 80741 (or until arbitrarily small <? > 0 if €= L). This is much smaller than the requirement of stage (¢ + 1)°. For similar reason as Remark A.5, we have self-regularization so E,,.p|||5;(x)||3] < 7; (for 7 < 2) holds throughout the optimization process. In addition, this time Theorem F.11 also implies that ae whenever we exit this stage, namely when ¢ < wor * satisfied, then E,~p|||S¢(x)||3] < 2Be. End of Algorithm. Note in the last LY stage, we can decrease the objective until arbitrarily small ec? > 0 and thus we have Obj(Z; W, K) < e?. Applying Proposition C.7 (relating empirical and population losses) and Claim C.1 (relating truncated and quadratic losses), we have # Obj(D; W, K) < 2e2 and Obj(D; W, K) ≤ 3ε2 . Time Complexity. As for the time complexity, since our objective satisfies lip-Lipschitz property until second-order smoothness, the time complexity of SGD depends only on poly(lip, 4, d) (see |29.). Quadratic Activation. We used the truncated quadratic activation (x) only for the purpose to make sure the training objective is sufficiently smooth. Our analysis will ensure that, in fact, we! 6 | OU™ ° + ww ow . . + my Since W*,¢_;W* eq is of rank ke, this means Qi Wee WeaQea is close to rank ke. Since our notation We,;Q; is only an abbreviation of We,;(R;U; *R;U;) for some well conditioned matrix (R;U,; « R;U;), this also implies Wi Wee is close to being rank ke. At the same time, we know that the singular values of Wri Weg are between [25,7 L7] (see Fact B.7). 37 when substituting o;(a) back with the vanilla quadratic activation, the objective is also small (see (F.8) and (F.9)). # A.5 Our Theorem on Agnostic Learning For notational simplicity, throughout this paper we have assumed that the exact true label G*(x) is given for every training input 7 ~ Z. This is called realizable learning. In fact, our proof trivially generalizes to the agnostic learning case at the expense of introducing extra notations. Suppose that Y (x) ∈ R is a label function (not necessarily a polynomial) and is OPT close to some target network, or in symbols, [(G*(x) —Y(x))?] < OPT . ae Suppose the algorithm is given training set {(x, Y (x)) : x ∈ Z}, so the loss function now becomes Loss(x; W, K) = (F (x; W, K) − Y (x))2 Suppose in addition that |Y (x)| ≤ B almost surely. Then,35 Theorem 3’ (agonistic version of Theorem 1’). For every constant γ > 1, for any desired accuracy ε ∈ ( — 1 —— 1 1 Obj(Z;W,K) < (1+=)OPT+e2 Obj(D; W, K) < (14+=)OPT+e2 Obj(D; W,K) < (1+—)OPT+¢? Y y Y # B Notations and Preliminaries We denote by ||w||2 and ||w||.. the Euclidean and infinity norms of vectors w, and ||w||9 the num- ber of non-zeros of w. We also abbreviate ||w|| = ||w||2 when it is clear from the context. We use ||W||7, || W|/2 to denote the Frobenius and spectral norm of matrix W. We use A > B to denote that the difference between two symmetric matrices A — B is positive semi-definite. We use Omin(A), Omax(A) to denote the minimum and maximum singular values of a rectangular matrix, and Amin(A), Amax(A) for the minimum and maximum eigenvalues. We use N (µ, σ) to denote Gaussian distribution with mean µ and variance σ; or N (µ, Σ) to denote Gaussian vector with mean µ and covariance Σ. 
We use 1event or 1[event] to denote the indicator function of whether event is true. We denote Sum(x) = >>, i xi as the sum of the coordinate of this vector. We use σ(x) = x2 as the quadratic activation function. Also recall Definition B.1. Given any degree-q homogenous polynomial f(x) = define Definition B.1. Given any degree-q homogenous polynomial f(x) = ren: \ITln—q TLetnj wy, define CAf)= YI az TEN”: |[Z|]1=¢ When it is clear from the context, we also denote C(f ) = Cx(f ). # B.1 Symmetric Tensor When it is clear from the context, in this paper sets can be multisets. This allows us to write {i, i}. We also support notation V{i, j} € ("3") to denote all possible (unordered) sub multi-sets of [n] with cardinality 2. “The proof is nearly identical. The main difference is to replace the use of OPT<, < 2074 with OPT<e < O(azy4) + (1+ +)OPT (when invoking Lemma F.8) in the final proofs of Theorem F.10 and Theorem F.11 38 # )OPT+ε2 Definition B.2 (symmetric tensor). The symmetric tensor ∗ for two vectors x, y ∈ Rn is given as: # ve ylag = aviay, n+1 for j i. Notex*y € RU2'). ∀1 ≤ i ≤ j ≤ p √ # 2 ). The symmetric tensor ∗ for two matrices for ai,i = 1 and ai,j = X, Y ∈ Rm×n is given as: [X ∗ Y]p,{i,j} = ai,jXp,iXp,j, ∀p ∈ [m], 1 ≤ i ≤ j ≤ p and it satisfies X ∗ Y ∈ Rm×(n+1 2 ). It is a simple exercise to verify that (x, y)? = (a *a,y *y). 2 )×(n+1 Definition B.3 (Sym). For any M ∈ R(n+1 “twice-symmetric” version of M. For every 1 ≤ i ≤ j ≤ n and 1 ≤ k ≤ l ≤ n, define36 2 )∧{p,q,r,s}={i,j,k,l} ap,qar,sM{p,q},{r,s} det Vp, ah trste("S)Mp.ars}={i. kl} Ap.q%r,sM {p,q} {7,8} ajar |{{P. a}. {r,s} € ("3"): {p.a.r. 8} = {5k} Sym(M) (5), 42,0) # to be the Fact B.4. Sym(M) satisfies the following three properties. © (z* z)'Sym(M)(z * z) = (z* z)'M(z* z) for every z € R"; e IfM is symmetric and satisfies Mi,5},¢%,1) =0 whenever i # j or k #1, then Sym(M) = M. © O(1)|MIjz. > C.((2*2)™M(e 2) > |lSym(M)|2, C.((2*2)™M(e 2) > |lSym(M)|2, It is not hard to derive the following important property (proof see Appendix I.3) Lemma B.5. Jf U € R?*? is unitary and R € R®°*? for s > ("5), then there exists some unitary matriz QE RO2)*(3") 50 that RU « RU = (R*«R)Q. # B.2 Network Initialization and Network Tensor Notions We show the following lemma on random initialization (proved in Appendix I.2). ke Lemma B.6. Let Ry € ROE *) he be a random matrix such that each entry is i.i.d. then with probability at least 1— p, Re * Re has singular values between lou: ke Lemma B.6. Let Ry € ROE *) he be a random matrix such that each entry is i.i.d. from N (0, a): then with probability at least 1— p, Re * Re has singular values between lou: O(l+ z log #¢ > Ly), e £ and ||Re|l2 < OA+ viesQ/p)) and ||Re|l2 < OA+ viesQ/p)) As a result, with probability at least 0.99, it satisfies for all = 2,3,...,L, the square matrices O(L + PEO) and |[Ryll2 < O(1 + YE). * Re have singular values between logED: £ Through out the analysis, it is more convenient to work on the matrix symmetric tensors. For # Ry Through out the analysis, it is more convenient to work on the matrix symmetric tensors. For every (= 2,3,4,...,L and every j € J \ {0,1}, we define a k+l Wj = Wi; (1x1) = Wi, = Wi, e Rex") W_,; 2 W:,(R;* Rj) = We,R; * WejR; eR™ ("3") K,; @ Ky,(Rj * Rj) = KyjR; *KyjR; e pkex("3) 36For instance, when i, j, k, l ∈ [n] are distinct, this means M{i,j},{k,l} + M{i,k},{j,l} + M{i,l},{j,k} + M{j,k},{i,l} + M{j,l},{i,k} + M{k,l},{i,j} 6 . 
39 so that Vee RS: Wy) j(z*z) = Wz ;0(2) Wye, (2 * z) = Wejo(Rjz) Ky; (z*z)= Ky jo(R;z) For convenience, whenever j € J {0,1}, we also write Wy; = Wi; We; = We; Ky; = Ke; We define W*, = (Wei) sex, € Rex Ww, = (Wei) ier, € R™™**, Ky = (Key) je7, € Rihex* Wrta = (Wei) erp j 4-1 Wea = (Wes) seg i¢e—1 Kea = (Kes) jer g¢0-1 ier, Wea = (Wes) seg i¢e—1 Kea = are in [1/K,«]. Singular values of W*~ Fact B.7. Singular values of Wi; are in [1/K,«]. Singular values of W*~ and W*:. are in [1/n, &x]. # C Useful Properties of Our Objective Function C.1 Closeness: Population Quadratic vs. Population Truncated Loss Claim C.1. Suppose for every € € [L}, ||Kell2,||Well2 < %e for some Ke > ke +L +k and Exxp|||Se(x)||7] < 1% for some te > Ke. Then, for every € € (0,1), when choosing truncation parameter: Bi > 7} - poly(Xe) - 0(2%e4(2°) log(dL/e))%#29 , truncation parameter: Bi > 7} - poly(Xe) - 0(2%e4(2°) log(dL/e))%#29 , have for every integer constant p < 10, we have for every integer constant p < 10, E. [(F@) — F(«))'| <e and ED [ (Sele — $¢(2)|2)'| <e an Proof of Claim C.1. We first focus on Se(x) —S¢(x). We first note that for every S¢(x), S¢(a), there is a crude (but absolute) upper bound: A ~ ye ye ee \|Se(a)|l2, [|Se(a)|l2 < Rekel)OP jal] =: Calfarlld . By the isotropic property of x (see (5.1)) and the hyper-contractivity (see (5.2)), we know that for R, is as large as Ry = (dlog(Ci/e) °°) it holds that € ; ae | < Ey Miatg'2allB”] < ge0 This implies (C.1) Nl] E, I( S¢(x) ~ Se(2)ll2)” By y2es ay| < Next, we consider the remaining part, since E,~p|||S¢(z)||7] < te, we know that when B/ > 7; - Q(e4(22)) 429 log? (C, Ri L/e), by the hyper-contractivity Property 5.2, we have for every fixed L x >Bil< ae Pr(|RSi(0)|2 > Bi < sc RPT Therefore, with probability at least 1 — TOC at every layer @, the value plugged into o and a are the same. As a result, # Pp ~ Pp ED [(Setx) - S¢(2)||2) Lisis'<m| < (2C, Ri)? Pr [Al < ¢,|[Re Sp (x)|l2 > Bi] <e/2 (C.2) 40 Putting together (C.1) and (C.2) we complete the proof that E, | (iiSe(x) - Selo) )'] <e An identical proof also shows that \\Sum(F)(x)) — Sum(F,)(2)|I2)"] < ‘D J an Thus, scaling down by a factor of Lp we can derive the bound on E,.p (Fe) - F(x))'). # C.2 Covariance: Empirical vs. Population Recall that our isotropic Property 5.1 says for every w € R4, .{(w, 2) x)?] < O(1)- [wl]? and JE al (w, .{(w, 2) x)?] < O(1)- [wl]? and JE al (w, Si(x))?] < O(1) - |u|? . # empirical Below we show that this also holds for the empirical dataset as long as enough samples are given. δ , with probability at least 1 − δ over the random Proposition C.2. As long as N = d2 · logΩ(1) d choice of Z, for every vector w ∈ Rd, E x∼Z Bllw.2)4] <O(1)- ul? and 8, {fw $1(2))4] < O1)- rol? Va € Z: max{|lal]?, |[S1(a)||?} < dlog0G ¢ Proof of Proposition C.2. Our isotropic Property 5.1 together with the hyper-contractivity Property 5.2 implies if N ≥ d logΩ(1) d Vee Z: lal]? < Rg and |\$i(x)||? < Rs Where R3 = d- log) g. Next, conditioning on this event, we can apply Bernstein’s inequality to derive that as long as N > Q(Rs3- log x) with probability at least 1 — 6, for every fixed w € R4, Pr [(w, «)4 > Q(1)] > 1- do Pr x∼D Taking an epsilon-net over all possible w finishes the proof. # C.3 Lipschitz Continuity: Population Quadratic Claim C.3. Suppose K satisfies ||Kj||2 < 7; for every j € {2,3,---,L} where 7) > kj +n +L, and suppose for some € € {2,3,---,L}, Ke replaced with K, = Ke + Ag with any ||Aellr < -1 (ie poly(7;, c3(2))) , then for every i> poly(7;, c3(2))) . HSi@)I? 
— 1Si(#) 1/7] (TIpow 73, ¢3(2’)) j=t and for every i < € obviously S;(x) = # Si(x). Proof of Claim _C.3. We first check the stability with respect to K, and suppose without loss of generality that only one W, is changed for some ¢. For notation simplicity, suppose we do an update Ki = Ky + 7A¢ for ||Ag|| 7 = 1. We use S” to denote the sequence of S' after the update, 41 and we have Si(x) = Sj(x) for every j < @. As for S}(x), we have Si(x) = Sj(x) for every j < @. As for é-1 é-1 IIS¢(x) — So(x)|| <7 | Yo Ac jllalloRyS;(x))|| + |Ae1S1(2)|] + | Aco j22 < npoly (Fe, nL) | $2 |S;(@)|)? + Aer Si(a)|| + | Acoal| j<t j<t so using E,~p|||.$;(«)||?] < 7;, the isotropic Property 5.1 and the hyper-contractivity Property 5.3, we can write E [IlSe(x) — Se(@)||"] < n?poly (te, ¢3(2°)) =: Be As for later layers i > £, we have I|Si(a) — Si(x)|| < 4 So Ky llR, 3S; (@)INNS5(@) — $5(x)|| + 1S5(x) — $(«) I? jee so taking square and expectation, and using hyper-contractivity Property 5.3 again, (and using our assumption on η)37 II Si(a) — 5i(x)|/? < poly(7i, e3(2")) - G1 =: 8: E x∼D by recursing θi = poly(τi, c3(2i)) · θi−1 we have E(ISi(«) — Si(x) < (TT pow 7),¢3(2’)) jae # C.4 Lipschitz Continuity: Empirical Truncated Loss in K Claim C.4. Suppose the sampled set Z satisfies the event of Proposition C.2. For every W, K satisfying Vj = 2,3,...,L: [Wile Av Bsbet for some Kj > kj +K +L. Then, for any £ € {2,3,- — 1} and consider Ky replaced with Ki = Ky + Ay for any ||Agllr < cove’ Then, |Loss(Z; W, K) — Loss(Z; W, K’)| < a¢41\/ Loss(Z; W, K) - poly(#;, BY) « ||Aellr Proof of Claim C.4. Let us denote e? = Loss(Z; W,K). For notation simplicity, suppose we do an update K, = Ke+ Ac for n > 0 and \|Ac|| z = 1. We use 5’ to denote the sequence of § after the update, and we have Si(a t) = Sj (a x) for every j < €. As for Sh(a x), we have (using the boundedness of ) A Sea) — Se(a)|] <n | 2 Ac llall(S;(x))I| + |!Ae151(2)|| + Acoz je2 nLBy +0 (|Ae151(2)|| + ||Acorl) | \ 37This requires one to repeatedly apply the trivial inequality ab ≤ ηa2 + b2/η. 42 As for later layers i > ¢, we have (using the Lipschitz continuity of o) i-l 542) — $2) | < $7 Kile BUR, all 4(e) — $)(2)| j22 <es [] BiL*) (0LBy +n (AcaSi(o)| + || Acorll)) = pi j=e+l # I As for F(a), recall 2 F(x) = Yo ai Wiort + Wi Si(a) + > Wijo (Rj5;(x)) =: Se ail| Ail? a JE{2,3,.+ i 1} a BY), one can carefully verify?8 Using the bound || Aj|| < || W402] + ||Wi1S1(«)|| + poly(%;, BY), one can carefully verify?8 Using the bound || Aj|| < || W402] + ||Wi1S1(«)|| + poly(%;, |F'(a) — F(x)| < S> a4 (||Ail| pia + p71) - poly (i, Bi) — F(x)| < S> a4 (||Ail| pia + p71) - poly (i, Bi) i>e+1 < avy inpoly (Fe, By) - (1+ (|| Weo2| + ||We151(2)||)(||Ae151(x)|| + | Acorll)) |F'(a) — F(x)| < Therefore, we know that 2 (Ge) - Fa) - (Ge) - F()) < 2|G*(e) — F(a)| -|P'(x) — P@)| + Pa) — F@)P 2 ; _|F'(x) — F(x)? “ine. y . e417 + |F"(ax) — F(ax)| ae . |G*(w) F(a) IA ~ 2 el |G*(w) - F(x)| € + caesinpoly(Ke, By) (1+ (|| Weorrll? + |]We1S1(x)|I?)(|AerS1() ||? + |Aeo2|l)?) Note that 2a2b? < a* + b4 and: eng ||We1Si(x)||* < Re. |* + |Aco2'l|* < poly(%e). ¢ From Proposition C.2) we have E,~z ||Weoz'l|*, e From Proposition C.2 we have E,~z ||A¢15i (2) e From definition of ¢ we have E,.z G(x) - F(x)| =e. 
Therefore, taking expectation we have (Ge) — Fa)’ E avd (Ge) — Fa)’ — (G*(@) — Fay) ~ Zl < cays poly(Ke, By) E avd # C.5 Lipschitz Smoothness: Empirical Truncated Loss (Crude Bound) Recall a function f (x) over domain X is e lip-Lipschitz continuous if f(y) < f(x) +lip-: ||y— 2||r for all 2, y € Â¥; e lip-Lipschitz smooth if f(y) < f(x) +(Vf(x),y— 2) + ®- lly — al]? for # ®- lly — al]? all 2,y € &; *8This requires us to use the gap assumption between ai+1 and ai, and the sufficient small choice of 7 > 0. For instance, the 7?||Avox||? term diminishes because 7 is sufficiently small and ||zx|| is bounded for every x ~ Z (see Proposition C.2). 43 e lip-Lipschitz second-order smooth if f(y) < f(x) + (Vf(x),y— 2) + 3(y—2)'Vf(@)(y-2) + fp -|ly — a||% for all z,y € Â¥. We have the following crude bound: Claim C.5. Consider the domain consisting of all W, K with ∀j = 2, 3, . . . , L : # [Wille < %;, [Kyle < ® for some K; > ky +L+k, we have for everyx~ D, |F(2;W.K)| < poly, BY) Dy (\|Weorl? + || WerS1(x)|)?) e F(a; W,K) is lip-Lipschitz continuous, lip- ete, smooth, and lip-Lipschitz second-order smooth in W,K for lip = Tole, BY) « poly(G* (x), |||) smooth in W,K for lip = Tole, BY) poly(G* (x), |||) Suppose the sampled set Z satisfies the event of Proposition C.2, then e Loss(Z; W,K) is lip-Lipschitz continuous, lip-Lipschitz smooth, and lip-Lipschitz second-order e Loss(Z; W,K) is lip-Lipschitz continuous, lip-Lipschitz smooth, and lip-Lipschitz second-order smooth in W,K for lip = [[(Ke, Bi) 2 - poly (8, (c4(2"))ea2"), log") + d). We first state the following bound on chain of derivatives Claim C.6 (chain derivatives). For every integer K > 0, every functions f, g1, g2, . . . , gK : R → R, and every integer p0 > 0, suppose there exists a value R0, R1 > 1 and an integer s ≥ 0 such that @ f(x) dgi(v) " ee ppd . p Vp € {0,1,--+ ,po},i € [A]: aa < Rp, ane < Ri. @ f(x) ppd . ,po},i € [A]: aa < wigi(x)) satisfies: # Then, the function h(x, w) = Then, the function h(x, w) = f(Nietx wigi(x)) satisfies: OPh(x,w we € (0.450 po} [SEY < orf Ray? eP . OPh(x,w We © (0.1 spo}ote (Rs [PE] < gate a Proof of Claim _C.6. We first consider ||. Using Fa a di Bruno’s formula, we have that Using Fa a di Bruno’s formula, we have that Gi) OxP D1 !po!+-- Dp! L-pit+2-po+-+p-pp=p Pup Pp Gi) Orn) » » frre) » wigi(e TL mean a (x) ie[K] j=l Note that from our assumption e i |(* ie [K] 7 wig! (x) ‘y' 7 o@ | fPit~pe) (Dieux) wigi()) e i |(* ie [K] 7 wig! (x) ‘y' 7 o@ | fPit~pe) (Dieux) wigi()) |< Rh < [Uj (Weel Ra)! = (few Ra)? Combining them, we have OPh(x, w) OxP < (pRo||will1 Ri)? On the other hand, consider each wi, we also have: OPh(x, w) Ou? =| | S> wigi(x) } (gi(x))?| < |Rogi(a)|? ie[K] 44 # pj Proof of Claim C.5. The first 4 inequalities is a direct corollary of Claim C.6. Initially, we have a multivariate function but it suffices to check its directional first, second and third-order gradient. (For any function g(y): R™ — R”, we can take g(y +0) and consider Pailyred) fey every coordinate j and every unit vector w. daP e In the base case, we have multivariate functions f(Keo) = Keox or f(Ke1) = Kei Si (a). For each direction || A|| 7 = 1 we have | ao f (Keo +aAvo)| < |||? so we can take Ry = ||| (and for f(Ke1) we can take Ry = ||z||?.) e Whenever we compose with @ at layer ¢, for instance calculating h(w, y) = o(0; wifi(y)) (when viewing all matrices as vectors), we only need to calculate Pohj (w, y+a6) = eae (0; wyifil(y+ aé)), so we can apply Claim C.6 and R; becomes O(BykckeL) - Ry. 
We can do the same for the w variables, so overall for any unit (62,6) it satisfies | Po hy(w + adbw,y + ady)| < (O(BiRe(FeL)?) - Ri)”. We also need to compose with the vanilla σ function three times: — once of the form o(f(K2,...,Ky_1)) for calculating F(x), — once of the form o(Wef(Kg,...,Ky_1)) for calculating F(z), and — once of the form (f(W,K) — G*(«))? for the final squared loss. In those calculations, although g(a) = x? does not have a bounded gradient (indeed, 4 g(x) = x can go to infinity when x is infinite), we know that the input x is always bounded by poly(&, ||z||, B’, G*(x)). Therefore, we can also invoke Claim C.6. Finally, we obtain the desired bounds on the first, second, and third order Lipschitzness property of Loss(x; W, K). # for For the bounds on Loss(Z ;W,K), we can use the absolute bounds on Sum(G*(z)) and ||z| all x € Z (see Proposition C.2). # C.6 Closeness: Empirical Truncated vs. Population Truncated Loss Proposition C.7 (population < empirical + ¢;). Let P be the total number of parameters in {We, Ke} ceiz}.. Then for every é5,6 >0 andk >k+L +k, as long as Ploe(d/é5 KB! ca(24)+0(1) N=2 (Peay (cxt24) 108 ® ) s Es , with probability at least 1—6 over the choice of Z, we have that for every {We, Ke}ce[r] satisfying |Wellz, Kelle < &, tt holds: Loss(D; W, K) < Loss(Z; W, K) + ¢; Proof of Proposition C.7. Observe that for every fixed Ro > 0 and R; > B’ > 0 (to be chosen ater), and (ew ~ Fe) Nouo-Feolerotaicn S ee (ew ~ Fe))'| ~ \2 Moreover, each function R(x) = (c*@) - F(x)) 1D G+(2)—F(a)|<Ro,jnl|<R1 S2tisties that e boundedness: |R(x)| < R3, and 45 e Lipschitz continuity: R(a) is a lip < poly(&, B’, Ro, Ri,d)-Lipschitz continuous in (W, K) (by applying Claim C.5 and the fact G*(%) < Ro + F(x) < poly(#, B’, Ro, Ri, d)) applying Claim C.5 and the fact G*(%) < Ro + F(x) < poly(#, B’, Ro, Ri, d)) Therefore, we can take an epsilon-net on (W, K) to conclude that as long as N = 2 w.p. at least 1—4, for every (W, K) within our bound (e.g. every ||We|l2, ||Kel]2 < &), R&P log(KB! Rid/(5es)) €: we have tha it holds: # R&P # R&P log(KB! Rid/(5es)) €: i ~ 2 (@@) - F@)) Novo) Feoicrale<e < E (aCe) ~ Fe) Nooo Frolcroeicr +e5/2< ED (ew ~ Fe) | +es/2 and As for the remaining terms, let us write ~ 2 ~ 2 (G@) — FO) Neue) FLeyio Rp of Ull> Rs ~ 2 9 a ~ 2 . S$ (G*(@) ~ F(x)) Lice (x)—F(a)|> Ro + Ro Vjei>R1 4 2 ~ (G* (x)? Lor (a))> 9/2 + AE (@))" ya (ay)5. 29/2 + O° Lola # O° Lola 2~D(G* (x) < B] so we can apply the hyper-contractivity Property 5.2 e For the first term, recalling to show that, as long as Ro > poly(#)- ah log = €s/10. Byer ) then it satisfies aw D[4 (G*(2))? Lige(2)|Ro/2] < e For the second term, recall therefore, we can write rom Claim C.5 that |F(«)| < poly(%, B’)- YX (||Weo2r||2+]/We151(x) ||?) therefore, we can write A(F (©) Liz) A(F (©) Liz) ro/2 ~ / | |2 < poly (BSD (Wet ly yy soj2> acta, + IWe081 (I Bpw,.s,e9)2> a8 r poly(,B’) Applying the isotropic Property 5.1/and the hyper-contractivity (5.2) on ||Weo2||? and ||W¢154(2)|l?, RBIS x) then it satisfies we have as long as Ro > poly(k, B’) - (log EHF) UF G)|s-Ro/2! <</10 (for every W,K in the range) # εs EHF) UF G)|s-Ro/2! <</10 (for every W,K in the range) e For the third term, as long as Ry = dlog®) (Ro/es) then we have Erwp[R§- Ucn] < €s/10. #B!\ O(1)+e4(2" og a) O()+e42") and we have Putting them together, we can choose Ro = poly(F, B')(ca(2")1 #B!\ og a) Putting them together, we can choose Ro = poly(F, B')(ca(2")1 ~ 2 ~ 2 E (@@- Fe) Vics (@)—Fla)|>Ro on tion, Ses/2 - avD This completes the proof that “. 
=~, \\2 a, =, \\2 - ED I(c (x) —F (z)) | < ES \(c (x) — P(e) + és. Let P be the total number of parameters in Proposition C.8 (empirical < population + ¢,). {We, Ke} ceiz}. Then for every és, > 0 andk >k+L +k, as long as 1 ea(24)+0(1) waa (Pues. poly GB (c 4(2£) log ~) ) &s , 46 , for any fixed {Wy o, Wortce(z}; with probability at least 1—6 over the choice of Z, we have that for every {We, Ke} cejz} satisfying (1) \|Wellr, ||Kellr < % and (2) consistent with {Weo, We1}ce[z); it holds: EB [Loss(e; W,K)] = Ee (ew - F(x)) ‘| < E, (ew - Fe) | +é5= EB, [Loss(«; W, K)] + és # 4 RoP 4 = pl Proof. We first reverse the argument of Proposition C.7 and have that as long as N = RoP loe(hB Rad/(des)) we have that w.p. at least 1— 6/2, for every (W, K) within our bound (e.g. every ||W~||2, Kyla. < RK), it holds: ‘ ~ 2 and l(c (2)-F (o)) Lice (o)-Fea) <Rollect < E (ew - F(a)” digs (x)-F(e) <llei<m t+eé/2< E (ew - F(x)) | + €,/2 27D &2vD As for the remaining terms, we again write \2 ~ \2 (c «) - F(e)) 1D +(e)—F(a)|>Ro or |ja|>R1 <4(G*(x))? Lor(a)|>Ro/2 + RG * Uasr, + poly( BY (| Weel? yy, ses ¢ + ||Wei Si (a ) = RHS I Law, si(ey|2> pole, BY) ware BY) For this right hand side RHS, we notice that it does not depend on K. The identical proof of Proposition C.7 in fact proves that if Ro = poly(%, B’)(c4(2”) log BBL) 00) yre(2") thon for every W with Kelle <k, E x∼D [RHS] ≤ δεs/4 . This means, by Markov bound, for the given fixed W, with probability at least 1 − δ/2 over the randomness of Z, it satisfies E x∼Z [RHS] ≤ εs/2 . This implies for every K in the given range, ~ 2 Ka 9 ~ ars I(¢ (x) F(z)) Ligs(a)—F(a)|>Ro or |lall>Ri | < &s/2 - # D An Implicit Implication of Our Distribution Assumption Let us define Si(a) = (x) S3(x) = W3, St (a cr) = W3,10(2) St(a) = Why 10 (57 L(x :)) for £=2,...,L # so that Sé(a so that Sé(a r) is the top-degree (i.e. degree 2!) part of $#(z).3° We have the following implication: °*°Meaning that St (x) is a (vector) of homogenous polynomials of x with degree 2°) and its coefficients coincide with S7(«) on those monomials. 47 ), , Lemma D.1 (Implication of singular-value preserving). Let us define 29 = 2%(a) = a = 2\(x) = 2! = 2! (ax) = (D.1) Sk(a) =x SÂ¥(x) = o(2) St(x) * SÂ¥(a) (D.2) (D.3) Then, for every £> 1, ¢2 > 0 with |) — 2| #1, for every matrix M: and the associated homoge- neous polynomial gyr(x) = (24)'Mz®, eo If; =l2=£=0 or 1, then Cz(gm) = ||MI|z, e Ifl) =l) =£> 2, then C,(gm) > Gajoan|iSym(M)||7-, and e [ft; —2> l)>0, then Cz(gm) > Peace forl=h. # D.1 Proof of Lemma D.1 Proof of Lemma D.1. We divide the proof into several cases. Case A: When é; = £2 = @. The situation for 2 = 0 or = 1 is obvious, so below we consider €> 2. Let he(z) = (2 * 2)M(z* 2) = Vici pcrM G7} (0, %,54b1212j 21 be the degree-4 polynomial defined by M. We have a C.(he) = ||Sym(M) ||; For every for every j = €—1,...,1, we define hj(z) = hj+1(W3,, ;0(2)), it holds that Let h(z) = hj41(W,1 Let h(z) = hj41(W,1 ;2) so that hj(z) = h(o(z)). This means ~ 1 C(hj) =C(h) = Tao y(t) and finally we have (z°)'Mz‘ = hi(«) and therefore Tye 1 Cr ((25)"Me!) = a Tye 1 2 Cr ((25)"Me!) = a aaRllSym(M) 2 Case B: When £; — 1 > £2 > 2. We define he,(z,y) = (z * z)'M(y* y) which is a degree-4 homogenous polynomial in (z, y), and obviously Cy,2(he,) > ||M||%. 
Let us define Let us define (W341 ;0(2),9)) Vj =f -1,...,02 +2: hy(z,y) = yar (W341 ;0(2),9)) By the same argument as before, we have 1 (eaoe Cou (ae) Czy(hj) = Next, for 7 = ¢2, we define hgly) = hy (Wi 42,5410(W 541 50(Y)),9) To analyze this, we first define h'(z,y) = hy+e 54124) # (Wiyo so that hj(y) =n’ (o(Wi41j;0(y)),y) Since h’(z,y) is of degree 2 in the variables from y, we can write it as h'(z,y) = Yo)? hp. p 2) + > YpÂ¥ql{y,q} (2) (D.4) Pp p<q hl (z,0(y))) 48 where the first summation contains only those quadratic terms in (yp)? and the second contain cross terms YpÂ¥q. Note in particular if we write the first summation as h'| (z,o(y))) for polynomial hi (z,y) and y = a(y), then h”, is linear in y. Clearly, Coy(h!) = Cen (h'L) + 2 Ce (hip a3) (D.5) p<q As a consequence, we can write hy(y) = RL (o(W3,j0(y)).0(y)) FÂ¥pda RY, gy (OCW 4 j0(y))) a anne seme _———_—_—_—_—_S hiy) hepa (Â¥) Clearly, since any polynomial in 7(y) only contain even degrees of variables in y, so h, (y) and each hiya} share no common monomial, we have Cy(hyj) = Cy (hi) +S Cy(gp gy) (D.6) p<q e On one hand, we have hoa) = hy 3 (o(WF +1,7(y))) and therefore by previous argument ~ 1 Cy (hip g}) 2 Gaoa—nee (Mpa) (D.7) e On the other hand, to analyze h 1(y), let us construct a square matrix W € R’** with singular values between [1/k, «] so that W3i1j7W = (Thj41xkj4179) (D.8) W3i1j7W = (Thj41xkj4179) h'!(z, 8) = h'{ (z, WB) which is linear in 8, it holds:4° Cy (hily)) = Cy (HL(oW5e 1 5¢(y), ow) =Cy hi (o (W174) Â¥)) > 65 (WL (CW. W8).W8)) # Define Cy (hily)) = Cy (HL(oW5e 1 5¢(y), ow) =Cy hi (o (W174) Â¥)) > 65 (WL (CW. W8).W8)) Soar = C5 (HL (0((1.0)8). WA) rary = C5 (MY (0((E.0)9).9)- oa 1 ® =Cz8 (Al (z, B)) . (2900) _ 1 = Czy (hz, W ty) : (e208) 4° Above, equality ® holds because h'{’(z, 3) is a multi-variate polynomial which is linear in 8, so it can be written # as All (z,8) = 3° Bi ha (2) for each h'(',(z) being a polynomial in z; next, since we plug in z = o((I,0)8) which only contains even-degree variables in 8, we have Ca (hil (o((L,0)8),8)) = D7 Co (his (o((L,0)8))) = 0 Cz (WL) = Cory (RY (2,1) 49 1 = Czy (hi (2,9) + OOH (D.9) Finally, plugging the lower bounds (D.7) and (D.9) into expansions (D.5) and (D.6), we conclude that 1 1 I Cy(hj) = (5208), Coy(h’) = (5208 2 0R) . Cu y(hj+2) 1 (5208 2 0R) . Cu y(hj+2) hj+1(W}41,;0(y)) for every j = f2—1,f2—2,...,1 Continuing from here, we can define hj(y) = hj+1(W}41,;0(y)) for every j = f2—1,f2—2,...,1 and using the same analysis as Case A, we have 1 C(hy) = C(h) > Gajowy (hit) and finally we have (z°)'Mz‘ = h1(«) and therefore Cr (( ‘Mz Je canoes IMIR Case C: When £; —1 > £2 = 1. Similar to Case B, we can he, (z, y) = (2 * z)' Ma(y) which is a degree-4 homogenous polynomial in (z,y), and obviously Cy,.(hg,) > ||M||%-. Let us define Vi =O —1.. 8: Ag(e,y) = hyjat (War jo),y)) Vi =O —1.. 8: Ag(e,y) = hyjat (War jo),y)) hily) = hs (W30(W310(y)),9) The rest of the proof now becomes identical to Case B. (In fact, we no longer have cross terms in (D.4) so the proof only becomes simpler.) Case D: When ¢; — 1 > 2 = 0. We define he,(z,y) = (2 * z)'My which is a degree-3 homogenous polynomial in (z, y), and obviously Cy,2(he,) > ||M||%. Let us define Vj =&—-1,...,2: hj(z,y) = hysi (Wi4150(2),y)) hi(y) = he (W3,0(y),y) Let us define (Wi4150(2),y)) By defining h’(z,y) = ho(W312,y) we have hi(y) = h'(o yw): This time, we have Cy(h1) = h Czy(h’), but the same proof of Case B tells us Cz,y(h’) > Toe * || M|3. 
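For concreteness, the quantity C_x(·) used throughout this appendix — the sum of squares of a polynomial's monomial coefficients, as also recalled in the base case of Appendix E below — can be computed symbolically. The following sympy sketch is illustrative only: the lift q below is a stand-in for the ∗ operation of Appendix B (whose exact normalization may differ), and M is an arbitrary example matrix.

```python
# Illustrative only: C_x(f) read as the sum of squared monomial coefficients of f.
# The lift "q" below is a stand-in for the z * z operation of Appendix B; its exact
# normalization there may differ, so the printed numbers are not meant to reproduce
# the constants of Lemma D.1, only the definition of C_x(.).
import sympy as sp

x1, x2 = sp.symbols('x1 x2')

def coeff_norm_sq(expr, variables):
    """Sum of squared monomial coefficients of a polynomial expression."""
    poly = sp.Poly(sp.expand(expr), *variables)
    return sum(c**2 for c in poly.coeffs())

# quadratic lift of (x1, x2): all degree-2 monomials (a stand-in for x * x)
q = sp.Matrix([x1**2, x1*x2, x2**2])

M = sp.Matrix([[1, 2, 0],
               [0, 1, -1],
               [3, 0, 1]])          # an arbitrary illustrative matrix
g = (q.T * M * q)[0, 0]             # g(x) = (x*x)^T M (x*x), a degree-4 polynomial

sym_M = (M + M.T) / 2               # symmetrization of M
print('C_x(g)         =', coeff_norm_sq(g, (x1, x2)))
print('||Sym(M)||_F^2 =', sum(sym_M[i, j]**2 for i in range(3) for j in range(3)))
```

The two printed values are not claimed to coincide; Lemma D.1 only relates such quantities up to the condition-number and width factors derived above.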
# E Critical Lemma For Implicit Hierarchical Learning The implicit hierarchical learning only requires one Lemma, which can be stated as the following: Lemma E.1. There exists absolute constant co > 2 so that the following holds. Let t¢ > ke+L+k and Te > 1 be arbitrary parameters for each layer €< L. Define parameters Dy (re -(2) -e1(2") -e9(2"))™ Ce = Cy_1 - 283 (Dy) with Co =1 Suppose Obj(D; W, K) ≤ ε2 for some 0 ≤ ε ≤ αL and suppose the parameters satisfy # (DL)9Î¥L e oats < ore for every = 2,3,...,L—1 e E,vp|l|Se(x)|| 2) < 1 for every €=2,3,...,L-—1 50 © AMGoe= a 302 Det ML 2 H mark ‘52 yore for every €= 2,3,...,L Then, there exist unitary matrices Uy such that for every € = 2,3,...,L 2 2 ‘ € UpS7 (x) — Se(x)|[3 < | =~] © , |UeSé(x) — Se(a)Il2 S Jara) Ce Since we shall prove Corollary E.1 by induction, we have stated only one of the main conclusions in order for the induction to go through. Once the Theorem E.1 is proved, in fact we can strengthen it as follows. Definition E.2. For each £ > 2, let Qe be the unitary matrix defined from Lemma B.5 satisfying RypUy * ReUe = (Re * Re) Qe We also let Q0 = Q1 = Id×d, and let Qea = diag(Q;)z, and Qe = diag(Q;)jez, # Qea Corollary E.3. Under the same setting as Theorem E.1, we actually have for all £ = 2,3,...,L, + T 2 2 6 (a) ||Q7 Wee 1WeQra Wee We < (Di)? (E) -& Oe (b) ]Qz Ri 1 KreQra — W*/y_;W* eal), < Te(De)* (2). é 7 _ Fy 2, (c) ||QPK; KiQ — We) We) < THD) (=) - (d) Brow |WeS#(x) — So(x)|I3 < 22D)" (Z) + SE (d) Brow |WeS#(x) — So(x)|I3 < 22D)" (Z) + SE (¢) Ex~pll|Se(2) |] < 2Be. Corollary E.4. Suppose we only have ¢ < Dae which is a weaker requirement comparing to Theorem E.1. Then, Theorem E.1 and Corollary E.3 still hold for the first L — 1 layers but for replaced with ay: /Drz. In addition, for 0 = L, we have pL — + __ 12 ; 2 (a) Q7 Wr. 1Wr<Qr< - WW? < 2(D1)* (<) T T 2 2 (b) Qz Kr, 1KrQr - WW rai] <2Yz,(Dz)* (=) (c) \|@7K,K.Q, — WW, [ < 21} (Dz) (=) # E.1 Base Case The base case is L = 2. In this case, the loss function 2 > Obj(D; W.K) > 3 E, (|| Wa18(2)|? = | W3.Si(@)|?)” Applying the degree-preservation Property 5.4, we have Ca (W225i (2)|? — |W3,,51(2)| <0 (2) ag where recall from Section D that Si (x) = 0(x) is the top-degree homogeneous part of 5)(a), and Cx(f(a)) is the sum of squares of f’s monomial coefficients. Applying Lemma D.1, we know _\2 Wy Wei — (W3,)' W3i\|- < O( (<=) 51 On the other hand, our regularizer λ4,L ensures that 2 6 e\? |Ws Wea - K},Ko, |. <\5< (D172 (=) Putting them together we have 2 e2 P € (Waa) Wr. - K},Ko,| <— <(Dz)'Ti (<) FO X42 ag 2 By putting it into SVD decomposition, it is easy to derive the existence of some unitary matrix U2 satisfying (for a proof see Claim I.10) -\2 [Waka ~ Waal < (DL) Tt (=) Right multiplying it to Si (a), we have (using the isotropic Property 5.1) E,||U2S2(x) — $3(0)|z = |/U2K2151(x) — W5151(2) E,||U2S2(x) — $3(0)|z = EB, |/U2K2151(x) — W5151(2) || # E.2 Preparing to Prove Theorem E.1 Let us do the proof by induction with the number of layers L. Suppose this Lemma is true for every L ≤ L0, then let us consider L = L0 + 1 Define Ge, 4(«) = DIY Fer1(«) = Diy We know that the objective of the first L — 1 layers Loss;—1(D) + Regy_1 = a (GE,-1(x) — Fep-x(2))” <2 EB (G'(e) — F(a)’ +207, an Ge, 4(«) = DIY oSum(G; (x) Fer1(«) = Diy aSum(Fy(x)) Regy_1 = a (GE,-1(x) — Fep-x(2))” + Reg; 1 <2 EB (G'(e) — F(a)’ +207, EB (Sum(F,(x)) — Sum(Gj(x)))’ + Reg, an < 207, . (Sum(F(a)) — Sum(G%(a)))? + 2Loss(D) + Reg . 
(E.1) an By our assumption on the network G*, we know that for every ¢ € [L], E,Sum(Gi(c))] < Be > EB IIS*o)I7] < Be E,Sum(Gi(c))] < Be > EB IIS*o)I7] < Be By hyper-contractivity assumption (5.3), we have that Zatsumicie < 092°) BP => E IlS¢(~)|l4] < e9(25) BP (B.2) @r~ @2~D Using our assumption E,W. p|||S¢(x \|?] < 7, and the hyper-contractivity Property 5.3 we also have , [Sum(F)())] < thE and E, [Sum(Fy())”) < 03(2") (keL7)8 # E x∼D [Sum(F)())] < # E x∼D [Sum(Fy())”) < 03(2") (keL7)8 Putting these into (E.1) we have Obj, < 03, - (ki LBrtz)8c3(2”) + 2? (E.3) By induction hypothesis*! for every L replaced with L — 1, there exist unitary matrices Uy such 2 op—1 2(Dp-1)8\/TE_y 41To be precise, using our assumption on as one can verify that O(az : (kL LBxt1)*cs(2")) < so the assumption from the inductive case holds. 52 that 2 f= wae —1: K(x) — x Fc ge OL _,-(kp LB 86 L Vé = 2,3, ,L-1 Eyl UeSi(@) S¢(x)||5 < Oy aaa CL-1 (kr LBrt) 3 (2 ) <1 (E.4) Let Sp(x), ra *) be the degree 2°~! homogeneous part of S¢(a), Sf(a) respectively, notice that USF (x) — Sela r)||> i is a polynomial of maximum degree 2‘, therefore, using the degree-preservation Property 5.4, we know that Ve =2,3,...,L—1: SoC ([UeS# (©) - 34(2)];) <c1(2")-62 (E35) i€[ke] ve =2,3,...,L: YS Ce ([8@)],) <erl2")- Br i€(ke] We begin by proof by grouping the 2/-degree polynomials G*(x) and F(z), into monomials of different degrees. Since L L G*(x) = S- apSum(G}(x)) and F(x) = So arSum(F (2), e=2 e=2 it is clear that all the monomials with degree between 24~! +1 and 2/ are only present in the terms Sum(G7(x)) and Sum(F7z(2)) respectively. Recall also (we assume L is even for the rest of the proof, and the odd case is analogous). 2 Sum(G7(x)) = ) = |Xeer\¢o, ry WF a (Sz (a) + ee 1y Wr Se (2 | (E.6) 2 Sum(F7;(x)) = )=[Xeen\eo, in W0(ReS¢(x)) + ee rn0, } Wr 0S¢(a | # E.3 Degree 2L We first consider all the monomials from G*(x) and F(a) in degree 24~! + 24-1 = 2" (ie., top degree). As argued above, they must come from the top degree of (E.6), F,: R41 R*« be the degree 2” part of Gi Let G*,, Let G*,, F,: R41 R*« be the degree 2” part of Gi (x), Fi(a) respectively. Using BE |P(a) — G*(a)? < Obj <<? and the degree-preservation Property 5.4 again, we have Cz (Sum(Fy (x) ~ Sum(G*z(0))) < e1(2") (<) (E.7) From (E.6), we know that Sum(@_(2)) = Wi, .a2 (S¢a(0))|| = Weer (Sta « Sta) We also have Sum(A,(2)) = | Wi,1-10 (Re1S1-1(2)) |)” =|] We1 (8-a(e) * Sr-a(a)) | = 4+! For analysis, we also define Wrr-1 =W,,1-1(Rr-1Uz_-1 * Rp_-1Uz_1) € REex(" oe) so that W110 (Ri 1Us-15}_,(x)) = Wi 1 (Si L(x) * St_4(a :)) 53 where WL,L−1 = WL,L−1QL−1 for a unitary matrix QL−1 by Lemma B.5. Using Viele] Cy ([UeS#(e) — 8:(x)],.) < (2°) - 6? from (E.5) and Viele] Cy ([S¢(@)],) < c1(2°) Be, it is not hard to derive that’? Cy (Wis (Rr Stale) )) | - Wr 10 (Ri 1Up-154 4( 2)) | *) <4 for some 1 < Tf - poly(Br, re ,c1(2"))62_). (E.8) Combining (E.7) and (E.8) with the fact that Cx(f1 + f2) ≤ 2Cx(f1) + 2Cx(f2), we have # Cx c(i (Si. u(x) * St_4( 2) |) — | Wi. 1 (Si (2) * Sia( *)| ‘) = & for some £) < 78 - poly(Br, 2, e1(2"))07_1 + 2er(2”) (<) Applying the singular value property Lemma D.1 to the above formula, we have < poly; (= + TPOL i) (E.9) —T = — 7 __ Sym Wi W211 — Sym (W2,1W's1-1)| for some sufficiently large polynomial poly1 = poly(BL, κ2L , (2L)2L , c1(2L), c3(2L)) This implies | Wi.1-19 Wi.1-19 (Sia(e))|? = (Sta(e) * Sia (0) " WE Wt (S41 (0) « S31 (@)) St _y(a) * S#_(a))" Sym (We). 
1W*r1-1) (ST_ 1(x) * St_y (x )) lle i) St_4(a) * ST_4(a r))' Sym (Wis :Ws) (ST_1(x) * St_4(x)) + & lle St_1(x) * St_y(a ry Whe Wy 1- 1 (S7_1(@) * S7_4(a)) + & = ||Wr2-10 (Rt1Uz- Si 1( z))||? +& = ||Wy,1-10 (Rr-1S1-1(#))|? + & (E.10) Above, ® and ® hold because of Fact B.4, @ holds for some error term £3 with 2 [(€3)2] < (poly,)?- (= + rt.) because of (E.9) and E,~p|||S7(x)||?] < Be together with the hyper-contractivity Property 5.3, ® holds for 2 [(€)2] < (poly) (= + “té.) “Indeed, if we define g(z) = ||Wz,1—-10(Rz)||? = ||Wz,1-1(z*2)||? then we have C.(g) < O(1)-||Wz,1-1||% using Fact B.4, and therefore C.(g) < O(77L”) using ||Wx,1-1\|r < Te and |[Rx-1 * Rz-i||2 < O(L) from Lemma B.6 Next, we apply Lemma 1.7 with f“ (x) = Uz—1S}_1(2) and f(a) = S,-1(x) to derive the bound Cx(g(f1(x)) − g(f2(x))) ≤ k4 L · 2O(2L) · (c1(2L))8 · (δ8 L−1 + δ2 L−1B 3 L) · Cz(g) . 54 ≤ (E.8) because of Ez.p ||Uz-1S¢_, (x) — Sr1(2)|| < (24-1) OF 4 which implies“? Wr 19 (Ry-1S1-1( »))| - Wis 10 (Rp_1Uz_-1S7_4( ») | =e for some £4 € R with malt £1:)"] < TP - poly(Br, c3(2”))6?_4. (E.11) aN E.4 Degree 2L−1 + 2L−3 Or Lower Let us without loss of generality assuming that L − 3 ∈ JL, otherwise we move to lower degrees. We now describe the strategy for this weight matrix WL,L−3. Let us consider all the monomials from G*(x) and F(x) in degree 24—! + 24-3. As argued above, they must come from equation (E.6). As for the degree 24~! + 24-3 degree monomials in G*(a) and F(z), either they come from [Wi 1 -10(St_1( z))|)? and ||Wz,,-1¢(Rr-1S1-1(2))||’, which as we have argued in (E.10), they are sufficiently close; or they come from oe Tr T oe o (Si_s(2)) (Wi.r-3) Wi1-17 (Si) rom Sum(G7_;(x)) a T ~ a (Rr-sS1-s(2)) (Wz,1-3)' W110 (Rr1S:-a(2)) from Sum(Fy~1(2)) For this reason, suppose we compare the following two polynomials # ee G* (a) — oy || Wer 1o (St_ 1 x))|\? (x )= az ||Wr,r- 10 Ryp-1Sp-1(2))||" > they are both of degree at most 2’~! + 24-3, and they differ by an error term & = (G*(e) - QL [Wz 10 (ST_ 1( x))|| IP) - (F( x) —arz ||Wr,r- yo(Ry- 1SE- 1: r))|| I?) which satisfies (using Obj ≤ ε2 together with (E.10)) [(&5)"] < (poly,)*: (¢ + rRardr—1)” aT’ Using and the degree-preservation Property 5.4 again (for the top degree 2L−1 + 2L−3), we have Cx (« (St-a(@))" ( Toa) Wi1-17 (Si_a()) 43Specifically, one can combine © llo(a) — o(6)|| < lla — | - (llal] + 2lla — Ol), © ([Wr,1-14l|? — || Wa,2-15|”)? < |[We,2-1(a — 8)|)? - (2||We,1-14l| + [We,2-1(a — 8)|I)’, e the spectral norm bound ||Wz,x-1||2 < 71, ||Rx-1|/2 < O(7z), to derive that (was 10 (Rr-1S1-1( 2) = || Wr,1-10 (Re1UL-1Si-1( m/f’) O(r1?): (| $é(x)||°|| WeSF (ex) — Se(@)||? + [[WeS7 (a) — Se(x)]|*) ≤ O(τ 12 Using ||a[°||b||? < O(52 a lal)? + BY), as well as the aforementioned bounds © Exvp ||SZ_1(a)||” < Br and Ezwp ||Uz—1$7_1(x) — Sz-a(2)||” and the hyper-contractivity assumption (5.3), we can prove (E.11) < 67-4 55 (E.11) ~ T a —9 (Rx-s$1-s(2)) (Wr,r-3)' Wi1-10 (Ri1S1-1(2)) ) <& ; 2 for some error term & with [(&)?] < (poly,)°- (< + 761-1) . Using a similar argument as (E.8), we also have ~ T a c.( (Rr-»S1-s(2)) (Wz1-3)' Wr1-10 (Rr 1S:-1(2)) vara qT T Ox —o (Ri-3Uz-35}_s(2)) (Wr,1-3)) Wr1-10 (Rr 1Uz-354_,(«)) <& L · poly(BL, 22L, c1(2L))δ2 for ξ7 ≤ τ 6 matrix QL−1 as before, we have L−1. If we define WL,L−3 = WL,L−3QL−1 for the same unitary W130 (Rx-sUL-s53_2(«)) = Wiis (Si_s(a) * 3j_s(2)) Using this notation, the error bounds on ξ6 and ξ7 together imply C2( (Si-s(0)*8¢sl0)! 
Wr sWer sa (Sale) * S310) _ (Si-s(o) * S¢_a(2)) Whe 3Wr11 (Sia) * Si_a(0)) ) < & (< 2 for ξ8 ≤ (poly1)6 · formula, we have + τ 3 LδL−1 . Applying the singular value property Lemma D.1 to the above 2 2 < (poly,)" (= + rio.) . (E.12) —t + We -3W,r-1 — W* 7 7-3 W* 10-1 F Following a similar argument to (E.10), we can derive that This implies (W713 (S7_3(x)))" L,L-19 (SZ_1(2)) = (Wr,1-30 (Rr-3S1-3(x))) | Wr,r-19 (Ri-1S1-1(2)) + & 2 E[(€9)?] < (poly,)® (< + 75-1) for some E[(€9)?] < (poly,)® (< # E.5 Until Degree 2L−1 + 1 If we repeat the process in Section E.4 to analyze monomials of degrees 24~! + 27 until 24-1 +1 (for all 7 € Jz), eventually we can conclude that“* —tT — tr Wp Wie - W*, 7-1 W* 13 ara3 { € < < (poly, )?2+8 (= + a) PF ap which implies that for unitary matrix Qr< e diag(Qe) re Ir\{L—-1}, we have that ato o> =aTtT = 3fé 36 Qi Wr. aWrQr. - Wey -1W*r4l| < (poly,)?”+8 (= + rti.-1) Let us define # poly2 = (poly1)2L+3τ 3 L (we eventually choose DL = poly2) “Technically speaking, for 7 € Jz 1 {0,1}, one needs to modify Section E.4 a bit, because the 4-tensor becomes a Ti_+y __ a a 3-tensor: (3; (x)) Wy Wert (Sis) * Sis). 56 so that =) lo@e er € Qi Wr 1 1WrQr< - We pW) pe poly, (= + ‘-1) (E.13) By the regularizer that T T 2 a \|wi t-1W1.- K;, 11K ra|| < , , FO Azar Using Wy; = Wz,;(R; *R;) and K;,; = Kz,,;(R; * R;), using the properties that R,; * R; is well-conditioned (see Lemma B.6), and using Q;_; and Q;,< are unitary (see Lemma B.5), we have 2 2 =—t oz st = é = QF Wr 1 1WrQr. - Q71Kr,1Kr<Qza|| < Yen poly(kz, L) (E.14) By our choice ofλ3,L ≥ 1 poly2·ΥL α2 L and (E.13), we have T T € ]Qi1Kr11Kr<Qr - WW? 4] < VTi (polys)? (= + 6-1) (E.15) # E.6 Deriving K; Close To W*; Since ||Kz,||r, ||Kz,c-1||r < Tr, we have ||Kz,a||r, ||Kzr,1-1||p < O(7iL) from Lemma B.6. Also, the singular values of W*;4, W* L,L—1 are between 1/« and Lx (see Fact B.7). Therefore, applying Claim I.9 to (E.15), we know that there exists square matrix P € R*4**+ satisfying’? K — PW y 3 4 b= Q _ L-1||p pol _ | LL L-1 LL 1|| < L( lyo) (. Oo i) => 1k € ||Kr<Q1. _ (P') Wea < Tz(poly,)? (= vr i) and all the singular values of P are between 1 poly(τL) and poly(τL). This implies that ost = —T — € iKr11Kr1.iQr-1 — We). 1P'PW,,1-1| < VT 1z(poly,)4 (= + 1) (E.16) ole a eo € Q2oK 2K 1<Qro~ W*7.(P™P) Wall, < VTi (polya)! (= + i) (E17) # Qi Our regularizer λ4,L ensures that T T 2 a |wi t-1W1,L-1 — Ky 11K1-1| < ’ F ALL Using W,,; = Wz,;(R; *R;) and K;,; = Kz,,;(R; * R;), using the properties that R,; * R; is well-conditioned (see Lemma B.6), and using Q;_; and Q;,< are unitary (see Lemma B.5), we have 2 e2 op a _ Qi. Wrp1Wrr1Qr-1 - Qi1Kr,1Kr11Qr-) < Xan poly(kr, L) 45We note here, to apply Claim I.9, one also needs to ensure ε ≤ both of them are satisfied under the assumptions ε ≤ αL (DL)9Î¥L αL (poly2)3 and αL αL−1 √ ≤ Î¥L and δL−1 ≤ 1 L(DL)16CL−1 4Î¥3 1 (poly2)3 √ ; however, Î¥L , and the definition of δL−1 from (E.4). 45We note here, to apply Claim I.9, one also needs to ensure ε ≤ 57 1 (poly2)7 # α2 √ # By our choiceλ4,L ≥ # L , this together with (E.16) implies our choiceA4,1, > az , this together with (E.16) implies (Polya)? \/ TZ a =T — e. Qi Wr aWr11Qs-1 - W).1P’PW*z,1-1|| < 2,/T? (poly)! (= + 65-1 = =—tT = _ —_ Wy-1W2,1-1 — W*, 7) ;P'PW*_1-1 (E.18) < 24/7 (poly2)* (= + 6r-1 F aL # Qi Recall we have already concluded in (E.9) that —=—T = op Sym |W, 7,-;Wz1-1) —Sym (We)1-1W*,1-1) é | < poly (= + 61-1) Pa az so putting it into (E.18) we have | Sym (Wey, .P PW 1-1) ~ Sym (We, .Wer-a)]], <8V TE0h2)" (= +51.) 
Since W*,,5-1 = W711: by Fact B.4, we know that for any matrix P, Sym (We). 1P™PW*,,1-1) = WP’ PW 1-1 This implies =—T T ; € Wee PT PWR - Wi 1W*r2-1|| < 4\/T} (polyo)* (= + i) By expanding W*;, 7-1 into its SVD decomposition, one can derive from the above inequality that ||P'P - |. < \T2 (poly.)® (= + i) (E.19) Putting this back to (E.16) and (E.17), we have ol os =a! =z ;(€ . Qi Kr. 1Kr11@r-1 — We}. W121] < 1/3 (poly2)® (= + b.-1) < \/T2(polys)® (—— + 6,- —_ 7 (po! y2) (= L :) T T |Qi-KpoKr<Qua— W*,.W*r.| Combining this with (E.15), we derive that (denoting by Q, # © diag(Qy)re In) 2 7(£ |. < 1/ Tz (poly) (= + i) (E.20) |QiK, KG, — WW, E.7 Deriving S,(x) Close To S7(x), Construct U, From (E.20) we can also apply Claim I.10 and derive the existence of some unitary Uz, € R***t so that*6 |< VTE (poly) (= +b.) . (B21) |K.Q. — ULW*, Simultaneously right applying the two matrices in (E.21) by the vector (where the operator ~ is for concatenating two vectors) (S7(2) * S3(@)) 7 (SF) ° (S7(2) * S3(@)) 5e5,\ (0.1) 7 (SF) 46 Taj We note here, to apply Claim 1.10, one also needs to ensure « < py ~ (S7(2) * S3(@)) 5e5,\ (0.1) 7 (SF) jer\gory ° αL (poly2)8 1 (poly2)8 √ √ and δL−1 ≤ ; however, Î¥2 L Î¥2 L , and the definition of and αL αL−1 αL (DL)9Î¥L 1 L(DL)16CL−1 both of them are satisfied under the assumptions ε ≤ ≤ 4Î¥3 δL−1 from (E.4). 58 (E.18) we have So Kz yo (RJU;S4(x))+ SD Kr SF (x) JETL\{0,1} GEILOO, 1} =U, > Wy, 57 (S7(2)) + > W755; (2) } + JE Ti\{0,1} JEILMO,1} + ξ10 # for some error vector ξ10 with 2 ¢ —2 6 € B [|éol2) < TZ - LBA (poly)! Ge +b.) . ~D aL B [|éol2) < TZ - LBA (poly)! Ge +b.) . ~D aL Combining it with E;.p |[Ur1S7_(« )= Sr-1( _, (see (E.4)) we know n)|I> < St(@)= SO Kr yo (RySj(x))+ SO Ki 830) JETt\{0,1} JETL“O LS JETi\{0,1} JETLMNO LY =U, > Wi. jo (SÂ¥(a)) + > W555 (a »] +1 = ULS7 (x) + &1 # for some error vector ξ11 with . 2 [lal] = 8, IOuSz (2) ~ S103 < TE(P0W2)" (+o). (6.2) ~ an QL # E.8 Deriving F(x) Close To G*(x) By the regularizer λ5,L, we have that ) |Wiw: - K;Kz| < (E.23) 5,L Using W,,; = Wz,;(R; *R;) and K;,; = Kz,,;(R; * R;), using the properties that R,; * R; is well-conditioned (see Lemma B.6), and using Qr_-1 and Qz< are unitary (see Lemma B.5), we have iW) WG, - EK} KG. < 2 -poly(h,.L Q,W,W2Q7 - Q,K;, 1Qx|, <5 poly(Kz, L) By our choice ofλ5,L ≥ 1 (poly2)13Î¥3 L α2 L , together with (E.20), we have that < ,/Y3(poly,)” = + 6p- 1] Ss i (poly) (= L. :) RTaqzT z T |@rw,W.G, — Wr, Ww . Note from the definition of Sum(F,(x)) and Sum(G*% (x)) (see (E.6)) we have and Sum(G*% (x)) (see (E.6)) we have || W*,(Sf_1(x) * S$_a(x),...)|I? ||Wr(Si-1(2) * Sz-1(a : an Sum(Gj(x)) = || W*,(Sf_1(x) * S$_a(x),...)|I? Sum(F1(x)) = ||Wr(Si-1(2) * Sz-1(a : an so using a similar derivation as (E.10), we have 2 ,(Sum(F; (2)) ~ Sum (Gj (2)))? < T} (poly)! (= +a) | (E24) we L 59 # E.9 Recursion We can now put (E.24) back to the bound of ObjL−1 (see (E.1)) and derive that Obj,_) < 207, ,, (Sum( Fy (x) — Sum(G% (x)))? + 20bj aw < YT} (polys)’® (6¢_;aj +e?) . (E.25) L−1α2 Note this is a tighter upper bound on ObjL−1 comparing to the previously used one in (E.3). Therefore, we can apply the induction hypothesis again and replace (E.4) also with a tighter bound e+ 6p-10L Vf COOe+1 2 Ve=2,3,...,L—1: BE ||UeSi(x) ~ Sp(a)I|5 < ( ) 3 (polyy)'8Cz_1 . (E.26) In other words, we can replace our previous crude bound on δL−1 (see (E.3)) with this tighter bound (E.26), and repeat. 
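Concretely, this bootstrapping can be read as iterating a one-dimensional error map of the form δ ↦ √C · (ε + α_L · δ)/α_{L−1}, where C lumps together the Υ_L, poly_2 and C_{L−1} factors appearing in (E.26). A minimal numeric sketch (all values below are hypothetical placeholders, not the paper's constants):

```python
# Schematic only: iterate delta <- sqrt(C) * (eps + alpha_L * delta) / alpha_prev,
# mimicking "replace the crude bound on delta_{L-1} with the tighter bound and repeat".
# The constant C and the values of eps, alpha_prev, alpha_L are hypothetical placeholders.
import math

eps, alpha_prev, alpha_L, C = 1e-6, 1e-2, 1e-4, 10.0

delta = 1.0                              # crude initial bound
for t in range(30):
    delta = math.sqrt(C) * (eps + alpha_L * delta) / alpha_prev

fixed_point = math.sqrt(C) * eps / (alpha_prev - math.sqrt(C) * alpha_L)
print(delta, fixed_point)                # the iteration stabilizes near sqrt(C)*eps/alpha_prev
                                         # once sqrt(C)*alpha_L/alpha_prev is small
```

Under the gap assumption on α_L/α_{L−1}, this map is a contraction, which is why the repetition stabilizes at the bound identified next.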
By our assumption, αL , this implies that the process αL−1 ends when47 2 52_, = (=) 273 (polyn)!8Cr_y B.27 L-1 ( —) i (poly) °Cr-1 ( ) Plugging this choice back to (E.26), we have for every = 2,3,...,L—1 o 2 o 2 2 E_lUpS#(a) — Sp(x)|2 < (—-__) - 23 (poly,)!8C;_, < (= Ey | BeSé(@) Sila)IB < ( =) p(polyy) "Cra < maa Cc. As for the case of € = L, we derive from (E.22) that 2 2 K (a) me) 2 2 i7( & — - Ej lUSile) ~ Sule)l3 < 274 (p0y2)" (=) < (oS) cr # E x∼D # CL This completes the proof of Theorem E.1. # E.10 Proof of Corollary E.3 Proof of Corollary E.3. As for Corollary E.3, we first note that our final choice of δL−1 (see (E.27)), when plugged into (E.13), (E.15), (E.20) and (E.22), respectively give us ToT hlUS a! Ux Cd afé 2 |Q7 Wr. WrQr< —W*,7-1W tl. < 2(Dr) OL T zo <T 5 |? a(@\ Qi1Kr 11 Ki<Qua —W*, 7-1 W zl. < 2T (Dr) OL 2 e \2 seriou" (2) ee |@rK/KiQ, — W*,W*7 on c 2 pl USi(2) ~ S103 < 27201)" (=) aw aL So far this has only given us bounds for the Z-th layer. As for other layers ¢ = 2,3,...,L— 1, we note that our final choice of 6; (see (E.27)), when plugged into the formula of Obj,_, (see (E.25)), in fact gives 2 . 4 5 ; 2 ap Obii-r carbine < ey Tal « (pres) . 47To be precise, we also need to verify that this new δL−1 ≤ assumptions ε ≤ αL (DL)9Î¥L and αL αL−1 ≤ 1 L(DL)16CL−1 4Î¥3 . 1 (poly2)8 as before, but this is ensured from our 60 # a αL (DL)9Î¥L and αL . Therefore, we can recurse to the αL−1 L(DL)16ε2. Continuing in this fashion gives the desired 1 L(DL)16CL−1 using our assumptions ε ≤ case of L − 1 with ε2 replaced with 4Î¥3 bounds. ≤ 4Î¥3 Finally, our assumption ¢ < OT implies Ez~p ||ULS7 (x) — S1(a)|l> < 1, and using gap assumption it also holds for previous layers: 2 C VO<L: E_|\UeS}(x) — So(x)|[3 < 217(Dy)"” (=). Fe xD ae Cy imply E,..p ||Se(x 3 < 2By using E,~p || $7 (x IIb < By. VO<L: xD They also imply E,..p ||Se(x 3 # E.11 Proof of Corollary E.4 Proof of Corollary E.4. This time, we begin by recalling that from (E.3): ObjL−1 ≤ α2 L · (kLLBLτL)8c3(2L) + 2ε2 ≤ α2 L · DL Therefore, we can use €? = ay - Dy and apply Theorem E.1 and Corollary E.3 for the case of L—1. This is why we choose ¢9 = ay: V Dz for € < L. As for the case of ¢ = L, we first note the L — 1 case tells us 2 \\Ur-184_4(@) ~ Sea(@)|l3 < 624 Y or} (Da) ( - ) «(<) Qp-1 aL avD Therefore, we can plug in this choice of d;_; into (E.13), (E.15) and (E.20) to derive 2 _\2 2 € < 2( (Dr) 2 r(<) a 2 € lai ‘Kir 1Kr<Qr<— W* LL- Wc < 2T (Dz) W(S -) i" < ary(D,)" (= . ) aL Qi Wr 1 1WreQra— Wey. 1 Wr] et |i K,K.Q, — We, W*, Note that the three equations (E.13), (E.15) and (E.20) have only required the weaker requirement e< Dr) OL Vir OBE comparing to the full Theorem E.1 (the stronger requirement was ¢ < (oaaaie but it is required only starting from equation (E.21)). # F Construction of Descent Direction ket Vij € Riex( ne) V9" € [2], Vi, ne) or R*¢*4 that satisfies Let Uy be defined as in Theorem E.1, Let us construct Vij Let Uy be defined as in Theorem E.1, Let us construct Vij € Riex( ne) or R*¢*4 that satisfies Vj > 2: Vijo(RjUjz) = Wi jo(z), V9" € [2], Vi, = Wi jr (F.1) Wi jo(z), V9" € [2], Vi, = Wi jr (F.1) login O(L*r))- (This can be done by defining n kj+. ), and the singular value bounds are due to Fact B.7, jo(z), login n and the singular values of V7; are between login O(L*r))- (This can be done by defining 1d n kj+. i= 7, (+1) (R;U;+RjU;)“! e REx ), and the singular value bounds are due to Fact B.7, Lemma B.5 and Lemma B.6.) 
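The existence of such V*_{ℓ,j} rests on the fact that a linear change of basis on the inputs acts linearly on a quadratic feature lift. The standalone numpy sketch below checks only this generic fact: it uses the plain monomial lift φ(z) = (z_i z_j)_{i≤j} as a stand-in for the σ/∗ lift of Appendix B (whose exact normalization may differ), and W and A are random placeholders rather than the paper's W*_{ℓ,j} and R_j U_j.

```python
# Schematic check (not the paper's exact construction): a linear map A on z induces a linear
# map on the quadratic lift phi(z), so W . phi(z) can be rewritten as (W M^{-1}) . phi(A z).
# phi here is the plain monomial lift; the sigma / * lift of Appendix B may differ in scaling.
import numpy as np

rng = np.random.default_rng(0)
d = 4
pairs = [(i, j) for i in range(d) for j in range(i, d)]

def phi(z):                      # quadratic feature lift: all z_i z_j with i <= j
    return np.array([z[i] * z[j] for (i, j) in pairs])

A = rng.normal(size=(d, d)) + 3 * np.eye(d)      # a well-conditioned change of basis
W = rng.normal(size=(2, len(pairs)))             # placeholder "target" weights on the lift

# Fit the induced linear map on the lifted space from samples (the relation is exactly linear).
Z = rng.normal(size=(200, d))
Phi  = np.stack([phi(z) for z in Z])             # rows phi(z)
PhiA = np.stack([phi(A @ z) for z in Z])         # rows phi(A z)
M_A, *_ = np.linalg.lstsq(Phi, PhiA, rcond=None) # PhiA ~= Phi @ M_A, i.e. phi(Az) = M_A.T phi(z)
print('lift is linear in phi(z):', np.allclose(Phi @ M_A, PhiA, atol=1e-8))

V = W @ np.linalg.inv(M_A.T)                     # analogue of V* = W* (lift of A)^{-1}
z = rng.normal(size=d)
print('V phi(Az) == W phi(z):', np.allclose(V @ phi(A @ z), W @ phi(z), atol=1e-8))
```

In the construction above the change of basis is R_j U_j, which is well-conditioned by Lemma B.5 and Lemma B.6, so the induced map on the lifted features is invertible; this is what gives V*_{ℓ,j} its stated singular-value bounds.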
Let us also introduce notations Ey = Kj), Ke — (Viy_4) | VP = (Eee-1- Eva) Exc “ = Ki yiKea _ (Vie-1)! Via ce! T ark Eye-1 = Ki y_;Kee-1— (Vie) Vien 61 Ey = K/ Ke — (Vi)! Vi Let us consider updates (for some η2 ≥ η1): √ Wee V1—mWe + VmDevy” Keg (1 *) Keg — mQ¢eKeg — mKee-1 Era Koy (1 - ) Keye-1 + mQeKee-1 — 12K eaEla Keg (1 *) Keg — mQ¢eKeg — mKee-1 Era Koy (1 - ) Keye-1 + mQeKee-1 — 12K eaEla € R™** is defined as (V7")! = VEC((WG)T,...(VZ)") which contains ¢° identical and Dy € R™*”™ is a diagonal matrix with diagonals as random +1, and Q; is a where V7" € R™** is defined as (V7")! = VEC((WG)T,...(VZ)") which contains ¢° identical copies of V7, and Dy € R™*”™ is a diagonal matrix with diagonals as random +1, and Q; is a symmetric matrix given by 1 -1 Tt Q=5 (KeeaK7y-1) Kye (Viea)) Vie rKiea (KeeiK7y1) # F.1 Simple Properties Fact F.1. Suppose we know ||We||r < Ke. Then, cw) OW?) = Lm) (Wo) We + m(VE) VE + ving for some error matrix ξ with log 6" - poly (Ke) poly(#e) i= en a aa vm <9 and De US Is m =0 and P El and Pr Elle > Proof. Trivial from vector version of Hoeffding’s inequality. Claim F.2. Suppose omin(Kee—1), Omin(Kea) > xz and ||Ke||2 < 2% for some KR >K+ke +L, we —_ AS have: 1 Eva, K)p_)Kee1Eva + EpgK) Keg) > ——= |Eval|? (Epa, Ky p_) Kee-1 Eva + EeaKy Kea) = poly(f) Ecalle Proof of Claim F.2. We first note the left hand side LHS = |[Kee1Evallp LHS = |[Kee1Evallp + ||KeaB/allp Without loss of generality (by left/right multiplying with a unitary matrix), let us write Kyy_1 = (K,,0) and Ky. = (Kz, 0) for square matrices K,, Ky € R®**e. Accordingly, let us write Erg = (! £2) for EB) € R&**e, We have )||7 > (Ell? + 1 oly (*) VZ,_,; = (Vi, V2) and V7, LHS = ||(KiE,, KiE2)||7 + ||(K2E] , K2E; )||7 > (Ell? + ||B2|l% + |Esllz) - 1 oly (*) Note also ||Evg||r < poly(&). Let us write matrices V1, V3 € R***¢, Then we have _ (&, EB.) _ Era = (E B) ~ ( Note also ||Evg||r < poly(&). Let us write VZ,_,; = (Vi, V2) and V7, = (V3, V4) for square matrices V1, V3 € R***¢, Then we have _ (&, EB.) _ (K] K2-V] V3 -V] Va Era = (E B) ~ ( -V3v3 -vi Va (F.2) Recall we have ||V7¢_j|l2,||VZall2 < Le Consider two cases. < BEAEE: Then, it satisfies ||Ey||r > 3\|K] Kollr > > are done. In the second case, Omin(W1) > I6E20 He rr We have In the first case, omin(W1) < so we G3 7 BEAEE: Then, it satisfies ||Ey||r > Omin(W1) > I6E20 He rr We have ro omin(V1) || Valle = Vo) V2 Valle HIVE Valle ro Ballz = [Vt Valle = omin(V1) || Valle = Vo) 1 V2 Valle > ——|Eullr HIVE Valle > al 62 so we are also done. # Claim F.3. Suppose omin(Kee—1) > 4 is T 2K Fp Kee Claim F.3. Suppose omin(Kee—1) > 4 and ||Koll2 < & for some K > K+ke +L, we have is 4 and ||Koll2 < & for some K > K+ke +L, we have is T 7 Tyre ~ 2K Fp Kee — (Vie) Viel, < poly(&) ||Evallp and ||2Q¢—I|p < (&)"|Eve-illr Proof of Claim F.3. Without loss of generality (by applying a unitary transformation), let us write Ky e_1 = (K, 0) for square matrix K € R*exke and let us write Vivi = (Vj, V2) for square matrix Vi € Reex*e, From (F.2), we have [Eealle Valle < ——— < poly(ke, , L) - ||E . [Valle < Omin(Vi_) ~ poly (Ke, «, L) - ||Ezal|r From the definition of Q¢ we have 2Q¢ = (KK")7!(K, 0) (Vi, V2) | (Vi, V2) (K, 0) '(KK")7“! = K7'V] Vi K7! (F.3) It is easy to verify that 2K} pQeKeyp_1 — (Viva)! Vien = (vy °) — (Viva)! 
Vien = ( r vive) which shows that which shows that | 2K Ee | 2K Ee QeKoe1 — (Vie) ’ Vial], < 2Vallell Valle + [Vall < poly(®) - [Beale - Next, we consider ||2Q¢ — Iz, since # since Ville < |[KZe |KTK VI Ville < |[KZe Kee (Vieu)" Vieal),, = IEeealle 5 we immediately have 1 |2Q¢—Ilr < Omin( KpK'K VIVille < (&)*|Ece-ille - # F.2 Frobenius Norm Updates Consider the F-norm regularizers given by Rov = ||Kell = Tr(K/ Ke) = Tr(K/ 1 Kee-1) + 2Tr(Kfp_1 Rz¢ = ||Well = Tr(W/ We) Rov = ||Kell = Tr(K/ Ke) = Tr(K/ 1 Kee-1) + 2Tr(Kfp_1 Kea) + Tr(K/, Kea) Rz¢ = ||Well = Tr(W/ We) Lemma F.4. Suppose for some parameter Kp > K+ L + kg it satisfies 1 and ||Ecalle < 45 Omin(Kee-1) > Re and ||Kell2 < 2%e, m.m < Qa? 1 poly(Ke) ’ then [RYE] < = m)Ree +m poly(ke, Lr) De RU < (L—m)Roe + m- poly(ke, «, L) + (n? + n2||Evallr) - poly(%e) 63 Proof of Lemma F.4. Our updates satisfy Kio Keea © (1— m) Ki epee + 2mK jp: QeKey- + & K/.Kea © (1+ m)K/aKea — 2mK{QK ra + & Kj y_Keg + Kip Kea + & Wi) We © (1—m)(We)' We +m (VE) VE + Ves We +m (VE) VE + Ves + 72l|Evallr) - poly(Ke) and Ep, [£4] = 0. The Ry corollary of Claim F.5. where error matrices ||€1||Â¥, ||€2l|, ||€s|le& < (7? + 72l|Evallr) - poly(Ke) and Ep, [£4] = 0. The Ry part is now trivial and the Rg ¢ part is a direct corollary of Claim F.5. Claim F.5. The following is always true Tr (-Ki Kew + 2K /p,QeKie-1) < —||Kee-ille + O (ken?) # Tr Furthermore, suppose Omin(Kee—1) > x and ||Ke||g < 2k for ke > K +L +k, we have that as 2 long as |\|Ecal|r < eg then # eg then Tr Tr (K[Kra — 2K/QrKra) < —||Keal + O((L?n)?hy) Proof of Claim F.5. For the first bound, it is a direct corollary of the bound ||2K/,_,QeKee-i||r < poly(«, L) (which can be easily verified from formulation (F.3)). As for the second bound, let us assume without loss of generality (by left/right multiplying with a unitary matrix) that Kee_1 = (K1,0) and Kya = (Ko, 0) for square matrices K;,Ky € RM**e, Let us write Viv = (Vi, V2) and Vj, = (V3, V4) for square matrices V1, V3 € Reexke Then we have, Era = (z Em) = (Se -VIV3 yy) E3 Ey -VJ V3 -VJ Va # We have IK] IK] Ky — Vi Valle < [Eralle => |[Ko— Ky "V1 Valle < 27: |Evallr - => ||KokK) —K7'V/V3V3 Viky Nn S (2k)? + |[Bealle Translating this into the spectral dominance formula (recalling A > B means A — B is positive semi-definite), we have K2K) = Ky! V/ V3sV3 Viky! + (2h)? - ||Ecalle -1 x (Lx)? Kp IVI Vi Ky! + (2%)? - [Eealle I (using ||Violl2 < L°x) On the other hand, from (F.3) one can verify that 2K/,Q/Kra = K3 2K/,Q/Kra = K3 K, ' V1 ViKy Ko Combining the two formula above, we have 2K} QeKea = ae eee 3 KK} Ky — (2%)? ||Beal| 7 - Ky Ke > 2K3 Ky — O((L7x)?) -1 (using A? > 2A —I for symmetric A) (using A? > 2A —I for symmetric A) Taking trace on both sides finish the proof. 64 # F.3 Regularizer Updates Let us consider three regularizer R3¢ = Kip .Kea — Why 1 Wea Rae = Kip yKee-1 — Whe, Wee-1 Rs = K/ Ke -— W/ We Lemma F.6. Suppose for some parancler K>K+L+ ke it satisfies 1 > =, ||Kelle, || Well2 < 2%, m < —— >, m< 2K poly(#) then, suppose Obj(D; W,K) < e? and suppose Corollary E.3 holds for L > €, then 1 Omin(Kee—-1) = oR’ Omin(Kea) = # = poly(*) ‘ e? C, oly(k) Ree) <8 Rg ull, +73 - poly(& =) - (Dp)! 2 +me B RY], <@— 18m) [Rocllp + nt poly(®) + (moa) (Dols Gem ‘ e? C. oly(®) RY < (1 — 1.87) Ra sll2. tee): (Dy)®. LE +, PONY B Rac ||, <( 8m) ||Ravell- m2 te (De) Gt om 2 : é Cc poly(f) Re | < (1 — 1.8m) ||Rs,cll2s + mV? - (D.)!8- == E|| 5 [|p < (E~ 18m) [Roel Pa! (De)? Ge tm # oly(k) +me Proof of Lemma F.6. Let us check how these matrices get updated. 
Rs © (1—m)Rse + mEva — mK jp) Kee—-1Eva — mEraK Rs © (1—m)Rse + mEva — mK jp) Kee—-1Eva — mEraK {Kea + & + 63 (using Erg = Kf p_ Kea — (V*ee-1)' V*ea) # (using Erg = Kf p_ Kea — Rae + (L—m)Rae +m (2K) 1QKeen —(V* era)! V*ie-1) +&4+Ca Rs, (L—m)Rse +m (K/K; —(vz)t vi) — mK} 1Kee-1 + 2mK pp; QeKee1 + mK {Kea — 2mK{,QeKea + &5 + 6 + mK {Kea — 2mK{,QeKea + &5 + 6 where error matrices Ep,[¢3] = 0, Ep, [G4] = 0, Ep, [¢s] = 0 and lélle < (nt + 13||Evall®) - poly(®) lalle.€slle < (mi + npI|Eval|r) - poly(*) 2 2 2 = E ; < —- pol EB lcsll B llcallr, B \Gs Ile =m “P y(k) The update on R3¢ now tells us (by applying Claim F.2) 3,0 lp 2 RY | < = 2m) (Real 12 + 2m |B Eva|| > —- —2— ||E E m Race Beall» ~ pegs Beall + nopoly() || W7 Wee — Vo" Viel [Beale + (mq||Ravlle + ni\|Ecalle + 3||Ecallp + a). poly(x) As for Ry and Rs, applying Claim F.3 and using the notation Ey = KI Ky — (va)t further simplify them to 2 # V7, we can Rae (1— m)Rae + &4 + Ga for |[€j\lp < (m||Ecalle + n2||Eealle) - poly(®) Bso — (1—m)Rse+ mE + & +6 for || lle < (m(|Eelle + n2||Ecallr) - poly(®) 65 As a result, mM ~ BRP? |< - 19m) [Rael + [Rael (mlBealle + mlBealle +) - poly(a) ~ 1 mw "ns Ol" <1 = 1.9m) [IRs + Belle ~ (m1 IBell e+ nallBealle + 2) - poly a) Since Obj = ε2, by applying Corollary E.3, we have _\2 Corollary E.3a : [WE Wee-1 — (V8)! Vieille < (=) (Dy? CE > ” ag Ce T ¢ E 2 5 CL Corollary ESD: [Beall = [Ke aKra ~ Vie)" Vial < (=) -(Do?te SE _ 2 Corollary E.3e: fy}. = |K Ky — (V2)" Vill? < (=) (Dye Se ae a Plugging these into the bounds above, and using 72 > m-poly(&) and j2 < < SOE and repeatedly using 2ab < az + b?, we have » _ (new) 3 ~\ 1 (mE 4. Ch, poly(k) E REP], < C= 1.8m) [Rocllf + nf poly(®) 4 (meq) (De) Get mT 2 e? 5 C oly (% RYE |< = 18m) [Rael + mS %e- (Dy®- Ce + mE De C, oly (%) new 2 16 L poly E ROP], < 1.8m) [Roll + (nT + mS): (Di)? Ge me m Lemma F.7. In the same setting as Lemma F.6, suppose the weaker Corollary E.4 holds for L > ¢ instead of Corollary E.3. Then, for every €< L, 2 : ~ az D Cr _Poly() ROM < (1-1. R30\[2, + 12 pol 4). (Dp) B Rae |, 5 A — 18m) Ravllp + m1 poly) + (m+ ae ) (Di Ge +m 2 azD CL poly(#) RM < (1 — 1.8m) [Rayll + m5 Tr - (Dy)®- B\Rae ||, 5 ( m1) Raell + 2 of e- (De) a +n 2 a’ iDu CL poly(A) ROM < (1-1. Rs ¢l[) + p17 - (D8. 2+ m2 ps< p= ( 81) |[Rsellp + 1 op (De) CG +m, E |Ro™ ° < (1 = 1.8m) |[Ra,r (2 +n} - poly(z) + (m=) -(Dt)i + POV) D, BL lp ONE at m 2 RO |? < 4 — 1.8m) Racll +7 ey (Dz)8 +m PO D, 40 || p= OTL A,L|| pa L L 71 m mn Rie) 2 < ( ~ 1.87 R- (2 47 2 . (D )16 47 poly(k) D, BL |p = OTL 5,L|lF eae L L 1 # _Poly() Proof. Proof is identical to Lemma F.6 but replacing the use of Corollary E.3 with Corollary E.4, 66 (F.4) # F.4 Loss Function Update For analysis purpose, let us denote by £ Loss<p(2; W,K)= (Gc) - 52 ojSum( F(x: W.K)))” j=2 £ Loss<+(: WK) © (G*(0) ~ )>ajSum(Fj(x; W,K))) jm2 OPT<,= E G*(a ; ;S Gi (a , <¢= EB, | (@@) — >) ajSum(G5(c)) j=2 Lemma F.8. Suppose the sampled set Z satisfies the event of Proposition C.2, Proposition C.8, Proposition C.7 (for €, < €?/100). Suppose for some parameter Ke > K +L +ke and % > Ke it satisfies 1 1 < _R = Ta "1 = ~ Die’ poly(®)’ '” ~ poly(e) 1 ~ Omin(Kee-1) > We’ Omin(Kea) = Kelle, Welle < &e, m2 < Suppose parameters are set to satisfy Definition A.4. Suppose the assumptions of Theorem E.1 hold for some L = ¢—1, then for every constant y > 1, B[Loss<)(Z; wire) | KC™)| —~— ly(K, B’ 1 < (1 — 0.99) Loss<o(Z;W,K) + (0.042? + PWR BY yy =)?OPT <r) = m 7 ~ Proof of Lemma F.8. 
Let us first focus on Sum(Fj(x; W, K)) = ||[Wj(o(Rj—19)-1(@;K)),.--) |)? and first consider only the movement of W. Recall from Fact F.1 that cw) Tew!) — (1 = m) (Wy) Wy +m (Vi) V5 + Ving; √ for some Ep[£;] = 0 and Ep|||§}||%] < poly(&;)/m. Therefore, √ Sum(F;j(2; W"™),K)) = (1 — m)Sum(Fj(2; W, K)) + mSum(F;(x;V*,K)) + Vméj1 (F-5) for some €j,1 = (o o(Rj-1Sj-1), ee )TE(o(Ryj_-15;-1), ...) satisfying [Ej] = Oand 1€j,1| < (poly(%j, By) \|x||? + || Si (a)||?)||&llp- Therefore, for every x, # E D # B[Loss<r(c; wire) k)] £ L £ ‘ =E (Ga) —(1—m) Yo a;Sum(F; (2; W,K)) —m > aj jSum(F; (a; V*,K)) + Yasvinga) | j=2 j=2 j=2 £ _ £ _ 2 £ (cr r) —(1— mm) >_ ajSum(F; (a; W,K)) — )=m So ojSum(F; (a; V* -K))) +mE sa j=2 j=2 j=2 @ @ £ ~ é (l- m)(G*(2) - Yo a;Sum(F; (x; W ‘K))) + m(G*( x)—m So Sum(F (a; V*, K)))" j=2 + η1 # poly(, B’) m 67 # By) + ——~— ——~— ly(K, B’ = (1—m)Loss<¢(z; W, K) + m Loss<¢(x; V*, K) + 1 POR BY) = = m Above, ® uses the fact that Ep|[é;,1] = 0 and the fact that €)1 and 1 are independent for j 4 j; and ® uses ((1— )a +b)? < (1 — n)a? + nb, as well as the bound on Ep|||§}||?] from Fact F.1. Applying expectation with respect to x ~ Z on both sides, we have Loss (new) ios —~— + poly(*, B’) E[Loss<e(2;W ,K)] < (1— m)Loss<e(Z; W, K) + mLoss<¢(Z; V*, K) + 7 ———— = = _ m # E D On the other hand, for the update in K; in every j < , we can apply ||2Q; — I||, < (&)?|[Ejj-illr from Claim F.3 and apply the bounds in (F.4) to derive that (using our lower bound assumption on A3,j,A4,j from Theorem E.1) √ ~ 1 VoL KS — Kylie < m|lBylle + m|Byall e - poly(®;) < —(me + me) -(Dj)S\/'T}- (F.6 ) # VoL (F.6 from Definition A.4, ) Putting this into Claim C.4 (for L = @), and using the gap assumption on ont from Definition A.4, we derive that Loss<p(Z; wirew) | Krew) ) Toc (new) a. ap 16-2 CL < (14 0.017,)Loss<p(Z;W JK) + m—— (Dei) PT a = ag_y Coy —~— 2 < (1+ 0.0171) Loss<¢(Z; wre) KK) + m5 Finally, we calculate that —~— ® 5 ® Loss<¢(Z; V*,K) < Loss<¢(D; V*, K) + 0.01le? < (1+ +) Loss<)(D: V*, K) + 0.02€? < < 7 < 8 1 2 2 < (1+ =)°OPT<¢ + 0.03€ (F.7) 7 < where © uses Proposition C.8 and y > 1 is a constant, @ uses Claim C.1, and ® uses Claim F.9 below. Combining all the inequalities we finish the proof. # F.4.1 Auxiliary Claim F.9. Suppose parameters are set to satisfy Definition A.4, and the assumptions of Theorem E.1 hold for some L = €—1. Then, for the V* = (V3,...,V%) that we constructed from (F.1), and suppose {aj}; satisfies the gap assumption from Definition A.4, it satisfies for every constant y > 1, e? 1 Loss<,(D; V*, K) < — + (1+ —)OPT<, = 100 7 = Proof. Recalling that F(a; W,K) = > opSum(Fi(2)) = S> ae || We(o(Re-1Se-1(2)), ---)I? £ £ # £ [Us 2 2 Using the conclusion that for every j < €, Ex.p [Us S4(@) _ 8,(x), < 5 def (D;)'8 (<) . op aj Oe from Corollary E.3d, one can carefully verify that (using an analogous proof to (E.11)) for every I<, [V3 (o(Ry-1U 1844 (x)),---)||? = |[VF(@(Ry-1S)-1(a)),-- |)? +§ 68 for some _ \2 j−1 ≤ Dj(Dj−1)18 αj−1 · CL Cj [(6)") < poly Fj, Bj, es(24))62_ < Dj(Dj-1) Since our definition of V™ satisfies (F.1), we also have for every j < ¢ || Vi (o(Rj-1Uj-1S%_,(2)),...) ||? = Sum(G%(2)) # || Vi Putting them together, and using the gap assumption on αj αj−1 from Definition A.4, CL e2 a~D e _ \2 Oy ajSum(F; (2; V*, K)) — ajSum(Gj (x a< Lye | Dj ( -)"( - ) : < : jm Qj-1 Cj ~ 100(1 +7) Finally, using Young’s inequality that t 2 * 1 pk La Loss<;(x; V*, K) < (1+ —) > avSum(Gi(x)) — G* (x) ~ Y b=2 2 L +(14+7) (>: aypSum(F;(x; V*, K)) — osu ite) =2 we finish the proof. 
F.5 Objective Decrease Direction: Stage 0° Theorem F.10. Suppose we are in stage 0°, meaning that A3,j = Aaj = A5,j = 0 for 7 = & and the trainable parameters are W1,...,We,Ki,...,Ke_1. Suppose it satisfies 2 e2 det Qe-1 Obj(Z;W,K ——_—— and Si( <7. Obj( )< (ooh) {., [I] 55 ( x)||3] i}, Suppose the sampled set Z satisfies the event of Proposition C.2, Proposition C.8, Proposition C.7 (for εs ≤ ε2/100). Suppose parameters are set to satisfy Definition A.4. Then, for every η2 < 1 and η1 ≤ η2 pave J) EObj(Z: wire) KM) < (1 0.7m) Obj(Z; W, K) + 2maz 4 And also we have E,~p|||$j(x)||?] < 2B; for every j < &. Proof of Theorem F.10. We first verily the prerequisites of many of the lemmas we need to invoke. Prerequisite 1. Using 6 ¢ > aa and Obj(Z; W,K) < ©”, we have aa and Obj(Z; W,K) < ©”, we have # Kell, || Welle < %e which is a prerequisite for Lemma F.4, Lemma F.6, Lemma F.8 that we need to invoke. Prerequisite 2. Applying Proposition C.7, we have _Proposition C.7, <eé 2 Pp Loss(Z; W, K) Loss(D; W, K) Since E,~p|||S;(x)||?] < 7; for all 7 < £, we can apply Claim C.1 and get 2 Claim C.1 and choice B’ SSS SS Loss(D; W, K) < 2¢ Loss(D; W, _Proposition C.7, <eé 2 Pp Loss(Z; W, K) Loss(D; W, K) < 2c? (F.8) ===============⇒ Loss(D; W, K) ≤ 3ε2 (F.9) 69 Next, consider a dummy loss function against only the first @— 1 layers é-1 . 2 : Lossgummy(D; W, K) # > (2 e;Sum(F; (2) - ajSum(G5(z))) | < 1.1Loss(D; W, K) + O(a7) < avD j=2 : < 4c? so in the remainder of the proof we can safely apply Theorem E.1 and Corollary E.3 for L = ¢—1. Note that this is also a prerequisite for Lemma F.8 with @ layers that we want to invoke. As a side note, we can use Corollary E.3d to derive Vi< 6 E [|S(x)I?] $28; - Prerequisite 3. Corollary E.3b tells us for every j < @, 2 Qh Kj) 1K j<Q) - WF), Wo < Tj(D;)" (=) = (F.10) 2 T4(D;)" (2) Cee 1 ~ T74(De-r)'* \ a3 J Cj (D5) 2 T4(D;)" (2) Cee 1 ~ T74(De-r)'* \ a3 J Cj (D5) ® uses the assumption « < Day ta Inequality @ holds when j = @—- 1 < 1 from our sufficiently large choice of Yp41, and ineuqliaty @ holds when Above, inequality ® uses the assumption « < Day ta Inequality @ holds when j = @—- 1 by using 7 = aa 7 < 1 from our sufficiently large choice of Yp41, and ineuqliaty @ holds when j < &—1 using the gap assumption on ay when j < @—1. JIT Note that the left hand side of (F.10) is identical to (since Kj,iQi = Kj,i(RiUi ∗ RiUi)) T * Tyyr* 2 || AK})_.Kj.B — C(W3,_.)"W)_D|| . for some well-conditioned sqaure matrices A, B, C, D with singular values between say) O(poly(k;, 7 (see Lemma B.6 and Lemma B.5). Therefore, combining the facts that (1) K],_,Kj< and (WF j-1) | jja1 are both of rank exactly kj, (2) ||Kj|| < &;, (3) minimal singular value omin(W% ;) > 1/«, we must ju have O(poly(k;, L))] (WF j-1) | W5 a 1 and Omin(Kj<) 2 -—————_~ min ( ja) = Rj . poly(kj,«, L) Omin(K; ;-1) > ——>—_- min( dd 1) = Ry . poly(kj,«, L) as otherwise this will contract to (F.10). This lower bound on the minimum singular value is a prerequisite for Lemma F.4, Lemma F.6 that we need to invoke. Prerequisite 4. Using Corollary E.3b, we also have for every j < ¢ (see the calculation in (F.4)) 2 * \T € 5 Ce jal = Kj; Kj (Vij) Ville < (=) Yj (Dj) J J e (oat _ 1;(Dj)® Ce 1 ~\ ay Yea(De-1)® Cj ~ (D;)8 which is a prerequisite for Lemma F.4 that we need to invoke. Main Proof Begins. Now we are fully prepared and can begin the proof. In the language of this section, our objective Obj(Z; W, K) = Loss(Z;W, K) + 5> Qsg [Raj + ay ||Rag ll. + sy [Rs, 2 p+ 6,5 Ro.) W, K) = Loss(Z;W, K) + 5> Qsg [Raj + ay ||Rag ll. 
+ sy j<t + > 6,5 (Rr,j) ist [Rs, 2 p+ 6,5 # F + λ6,jR6,j 70 We can apply Lemma F.4 to bound the decrease of Re; for 7 < ¢ and R7j for 7 < ¢, apply Lemma F’.6 to bound the decrease of R3j,Ra,j,R5,j for 7 < ¢, and apply Lemma F.8 to bound the decrease of Loss(Z ;W,K) (with the choice OPT <p < 2a? 41). By combining all the lemmas, we have (using 72 = m/poly(&) and sufficiently small choice of 7) Obj(Z; Wwinew) | Kr) D , poly(k, B’) m IAS (1 _ 0.9m) Obj(Z; Ww, K) + 71 (Esample + . 92 Cc, + 1 | -2(p,)'= ond (# T; Z #) (Dj) C; y+m > do, jpoly(kj, L,&) + 2mazy ist j<t ® —, oly(K, B’ < er _ 0.8m )Obj(Z; Ww, K) T ™(Esample + poly(hsB) +m > A6,j poly (ky, L, kK) + mops jsé () — < (1 — 0.7m) Obj(Z; W, K) + 2m. 07,; 2 Above, inequality © uses our parameter choices thatA3,; = T sr » Aaj = ( DT , and A545 = Jd Jj . J J 2 Say Inequality @ uses our choices of Y; (see Definition A.4). Inequality ® uses m > poy P’) ja from Definition A.4, ¢, < 0.0le?, and oj = from Definition A.4. 2 2 Ee es mS polythy Lr) F.6 Objective Decrease Direction: Stage 0’ Theorem F.11. Suppose we are in stage (’, meaning that \3,; = Aaj = A5j = 0 for 7 > € and the trainable parameters are W,,...,We,Kyi,..., Ke. Suppose it satisfies (wit) <P oH WH = (HR) mt (2sscoitier} Suppose the sampled set Z satisfies the event of Proposition C.2, Proposition C.8, Proposition C.7 (for εs ≤ ε2/100). Suppose parameters are set to satisfy Definition A.4. Then, for every η2 < 1 and η1 ≤ η2 pave J) EObj(Z: wire) KM) < (1 0.7m) Obj(Z; W, K) + 2maz 4 2 And also we have E,~p[||5;(2)||?] < 2B; for every j < ¢. Furthermore, if e? < (war) then we also have Ex~p|||Se(x)||?] < 2Be. Proof of Theorem F.11. The proof is analogous to Theorem F.10 but with several changes. Prerequisite 1. For analogous reasons, we have # Kell, || Welle < %e which is a prerequisite for Lemma F.4, Lemma F.7, Lemma F.8 that we need to invoke. Prerequisite 2. This time, we have e? < aware This means the weaker assumption of Corollary E.4 has been satisfied for L = @, and as a result Theorem E.1 and Corollary E.3 hold with L = @—1. This is a prerequisite for Lemma F.8 with @ layers that we want to invoke. Note 71 in particular, Corollary E.3d implies Wi<b: EB IIS;(0)IP]< 2B, 2 Note also, if e? < (wit) , then Corollary E.3 holds with L = @, so we can invoke Corollary E.3e to derive the above bound for j = @. Enlil Se(@)I7] < 2Be Prerequisite 3. Again using Corollary E.3b for L = ¢— 1, we can derive for all j < 1 1 %poly(hy, mL) °Md Pmin( Kj) 2 ij pr Ry Amin (Kjj—1) 2 ~ Ky - poly(k;, «, L) This time, one can also use Corollary E.4b with L = ¢ to derive that the above holds also for j = £. This is a prerequisite for Lemma F.4, Lemma F.7 that we need to invoke. Prerequisite 4. Using Corollary E.3b, we also have for every j < @ (see the calculation in (F.4)) 1 (Dj)13 * * 1 \Ejall = IK}j-1Kjo — (V3jp-1)' Viale < (D8 This time, one can also use Corollary E.4b with L = ¢ to derive that the above holds also for j = £. Main Proof Begins. Now we are fully prepared and can begin the proof. In the language of this section, our objective Main Proof Begins. Now we are fully prepared and can begin the proof. 
In the language of this section, our objective Obj(Z; W, K) = Loss(Z; W, K) + > Qsg Rail + Aggy WRagllp +s, Rsyllp + Ao Ro,j) Obj(Z; W, K) = Loss(Z; W, K) + > Qsg Rail + Aggy WRagllp +s, Rsyllp + Ao Ro,j) j<t + So r05 (Rr) jst We can apply Lemma F.4 to bound the decrease of Rgj,R7j for j < ¢, apply Lemma F.7 to bound the decrease of R3,j,Raj,R5,; for j < @, and apply Lemma F.8 to bound the decrease of Loss(Z ;W,K) (with the choice OPT <p < 2a? 41). By combining all the lemmas, we have (using n2 =m /poly(&) and sufficiently small choice of 7) # E D Obj(Z; We”), Knew) , # oly(K, B’ polv(h ® —, < (1 _ 0.9m )Obj(Z; Ww, Kk) + 71 (Esample + λ6,jpoly(kj, L, κ) + 2η1α2 > ) + η1 # jsé 1 YY 1% 4Ce +m (e+ Tr + n)é 2(Dp)* + md (=, + 7 + 7 (ay)? D¢(D; 4 G 1% (=, + 7 oly(*, B’ ~ ® — oly(*, B’ < (1 — 0.9m) Obj(Z; W, K) + m(Esample + ~ +m S- Aegpoly(kj, L,) +2ma3,4 jst # jst 1 Le Y Yj 2 19-2 4Ce +m (e+ Te + cai 2(Dp)* + mE (a, ta + 7 (De) T7(Dj) CG # (a, ta oly(K, B poly B) m ® —, oly(K, B < (1 —0.8m)Obj(Z; W, K) + m(Csample + poly B) y+m > Ag,jpoly(ky, L, «) + 2maF 44 jst m ® — < (1 — 0.7m) Obj(Z; W, K) + 2m07,1 72 41 2 2 j ality sac 7 > . 5 . 5 © - = Above, inequality © uses our parameter choices that.3,; (DyyTy? Maj (Dy)PTy? and A5,; = a : : . . a wD iz: Inequality @ uses our assumption that « > Dir Inequality ® uses our choices o: j YT; (see Definition A.4). Inequality © uses m > po eB) from Definition A.4, c, < 0.0le?, and oj = from Definition A.4, 2 ee 7 S poly(ky Ln) # G Extension to Classification 1 Let us assume without loss of generality that Var[G*(«)] = Te) for some sufficiently large constant C_ > 1. We have the following proposition that relates the /2 and cross entropy losses. (Proof see Appendix G.2.) Proposition G.1. For every function F (x) and ε ≥ 0, we have 1. If F (x) is a polynomial of degree 2L and E(x0,x)∼D CE (Y (x0, x), v(x0 + F (x))) ≤ ε for some there v ≥ 0, then (F(a) — G*(a)))? = O(e3(2")?<?) avD 2. If Ex~p (F(a) — G*(x)))? < e2 and v > 0, then E _CE(Â¥(ao,x),v(ao + F(ax))) < O (v= + ve”) (x0,a)~D v At a high level, when setting v = 1 ε , Proposition G.1 implies, up to small factors such as c3(2L) and log(1/ε), it satisfies fy-loss = e? <> cross-entropy loss = ¢ Therefore, applying SGD on the @2 loss (like we do in this paper) should behave very similarly to applying SGD on the cross-entropy loss. Of course, to turn this into an actual rigorous proof, there are subtleties. Most notably, we cannot naively convert back and forth between cross-entropy and £2 losses for every SGD step, since doing so we losing a multiplicative factor per step, killing the objective decrease we obtain. Also, one has to deal with truncated activation vs. quadratic activation. In the next subsection, we sketch perhaps the simplest possible way to prove our classification theorem by reducing its proof to that of our @2 regression theorem. # G.1 Detail Sketch: Reduce the Proof to Regression Let us use the same parameters in Definition A.4 with minor modifications: e additionally require one log(1/e) factor in the gap assumption See 48 e additionally require one 1/e factor in the over-parameterization m, and e additionally require one poly(d) factor in the sample complexity N. 48We need this log factor because there is a logarithmic factor loss when translating between cross-entropy and the 2 loss (see Lemma G.1). 
This log factor prevents us from working with extremely small ¢ > 0, and therefore we have required ¢ > qe in the statement of Theorem 4 73 Recall from Theorem F.10 and Theorem F.11 that the main technical statement for the con- vergence in the regression case was to construct some W("™), K("©“) satisfying BObj(Z; WK") <(1- 0.7m )Obj(Z; W, K) + maz, BObj(Z; WK") <(1- 0.7m )Obj(Z; W, K) + maz, . # xE — We show that the same construction W("™), K("=“) also satisfies, denoting by ¢ = Obj” (2; W,Kks), — —_ log?(1/e EOb} (2; W°™"), K™) < (1—0.7m)Obj (Z;W,K) +m 0 ee 2), saz, (Gl) c This means the objective can sufficiently decrease at least until ¢ ag, - log wt (or to arbitrarily small when ¢ = L). The rest of the proof will simplify follow from here. Quick Observation. Let us assume without loss of generality that v = log(1/ε) 100ε Using an analogous argument to Proposition C.7 and Claim C.1, we also have always holds.49 —~—,xE *xE Obj (D;W,K) <2e and Obj"(D;W,K) < 3e . Applying Lemma G.1, we immediately know Obj(D;W,K) < O(c3(2")%e?) for the original 2 objective. Therefore, up to a small factor c3(2”)?, the old inequality Obj(D; W, K) < <? remains true. This ensures that we can still apply many of the technical lemmas (especially the critical Lemma E.1 and the regularizer update Lemma F.6). Going back to (G.1). In order to show sufficient objective value decrease in (G.1), in principle one needs to look at loss function decrease as well as regularizer decrease. This is what we did in the proofs of Theorem F.10 and Theorem F.11 for the regression case. Now for classification, the regularizer decrease remains the same as before since we are using the same regularizer. The only technical lemma that requires non-trivial changes is Lemma F.8 which talks about loss function decrease from W, K to W(new), K(new). As before, let us write for notational simplicity £ Feo(a; W,K) “ Yo a;Sum(F (2; W,K)) j=2 —~— # Loss xE . ~ ~)(279, 7; W, K) © CE(Y (29,2), v(x + F'<o(a; W,K))) One can show that the following holds (proved in Appendix G.1.1): Lemma G.2 (classification variant of Lemma F.8). —~ xE ELoss</(Z: wren) KK (new) ) —~— x 2 € 2. R. / < (1 —m)Loss<,(2;W,K) +m (Core Jopter +016 +! poe.) m Combining this with the regularizer decrease lemmas, we arrive at (G.1). — les(t/eo) *°This can be done by setting v T00co where € is the current target error in Algorithm 1 Since ¢ and eo are up to a factor of at most 2, the equation v = logis) holds up to a constant factor. Also, whenever ¢o shrinks by a factor of 2 in Algorithm 1, we also increase v accordingly. This is okay, since it increases the objective value Obj(Z; W, K) by more than a constant factor. 74 # G.1.1 Proof of Lemma G.2 Sketched proof of Lemma G.2. Let us rewrite Freo(x WO) KP) = (1 — m) Fee(2; WK) + mH (x) + Q(a) WO) KP) = (1 — m) Fee(2; WK) + mH (x) + Q(a) (G.2) aos Peer; WO), KO) ~ Foo, WO), K) m or Q(x) = Fee(a; WO), K) — 9 Fee(a; V*,K) — (1 —m) Fee(a; W,K) or H(x) + Fee(a; V*,K) We make two observations from here. e First, we can calculate the f2 loss of the auxilary function H(x). The original proof of Lemma F.8 can be modified to show the following (proof in Appendix G.1.2) Claim G.3. E,vp(G*(x) — H(2))? < 0.00001 a5 + 6OPT <p. 
Using Lemma G.1, and our choice of v = 100 log2(1/ε) entropy loss: ε , we can connect this back to the cross CE (Y (ao, x), v(ap + H(2x))) < O(tog*(A/e)) (a0,2)~D E OPT <; + 0.092 Through a similar treatment to Proposition C.8 we can also translate this to the training set ( 5g CE (Â¥ (x0, 2), u(t + H(x))) < los") opr, + 0.le (G.3) • Second, recall from (F.5) in the original proof of Lemma F.8 that we have ((Q(@))?] = E (Peels WO, K) — m Peels V",K) = (1 = m)Pee(w; W,K)) D £ ~ 2 ly(K, B’) poly (%, =E(Soajgja) <mPR (G.4) j=2 # E D as well as ED[Q(x)] = 0. We are now ready to go back to (G.2), and apply convexity and the Lipscthiz smoothness of the cross-entropy loss function to derive: # E D —~— xE — xE Loss <)(2; W°™"), K°"™) < (1—m)Losse,(Z; W, K) +m on ZICEW (0,22), o(e0 + H(a)) = a r0,0)~ a n)\2 +0? B(Q(e)) Plugging (G.3) and (G.4) into the above formula, we finish the proof. # G.1.2 Proof of Claim G.3 Proof of Claim G.3. Let us write OK (on) a)\2 2 Bolo: (new) (new)) FAL (me (new) EG (0) — HO)? s Ty E, (Fevler Wee, KOM) — Fee(a; WO"), K)) ok ral * 2 +2 E (G (x) — Fep(a:V ‘K)) 2 75 For the first term, the same analysis of Claim C.4 gives ~ ~ 2 E (Feel: wires) (MEW) Fe (as Wine) K)) mnZe 7 ~ a2 ajpoly(K—1, By_,)||K°™) — K||7- < (")” 900000 1oe24/=) ≤ α2 where the last inequality has used the upper bound on Ke") —Kj||r for j < ¢— see (F.6) in the original proof of Lemma F.8 — as well as the gap assumption on ao (with an additional log(1/e) factor). For the second term, the original proof of Lemma F.8 — specifically (F.7) — already gives ) c 1000000 log?(1/e) ~ 2 ~~ 1 E (Gc) - Fao(x; V*,K)) = Loss<,(Z:V*,K) < (1+ —)?OPT <r 4 aw. ~ ~ y a where the additional log(1/e) factor comes from the gap assumption on a: ~ ~~ E (Gc) - Fao(x; V*,K)) = Loss<,(Z:V*,K) < (1+ —)?OPT <r 4 aw. ~ ~ y a Putting them together, and applying a similar treatment to Proposition C.7 to go from the training set Z to the population D, we have the desired bound. # G.2 Proof of Proposition G.1 Proof of Proposition G.1. 1. Suppose by way of contradiction that wep F(x) — G*(x))? =2 (c3(2")?e?) Let us recall a simple probability fact. Given any random variable X ≥ 0, it satisfies50 1 9 (E[X7])? Pr[X > = /E[X2]] > —~—— TIX > 5 VEIN) 2 6 pay Let us plug in X = |F (x) — G*(x)|, so by the hyper-contractivity Property 5.3, with probability at least Q (=tn) over x ~ D, c3(2L) F(a) — G*(a)| = Q(c3(2")e) Also by the hyper-contractivity Property 5.3 and Markov’s inequality, with probability at least 1 − O c3(2L) G* (x) < E[G*(x)| + O(c3(2”)) - \/Var[G*(a)] < E[G*(x)] +1 When the above two events over x both take place— this happens with probability Zon) we further have with probability at least Q(c3(2”)e) over xo, it satisfies sgn(ao + F(x)) 4 sgn(xp + G*(x)) = Y(%0,x). This implies E(ao2).p CE (Y (xo, x), v(ao + F(x))) > € using the definition of cross entropy, giving a contradiction. # 4a} and p= Pr [x > yDV EX" |] < ja? 5a]. Then, we have 50The proof is rather simple. Denote by E[X 2] = a2 and let E = {X ≥ 1 a? = E[X?] < 5 (1 p)a® + pELX? | €] < 4a? + pV B[X* |] = La? + yDV EX" |] < ja? + ypvEIX 76 2. By the Lipschitz continuity of the cross-entropy loss, we have that CE(Â¥ (x, 2), (wo + F(x))) < CE(Y (20, 2), vn + G*(x))) + O (v|G*(a) — F(a))) < O(1 + v/G*(x) — F(2))) Now, for a fixed x, we know that if 9 > —G*(x) + |G*(x) — F(2)| 4 10282 or 9 < —G*(x) |G*(x) — F(2)| - 102s then CE (Y (xo, z), v(xo + F(x))) < 2. This implies vi? 
CE (Y (xo, x), v(vo + F'(x))) x0 <> +Pr [ee G*(x) + (ict) F(x)| 4 rs") x O(1 + ulG*(x) — F(«)|) e <- (ioe) F(2)| 4 we") x O(1+v|G*(x) — F(z)]) e < 7+ (lore x |G*(x) — F(x)| + o|G* (x) — F(x)/? 4 a) VU U Taking expectation over x we have CE (Y (x0, x), v(vo + F(2))) (x0,2)~D 1 log l 2, <=+0 (108 E |G"(x) — F(a)|+0_ E |G*(x) — F(a)? + a) < O(ve2 +28") . VU oiled eed U VU ≤ # H Lower Bounds for Kernels, Feature Mappings and Two-Layer Networks # H.1 Lower Bound: Kernel Methods and Feature Mappings This subsection is a direct corollary of [3] with simple modifications. We consider the following L-layer target network as a separating hard instance for any kernel method. Let us choose k = 1 with each Wi o: WwW; 1€ R?¢ sampled i.i.d. uniformly at random from Spi-1, and other Wi; = 1. Here, the set S, is given by: 1 _ 4 R? | Jlwlly = p, w; Sp {ve E | |wllo =p, w: € {o. —}} . We assume input x follows from the d-dimensional standard Gaussian distribution. Recall Theorem 1 says that, for every d and L = o(log log d), under appropriate gap assumptions for a1,...,az, for every € > 0, the neural network defined in our paper requires only poly(d/<) time and samples to learn this target function G*(x) up to accuracy e. In contrast, we show the following theorem of the sample complexity lower bound for kernel methods: Theorem H.1 (kernel lower bound). For every d>1, everyL< 108 ated , every ay < 0.1, every (Mercer) kernels K : R&*¢ + R, and N < 0 (oi), for every N iid. samples x, ..., aN) N(0,1), the following holds for at least 99% of the target functions G*(x) in the aforementioned class (over the choice in S,). For all kernel regression functions R(x) = nel] K(a,2™) - vn ~ 77 where weights vi ∈ R can depend on α1, · · · , αL, x(1), . . . , x(N ), K and the training labels {y(1), · · · , y(N )}, it must suffer population risk E G* (x) — R(z))? = 2(a? lo oN Epes) (a) — R(2)) (a7 log —9h+2 (d)) - Remark H.2. Let us compare this to our positive result in Theorem 1 for L = o(loglogd). Recall from Section 3 that az can be as large as for instance d~°-991 in order for Theorem 1 to hold. When this holds, neural network achieves for instance 1/d! error with poly(d) samples and time complexity. In contrast, Theorem H.1 says, unless there are more than q000 (5171) = d“()) samples, no kernel method can achieve a regression error of even 1/d°°!. Sketch proof of Theorem H.1. The proof is almost a direct application of [3], and the main difference is that we have Gaussian input distribution here (in order to match the upper bound), and in [3] the input distribution is uniform over {−1, 1}d. We sketch the main ideas below. First, randomly sample |x;| for each coordinate of x, then we have that x; = |x;\T; where each 7 iid. uniformly on {—1,1}. The target function G*(x) can be re-written as G*(x) = G*(r) for T = (Ti)iefay € {-1, 1}4, where G*(r) is a degree p= 2/7! polynomial over 7, of the form: G*(r) =az(w,T)? + G(r) where (for a ◦ b being the coordinate product of two vectors) w= W390|2| and deg(G*(r)) <p-l For every function f , let us write the Fourier Boolean decomposition of f : f (τ ) = λS τj S⊂[d] j∈S and for any fixed w, write the decomposition of G (7): G)= DMs G)= DMs SCld| jes Let us denote the set of p non-zero coordinates of W3, as Sw. Using basic Fourier analy- sis of boolean variables, we must have that conditioning on the > 0.999 probability event that gl Thies, |i] = (log®? a) 2" it satisfies gl |i] = (log®? a) 2" ' |Xs,,| = We it satisfies 1\? # i∈Sw ' 1\? 1\? 
0.9 4-2" —2h |Xs,,| = We QL Il jai] > 7 ay (log®” d) >azlog~ (d) . i€Sw Moreover, since deg(G*(r)) < p—1, we must have \ = 0 for any other S 4 S,, with |S| = p. This implies that for any function f(7) with f(r) = > Xs Il Tj and E (1) — G(r) = O(az, log-2"*" (d)) ; Sc{d| jes it must satisfy NB, = Aa? log?" ee (d)) > > rz = O(az, log?" (d)) SC{d,|S|=p.SASw (d)) ~ 2 Finally, using E,. yo (G*(x) — A(x))? = |2| Er (Ric oT) — G(r) , we have with proba- 78 bility at least 0.999 over the choice of |x|, it holds that L+2 (d)) . E (Ric oT) — G(r))” = O(a? log? From here, we can select f(7) = &(|x|o7). The rest of the proof is a direct application of |3, Lemma E.2] (as the input 7 is now uniform over the Boolean cube {—1,1}%). (The precise argument also uses the observation that if for > 0.999 fraction of w, event E,,(x) holds for > 0.999 fraction of x, then there is an x such that €,,(x) holds for > 0.997 fraction of w.) For similar reason, we also have the number of features lower bound for linear regression over feature mappings: Theorem H.3 (feature mapping lower bound). For every d > 1, every L < logtoad every d > 0, every ay < 0.1, every D < qa00 (5174) , and every feature mapping ¢: R4 > R?, the following holds for at least 99% of the target functions G*(x) in the aforementioned class (over the choice in S)). For all linear regression functions B(x) = w' d(2), where weights w ∈ RD can depend on α1, · · · , αL and φ, it must suffer population risk Ko) RoI] 2 — 2 —2h+2 Boy lO ~ BIB = 2 (a3 los”) . Remark H.4. In the same setting as Remark H.2, we see that neural network achieves for in- stance 1/d100 regression error with poly(d) time complexity, but to achieve even just 1/d0.01 error, Theorem H.3 says that any linear regression over feature mappings must use at least D = dω(1) features. This usually needs Ω(D) = dω(1) time complexity.51 # H.2 Lower Bound: Certain Two-Layer Polynomial Neural Networks We also give a preliminary result separating our positive result (for L-layer quadratic DenseNet) from two-layer neural networks with polynomial activations (of degree 2L). The lower bound relies on the following technical lemma which holds for some absolute constant C > 1: Lemma H.5. For 1 ≤ d1 ≤ d, consider inputs (x, y) where x ∈ Rd1 follows from N (0, Id1×d1) and y ∈ Rd−d1 follows from an arbitrary distribution independent of x. We have that for every p ≥ 1, 4\P e for every function f(x,y) = (4H) +4(x,y) where g(x,y) is a polynomial and its degree over x is at most 4p—1, and © for every function h(a,y) = Wh, © for every function h(a,y) = Wh, aiGi((wi, (w, x,y) + b:)) with r = &(di/p)? and each 6; is an arbitrary polynomial of maximum degree 2p, it must satisfy Ex,y(h(x, y) − f (x, y))2 ≥ 1 Before we prove Lemma H.5 in Section H.2.1, let us quickly point out how it gives our lower bound theorem. We can for instance consider target functions with ky = d, kg =--- =k, = 1, W3, = Lixa and Wi, Why, Wis = (2y.-++ dy), and other W7, =1 for j > 2. 51One might argue that feature mapping can be implemented to run faster than O(D) time. However, those algorithms are very complicated and may require a lot of work to design. It can be unfair to compare to them for a “silly” reason. One can for instance cheat by defining an infinitely-large feature mapping where each feature corresponds to a different neural network; then, one can train a neural network and just set the weight of the feature mapping corresponding to the final network to be 1. 
Therefore, we would tend to assume that a linear regression over feature mapping requires at least Ω(D) running time to implement, where D is the total number of features. 79 For such target functions, when L = o(log log d), our positive result Theorem 1 shows that the (hierarchical) DenseNet learner considered in our paper only need poly(d/ε) time and sample complexity to learn it to an arbitrary ε > 0 error (where the degree of the poly(d/ε) does not depend on L). 4\ On the other hand, since the aforementioned target G* (x) can be written in the form az (4H) g(x) for some g(a) of degree at most 24 — 1, Lemma H.5 directly implies the following: 2 Theorem H.6. For any two-layer neural network of form h(x) = Yoj_4 aiGi((wi, (@, with r< a and each Gi is any polynomial of maximum degree 2&1, we have that S1(@)) + bi), 2 a E_ (h(x) —G*(2)? > — . Egy tl OP > sos (Since G; is degree 24-! over Si(a), the final degree of h(x) is 2” in a; this is the same as our L-layer DenseNet in the positive result.) To compare this with the upper bound, let us recall again (see Section 3) that when L = o(log log d), parameter αL can be as large as for instance d−0.001 in order for Theorem 1 to hold. When this holds, neural network achieves for instance 1/d100 error with poly(d) samples and time In contrast, Theorem H.1 says, unless there are more than d2Ω(L) = dω(1) neurons, complexity. the two-layer polynomial network cannot achieve regression error of even 1/d0.01. To conclude, the hierarchical neural network can learn this function class more efficiently. Finally, we also remark here after some simple modifications to Lemma H.5, we can also obtain the following theorem when ky = kg = --- = ky = 1, W7,,Wio = (+. ney s+) and other Wi, =1. e . e Theorem H.7. For every function of form h(x) = S~?_, a is any polynomial of maximum degree 2", we have e . e ~, . o(L Theorem H.7. For every function of form h(x) = S~?_, a;o1((w;,x +b;)) with r < a? and each a is any polynomial of maximum degree 2", we have 2 5 * (ae))2 OL ravio POS gots» # H.2.1 Proof of Lemma H.5 Proof of Lemma H.5. Suppose by way of contradiction that for some sufficiently large constant C > 1, E x,y (h(x, y) − f (x, y))2 ≤ 1 pC·p This implies that eo 4 ! (: (x,y) a fl2-9)) < or (H.1) aly y p We break x into p parts: x = (x(1), x(2), · · · , x(p)) where each x(j) ∈ Rd1/p. We also decompose wi into (w(1) . . accordingly. We can write wll4\? . wll4\? . gd) |4\? (att) - (Saat 4) (H.2) dy dy 80 + Since 0; is of degree at most 2p, we can write for some coefficients aj,q: # q . F -\\ 2 F Eajoi( (wi, (a,a*,y) + bi)) = > Aig Yo, w) + ((2) wl) (H.3) q9¢(2p] Jelp] Let us now go back to (H.1). We know that Ey f (x, y) and Ey h(x, y) are both polynomials over x ∈ Rd1 with maximum degree 4p. ( Vy jo 14)”. jell jx |I4. e The only 4p-degree monomials of E, f(a, y) come from (H.2) which is ae ( Vy jo 14)”. Among them, the only ones with homogeneous degree 4 for each x) is WF II jell jx |I4. # ae ( WF (s; eb ( (a9) 2 e The only 4p-degree monomials of Ey h(a, y) come from (H.3) which is a;,2p (s; eb ( (a9) 2 , wl )) Among them, the only ones with homogeneous degree 4 for each x) can be written as a a@A))2 ay\y2 (yp Tet) (((2) Wi; )) : Applying the degree-preserving Property 5.4 for Gaussian polynomials: . — 2 Set TT (2) ul)? = TP et) < ep - i jelp] Jelp] # Tyepy (2)? Let us denote Tyepy (2)? 
wl) = (w £) where Z, @; € R(“/)” are given as: wl) = (w £) where Z, @; € R(“/)” are given as: T= Il (2) and Ww; = Il fw!)}, Je] ity ip€[di/p] J€lP] ity i1,··· ,ip∈[d1/p] Under this notation, we have TE ied = 0203, Sat TY (29) of)? =F” ato wy" FT a j€[P] i je€(p] This implies that for M = Â¥>; ait; ( (w)' € R(41/P)? x (d1/P)” we have ait; ( (w)' € R(41/P)? x (d1/P)” we have xT aT) (d,)°P # (d,)°P Seg = Misi} #4) xT aT) _ (d,)°P Cx (="(M—Di") = Seg By the special structure of M where M(j, i) (i2,),--- (ij) = Misi} 42.44}, {aj a} does not depend Te h2)9 #4) D5 on the order of (ij, #;) (since each @;(@;)' has this enonerty), we further know that jx — aay = (ar fp x (aa fp? F piC-10)p This implies that the rank r of M must satisfy r = Ω((d1/p)p) using [3, Lemma E.2]. # I Mathematical Preliminaries # I.1 Concentration of Gaussian Polynomials Lemma I.1. Suppose f : Rm → R is a degree q homogenous polynomial, and let C(f ) be the sum of squares of all the monomial coefficients of f . Suppose g ∼ N (0, I) is standard Gaussian, then for every ε ∈ (0, 1 [IF seve] < O(a) -e'/4 dion 81 # 2p . Proof. Recall from the anti-concentration of Gaussian polynomial (see Lemma I.2a) [If(@) — t| < eV/Var{F(a)]] < O(g) -=Â¥/4 on DD) Next, one can verify when f is degree-g homogenous for gq > 1, we have Var[f(g)] > C(f). This can be seen as follows, first, we write Var[f(g)] = E[(f(g) —E f(g))7]. Next, we rewrite the polynomia. f(g) -E f(g) in the Hermite basis of g. For instance, g}g3 is replaced with (H5(g1) +--+ )(H2(go) + --) where H;,(a) is the (probabilists’) k-th order Hermite polynomial and the “-- order terms. This transformation does not affect the coefficients of the highest degree monomials. (For instance, the coefficient in front of Hs(g1)H2(g2) is the same as the coefficient in front o G93. By the orthogonality of Hermite polynomials with respect to the Gaussian distribution, we immediately have E[(f(g) — E f(g))?] > C(f). ” hides lower- Lemma I.2. Let f : Rm → R be a degree q polynomial. (a) Anti-concentration (see e.g. [58, Eq. (1)]): for every t ∈ R and ε ∈ (0, 1), [If )-tl)<e Var[f(9)]| < O(q)-e'/4 onto (b) Hypercontractivity concentration (see e.g. [66, Thm 1.9]): there exists constant R > 0 so that Pr, yllf() EU > Se? e-(ewartran) λ2 R·Var[f (g)] Pr g∼N (0,I) 1.2. Random Initialization k Lemma B.6. Let Ry € R( Oy) xke be a random matrix such that each entry then with probability at least 1— p, Re * Re has singular values between k Lemma B.6. Let Ry € R( Oy) xke be a random matrix such that each entry is i.i.d. from N (0, i), £ then with probability at least 1— p, Re * Re has singular values between loans: O(l+ z@ log f)), and ||Relly < O(1 + LEA), and ||Relly < O(1 + LEA), As a result, with probability at least 0. 99, it satisfies for all (= 2,3,...,L, the square matrices Ry * Re have singular values between [ o(1+ teat Eke)) )] and |[Re|l2 < O14 Â¥ viet), Proof. Let us drop the subscript @ for simplicity, and denote by m = (*31). Consider any unit vector u € R™. Define v to (any) unit vector orthogonal to all the rows of R except its i-th row. We have Jul (R* R)v| = |u,(Ri, * Ri.) = Jui |S ap gRipRiqv, Psa Now, we have that v is independent of the randomness of R;,;, and therefore, by anti-concentration of Gaussian homogenous polynomials (see Lemma I.1), Pp R;,Riqui?| < i @) . i < O( 1/2) Br Ap,qRipRiqpq) < €|lv zl <= € : PS Therefore, given any fixed i, with probability at least 1 − O(ε1/2), it satisfies that for every unit vector u, ju" (R*R)v®| > ltl . 
82 By union bound, with probability at least 1 − O(kε1/2), the above holds for all i and all unit vectors u. Since maxi |ui| ≥ 1 k2 with probability at least 1 − O(kε1/2). As for the upper bound, we can do a crude calculation by using |/R * Rl2 < ||[R * Rl r. 2 2 p2 p2 2 |R*Rilp = » ap. gRipRig = » » Rip ipsa t \pelk] 2 By concentration of chi-square distribution (and union bound), we know that with probability at least 1 − p, the above summation is at most O(k2) · ( 1 Finally, the bound on ||R||z: can be derived from any asymptotic bound for the maximum singular value of Gaussian random matrix: Pri||kKR||2 > tk] < e MPR) for every t > O(1). 1.3. Property on Symmetric Tensor Lemma B.5, Jf U € R?*? is unitary and R € R**? for s > (51), pt1 matrit Q € RO2)*(2") 50 that RU « RU = (R*«R)Q. then there exists some unitary 2 )×(p+1 2 ) so that RU ∗ RU = (R ∗ R) Q. Proof of Lemma B.5. For an arbitrary vector w € R®, let us denote by w!(R* R) = (i,j )1<i<j<p- Let g € N(0,Ipxp) be a Gaussian random vector so we have: √ w'o(Rg) = > w;(Rig)” = > wi(Ri * Ri,g* g) = > big? +V2 > bij G9 ie{s| ils] ie[p| 1<icj<p Therefore, 2 2 [ (wT o(Rg)) | = [(Siets biG? + V2 i<icj<p big) | =2 Vixicj<p be, +2 Vi<icj<p b;,ib5,5 +3 Diep| be, 2 = WV icici<p be, + (Suet bi) + 2 View] be, : = WV icici<p be, On the other hand, we have E [w'o(Rg)] = Viel Var [wT o(Ry)| i∈[p] bi,i. Therefore, we have Var [wT o(Ry)| = 2\\w'(R«R)I|I3 . Note that Var[w'o(Rg)] = Var[w'o(RUg)| for a unitary matrix U, therefore we conclude that Jw! (RU * RU)|3 = ||w! (R* R)|)3 oe . . . . + : 1 for any vector w. Which implies that there exists some unitary matrix Q € RC RU «RU =(R«R)Q. 2 )×(p+1 # so that 1.4 Properties On Homogeneous Polynomials ; . > j - = Given any degree-g homogenous polynomial f(a) = defined ; . > j - = oli © Given any degree-g homogenous polynomial f(a) = Soyenn: \ITllacq OF Il jeln] &> recall we have defined Cf) Yo aj TEN”: |[Z i= When it is clear from the context, we also denote C(f ) = Cx(f ). 83 Definition I.3. Given f : Rn → R and vector y ∈ Rn, define the directional derivative (∆yf )(x) def= f (x + y) − f (x) and given vectors y(1), . . . , y(q) ∈ Rn, define ∆y(1),...,y(q)f = ∆y(1)∆y(2) . . . ∆y(q). Lemma I.4. Suppose f : Rn → R is a degree-q homogeneous polynomial. Then, the finite-differentiate polynomial F(y®,...,y) = A,o),,.yo f (2) is also degree-q homogenous over n × q variables, and satisfies © C(f) a <C(f) < CCF): (al)? © Ey), aN Oduen (Lys... 9))7] = eA) © Ey), aN Oduen Proof. Suppose f(%) = Proof. Suppose f(%) = Soren; jr||;=q U Tjefny af. Then, we have (see [58, Claim 3.2]) jr||;=q U Tjefny af. Ay,....y) = fy®,....y% = Ay,....y) = a, TT ay” fy®,....y% = Yo as [] Jelnle g(a] where @y = ayy) [poi k=1(Ik(J))! and Ik(J) = |{j ∈ [q] : Jj = k}|. On the other hand, for every [* € N¢ with ||J*||1 = q, J € [n]4 that maps I(J) = J*. Therefore, we have here ! are 4 Trai Gi)! different choices of C(f) = > aj = » aia) « ( JE|nj4 JE|nja k (In(J))!)? = 1 TEN”: | = Ti=¢ n az: ({[ 0)? . k=1 qd =i Ux)! As a result, aj: (q!) <C(f) < TEN”: || |= TEN”: | Tli= az (q!)” As for the second bullet, it is simple to verify. Lemma I.5. Suppose f : Rn → R is a degree-q homogeneous polynomial. If g(x) = f (Ux) for U ∈ Rn×m being row orthonormal (with n ≤ m), then C(g) ≥ C(f ) q! . • If g(x) = f (Wx) for W ∈ Rn×m with n ≤ m and σmin(W) ≥ 1 κ , then C(g) ≥ C(f ) (q!)2κq . # Proof. • For every y(1), . . . 
, y(q) ∈ Rm, every y, yO ER”, Gy, ....y) = Ayan, yin g(®) = Auywn,...uyof(Ux) = f(Uy,...,Uy) Since Gaussian is invariant under orthonormal transformation, we have C(f) = (Fy®,....y))7] = yD ,...,y DN (OInxn) y),...yÂ¥OoN (0,Imxm) • Suppose W = UΣV is its SVD decomposition. Define f1(x) = f (Ux), f2(x) = f1(Σx), so q! C(f2) ≥ 1 Lemma I.6. Suppose f,g: R” — R are two homogeneous polynomials of degree p and q respectively, and denote by h(x) = f(x)g(x). Then Cz(h) < (PE) Co(f)Cu (g) . 84 Proof. Let us write f(a) = > ar IL; 4 and g(x) = > by Il a? TENE: |[I|i=p — J€[h] JENF: ||J|i=q — elk] . TENE: |[I|i=p — J€[h] JENF: ||J|i=q — elk] On one hand, we obviously have )7 rene: |r), <p oJeEN*: Il ||i—a a7b*, = C(f)C(g). On the other hand, when multiplied together, each monomial in the multiplication f(a)g(x) comes from at most (pra pairs of (I, J). If we denote this set as S, then 2 # JENF: ||J|i=q a7b*, = 2 (Sunes arb) < (PF) Sunes 4705 Putting the two together finishes the proof. Lemma I.7. Suppose f (1), f (2) : Rn → Rk are degree-p homogeneous polynomials and g : Rk → R is degree q homogenous. Denote by h(x) = g(f (1)(x)) − g(f (2)(x)). Then, Cx(h) ≤ kqq2 · 2q−1 · qp p, p, . . . , p · C(g) · (max i C(f (1) i − f (2) i )) · (max i C(f (1) i ) + max i C(f (1) i − f (2) i ))q−1 . Proof. Let us write L gy= > a J] x,’ TEN: |\Ili=q — 7€[A] For each monomial above, we need to bound Cx(hI (x)) for each (f (1) j c det 1 ; 2) hoe) = TPF @))" - TL =[P@-TL Fe Jelk] jek] jes jes where S' C [k] is a multiset that contains exactly I; copies of j. Using the identity that a,a2a3a4 — bybob3b4 = (a1 — by )aga3aq + b1 (a2 — b2)aza4 + b1b2(a3 — b3)a4 + b1b2b3(a4 — b4), as well as applying Lemma I.6, one can derive that coins (, * “(amare 4 — 7) (max {e(A), CU) Summing up over all monomials finishes the proof. # I.5 Properties on Matrix Factorization Claim 1.8. Suppose we have matrices A,C € R**™ and B,D € R**™ for some mi,m2 > k and |A'B—C'Dllp<e. Then, there exists some matrix P € R*** so that: ° ||AT-C'P lp < oy, _ 2e-(omax(B))? ° |[B-P Cle < Sp iGirantDy and Omin(D) e the singular values of P are within [ 2ε·(σmax(B))2 # σmax(B) , σmax(D) # σmin(B) J* Proof of Claim I.8. We also refer to [2] for the proof. Suppose A = U1Σ1V1, B = U2Σ2V2, C = U3Σ3V3, D = U4Σ4V4 are the SVD decomposi- tions. We can write V1 DS] Ul Usd2V2 — V3 B43 Us UySy Valle < € => ||V3V] DU] UsX2V2Vj — Dy Us UsZallr <e 85 Now note that 4 Uy Usd is of dimension m1 x mg and only its top left k x Let us write }4 = (4,0) for Sy € R’**. Let us write U2d2VoVI = (E,F) the above Frobenius bound also implies (by ignoring the last mz — k columns) [V3Vj 5] ULE — E3 U3 Uskalle <e [V3Vj 5] ULE — E3 Finally, using ||MN||r < ||M||r - omax(N), we have k block is non-zero. E€ R***. Then, # for vistul —vislulu,SyE |. < —— =—=_ J peyet 33 3 aes lr ~ Omin(E) Omin(B) Let us define P = U4Σ4E−1, so we have σmax(P) ≤ σmax(D) σmin(B) and σmin(P) ≥ σmin(D) σmax(B) . From the above derivation we have E B _ P< _ F* Omax < €0max(B) A'B-C'PB A'-cC'P B Omin(B) By triangle inequality, this further implies _ € B) _ E0max(B) 1 CTPB~ C'PP™D||p <4 2B) _, |B pt < (e: mone J lp =* Omin(B) J lr ~ Omin(B) Omin(C)omin(P) Claim 1.9. Suppose we have matrices A,C € R**™ and B,D € R**™ for some mi, mz > k and |A'B-C!'Dllz << omin(C)omin(D). Then, there exists some matrix P € R*** so that: T T €0max(A * |AT—C'Pllp < ZiGieng t= e \|B _ PCllr < 2e-(omax(B))?omax(A) (Gmin(C)omin(D)—e)2 ? 
and e the singular values of P are within [gaat # σmax(B) , σmax(D)σmax(A) e the singular values of P are within [gaat ; Zoe p lene]. # σmin(C)σmin(D)−ε Proof of Claim I.9. Without loss of generality (by left/right multiplying a unitary matrix), let us assume that C = (C, 0) and D = (D, 0) for C, D ∈ Rk×k. Let us write A = (A, ∗) and B = (B, ∗) for A, B ∈ Rk×k. We have the following relationships Omin(C) = omin(C) , min(D) = omin(D) , Fmax(A) < Fmax(A) » Fmin(B) < omin(B) Now, the bound ||A'B—C'D||r < ¢ translates to (by only looking at its top-left k x k block) |A'B-—T'Dllr < c. Since these four matrices are square matrices, we immediately have Omin(B) > Sunin(C)Omin(D)—e Plugging in the above relationships, the similar bound holds with- Omax(A) out the hat notion: σmin(B) ≥ σmin(C)σmin(D) − ε σmax(A) . Plugging this into the bounds of Claim I.8, we finish the proof. Claim 1.10. Suppose we have matrices A,C € R**™ for some m > k and ||A'A—C'C||p < Ee< $(Omin(C))?, then there exists some unitary matric U € R’** so that 2 (σmin(C))2, then there exists some unitary matrix U ∈ Rk×k so that 7ε(σmax(A) + σmax(C))2(σmax(C))3 (σmin(C))6 . Proof of Claim I.10. Applying Claim I.9, we know there exists matrix P ∈ Rk×k so that: # tr + 2€0max(A) 0 |Al-C Pie < Soy ’ 86 e the singular values of P are within [sat 2s CH (A) They together imply e the singular values of P are within [sat eG 2 (onms(A) + dnax(C)oinac(P)) 2€0max(A) . 3(max(C))?omax(A) 6¢(max(A))?(Omax(C))? ~ (Omin(C ))? (Omin(C))? — (@min(C))4 |ATA—C'PP'Cllp< By triangle inequality we have 7e(Omax(A) + Omax(C))?(Omax(C))? (omin(C))4 \C'C-—C'PP'Clp< Putting C into its SVD decomposition, one can easily verify that this implies 7é(Omax(A) + Omax(C))?(max(C))? (omin(C))® |I-PP'||p< Putting P into its SVD decomposition, one can easily verify that this implies the existence of some unitary matrix U so that52 Te(Omax(A) + Omax(C))?(Omax(C))? (@min(C))® |U-Pllr< . eee. and finish the proof. Finally, we replace P with U in the bound ||A' — C'P||p < eee. and finish the proof. # I.6 Nonconvex Optimization Theory Fact 1.11. For every B-second-order smooth function f :R4— R, every € > 0, every fixed vectors x € R¢, suppose for every sufficiently small n > 0, there exists vector x, € R* and a random vector x2 € R¢ with [x2] = 0 satisfying ||x1|]2 < Qi, [llx2ll3] < Q2 and √ E x2 or λmin(∇2f (x)) ≤ − ε Q2 [f (x + ηx1 + ηx2)] ≤ f (x) − ηε . Then, either ||V f(x)|| 2 xo, or Amin(V? f(a)) < Gy: where Amin 1s the minimal eigenvalue. Proof of Fact I.11, We know that √ f (e@+ne1 + /nx2) = f(x) + (VF (a), nay + nae) + 5 (nai + Vinx)’ V? F(x) (nei + Vie) + O(By*). Taking expectation, we know that [P(e + Viie2)] = flo) +7 F(0).21) +5 E [2] V?f(0)09] + O(Bn"*) Thus, either (V f(x), v1) < —e/2 or E[xJ V?f(x)x2] < —e, which completes the proof. # References [1] Emmanuel Abbe, Enric Boix-Adsera, Matthew S Brennan, Guy Bresler, and Dheeraj Nagaraj. The staircase property: How hierarchical structure can guide deep learning. Advances in Neural Information Processing Systems, 34:26989–27002, 2021. 52Indeed, if the singular values of P are p1,...,px, then ||I— PP'||p < 5 says Xd - pi)? <&. 52Indeed, if the singular values of P are p1,...,px, then ||I— PP'||p < 5 says 0,(1— p?)? < 6”, but this implies Xd - pi)? <&. 87 [2] Zeyuan Allen-Zhu and Yuanzhi Li. LazySVD: even faster SVD decomposition yet without agonizing pain. In NeurIPS, pages 974–982, 2016. [3] Zeyuan Allen-Zhu and Yuanzhi Li. What Can ResNet Learn Efficiently, Going Beyond Kernels? In NeurIPS, 2019. 
Full version available at http://arxiv.org/abs/1905.10337. [4] Zeyuan Allen-Zhu and Yuanzhi Li. Can SGD Learn Recurrent Neural Networks with Provable Gener- alization? In NeurIPS, 2019. Full version available at http://arxiv.org/abs/1902.01028. [5] Zeyuan Allen-Zhu and Yuanzhi Li. Feature purification: How adversarial training performs robust deep learning. In FOCS, 2021. Full version available at http://arxiv.org/abs/2005.10190. [6] Zeyuan Allen-Zhu, Yuanzhi Li, and Yingyu Liang. Learning and Generalization in Overparameterized In NeurIPS, 2019. Full version available at http: Neural Networks, Going Beyond Two Layers. //arxiv.org/abs/1811.04918. [7] Zeyuan Allen-Zhu, Yuanzhi Li, and Zhao Song. On the convergence rate of training recurrent neural networks. In NeurIPS, 2019. Full version available at http://arxiv.org/abs/1810.12065. [8] Zeyuan Allen-Zhu, Yuanzhi Li, and Zhao Song. A convergence theory for deep learning via over- parameterization. In ICML, 2019. Full version available at http://arxiv.org/abs/1811.03962. [9] Sanjeev Arora, Aditya Bhaskara, Rong Ge, and Tengyu Ma. Provable bounds for learning some deep representations. In International Conference on Machine Learning, pages 584–592, 2014. [10] Sanjeev Arora, Rong Ge, Tengyu Ma, and Ankur Moitra. Simple, efficient, and neural algorithms for sparse coding. In Conference on learning theory, pages 113–149. PMLR, 2015. [11] Sanjeev Arora, Simon S Du, Wei Hu, Zhiyuan Li, Ruslan Salakhutdinov, and Ruosong Wang. On exact computation with an infinitely wide neural net. arXiv preprint arXiv:1904.11955, 2019. [12] Sanjeev Arora, Simon S. Du, Wei Hu, Zhiyuan Li, and Ruosong Wang. Fine-grained analysis of opti- mization and generalization for overparameterized two-layer neural networks. CoRR, abs/1901.08584, 2019. URL http://arxiv.org/abs/1901.08584. [13] Ainesh Bakshi, Rajesh Jayaram, and David P Woodruff. Learning two layer rectified neural networks in polynomial time. arXiv preprint arXiv:1811.01885, 2018. [14] Eugene Belilovsky, Michael Eickenberg, and Edouard Oyallon. Decoupled greedy learning of cnns. CoRR, abs/1901.08164, 2019. URL http://arxiv.org/abs/1901.08164. [15] Eugene Belilovsky, Michael Eickenberg, and Edouard Oyallon. Greedy layerwise learning can scale to imagenet. In International Conference on Machine Learning, pages 583–593, 2019. [16] Yoshua Bengio. Learning deep architectures for AI. Now Publishers Inc, 2009. [17] Digvijay Boob and Guanghui Lan. Theoretical properties of the global optimizer of two layer neural network. arXiv preprint arXiv:1710.11241, 2017. [18] Jacob V Bouvrie. Hierarchical learning: Theory with applications in speech and vision. PhD thesis, Massachusetts Institute of Technology, 2009. [19] Alon Brutzkus and Amir Globerson. Globally optimal gradient descent for a convnet with gaussian inputs. arXiv preprint arXiv:1702.07966, 2017. [20] Yuan Cao and Quanquan Gu. Generalization bounds of stochastic gradient descent for wide and deep neural networks. In Advances in Neural Information Processing Systems, pages 10835–10845, 2019. [21] Amit Daniely. Sgd learns the conjugate kernel class of the network. In Advances in Neural Information Processing Systems, pages 2422–2430, 2017. [22] Amit Daniely and Eran Malach. Learning parities with neural networks. arXiv preprint arXiv:2002.07400, 2020. [23] Amit Daniely, Roy Frostig, and Yoram Singer. Toward deeper understanding of neural networks: The power of initialization and a dual view on expressivity. 
In Advances in Neural Information Processing Systems (NIPS), pages 2253–2261, 2016. [24] Simon S Du and Wei Hu. Width provably matters in optimization for deep linear neural networks. arXiv preprint arXiv:1901.08572, 2019. 88 [25] Simon S Du, Jason D Lee, Haochuan Li, Liwei Wang, and Xiyu Zhai. Gradient descent finds global minima of deep neural networks. arXiv preprint arXiv:1811.03804, November 2018. [26] Simon S Du, Xiyu Zhai, Barnabas Poczos, and Aarti Singh. Gradient descent provably optimizes over-parameterized neural networks. arXiv preprint arXiv:1810.02054, 2018. [27] Ronen Eldan and Ohad Shamir. The power of depth for feedforward neural networks. In Conference on learning theory, pages 907–940, 2016. [28] Vitaly Feldman, Parikshit Gopalan, Subhash Khot, and Ashok Kumar Ponnuswami. New results for learning noisy parities and halfspaces. In 2006 47th Annual IEEE Symposium on Foundations of Com- puter Science (FOCS’06), pages 563–574. IEEE, 2006. [29] Rong Ge, Furong Huang, Chi Jin, and Yang Yuan. Escaping from saddle points—online stochastic gradient for tensor decomposition. In Conference on Learning Theory, pages 797–842, 2015. [30] Rong Ge, Jason D Lee, and Tengyu Ma. Learning one-hidden-layer neural networks with landscape design. arXiv preprint arXiv:1711.00501, 2017. [31] Rong Ge, Rohith Kuditipudi, Zhize Li, and Xiang Wang. Learning two-layer neural networks with symmetric inputs. arXiv preprint arXiv:1810.06793, 2018. [32] Behrooz Ghorbani, Song Mei, Theodor Misiakiewicz, and Andrea Montanari. Linearized two-layers neural networks in high dimension. arXiv preprint arXiv:1904.12191, 2019. [33] Ian Goodfellow, Yoshua Bengio, and Aaron Courville. Deep Learning. MIT Press, 2016. [34] Boris Hanin and Mihai Nica. Finite depth and width corrections to the neural tangent kernel. arXiv preprint arXiv:1909.05989, 2019. [35] Moritz Hardt and Tengyu Ma. Identity matters in deep learning. arXiv preprint arXiv:1611.04231, 2016. [36] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770–778, 2016. [37] Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531, 2015. [38] Gao Huang, Zhuang Liu, Laurens Van Der Maaten, and Kilian Q Weinberger. Densely connected con- volutional networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 4700–4708, 2017. [39] Jiaoyang Huang and Horng-Tzer Yau. Dynamics of deep neural networks and neural tangent hierarchy. arXiv preprint arXiv:1909.08156, 2019. [40] Lei Huang, Xianglong Liu, Bo Lang, Adams Wei Yu, Yongliang Wang, and Bo Li. Orthogonal weight normalization: Solution to optimization over multiple dependent stiefel manifolds in deep neural net- works. In Thirty-Second AAAI Conference on Artificial Intelligence, 2018. [41] Yanping Huang, Youlong Cheng, Ankur Bapna, Orhan Firat, Dehao Chen, Mia Chen, HyoukJoong Lee, Jiquan Ngiam, Quoc V Le, Yonghui Wu, et al. Gpipe: Efficient training of giant neural networks using pipeline parallelism. In Advances in neural information processing systems, pages 103–112, 2019. [42] Arthur Jacot, Franck Gabriel, and Cl´ement Hongler. Neural tangent kernel: Convergence and gener- alization in neural networks. In Advances in neural information processing systems, pages 8571–8580, 2018. [43] Tero Karras, Timo Aila, Samuli Laine, and Jaakko Lehtinen. 
Progressive growing of gans for improved quality, stability, and variation. In International Conference on Learning Representations, 2018. [44] Kenji Kawaguchi. Deep learning without poor local minima. In Advances in Neural Information Processing Systems, pages 586–594, 2016. [45] Alex Krizhevsky. Learning multiple layers of features from tiny images. 2009. [46] Yuanzhi Li and Zehao Dou. When can wasserstein gans minimize wasserstein distance? arXiv preprint arXiv:2003.04033, 2020. [47] Yuanzhi Li and Yingyu Liang. Provable alternating gradient descent for non-negative matrix fac- 89 torization with strong correlations. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pages 2062–2070. JMLR. org, 2017. [48] Yuanzhi Li and Yingyu Liang. Learning overparameterized neural networks via stochastic gradient descent on structured data. In Advances in Neural Information Processing Systems, 2018. [49] Yuanzhi Li and Yang Yuan. Convergence analysis of two-layer neural networks with relu activation. In Advances in Neural Information Processing Systems, pages 597–607. http://arxiv.org/abs/1705.09886, 2017. [50] Yuanzhi Li, Yingyu Liang, and Andrej Risteski. Recovery guarantee of non-negative matrix factorization via alternating updates. In Advances in neural information processing systems, pages 4987–4995, 2016. [51] Yuanzhi Li, Tengyu Ma, and Hongyang Zhang. Algorithmic regularization in over-parameterized matrix sensing and neural networks with quadratic activations. In COLT, 2018. [52] Yuanzhi Li, Colin Wei, and Tengyu Ma. Towards explaining the regularization effect of initial large learning rate in training neural networks. arXiv preprint arXiv:1907.04595, 2019. [53] Yuanzhi Li, Tengyu Ma, and Hongyang R Zhang. Learning over-parametrized two-layer relu neural networks beyond ntk. arXiv preprint arXiv:2007.04596, 2020. [54] Zhiyuan Li, Ruosong Wang, Dingli Yu, Simon S Du, Wei Hu, Ruslan Salakhutdinov, and Sanjeev Arora. Enhanced convolutional neural tangent kernels. arXiv preprint arXiv:1911.00809, 2019. [55] Liyuan Liu, Xiaodong Liu, Jianfeng Gao, Weizhu Chen, and Jiawei Han. Understanding the difficulty of training transformers. arXiv preprint arXiv:2004.08249, 2020. [56] Xiaodong Liu, Kevin Duh, Liyuan Liu, and Jianfeng Gao. Very deep transformers for neural machine translation. arXiv preprint arXiv:2008.07772, 2020. [57] Roi Livni, Shai Shalev-Shwartz, and Ohad Shamir. On the computational efficiency of training neural networks. In Advances in Neural Information Processing Systems, pages 855–863, 2014. [58] Shachar Lovett. An elementary proof of anti-concentration of polynomials in gaussian variables. In Electronic Colloquium on Computational Complexity (ECCC), volume 17, page 182, 2010. [59] Eran Malach and Shai Shalev-Shwartz. A provably correct algorithm for deep learning that actually works. arXiv preprint arXiv:1803.09522, 2018. [60] Pratyush Mishra, Ryan Lehmkuhl, Akshayaram Srinivasan, Wenting Zheng, and Raluca Ada Popa. Delphi: A cryptographic inference service for neural networks. In 29th USENIX Security Symposium (USENIX Security 20), pages 2505–2522. USENIX Association, August 2020. ISBN 978-1-939133-17-5. URL https://www.usenix.org/conference/usenixsecurity20/presentation/mishra. [61] Elchanan Mossel. Deep learning and hierarchal generative models. arXiv preprint arXiv:1612.09057, 2016. [62] Ido Nachum and Amir Yehudayoff. On symmetry and initialization for neural networks. In LATIN 2020, pages 401–412, 2020. 
[63] Arild Nøkland and Lars Hiller Eidnes. Training neural networks with local error signals. arXiv preprint arXiv:1901.06656, 2019. [64] Samet Oymak and Mahdi Soltanolkotabi. Towards moderate overparameterization: global convergence guarantees for training shallow neural networks. arXiv preprint arXiv:1902.04674, 2019. [65] Hadi Salman, Jerry Li, Ilya Razenshteyn, Pengchuan Zhang, Huan Zhang, Sebastien Bubeck, and Greg In Advances in Yang. Provably robust deep learning via adversarially trained smoothed classifiers. Neural Information Processing Systems, pages 11289–11300, 2019. [66] Warren Schudy and Maxim Sviridenko. Concentration and moment inequalities for polynomials of independent random variables. In Proceedings of the twenty-third annual ACM-SIAM symposium on Discrete Algorithms, pages 437–446. Society for Industrial and Applied Mathematics, 2012. [67] Vaishaal Shankar, Alex Fang, Wenshuo Guo, Sara Fridovich-Keil, Ludwig Schmidt, Jonathan Ragan- Kelley, and Benjamin Recht. Neural kernels without tangents. arXiv preprint arXiv:2003.02237, 2020. [68] Mahdi Soltanolkotabi, Adel Javanmard, and Jason D Lee. Theoretical insights into the optimization landscape of over-parameterized shallow neural networks. arXiv preprint arXiv:1707.04926, 2017. 90 [69] Daniel Soudry and Yair Carmon. No bad local minima: Data independent training error guarantees for multilayer neural networks. arXiv preprint arXiv:1605.08361, 2016. [70] Matus Telgarsky. Benefits of depth in neural networks. arXiv preprint arXiv:1602.04485, 2016. [71] Ian Tenney, Dipanjan Das, and Ellie Pavlick. Bert rediscovers the classical nlp pipeline. arXiv preprint arXiv:1905.05950, 2019. [72] Yuandong Tian. An analytical formula of population gradient for two-layered relu network and its applications in convergence and critical point analysis. arXiv preprint arXiv:1703.00560, 2017. [73] Loc Quang Trinh. Greedy layerwise training of convolutional neural networks. Master’s thesis, Mas- sachusetts Institute of Technology, 2019. [74] Santosh Vempala and John Wilmes. Polynomial convergence of gradient descent for training one-hidden- layer neural networks. arXiv preprint arXiv:1805.02677, 2018. [75] Bo Xie, Yingyu Liang, and Le Song. Diversity leads to generalization in neural networks. arXiv preprint Arxiv:1611.03131, 2016. [76] Greg Yang. Scaling limits of wide neural networks with weight sharing: Gaussian process behavior, gradient independence, and neural tangent kernel derivation. arXiv preprint arXiv:1902.04760, 2019. [77] Gilad Yehudai and Ohad Shamir. On the power and limitations of random features for understanding neural networks. arXiv preprint arXiv:1904.00687, 2019. [78] Sergey Zagoruyko and Nikos Komodakis. Wide residual networks. arXiv preprint arXiv:1605.07146, 2016. [79] Matthew D Zeiler and Rob Fergus. Visualizing and understanding convolutional networks. In European conference on computer vision, pages 818–833. Springer, 2014. [80] Xiao Zhang, Yaodong Yu, Lingxiao Wang, and Quanquan Gu. Learning one-hidden-layer relu networks via gradient descent. arXiv preprint arXiv:1806.07808, 2018. [81] Kai Zhong, Zhao Song, Prateek Jain, Peter L Bartlett, and Inderjit S Dhillon. Recovery guarantees for one-hidden-layer neural networks. arXiv preprint arXiv:1706.03175, 2017. [82] Difan Zou and Quanquan Gu. An improved analysis of training over-parameterized deep neural net- works. In Advances in Neural Information Processing Systems, pages 2053–2062, 2019. [83] Difan Zou, Yuan Cao, Dongruo Zhou, and Quanquan Gu. 
Stochastic gradient descent optimizes over-parameterized deep relu networks. arXiv preprint arXiv:1811.08888, 2018.
{ "id": "1904.11955" }
2001.04451
Reformer: The Efficient Transformer
Large Transformer models routinely achieve state-of-the-art results on a number of tasks but training these models can be prohibitively costly, especially on long sequences. We introduce two techniques to improve the efficiency of Transformers. For one, we replace dot-product attention by one that uses locality-sensitive hashing, changing its complexity from O($L^2$) to O($L\log L$), where $L$ is the length of the sequence. Furthermore, we use reversible residual layers instead of the standard residuals, which allows storing activations only once in the training process instead of $N$ times, where $N$ is the number of layers. The resulting model, the Reformer, performs on par with Transformer models while being much more memory-efficient and much faster on long sequences.
http://arxiv.org/pdf/2001.04451
Nikita Kitaev, Łukasz Kaiser, Anselm Levskaya
cs.LG, cs.CL, stat.ML
ICLR 2020
null
cs.LG
20200113
20200218
Published as a conference paper at ICLR 2020

# REFORMER: THE EFFICIENT TRANSFORMER

Nikita Kitaev∗ (U.C. Berkeley & Google Research) [email protected]
Łukasz Kaiser∗, Anselm Levskaya (Google Research) {lukaszkaiser,levskaya}@google.com

# ABSTRACT

Large Transformer models routinely achieve state-of-the-art results on a number of tasks but training these models can be prohibitively costly, especially on long sequences. We introduce two techniques to improve the efficiency of Transformers. For one, we replace dot-product attention by one that uses locality-sensitive hashing, changing its complexity from O(L^2) to O(L log L), where L is the length of the sequence. Furthermore, we use reversible residual layers instead of the standard residuals, which allows storing activations only once in the training process instead of N times, where N is the number of layers. The resulting model, the Reformer, performs on par with Transformer models while being much more memory-efficient and much faster on long sequences.

# 1 INTRODUCTION

The Transformer architecture (Vaswani et al., 2017) is widely used in natural language processing and yields state-of-the-art results on a number of tasks. To obtain these results, researchers have resorted to training ever larger Transformer models. The number of parameters exceeds 0.5B per layer in the largest configuration reported in (Shazeer et al., 2018) while the number of layers goes up to 64 in (Al-Rfou et al., 2018). Transformer models are also used on increasingly long sequences. Up to 11 thousand tokens of text in a single example were processed in (Liu et al., 2018) and when processing other modalities, like music (Huang et al., 2018) and images (Parmar et al., 2018), even longer sequences are commonplace.

These large-scale long-sequence models yield great results but strain resources to the point where some argue that this trend is breaking NLP research1. Many large Transformer models can only realistically be trained in large industrial research laboratories, and such models trained with model parallelism cannot even be fine-tuned on a single GPU as their memory requirements demand a multi-accelerator hardware setup even for a single training step.

Do large Transformer models fundamentally require such huge resources or are they simply inefficient? Consider the following calculation: the 0.5B parameters used in the largest reported Transformer layer account for 2GB of memory. Activations for 64K tokens with embedding size 1024 and batch size 8 account for 64K × 1K × 8 = 0.5B floats, requiring another 2GB of memory. If our memory use was only per-layer, then we should fairly easily fit a large Transformer even on sequences of length 64K on a single accelerator. Further, the whole corpus used to train BERT only requires 17GB to store. Why is it then that we cannot even fine-tune these models on single machines?

The above estimate includes only per-layer memory and input activations cost and does not take into account the following major sources of memory use in the Transformer.

• Memory in a model with N layers is N-times larger than in a single-layer model due to the fact that activations need to be stored for back-propagation.

• Since the depth dff of intermediate feed-forward layers is often much larger than the depth dmodel of attention activations, it accounts for a large fraction of memory use.
• Attention on sequences of length L is O(L2) in both computational and memory complex- ity, so even for a single sequence of 64K tokens can exhaust accelerator memory. # ∗Equal Contribution 1https://hackingsemantics.xyz/2019/leaderboards/ 1 Published as a conference paper at ICLR 2020 We introduce the Reformer model which solves these problems using the following techniques: • Reversible layers, first introduced in Gomez et al. (2017), enable storing only a single copy of activations in the whole model, so the N factor disappears. • Splitting activations inside feed-forward layers and processing them in chunks removes the df f factor and saves memory inside feed-forward layers. Approximate attention computation based on locality-sensitive hashing replaces the O(L2) factor in attention layers with O(L log L) and so allows operating on long sequences. We study these techniques and show that they have negligible impact on the training process com- pared to the standard Transformer. Splitting activations in fact only affects the implementation; it is numerically identical to the layers used in the Transformer. Applying reversible residuals instead of the standard ones does change the model but has a negligible effect on training in all configurations we experimented with. Finally, locality-sensitive hashing in attention is a more major change that can influence the training dynamics, depending on the number of concurrent hashes used. We study this parameter and find a value which is both efficient to use and yields results very close to full attention. We experiment on a synthetic task, a text task (enwik8) with sequences of length 64K and an image generation task (imagenet-64 generation) with sequences of length 12K. In both cases we show that Reformer matches the results obtained with full Transformer but runs much faster, especially on the text task, and with orders of magnitude better memory efficiency. # 2 LOCALITY-SENSITIVE HASHING ATTENTION Dot-product attention. The standard attention used in the Transformer is the scaled dot-product attention (Vaswani et al., 2017). The input consists of queries and keys of dimension dk, and values of dimension dv. The dot products of the query with all keys are computed, scaled by dk, and a softmax function is applied to obtain the weights on the values. In practice, the attention function on a set of queries is computed simultaneously, packed together into a matrix Q. Assuming the keys and values are also packed together into matrices K and V , the matrix of outputs is defined as: Attention(Q, K, V ) = softmax( QK T √ dk )V (1) Multi-head attention. In the Transformer, instead of performing a single attention function with dmodel-dimensional keys, values and queries, one linearly projects the queries, keys and values h times with different, learned linear projections to dk, dk and dv dimensions, respectively. Attention is applied to each of these projected versions of queries, keys and values in parallel, yielding dv- dimensional output values. These are concatenated and once again projected, resulting in the final values. This mechanism is known as multi-head attention. Memory-efficient attention. To calculate the memory use of the attention mechanism, let us focus on the attention computation from Equation 1. Let us assume that Q, K and V all have the shape [batch size, length, dmodel]. The main issue is the term QK T , which has the shape [batch size, length, length]. 
In the experimental section we train a model on sequences of length 64K – in this case, even at batch-size of 1, this is a 64K × 64K matrix, which in 32-bit floats would take 16GB of memory. This is impractical and has hindered the use of the Transformer for long sequences. But it is important to note that the QK T matrix does not need to be fully materialized in memory. The attention can indeed be computed for each query qi separately, only calculating softmax( qiKT )V once in memory, and then re-computing it on the backward pass when needed for dk gradients. This way of computing attention may be less efficient but it only uses memory propor- tional to length. We use this memory-efficient implementation of attention to run the full-attention baselines presented in the experimental section. Where do Q, K, V come from? The multi-head attention described above operates on keys, queries and values, but usually we are only given a single tensor of activations A of the shape [batch size, length, dmodel] – e.g., coming from embedding the tokens in a sentence into vectors. 2 Published as a conference paper at ICLR 2020 Sphere Projected Points Random Rotation 0 Random Rotation 1 Random Rotation 2 ane 1 N\, con y: 021 Figure 1: An angular locality sensitive hash uses random rotations of spherically projected points to establish buckets by an argmax over signed axes projections. In this highly simplified 2D depiction, two points x and y are unlikely to share the same hash buckets (above) for the three different angular hashes unless their spherical projections are close to one another (below). To build Q, K and V from A, the Transformer uses 3 different linear layers projecting A into Q, K and V with different parameters. For models with LSH attention, we want queries and keys (Q and K) to be identical. This is easily achieved by using the same linear layer to go from A to Q and K, and a separate one for V. We call a model that behaves like this a shared-QK Transformer. It turns out that sharing QK does not affect the performance of Transformer, even if we additionally normalize the length of the keys K, as we show in the experimental Section 5. Hashing attention. For the LSH attention, we start with two tensors, Q=K and V of the shape [batch size, length, dmodel]. We keep the multi-head mechanism intact and focus on the atten- tion computation from Equation 1. As already mentioned, the main issue is the term QK T , which has the shape [batch size, length, length]. But note that we are actually only interested in softmax(QK T ). Since softmax is dominated by the largest elements, for each query qi we only need to focus on the keys in K that are closest to qi. For example, if K is of length 64K, for each qi we could only consider a small subset of, say, the 32 or 64 closest keys. That is much more efficient, but how can we find the nearest neighbors among the keys? Locality sensitive hashing. The problem of finding nearest neighbors quickly in high-dimensional spaces can be solved by locality-sensitive hashing (LSH). A hashing scheme that assigns each vector x to a hash h(x) is called locality-sensitive if nearby vectors get the same hash with high probability and distant ones do not. In our case, we actually only require that nearby vectors get the same hash with high probability and that hash-buckets are of similar size with high probability. We achieve this by employing random projections as follows (see Figure 1). To get b hashes, we first fix a random matrix R of size [dk, b/2]. 
We then define h(x) = arg max([xR; −xR]) where [u; v] denotes the concatenation of two vectors. This method is a known LSH scheme (Andoni et al., 2015) and is easy to implement and apply to batches of vectors. LSH attention. Knowing our LSH scheme and the general idea of hashing attention, we will now formalize the LSH attention we use in this paper. We first rewrite the equation for normal attention, (i for a single query position i at a time: where :i oi = exp (qi · kj − z(i, Pi)) vj where Pi = {j : i ≥ j} j∈Pi (2) We introduce the notation Pi to represent the set that the query at position i attends to, and z to denote the partition function (i.e. the normalizing term in the softmax). For clarity, we also omit scaling by For batching purposes we typically perform attention over a larger set P; = {0,1,...,0} D Pi while masking out elements not in P;: 0: = So exp (ai + ky — mj, Py) — 204, P,)) vy where m(j,P,) = { GePi co if7 ¢P; 0 otherwise (3) 3 Published as a conference paper at ICLR 2020 G, 4 4s Gy Is I, G92 Ws 3 Is As Sequence 4 Bera k, of queries=keys I I k, A Al LSH bucketing Hil ml ml k . k, . = k 5 k Sort by LSH bucket =< > fa e k, : EG ___ee Kt Is 1% Chunk sorted (a) Normal (b) Bucketed ee GW Gs Is Ie Is GW Ws Is Ie 5 EES OO OB . . Attend within a 4, same bucket in a al oon FT ESS a. a ) 4, 4s ()Q=K (d) Chunked Figure 2: Simplified depiction of LSH Attention showing the hash-bucketing, sorting, and chunking steps and the resulting causal attentions. (a-d) Attention matrices for these varieties of attention. Now we turn to LSH attention, which we can think of in terms of restricting the set Pi of target items a query position i can attend to, by only allowing attention within a single hash bucket. Pi = {j : h(qi) = h(kj)} (4) Figure 2(a-b) shows a schematic comparison of full-attention with a hashed variant. Part (a) depicts that the attention matrix for full attention is typically sparse, but the computation does not take advantage of this sparsity. In (b), the queries and keys have been sorted according to their hash bucket. Since similar items fall in the same bucket with high probability, the full attention pattern can be approximated by only allowing attention within each bucket. Hash buckets in this formulation tend to be uneven in size, which makes it difficult to batch across buckets. Moreover, the number of queries and the number of keys within a bucket may be unequal — in fact, it is possible for a bucket to contain many queries but no keys. To alleviate these issues, we first ensure that h(k;) = h(q;) by setting ky = Tar Next, we sort the queries by bucket number and, within each bucket, by sequence position; this defines a permutation where i +> s; after sorting. In the sorted attention matrix, pairs from the same bucket will cluster near the diagonal (as depicted in Figure (2p). We can follow a batching approach where chunks of m consecutive queries (after sorting) attend to each other, and one chunk back (Figure 2h). Following our earlier notation, this corresponds to setting: R=Ulnl-ts tals bald 21 If max; |P;| < m, then P; C P;. In practice we set m = Nbuckets L The average bucket size is Toa? and we assume that the probability of a bucket growing that size is sufficiently low. The overall process of LSH attention is summarized in Figure[] (where / is the sequence 2) # length). to twice Multi-round LSH attention. With hashing, there is always a small probability that similar items nevertheless fall in different buckets. 
This probability can be reduced by doing multiple rounds of hashing with nrounds distinct hash functions {h(1), h(2), . . .}, such that: Nrounds P= U Po where P(”) = { gj hO(G) = W(q5)} 6) r=1 The multi-round case essentially involves performing LSH attention nrounds times in parallel; the details of the procedure are described in in Appendix A. Causal masking for shared-QK attention. In a Transformer decoder, masking (denoted by m(j, Pi) in Equation 3) is used to prevent positions from attending into the future. To implement masking in LSH attention, we associate every query/key vector with a position index, re-order the position indices using the same permutations used to sort the query/key vectors, and then use a comparison operation to compute the mask. 4 Published as a conference paper at ICLR 2020 Table 1: Memory and time complexity of attention variants. We write l for length, b for batch size, nh for the number of heads, nc for the number of LSH chunks, nr for the number of hash repetitions. Attention Type Scaled Dot-Product Memory-Efficient LSH Attention Memory Complexity max(bnhldk, bnhl2) max(bnhldk, bnhl2) Time Complexity max(bnhldk, bnhl2) max(bnhldk, bnhl2) max(bnhldk, bnhlnr(4l/nc)2) max(bnhldk, bnhnrl(4l/nc)2) Table 2: Accuracies on the duplication task of a 1-layer Transformer model with full attention and with locality-sensitive hashing attention using different number of parallel hashes. Eval Train Full Attention LSH-4 LSH-2 LSH-1 Full Attention 100% 0.8% 0.8% 0.8% LSH-8 LSH-4 LSH-2 LSH-1 94.8% 92.5% 76.9% 52.5% 100% 99.9% 99.4% 91.9% 100% 99.9% 98.1% 86.8% 99.9% 99.6% 94.8% 77.9% While attention to the future is not allowed, typical implementations of the Transformer do allow a position to attend to itself. Such behavior is undesirable in a shared-QK formulation because the dot-product of a query vector with itself will almost always be greater than the dot product of a query vector with a vector at another position. We therefore modify the masking to forbid a token from attending to itself, except in situations where a token has no other valid attention targets (e.g. the first token in a sequence). # 2.1 ANALYSIS ON A SYNTHETIC TASK To verify the performance of LSH attention and study its behavior, we start with the following synthetic task: duplicate a sequence of symbols. In this task, each training and testing example has the form 0w0w where w ∈ {1, . . . , N }∗ is a sequence of symbols ranging from 1 to N (we use N = 127 in our experiments). An example with the word w of length 3 is given below. Example: 0 19 113 72 0 19 113 72 To study LSH attention, we train a language model on examples of the above form where each w is of length 511 (so the whole input 0w0w is of length 1024). As this is a language modeling task, we always predict the next symbol given all the previous ones, but we mask the loss and accuracy to only consider positions in the second half of the input, i.e., those that can actually be predicted. The above task can be solved perfectly (to accuracy 100% and loss 0) by a 1-layer Transformer model. Note though, that it requires non-local attention lookups, so it cannot be solved by any model relying on sparse attention with a limited span. To make it easy and fast to train but similar to models used in NLP, we use a 1-layer Transformer with dmodel = df f = 256 and 4 heads. We train it for 150K steps in 4 different settings: with full attention, LSH attention with nrounds = 1, nrounds = 2 and nrounds = 4. 
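Before turning to the results, the following NumPy sketch (a minimal illustration with our own function names and sizes, not the released training code) makes the bucketing concrete: it implements the angular hash h(x) = arg max([xR; −xR]) and measures how often a vector and a slightly perturbed copy of it land in the same bucket in at least one of nrounds rounds, i.e. the multi-round union from Equation 6.

```python
import numpy as np

def angular_lsh(x, n_buckets, n_rounds, rng):
    """Assign each row of x to one bucket per hash round, using the
    random-rotation scheme h(x) = argmax([xR; -xR]) with R of shape
    [d_k, n_buckets // 2]. Returns bucket ids of shape [n_rounds, length]."""
    d_k = x.shape[-1]
    codes = []
    for _ in range(n_rounds):
        r = rng.standard_normal((d_k, n_buckets // 2))        # random rotation R
        rotated = x @ r                                        # [length, n_buckets / 2]
        scores = np.concatenate([rotated, -rotated], axis=-1)  # [length, n_buckets]
        codes.append(np.argmax(scores, axis=-1))               # bucket id per position
    return np.stack(codes)

rng = np.random.default_rng(0)
length, d_k = 1024, 64
q = rng.standard_normal((length, d_k))
q /= np.linalg.norm(q, axis=-1, keepdims=True)                 # unit-norm queries
near = q + 0.02 * rng.standard_normal((length, d_k))           # arbitrary small perturbation
near /= np.linalg.norm(near, axis=-1, keepdims=True)

# Hash the queries and their perturbed copies with the same rotations by stacking them.
codes = angular_lsh(np.concatenate([q, near], axis=0), n_buckets=32, n_rounds=4, rng=rng)
same = codes[:, :length] == codes[:, length:]                  # [n_rounds, length]
print("per-round collision rate :", same.mean(axis=-1))
print("multi-round (union) rate :", same.any(axis=0).mean())
```

The union rate is by construction at least as large as that of any single round, which is exactly why multiple rounds of hashing reduce the chance that nearby items end up in different buckets.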
From the results summarized in Table 2 we see that a model trained with full attention can be imme- diately used with LSH attention, but at some loss of accuracy. When trained from scratch with LSH attention, the model trained with 4 hashes achieves almost perfect accuracy as well. Interestingly, the accuracy becomes perfect when evaluated with 8 hashes. It goes down when evaluated with 2 or 1 hashes. Models trained with less hashes show worse results but even the model trained with just 1 hash performs almost perfectly when evaluated with 8 hashes. 5 Published as a conference paper at ICLR 2020 # 3 REVERSIBLE TRANSFORMER As the above section shows, the complexity of attention can be reduced from square in length to linear, provided an approximation is acceptable. But it is clear from Table 1 that each field starts with a b · nh · l term: the b · nh · l · dk, or alternatively b · l · dmodel cost cannot be avoided. Indeed, the activations before each layer are already of the size b · l · dmodel, so the memory use of the whole model with nl layers is at least b · l · dmodel · nl. Even worse: inside the feed-forward layers of Transformer this goes up to b · l · df f · nl. In a big Transformer it is usual to set df f = 4K and nl = 16 so with l = 64K this again would use an impractical 16GB of memory In this section, we show how to reduce this cost by first dealing with the nl part of the term using reversible layers and then showing how chunking can allow us to handle the df f problem. The effects of each of these approaches on memory and time complexity are summarized in Table 3. RevNets. Reversible residual networks were introduced by|Gomez et al.|(2017) where it was shown that they can replace ResNets for image classification. The main idea is to allow the activations at any given layer to be recovered from the activations at the following layer, using only the model parameters. Rather than having to checkpoint intermediate values for use in the backward pass, layers can be reversed one-by-one as back-propagation proceeds from the output of the network to its input. Whereas a normal residual layer performs a function x ++ y that operates on a single input and produces a single output and has the form y = x + F(x), a reversible layer works on pairs of inputs/outputs: (71, 72) +> (y1, y2), and follows the equations: y1 = x1 + F (x2) y2 = x2 + G(y1) (7) A layer can be reversed by subtracting (rather than adding) the residuals: x2 = y2 − G(y1) x1 = y1 − F (x2) (8) Reversible Transformer. We apply the RevNet idea to the Transformer by combining the attention and feed-forward layers inside the revnet block. In the notation above, F becomes an attention layer while G becomes the feed-forward layer. Note that Layer Normalization (Ba et al., 2016) is moved inside the residual blocks. Y1 = X1 + Attention(X2) Y2 = X2 + FeedForward(Y1) (9) The reversible Transformer does not need to store activations in each layer and so gets rid of the nl term. In Section 5 we show that it performs the same as the normal Transformer when using the same number of parameters; we achieve this by having both x1 and x2 have size dmodel. Chunking. While reversibility covers the nl term, the thicker layers can still use a lot of memory. The feed-forward layer in particular can use intermediate vectors of dimensionality df f = 4K or higher. However, computations in feed-forward layers are completely independent across positions in a sequence, so the computation can be split into c chunks: Yo = [vss wed yy? 
This layer is typically batched by performing operations for all positions in parallel, but operating on one chunk at a time can reduce memory. The reverse computation in (8) and the backward pass are also chunked. In addition to the feed-forward layers, for models with a large vocabulary (more than dmodel word types) we also chunk the log-probabilities at the output and calculate the loss for sections of the sequence at a time.

Chunking, large batches and parameter reuse. With chunking and reversible layers, the memory we use for activations in the whole network is independent of the number of layers. The same is not true for parameters, though, as their number grows with the number of layers. This problem is remedied because we can swap layer parameters to and from CPU memory while a layer is not computing. In a standard Transformer this would be inefficient because memory transfer to CPU is slow. The batch size multiplied by length in the Reformer is much larger, though, and therefore the amount of compute done with the parameters amortizes the cost of their transfer.

Table 3: Memory and time complexity of Transformer variants. We write dmodel and dff for model depth and assume dff ≥ dmodel; b stands for batch size, l for length, nl for the number of layers. We assume nc = l/32 so 4l/nc = 128, and we write c = 128².

| Model Type | Memory Complexity | Time Complexity |
|---|---|---|
| Transformer | $\max(bl d_{ff}, bn_h l^2) n_l$ | $(bl d_{ff} + bn_h l^2) n_l$ |
| Reversible Transformer | $\max(bl d_{ff}, bn_h l^2)$ | $(bn_h l d_{ff} + bn_h l^2) n_l$ |
| Chunked Reversible Transformer | $\max(bl d_{model}, bn_h l^2)$ | $(bn_h l d_{ff} + bn_h l^2) n_l$ |
| LSH Transformer | $\max(bl d_{ff}, bn_h l n_r c) n_l$ | $(bl d_{ff} + bn_h n_r l c) n_l$ |
| Reformer | $\max(bl d_{model}, bn_h l n_r c)$ | $(bl d_{ff} + bn_h n_r l c) n_l$ |

# 4 RELATED WORK

The Transformer model introduced in (Vaswani et al., 2017) has been used widely in natural language tasks and further extended to model diverse data such as music scores (Huang et al., 2018), and images (Parmar et al., 2018; Ramachandran et al., 2019). Most notably, this model class has been applied successfully in the self-supervised training of extremely large language models (Devlin et al., 2018; Radford et al., 2019).

Given the enormous computational requirements of state-of-the-art sequence models, there has been increasing interest in finding methods to reduce the memory footprint and computational requirements of Transformer models. In addition to standard methods such as precision reduction and gradient checkpointing (Sohoni et al., 2019), more efficient versions of the Transformer model's self-attention mechanism (Sukhbaatar et al., 2019a;b) have also recently been explored.

In particular, leveraging sparsity in the attention layers has proved fruitful. OpenAI introduced the sparse Transformer (Child et al., 2019), which exploits a factorized sparse representation of attention. Using product-key attention to increase the key space has also been used to reduce memory requirements in the feed-forward layers with no loss in performance (Lample et al., 2019).

Locality-sensitive hashing (LSH) has, to our knowledge, not been directly applied to Transformer attention layers before. But previous work using external memory with neural networks has dealt with memories of large sizes. The original implementation of memory networks (Weston et al., 2014) and later work on scaling it (Bordes et al., 2015; Chandar et al., 2016) used memory with size in the millions.
The cost of doing so is that the memory must be fixed prior to training. Moreover, since during the beginning of training the model is unlikely to query the memory correctly, strong supervision is used to encourage the model to query memory locations that are useful. These hints are either given as additional supervising information by the task or determined heuristically as in Hill et al. (2015). The requirement that the memory be fixed before training has been removed in Santoro et al. (2016) at the cost of memory size and later alleviated by Rae et al. (2016). The last paper considered memory lookups with approximate nearest neighbors, including both LSH and random kd-trees, but only for lookups in external memory.

# 5 EXPERIMENTS

In this section we present experimental results demonstrating the techniques described above. We analyze the techniques one by one to make clear which combinations have an impact on performance. We start by showing that reversible layers and shared query-key spaces do not impact performance, then proceed to analyze hashing attention and finally the full Reformer model.

We ran our experiments on the imagenet64 and enwik8-64K tasks, where the latter is a variant of enwik8 that is chunked into subsequences of 2^16 = 64K tokens. We use 3-layer models for our ablations so as to make it tractable to compare with the regular Transformer, which has high memory usage and performs full O(l^2) attention. All experiments have dmodel = 1024, dff = 4096, nheads = 8, and a total batch size of 8 sequences. We used the Adafactor optimizer (Shazeer & Stern, 2018) for training these models. We also evaluate on the WMT 2014 English-to-German translation task, following the hyperparameters of Vaswani et al. (2017). Training for all experiments was parallelized across 8 devices (8 GPUs or 8 TPU v3 cores). Code for training our models is made publicly available.2

Figure 3: Effect of shared query-key space (left) and reversibility (right) on performance on enwik8 and imagenet64 training. The curves show bits per dim on held-out data.

Table 4: BLEU scores on newstest2014 for WMT English-German (EnDe). We additionally report detokenized BLEU scores as computed by sacreBLEU (Post, 2018).

| Model | BLEU | Uncased³ | Cased⁴ |
|---|---|---|---|
| Vaswani et al. (2017), base model | 27.3 | – | – |
| Vaswani et al. (2017), big | 28.4 | – | – |
| Ott et al. (2018), big | 29.3 | – | – |
| Reversible Transformer (base, 100K steps) | 27.6 | 27.4 | 26.9 |
| Reversible Transformer (base, 500K steps, no weight sharing) | 28.0 | 27.9 | 27.4 |
| Reversible Transformer (big, 300K steps, no weight sharing) | 29.1 | 28.9 | 28.4 |

Effect of sharing QK. We first consider the effect of shared-QK attention on a regular Transformer model. Shared-QK attention sets $k_j = \frac{q_j}{\lVert q_j \rVert}$ and prevents tokens from attending to themselves (except when no other context is available). In the left part of Figure 3, we plot perplexity curves for both regular and shared-QK attention. A shared query-key space does not perform worse than regular attention; in fact, for enwik8 it appears to train slightly faster.
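For illustration, the following is a minimal single-head sketch (not the paper's batched or hashed implementation) of shared-QK attention with the modified causal mask described in Section 2: keys are the normalized queries, attention to the future is forbidden, and attention to one's own position is penalized with a large but finite value.

```python
import numpy as np

def shared_qk_attention(x, w_qk, w_v):
    q = x @ w_qk
    k = q / np.linalg.norm(q, axis=-1, keepdims=True)   # k_j = q_j / ||q_j||
    v = x @ w_v
    scores = q @ k.T / np.sqrt(q.shape[-1])
    l = x.shape[0]
    future = np.triu(np.ones((l, l), dtype=bool), 1)    # causal mask: no attending ahead
    scores = np.where(future, -1e9, scores)
    # Penalize i -> i attention with a large finite value, so a token only
    # attends to itself when no other target exists (e.g. the first token).
    scores = np.where(np.eye(l, dtype=bool), -1e5, scores)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

rng = np.random.default_rng(0)
x = rng.normal(size=(8, 16))
out = shared_qk_attention(x, rng.normal(size=(16, 16)), rng.normal(size=(16, 16)))
```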
In other words, we are not sacrificing accuracy by switching to shared-QK attention.

Effect of reversible layers. In the two plots on the right in Figure 3, we compare a regular Transformer per Vaswani et al. (2017) with the reversible one described in Section 3. The two models have identical parameter counts, and the learning curves likewise appear to be nearly the same. These results show that the memory savings in the reversible Transformer do not come at the expense of accuracy.

Reversible layers in machine translation. We also evaluate reversible layers in the context of an encoder-decoder Transformer model for machine translation from English to German. We start by making both the encoder and the decoder fully reversible in the Transformer-base architecture, and see that the resulting model performs comparably to Vaswani et al. (2017) when trained for 100K steps. We also evaluate training for a greater number of steps and with a larger model. Reformer models are very memory-efficient, so for the latter two experiments we do not need to save memory by sharing embedding and output projection weight matrices throughout the model. Results are shown in Table 4. We do not apply LSH attention in this setting because examples are single sentences, and sentences tend to be relatively short. Our typical LSH attention configuration uses chunks of 128 tokens after hashing and sorting, whereas the examples in the WMT14 test set are all shorter than 128 tokens.

2 https://github.com/google/trax/tree/master/trax/models/reformer
3 BLEU+case.lc+lang.en-de+numrefs.1+smooth.exp+test.wmt14/full+tok.intl+version.1.4.3
4 BLEU+case.mixed+lang.en-de+numrefs.1+smooth.exp+test.wmt14/full+tok.intl+version.1.4.3

Figure 4: LSH attention performance as a function of hashing rounds on imagenet64.

Figure 5: Left: LSH attention performance as a function of number of layers on enwik8. Right: Speed of attention evaluation as a function of input length for full- and LSH-attention.

LSH attention in Transformer. LSH attention is an approximation for full attention that, as evidenced in Figure 4, becomes more accurate as the number of hashes increases. At nrounds = 8, it already almost matches full attention. The computational cost of a model grows with the number of hashes, so this hyperparameter can be adjusted depending on the available compute budget. Additionally, as in Table 2, the number of hashes can be increased at evaluation time to produce more accurate results. On the right half of Figure 5, we plot the speed of different attention types vs. the sequence length, while holding the total number of tokens fixed. We see that while regular attention becomes slower at longer sequence lengths, LSH attention speed remains flat.

Large Reformer models. To verify that the Reformer can indeed fit large models on a single core and train fast on long sequences, we train up to 20-layer big Reformers on enwik8 and imagenet64.
As can be seen in Figure 5, these models fit into memory and train. We were not able to train Trans- former baselines in this case as they are too slow and memory-hungry, but we see clear improvement with the number of layers. A 12-layer model on enwik8 trained for 20K steps with a dropout rate of 0.1 achieves 1.19 bits/dim on the test set. We also trained a 12-layer Reformer model for longer with further tuning and improvements and we reached 1.05 bits/dim on the enwiki8 test set. 9 Published as a conference paper at ICLR 2020 # 6 CONCLUSION Reformer combines the modeling capacity of a Transformer with an architecture that can be executed efficiently on long sequences and with small memory use even for models with a large number of layers. We believe that this will help large, richly-parameterized Transformer models become more widespread and accessible. Also, the ability to handle long sequences opens the way for the use of the Reformer on many generative tasks. In addition to generating very long coherent text, the Reformer can bring the power of Transformer models to other domains like time-series forecasting, music, image and video generation. # REFERENCES Rami Al-Rfou, Dokook Choe, Noah Constant, Mandy Guo, and Llion Jones. Character-level language modeling with deeper self-attention. CoRR, abs/1808.04444, 2018. URL http: //arxiv.org/abs/1808.04444. Alexandr Andoni, Piotr Indyk, Thijs Laarhoven, Ilya P. Razenshteyn, and Ludwig Schmidt. Practical and optimal LSH for angular distance. CoRR, abs/1509.02897, 2015. URL http://arxiv. org/abs/1509.02897. Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E Hinton. Layer normalization. arXiv preprint arXiv:1607.06450, 2016. URL http://arxiv.org/abs/1607.06450. Antoine Bordes, Nicolas Usunier, Sumit Chopra, and Jason Weston. Large-scale simple question answering with memory networks. CoRR, abs/1506.02075, 2015. URL http://arxiv.org/ abs/1506.02075. Sarath Chandar, Sungjin Ahn, Hugo Larochelle, Pascal Vincent, Gerald Tesauro, and Yoshua Ben- gio. Hierarchical memory networks. arXiv preprint arXiv:1605.07427, 2016. Rewon Child, Scott Gray, Alec Radford, and Ilya Sutskever. Generating long sequences with sparse transformers. URL https://openai.com/blog/sparse-transformers, 2019. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: pre-training of deep bidirectional transformers for language understanding. CoRR, abs/1810.04805, 2018. URL http://arxiv.org/abs/1810.04805. Aidan N Gomez, Mengye Ren, Raquel Urtasun, and Roger B Grosse. The reversible residual net- work: Backpropagation without storing activations. In Advances in neural information processing systems, pp. 2214–2224, 2017. Felix Hill, Antoine Bordes, Sumit Chopra, and Jason Weston. The goldilocks principle: Reading children’s books with explicit memory representations. CoRR, abs/1511.02301, 2015. URL http://arxiv.org/abs/1511.02301. Cheng-Zhi Anna Huang, Ashish Vaswani, Jakob Uszkoreit, Noam Shazeer, Curtis Hawthorne, An- drew M Dai, Matthew D Hoffman, and Douglas Eck. Music transformer: Generating music with long-term structure. arXiv preprint arXiv:1809.04281, 2018. Guillaume Lample, Alexandre Sablayrolles, Marc’Aurelio Ranzato, Ludovic Denoyer, and Herv´e J´egou. Large memory layers with product keys. CoRR, abs/1907.05242, 2019. URL http: //arxiv.org/abs/1907.05242. Peter J. Liu, Mohammad Saleh, Etienne Pot, Ben Goodrich, Ryan Sepassi, Lukasz Kaiser, and Noam Shazeer. Generating wikipedia by summarizing long sequences. CoRR, abs/1801.10198, 2018. 
URL http://arxiv.org/abs/1801.10198. Myle Ott, Sergey Edunov, David Grangier, and Michael Auli. Scaling neural machine trans- In Proceedings of the Third Conference on Machine Translation: Research Papers, lation. pp. 1–9, Brussels, Belgium, October 2018. Association for Computational Linguistics. doi: 10.18653/v1/W18-6301. URL https://www.aclweb.org/anthology/W18-6301. 10 Published as a conference paper at ICLR 2020 Niki Parmar, Ashish Vaswani, Jakob Uszkoreit, Lukasz Kaiser, Noam Shazeer, and Alexander Ku. Image transformer. CoRR, abs/1802.05751, 2018. URL http://arxiv.org/abs/1802. 05751. Matt Post. A call for clarity in reporting BLEU scores. In Proceedings of the Third Conference on Machine Translation: Research Papers, pp. 186–191, Belgium, Brussels, October 2018. As- sociation for Computational Linguistics. URL https://www.aclweb.org/anthology/ W18-6319. Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. Language models are unsupervised multitask learners. 2019. Jack W Rae, Jonathan J Hunt, Tim Harley, Ivo Danihelka, Andrew Senior, Greg Wayne, Alex Graves, and Timothy P Lillicrap. Scaling memory-augmented neural networks with sparse reads and writes. In Advances in Neural Information Processing Systems, (NIPS), 2016. Prajit Ramachandran, Niki Parmar, Ashish Vaswani, Irwan Bello, Anselm Levskaya, and Jonathon Shlens. Stand-alone self-attention in vision models. CoRR, abs/1906.05909, 2019. URL http: //arxiv.org/abs/1906.05909. Adam Santoro, Sergey Bartunov, Matthew Botvinick, Daan Wierstra, and Timothy P. Lillicrap. One- shot learning with memory-augmented neural networks. CoRR, abs/1605.06065, 2016. URL http://arxiv.org/abs/1605.06065. Noam Shazeer and Mitchell Stern. Adafactor: Adaptive learning rates with sublinear memory cost. CoRR, abs/1804.04235, 2018. URL http://arxiv.org/abs/1804.04235. Noam Shazeer, Youlong Cheng, Niki Parmar, Dustin Tran, Ashish Vaswani, Penporn Koanantakool, Peter Hawkins, HyoukJoong Lee, Mingsheng Hong, Cliff Young, Ryan Sepassi, and Blake Hecht- man. Mesh-tensorflow: Deep learning for supercomputers. CoRR, abs/1811.02084, 2018. URL http://arxiv.org/abs/1811.02084. Nimit Sharad Sohoni, Christopher Richard Aberger, Megan Leszczynski, Jian Zhang, and Christo- pher R´e. Low-memory neural network training: A technical report. CoRR, abs/1904.10631, 2019. URL http://arxiv.org/abs/1904.10631. Sainbayar Sukhbaatar, Edouard Grave, Piotr Bojanowski, and Armand Joulin. Adaptive atten- tion span in transformers. CoRR, abs/1905.07799, 2019a. URL http://arxiv.org/abs/ 1905.07799. Sainbayar Sukhbaatar, Edouard Grave, Guillaume Lample, Herv´e J´egou, and Armand Joulin. Aug- menting self-attention with persistent memory. CoRR, abs/1907.01470, 2019b. URL http: //arxiv.org/abs/1907.01470. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. CoRR, 2017. URL http: //arxiv.org/abs/1706.03762. Jason Weston, Sumit Chopra, and Antoine Bordes. Memory networks. CoRR, abs/1410.3916, 2014. URL http://arxiv.org/abs/1410.3916. 11 Published as a conference paper at ICLR 2020 # A MULTI-ROUND LSH ATTENTION In this section we describe in more detail the multi-hash version of our LSH attention mechanism. 
We first repeat Equation (3) from the main text, which describes a general formulation of attention with sparsity:

$$o_i = \sum_{j \in \mathcal{P}_i} \exp\left(q_i \cdot k_j - m(j, \mathcal{P}_i) - z(i, \mathcal{P}_i)\right) v_j \quad \text{where } m(j, \mathcal{P}_i) = \begin{cases} \infty & \text{if } j \notin \mathcal{P}_i \\ 0 & \text{otherwise} \end{cases} \tag{3}$$

In the multi-round case, a query position i can attend to key positions $\mathcal{P}_i$ as defined in (6), which we also repeat here:

$$\mathcal{P}_i = \bigcup_{r=1}^{n_{\text{rounds}}} \mathcal{P}_i^{(r)} \quad \text{where } \mathcal{P}_i^{(r)} = \left\{ j : h^{(r)}(q_i) = h^{(r)}(q_j) \right\} \tag{6}$$

For batching purposes, attention is performed on chunks of sorted queries/keys:

$$\widetilde{\mathcal{P}}_i^{(r)} = \left\{ j : \left\lfloor \frac{s_i^{(r)}}{m} \right\rfloor - 1 \le \left\lfloor \frac{s_j^{(r)}}{m} \right\rfloor \le \left\lfloor \frac{s_i^{(r)}}{m} \right\rfloor \right\} \tag{11}$$

Combining (3) and (6) gives:

$$o_i = \sum_{j \in \mathcal{P}_i} \exp\left(q_i \cdot k_j - m(j, \mathcal{P}_i) - z(i, \mathcal{P}_i)\right) v_j \tag{12}$$

$$= \sum_{r=1}^{n_{\text{rounds}}} \exp\left(z(i, \mathcal{P}_i^{(r)}) - z(i, \mathcal{P}_i)\right) \sum_{j \in \mathcal{P}_i^{(r)}} \frac{1}{N_{i,j}} \exp\left(q_i \cdot k_j - m(j, \mathcal{P}_i^{(r)}) - z(i, \mathcal{P}_i^{(r)})\right) v_j \tag{13}$$

$$= \sum_{r=1}^{n_{\text{rounds}}} \exp\left(z(i, \mathcal{P}_i^{(r)}) - z(i, \mathcal{P}_i)\right) o_i^{(r)} \tag{14}$$

$$o_i^{(r)} = \sum_{j \in \mathcal{P}_i^{(r)}} \exp\left(q_i \cdot k_j - m^{(r)}_{i,j} - z(i, \mathcal{P}_i^{(r)})\right) v_j \tag{15}$$

$$\text{where } N_{i,j} = \left|\left\{ r' : j \in \mathcal{P}_i^{(r')} \right\}\right| \quad \text{and} \quad m^{(r)}_{i,j} = \begin{cases} \infty & \text{if } j \notin \mathcal{P}_i^{(r)} \\ 10^5 & \text{if } i = j \\ \log N_{i,j} & \text{otherwise} \end{cases} \tag{16}$$

Each round of LSH attention produces a vector $o_i^{(r)}$ that can be computed independently from the other rounds, except for the inclusion of the term $N_{i,j}$, which avoids double-counting elements when constructing the union of the sets $\mathcal{P}_i^{(r)}$. We also modify $m^{(r)}_{i,j}$ to introduce a special case for i = j. This case is added because causal masking in a standard Transformer allows position i to attend to itself, which is not desirable in a shared-QK formulation. We set the mask to a large but finite value to disallow attention-in-place, except in the situation where a token has no other valid attention targets. For example, the first token in a sequence attends only to itself, because no prior context is available.
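As a sanity check on the bookkeeping above, the following NumPy sketch is illustrative only: it works on explicit candidate sets for a single query, omits the i = j special case, and is not the batched implementation. It combines per-round outputs as in Equations (13)–(16), folding the 1/N_{i,j} correction into the scores as a log N_{i,j} penalty, and verifies the result against direct softmax attention over the union of the candidate sets.

```python
import numpy as np

def logsumexp(x):
    m = x.max()
    return m + np.log(np.exp(x - m).sum())

def multi_round_attention(q_i, K, V, round_sets):
    # round_sets[r] is the candidate set P_i^(r) for one query position i.
    union = sorted(set().union(*round_sets))
    n_ij = {j: sum(j in s for s in round_sets) for j in union}   # N_ij
    z_union = logsumexp(np.array([q_i @ K[j] for j in union]))   # z(i, P_i)

    o_i = np.zeros(V.shape[1])
    for members in map(sorted, round_sets):
        scores = np.array([q_i @ K[j] for j in members])
        z_r = logsumexp(scores)                                  # z(i, P_i^(r))
        corrected = scores - np.log([n_ij[j] for j in members])  # subtract log N_ij
        o_r = sum(np.exp(s - z_r) * V[j] for s, j in zip(corrected, members))
        o_i += np.exp(z_r - z_union) * o_r                       # reweight each round
    return o_i

rng = np.random.default_rng(0)
K, V, q = rng.normal(size=(16, 4)), rng.normal(size=(16, 4)), rng.normal(size=4)
rounds = [{1, 3, 5, 7}, {3, 4, 5, 9}]                            # overlapping candidate sets

union = sorted(set().union(*rounds))
w = np.exp(np.array([q @ K[j] for j in union]))
w /= w.sum()
direct = sum(wi * V[j] for wi, j in zip(w, union))
assert np.allclose(multi_round_attention(q, K, V, rounds), direct)
```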
{ "id": "1607.06450" }
2001.00973
Closing the AI Accountability Gap: Defining an End-to-End Framework for Internal Algorithmic Auditing
Rising concern for the societal implications of artificial intelligence systems has inspired a wave of academic and journalistic literature in which deployed systems are audited for harm by investigators from outside the organizations deploying the algorithms. However, it remains challenging for practitioners to identify the harmful repercussions of their own systems prior to deployment, and, once deployed, emergent issues can become difficult or impossible to trace back to their source. In this paper, we introduce a framework for algorithmic auditing that supports artificial intelligence system development end-to-end, to be applied throughout the internal organization development lifecycle. Each stage of the audit yields a set of documents that together form an overall audit report, drawing on an organization's values or principles to assess the fit of decisions made throughout the process. The proposed auditing framework is intended to contribute to closing the accountability gap in the development and deployment of large-scale artificial intelligence systems by embedding a robust process to ensure audit integrity.
http://arxiv.org/pdf/2001.00973
Inioluwa Deborah Raji, Andrew Smart, Rebecca N. White, Margaret Mitchell, Timnit Gebru, Ben Hutchinson, Jamila Smith-Loud, Daniel Theron, Parker Barnes
cs.CY
Accepted to ACM FAT* (Fariness, Accountability and Transparency) conference 2020. Full workable templates for the documents of the SMACTR framework presented in the paper can be found here https://drive.google.com/drive/folders/1GWlq8qGZXb2lNHxWBuo2wl-rlHsjNPM0?usp=sharing
null
cs.CY
20200103
20200103
0 2 0 2 n a J 3 ] Y C . s c [ 1 v 3 7 9 0 0 . 1 0 0 2 : v i X r a # Closing the AI Accountability Gap: Defining an End-to-End Framework for Internal Algorithmic Auditing Inioluwa Deborah Raji∗ Partnership on AI [email protected] Andrew Smart∗ Google [email protected] Rebecca N. White Google Margaret Mitchell Google Timnit Gebru Google Ben Hutchinson Google Jamila Smith-Loud Google Daniel Theron Google Parker Barnes Google ABSTRACT Rising concern for the societal implications of artificial intelligence systems has inspired a wave of academic and journalistic literature in which deployed systems are audited for harm by investigators from outside the organizations deploying the algorithms. However, it remains challenging for practitioners to identify the harmful repercussions of their own systems prior to deployment, and, once deployed, emergent issues can become difficult or impossible to trace back to their source. ACM Reference Format: Inioluwa Deborah Raji, Andrew Smart, Rebecca N. White, Margaret Mitchell, Timnit Gebru, Ben Hutchinson, Jamila Smith-Loud, Daniel Theron, and Parker Barnes. 2020. Closing the AI Accountability Gap: Defining an End-to-End Framework for Internal Algorithmic Auditing. In Conference on Fairness, Accountability, and Transparency (FAT* ’20), January 27–30, 2020, Barcelona, Spain. ACM, New York, NY, USA, 12 pages. https://doi.org/10.1145/3351095. 3372873 In this paper, we introduce a framework for algorithmic auditing that supports artificial intelligence system development end-to-end, to be applied throughout the internal organization development life- cycle. Each stage of the audit yields a set of documents that together form an overall audit report, drawing on an organization’s values or principles to assess the fit of decisions made throughout the pro- cess. The proposed auditing framework is intended to contribute to closing the accountability gap in the development and deployment of large-scale artificial intelligence systems by embedding a robust process to ensure audit integrity. CCS CONCEPTS • Social and professional topics → System management; Tech- nology audits; • Software and its engineering → Software de- velopment process management. # KEYWORDS Algorithmic audits, machine learning, accountability, responsible innovation ∗Both authors contributed equally to this paper. This work was done by Inioluwa Deborah Raji as a fellow at Partnership on AI (PAI), of which Google, Inc. is a partner. This should not be interpreted as reflecting the official position of PAI as a whole, or any of its partner organizations. Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the owner/author(s). FAT* ’20, January 27–30, 2020, Barcelona, Spain © 2020 Copyright held by the owner/author(s). ACM ISBN 978-1-4503-6936-7/20/02. https://doi.org/10.1145/3351095.3372873 1 INTRODUCTION With the increased access to artificial intelligence (AI) development tools and Internet-sourced datasets, corporations, nonprofits and governments are deploying AI systems at an unprecedented pace, often in massive-scale production systems impacting millions if not billions of users [1]. 
In the midst of this widespread deployment, however, come valid concerns about the effectiveness of these automated systems for the full scope of users, and especially a critique of systems that have the propensity to replicate, reinforce or amplify harmful existing social biases [8, 37, 62]. External audits are designed to identify these risks from outside the system and serve as accountability measures for these deployed models. However, such audits tend to be conducted after model deployment, when the system has already negatively impacted users [26, 51].

In this paper, we present internal algorithmic audits as a mechanism to check that the engineering processes involved in AI system creation and deployment meet declared ethical expectations and standards, such as organizational AI principles. The audit process is necessarily boring, slow, meticulous and methodical—antithetical to the typical rapid development pace for AI technology. However, it is critical to slow down as algorithms continue to be deployed in increasingly high-stakes domains. By considering historical examples across industries, we make the case that such audits can be leveraged to anticipate potential negative consequences before they occur, in addition to providing decision support to design mitigations, more clearly defining and monitoring potentially adverse outcomes, and anticipating harmful feedback loops and system-level risks [20]. Executed by a dedicated team of organization employees, internal audits operate within the product development context and can inform the ultimate decision to abandon the development of AI technology when the risks outweigh the benefits (see Figure 1).

Inspired by the practices and artifacts of several disciplines, we go further to develop SMACTR, a defined internal audit framework meant to guide practical implementations. Our framework strives to establish interdisciplinarity as a default in audit and engineering processes while providing the much needed structure to support the conscious development of AI systems.

# 2 GOVERNANCE, ACCOUNTABILITY AND AUDITS

We use accountability to mean the state of being responsible or answerable for a system, its behavior and its potential impacts [38]. Although algorithms themselves cannot be held accountable as they are not moral or legal agents [7], the organizations designing and deploying algorithms can be held accountable through governance structures. Proposed standard ISO 37000 defines this structure as "the system by which the whole organization is directed, controlled and held accountable to achieve its core purpose over the long term."1 If the responsible development of artificial intelligence is a core purpose of organizations creating AI, then a governance system by which the whole organization is held accountable should be established.

1 https://committee.iso.org/sites/tc309/home/projects/ongoing/ongoing-1.html

Figure 1: High-level overview of the context of an internal algorithmic audit. The audit is conducted during product development and prior to launch. The audit team leads the product team, management and other stakeholders in contributing to the audit. Policies and principles, including internal and external ethical expectations, also feed into the audit to set the standard for performance.
In environmental studies, Lynch and Veland [45] introduced the concept of urgent governance, distinguishing between audit- ing for system reliability vs societal harm. For example, a power plant can be consistently productive while causing harm to the environment through pollution [42]. Similarly, an AI system can be found technically reliable and functional through a traditional engineering quality assurance pipeline without meeting declared ethical expectations. A separate governance structure is necessary for the evaluation of these systems for ethical compliance. This evaluation can be embedded in the established quality assurance workflow but serves a different purpose, evaluating and optimizing for a different goal centered on social benefits and values rather than typical performance metrics such as accuracy or profit [39]. Although concerns about reliability are related, and although prac- tices for testing production AI systems are established for industry practitioners [4], issues involving social impact, downstream ef- fects in critical domains, and ethics and fairness concerns are not typically covered by concepts such as technical debt and reliability engineering. 2.1 What is an audit? Audits are tools for interrogating complex processes, often to deter- mine whether they comply with company policy, industry standards or regulations [43]. The IEEE standard for software development defines an audit as “an independent evaluation of conformance of software products and processes to applicable regulations, stan- dards, guidelines, plans, specifications, and procedures” [32]. Build- ing from methods of external auditing in investigative journalism and research [17, 62, 65], algorithmic auditing has started to become similar in spirit to the well-established practice of bug bounties, where external hackers are paid for finding vulnerabilities and bugs in released software [46]. These audits, modeled after intervention strategies in information security and finance [62], have signifi- cantly increased public awareness of algorithmic accountability. An external audit of automated facial analysis systems exposed high disparities in error rates among darker-skinned women and lighter-skinned men [8], showing how structural racism and sexism can be encoded and reinforced through AI systems. [8] reveals interaction failures, in which the production and deployment of an AI system interacts with unjust social structures to contribute to biased predictions, as Safiya Noble has described [54]. Such findings demonstrate the need for companies to understand the social and power dynamics of their deployed systems’ environments, and record such insights to manage their products’ impact. # 2.2 AI Principles as Customized Ethical Standards According to Mittelstadt [49], at least 63 public-private initiatives have produced statements describing high-level principles, values and other tenets to guide the ethical development, deployment and governance of AI. Important values such as ensuring AI tech- nologies are subject to human direction and control, and avoiding the creation or reinforcement of unfair bias, have been included in many organizations’ ethical charters. However, the AI industry lacks proven methods to translate principles into practice [49], and AI principles have been criticized for being vague and providing Closing the AI Accountability Gap: Defining an End-to-End Framework for Internal Algorithmic Auditing little to no means of accountability [27, 82]. 
Nevertheless, such principles are becoming common methods to define the ethical priorities of an organization and thus the operational goals for which to aim [34, 83]. Thus, in the absence of more formalized and universal standards, they can be used as a North Star to guide the evaluation of the development lifecycle, and internal audits can investigate alignment with declared AI principles prior to model deployment. We propose a framing of risk analyses centered on the failure to achieve AI principle objectives, outlining an audit practice that can begin translating ethical principles into practice. 2.3 Audit Integrity and Procedural Justice Audit results are at times approached with skepticism since they are reliant on and vulnerable to human judgment. To establish the in- tegrity of the audit itself as an independently valid result, the audit must adhere to the proper execution of an established audit pro- cess. This is a repeatedly observed phenomenon in tax compliance auditing, where several international surveys of tax compliance demonstrate that a fixed and vetted tax audit methodology is one of the most effective strategies to convince companies to respect audit results and pay their full taxes [22, 53]. Procedural justice implies the legitimacy of an outcome due to the admission of a fair and thorough process. Establishing proce- dural justice to increase compliance is thus a motivating factor for establishing common and robust frameworks through which independent audits can demonstrate adherence to standards. In ad- dition, audit integrity is best established when auditors themselves live up to an ethical standard, vetted by adherence to an expected code of conduct or norm in how the audit is to be conducted. In finance, for example, it became clear that any sense of dishonesty or non-transparency in audit methodology would lead audit targets to dismiss rather than act on results [66]. 2.4 The Internal Audit External auditing, in which companies are accountable to a third party [62], are fundamentally limited by lack of access to internal processes at the audited organizations. Although external audits conducted by credible experts are less affected by organization- internal considerations, external auditors can only access model outputs, for example by using an API [65]. Auditors do not have access to intermediate models or training data, which are often protected as trade secrets [9]. Internal auditors’ direct access to sys- tems can thus help extend traditional external auditing paradigms by incorporating additional information typically unavailable for external evaluations to reveal previously unidentifiable risks. The goals of an internal audit are similar to quality assurance, with the objective to enrich, update or validate the risk analysis for product deployment. Internal audits aim to evaluate how well the product candidate, once in real-world operation, will fit the expected system behaviour encoded in standards. A modification in objective from a post-deployment audit to pre-deployment audit applied throughout the development process enables proactive ethical intervention methods, rather than simply informing reactive measures only implementable after deployment, as is the case with a purely external approach. 
Because there is an increased level of system access in an internal audit, identified FAT* ’20, January 27–30, 2020, Barcelona, Spain gaps in performance or processes can be mapped to sociotechnical considerations that should be addressed through joint efforts with product teams. As the audit results can lead to ambiguous conclu- sions, it is critical to identify key stakeholders and decision makers who can drive appropriate responses to audit outcomes. Additionally, with an internal audit, because auditors are em- ployees of the organization and communicate their findings pri- marily to an internal audience, there is opportunity to leverage these audit outcomes for recommendations of structural organiza- tional changes needed to make the entire engineering development process auditable and aligned with ethical standards. Ultimately, internal audits complement external accountability, generating ar- tifacts or transparent information [70] that third parties can use for external auditing, or even end-user communication. Internal audits can thus enable review and scrutiny from additional stakeholders, by enforcing transparency through stricter reporting requirements. # 3 LESSONS FROM AUDITING PRACTICES IN OTHER INDUSTRIES Improving the governance of artificial intelligence development is intended to reduce the risks posed by new technology. While not without faults, safety-critical and regulated industries such as aerospace and medicine have long traditions of auditable processes and design controls that have dramatically improved safety [77, 81]. 3.1 Aerospace Globally, there is one commercial airline accident per two million flights [63]. This remarkable safety record is the result of a joint and concerted effort over many years by aircraft and engine manufac- turers, airlines, governments, regulatory bodies, and other industry stakeholders [63]. As modern avionic systems have increased in size and complexity (for example, the Boeing 787 software is estimated at 13 million lines of code [35]), the standard 1-in-1,000,000,000 per use hour maximum failure probability for critical aerospace systems remains an underappreciated engineering marvel [19]. However, as the recent Boeing 737 MAX accidents indicate, safety is never finished, and the qualitative impact of failures cannot be ignored—even one accident can impact the lives of many and is rightfully acknowledged as a catastrophic tragedy. Complex sys- tems tend to drift toward unsafe conditions unless constant vigi- lance is maintained [42]. It is the sum of the tiny probabilities of individual events that matters in complex systems—if this grows without bound, the probability of catastrophe goes to one. The Borel-Cantelli Lemmas are formalizations of this statistical phenom- enon [13], which means that we can never be satisfied with safety standards. Additionally, standards can be compromised if compet- ing business interests take precedence. Because the non-zero risk of failure grows over time, without continuous active measures being developed to mitigate risk, disaster becomes inevitable [29]. 3.1.1 Design checklists. Checklists are simple tools for assisting designers in having a more informed view of important questions, edge cases and failures [30]. Checklists are widely used in aerospace for their proven ability to improve safety and designs. 
There are several cautions about using checklists during the development of complex software, such as the risk of blind application, the broader FAT* ’20, January 27–30, 2020, Barcelona, Spain context and nuanced interrelated concerns are not considered. How- ever, a checklist can be beneficial. It is good practice to avoid yes/no questions to reduce the risk that the checklist becomes a box-ticking activity, for example by asking designers and engineers to describe their processes for assessing ethical risk. Checklist use should also be related to real-world failures and higher-level system hazards. 3.1.2 Traceability. Another key concept from aerospace and safety- critical software engineering is traceability—which is concerned with the relationships between product requirements, their sources and system design. This practice is familiar to the software industry in requirements engineering [2]. However, in AI research, it can often be difficult to trace the provenance of large datasets or to inter- pret the meaning of model weights—to say nothing of the challenge of understanding how these might relate to system requirements. Additionally, as the complexity of sociotechnical systems is rapidly increasing, and as the speed and complexity of large-scale artificial intelligence systems increase, new approaches are necessary to understand risk [42]. Failure Modes and Effects Analysis. Finally, a standard tool in 3.1.3 safety engineering is a Failure Modes and Effects Analysis (FMEA), methodical and systematic risk management approach that exam- ines a proposed design or technology for foreseeable failures [72]. The main purpose of a FMEA is to define, identify and eliminate potential failures or problems in different products, designs, sys- tems and services. Prior to conducting a FMEA, known issues with a proposed technology should be thoroughly mapped through a literature review and by collecting and documenting the experi- ences of the product designers, engineers and managers. Further, the risk exercise is based on known issues with relevant datasets and models, information that can be gathered from interviews and from extant technical documentation. FMEAs can help designers improve or upgrade their products to reduce risk of failure. They can also help decision makers formulate corresponding preventive measures or improve reactive strategies in the event of post-launch failure. FMEAs are widely used in many fields including aerospace, chemical engineering, design, mechani- cal engineering and medical devices. To our knowledge, however, the FMEA method has not been applied to examine ethical risks in production-scale artificial intelligence models or products. 3.2 Medical devices Internal and external quality assurance audits are a daily occurrence in the pharmaceutical and medical device industry. Audit document trails are as important as the drug products and devices themselves. The history of quality assurance audits in medical devices dates from several medical disasters in which devices, such as infusion pumps and autoinjectors, failed or were used improperly [80]. 3.2.1 Design Controls. For medical devices, the stages of prod- uct development are strictly defined. In fact, federal law (Code of Federal Regulations Title 21) mandates that medical-device mak- ers establish and maintain “design control” procedures to ensure that design requirements are met and designs and development processes are auditable. 
Practically speaking, design controls are a documented method of ensuring that the end product matches the intended use, and that potential risks from using the technology Raji & Smart, et al. have been anticipated and mitigated [77]. The purpose is to ensure that anticipated risks related to the use of technology are driven down to the lowest degree that is reasonably practicable. Intended Use. Medical-device makers must maintain proce- 3.2.2 dures to ensure that design requirements meet the “intended use” of the device. The intended use of a “device” (or, increasingly in medicine, an algorithm—see [60] for more) determines the level of design control required: for example, a tongue depressor (a simple piece of wood) is the lowest class of risk (Class I), while a deep brain implant would be the highest (Class III). The intended use of a tongue depressor could be “to displace the tongue to facilitate examination of the surrounding organs and tissues”, differentiating a tongue depressor from a Popsicle stick. This may be important when considering an algorithm that can be used to identify cats or to identify tumors; depending on its intended use, the same algo- rithm might have drastically different risk profiles, and additional risks arise from unintended uses of the technology. 3.2.3 Design History File. For products classified as medical de- vices, at every stage of the development process, device makers must document the design input, output, review, verification, vali- dation, transfer and changes—the design control process (section 3.2.1). Evidence that medical device designers and manufacturers have followed design controls must be kept in a design history file (DHF), which must be an accurate representation and docu- mentation of the product and its development process. Included in the DHF is an extensive risk assessment and hazard analysis, which must be continuously updated as new risks are discovered. Companies also proactively maintain “post-market surveillance” for any issues that may arise with safety of a medical device. Structural Vulnerability. In medicine there is a deep acknowl- 3.2.4 edgement of socially determinant factors in healthcare access and effectiveness, and an awareness of the social biases influencing the dynamic of prescriptions and treatments. This widespread ac- knowledgement led to the framework of operationalizing structural vulnerability in healthcare contexts, and effectively the design of an assessment tool to record the anticipated social conditions sur- rounding a particular remedy or medical recommendation [61]. Artificial intelligence models are equally subject to social influence and social impact, and undergoing such assessments on more holis- tic and population- or environment-based considerations is relevant to algorithmic auditing. 3.3 Finance As automated accounting systems started to appear in the 1950s, corporate auditors continued to rely on manual procedures to audit “around the computer”. In the 1970s, the Equity Funding Corpora- tion scandal and the passage of the Foreign Corrupt Practices Act spurred companies to more thoroughly integrate internal controls throughout their accounting systems. This heightened the need to audit these systems directly. The 2002 Sarbanes-Oxley Act intro- duced sweeping changes to the profession in demanding greater focus on financial reporting and fraud detection [10]. 
Financial auditing had to play catch-up as the complexity and automation of financial business practices became too unwieldy to manage manually. Stakeholders in large companies and government Closing the AI Accountability Gap: Defining an End-to-End Framework for Internal Algorithmic Auditing regulators desired a way to hold companies accountable. Concerns among regulators and shareholders that the managers in large financial firms would squander profits from newly created financial instruments prompted the development of financial audits [74]. Additionally, as financial transactions and markets became more automated, abstract and opaque, threats to social and economic val- ues were answered increasingly with audits. But financial auditing lagged behind the process of technology-enabled financialization of markets and firms. 3.3.1 Audit Infrastructure. In general, internal financial audits seek assurance that the organization has a formal governance process that is operating as intended: values and goals are established and communicated, the accomplishment of goals is monitored, account- ability is ensured and values are preserved. Further, internal audits seek to find out whether significant risks within the organization are being managed and controlled to an acceptable level [71]. Internal financial auditors typically have unfettered access to necessary information, people, records and outsourced operations across the organization. IIA Performance Standard 2300, Performing the Engagement [55], states that internal auditors should identify, analyze, evaluate and record sufficient information to achieve the audit objectives. The head of internal audit determines how internal auditors carry out their work and the level of evidence required to support their conclusions. 3.4 Discussion and Challenges The lessons from other industries above are a useful guide toward building internal accountability to society as a stakeholder. Yet, there are many novel and unique aspects of artificial intelligence development that present urgent research challenges to overcome. Current software development practice in general, and arti- ficial intelligence development in particular, does not typically follow the waterfall or verification-and-validation approach [16]. These approaches are still used, in combination with agile methods, in the above-mentioned industries because they are much more documentation-oriented, auditable and requirements-driven. Agile artificial intelligence development is much faster and iterative, and thus presents a challenge to auditability. However, applying agile methodologies to internal audits themselves is a current topic of research in the internal audit profession.2 Most internal audit functions outside of heavily regulated indus- tries tend to take a risk-based approach. They work with product teams to ask "what could go wrong" at each step of a process and use that to build a risk register [59]. This allows risks to rise to the surface in a way that is informed by the people who know these processes and systems the best. Internal audits can also lever- age relevant experts from within the company to facilitate such discussions and provide additional insight on potential risks [3]. Large-scale production AI systems are extraordinarily complex, and a critical line of future research relates to addressing the inter- action of highly complex coupled sociotechnical systems. 
Moreover, there is a dynamic complex interaction between users as sources of data, data collection, and model training and updating. Additionally, governance processes based solely on risk have been criticized for 2https://deloitte.wsj.com/riskandcompliance/2018/08/06/mind-over-matter- implementing-agile-internal-audit/ FAT* ’20, January 27–30, 2020, Barcelona, Spain being unable to anticipate the most profound impacts from techno- logical innovation, such as the financial crisis in 2008, in which big data and algorithms played a large role [52, 54, 57]. With artificial intelligence systems it can be difficult to trace model output back to requirements because these may not be ex- plicitly documented, and issues may only become apparent once systems are released. However, from an ethical and moral perspec- tive it is incumbent on producers of artificial intelligence systems to anticipate ethics-related failures before launch. However, as [58] and [31] point out, the design, prototyping and maintenance of AI systems raises many unique challenges not commonly faced with other kinds of intelligent systems or computing systems more broadly. For example, data entanglement results from the fact that artificial intelligence is a tool that mixes data sources together. As Scully et al. point out, artificial intelligence models create entangle- ment and make the isolation of improvements effectively impossible [67], which they call Change Anything Change Everything. We sug- gest that by having explicit documentation about the purpose, data, and model space, potential hazards could be identified earlier in the development process. Selbst and Barocas argue that “one must seek explanations of the process behind a model‘s development, not just explanations of the model itself” [68]. As a relatively young community focused on fairness, accountability, and transparency in AI, we have some indication of the system culture requirements needed to normalize, for example, an adequately thorough documentation procedure and guidelines [24, 48]. Still, we lack the formalization of a standard model development template or practice, or process guidelines for when and in which contexts it is appropriate to implement certain recommendations. In these cases, internal auditors can work with engineering teams to construct the missing documentation to assess practices against the scope of the audit. Improving documentation can then be a remediation for future work. Also, as AI is at times considered a “general purpose technology” with multiple and dual uses [78], the lack of reliable standardization poses significant challenges to governance efforts. This challenge is compounded by increasing customization and variability of what an AI product development lifecycle looks like depending on the anticipated context of deployment or industry. We thus combine learnings from prior practice in adjacent in- dustries while recognizing the uniqueness of the commercial AI industry to identify key opportunities for internal auditing in our specific context. We do so in a way that is appropriate to the re- quirements of an AI system. # 4 SMACTR: AN INTERNAL AUDIT FRAMEWORK We now outline the components of an initial internal audit frame- work, which can be framed as encompassing five distinct stages— Scoping, Mapping, Artifact Collection, Testing and Reflection (SMACTR)— all of which have their own set of documentation requirements and account for a different level of the analysis of a system. 
Figure 2 illustrates the full set of artifacts recommended for each stage. To illustrate the utility of this framework, we contextualize our descriptions with the hypothetical example of Company X Inc., FAT* ’20, January 27–30, 2020, Barcelona, Spain Raji & Smart, et al. / Scoping | Mapping Artifact Collection | / Testing | / Reflection | | Post-Audit / Social Impact Assessment Failure modes and effects analysis (FMEA) Define Audit Scope Stakeholder Buy-In Audit Checklist Review Documentation Remediation Plan Go / No-Go Decisions Lsvepacd ee Conduct Interviews Model Cards Adversarial Testing Design History File (ADHF) | | Design Mitigations Al Principles Stakeholder Map Datasheets Ethical Risk Analysis Chart Track Implementation Use Case Ethics Review Interview Transcripts Summary Report Figure 2: Overview of Internal Audit Framework. Gray indicates a process, and the colored sections represent documents. Documents in orange are produced by the auditors, blue documents are produced by the engineering and product teams and green outputs are jointly developed. a large multinational software engineering consulting firm, spe- cializing in developing custom AI solutions for a diverse range of clients. We imagine this company has designated five AI princi- ples, paraphrased from the most commonly identified AI principles in a current online English survey [34]–"Transparency", "Justice, Fariness & Non-Discrimination", "Safety & Non-Maleficence", "Re- sponsibility & Accountability" and "Privacy". We also assume that the corporate structure of Company X is typical of any technical consultancy, and design our stakeholder map by this assumption. Company X has decided to pilot the SMACTR internal audit framework to fulfill a corporate mandate towards responsible in- novation practice, accommodate external accountability and op- erationalize internal consistency with respect to its identified AI principles. The fictional company thus pilots the audit framework on two hypothetical client projects. The first (hypothetical) client wishes to develop a child abuse screening tool similar to that of the real cases extensively studied and reported on [11, 14, 15, 21, 25, 36]. This complex case inter- sects heavily with applications in high-risk scenarios with dire consequences. This scenario demonstrates how, for algorithms in- terfacing with high-risk contexts, a structured framework can allow for the careful consideration of all the possibilities and risks with taking on the project, and the extent of its understood social impact. The second invented client is Happy-Go-Lucky, Inc., an imag- ined photo service company looking for a smile detection algorithm to automatically trigger the cameras in their installed physical photo booths. In this scenario, the worst case is a lack of customer satisfaction—the stakes are low and the situation seems relatively straightforward. This scenario demonstrates how in even seem- ingly simple and benign cases, ethical consideration of system deployment can reveal underlying issues to be addressed prior to deployment, especially when we contextualize the model within the setting of the product and deployment environment. a design history file and the summary report. Workable templates can also be accessed as an online resource here. 
4.1 The Governance Process To design our audit procedure, we suggest complementing formal risk assessment methodologies with ideas from responsible innova- tion, which stresses four key dimensions: anticipation, reflexivity, inclusion and responsiveness [73], as well as system-theoretic con- cepts that help grapple with increasing complexity and coupling of artificial intelligence systems with the external world [42]. Risk- based assessments can be limited in their ability to capture social and ethical stakes, and they should be complemented by anticipa- tory questions such as, “what if...?”. The aim is to increase ethical foresight through systematic thinking about the larger sociotechni- cal system in which a product will be deployed [50]. There are also intersections between this framework and just effective product development theory [5], as many of the components of audit de- sign refocus the product development process to prioritize the user and their ultimate well-being, resulting in a more effective product performance outcome. At a minimum, the internal audit process should enable critical reflections on the potential impact of a system, serving as internal education and training on ethical awareness in addition to leav- ing what we refer to as a “transparency trail” of documentation at each step of the development cycle (see Figure 2). To shift the pro- cess into an actionable mechanism for accountability, we present a validated and transparently outlined procedure that auditors can commit to. The thoroughness of our described process will hope- fully engage the trust of audit targets to act on and acknowledge post-audit recommendations for engineering practices in alignment with prescribed AI principles. An end-to-end worked example of the audit framework is avail- able as supplementary material to this paper for the Happy-Go- Lucky, Inc. client case. This includes demonstrative templates of all recommended documentation, with the exception of specific process files such as any experimental results, interview transcripts, This process primarily addresses how to conduct internal audits, providing guidance for those that have already deemed an audit necessary but would like to further define the scope and execution details. Though not covered here, an equally important process is determining what systems to audit and why. Each industry has a way to judge what requires a full audit, but that process is discre- tionary and dependent on a range of contextual factors pertinent to the industry, the organization, audit team resourcing, and the case Closing the AI Accountability Gap: Defining an End-to-End Framework for Internal Algorithmic Auditing at hand. Risk prioritization and the necessary variance in scrutiny is a separately interesting and rich research topic on its own. The process outlined below can be applied in full or in a lighter-weight formulation, depending on the level of assessment desired. 4.2 The Scoping Stage For both clients, a product or request document is provided to spec- ify the requirements and expectations of the product or feature. The goal of the scoping stage is to clarify the objective of the audit by reviewing the motivations and intended impact of the inves- tigated system, and confirming the principles and values meant to guide product development. This is the stage in which the risk analysis begins by mapping out intended use cases and identify- ing analogous deployments either within the organization or from competitors or adjacent industries. 
The goal is to anticipate areas to investigate as potential sources of harm and social impact. At this stage, interaction with the system should be minimal. In the case of the smile-triggered photo booth, a smile detection model is required, providing a simple product with a narrow scope of considerations, as the potential for harm does not go much beyond inconvenience or customer exclusion and dissatisfaction. For the child abuse detection product, there are many more approaches to solving the issue and many more options for how the model interacts with the broader system. The use case itself involves many ethical considerations, as an ineffective model may result in serious consequences like death or family separation.

The key artifacts developed by the auditors from this stage include an ethical review of the system use case and a social impact assessment. Pre-requisite documents from the product and engineering team should be a declaration or confirmation statement of ethical objectives, standards and AI principles. The product team should also provide a Product Requirements Document (PRD), or project proposal from the initial planning of the audited product.

4.2.1 Artifact: Ethical Review of System Use Case. When a potential AI system is in the development pipeline, it should be reviewed with a series of questions that first and foremost check to see, at a high level, whether the technology aligns with a set of ethical values or principles. This can take the form of an ethical review that considers the technology from a responsible innovation perspective by asking who is likely to be impacted and how.

Importantly, we stress standpoint diversity in this process. Algorithm development implicitly encodes developer assumptions that they may not be aware of, including ethical and political values. Thus it is not always possible for individual technology workers to identify or assess their own biases or faulty assumptions [33]. For this reason, a critical range of viewpoints is included in the review process. The essential inclusion of independent domain experts and marginalized groups in the ethical review process "has the potential to lead to more rigorous critical reflection because their experiences will often be precisely those that are most needed in identifying problematic background assumptions and revealing limitations with research questions, models, or methodologies" [33]. Another method to elicit implicit biases or motivated cognition [40] is to ask people to reflect on their preliminary assessment and then ask whether they might have reason to regret the action later on. This can shed light on how our position in society biases our assumptions and ways of knowing [18].

An internal ethics review board that includes a diversity of voices should review proposed projects and document its views. Internal ethics review boards are common in biomedical research, and the purpose of these boards is to ensure that the rights, safety, and well-being of all human subjects involved in medical research are protected [56]. Similarly, the purpose of an ethics review board for AI systems includes safeguarding the human rights, safety, and well-being of those potentially impacted.

4.2.2 Artifact: Social Impact Assessment. A social impact assessment should inform the ethical review.
Social impact assessments are commonly defined as a method to analyze and mitigate the unintended social consequences, both positive and negative, that occur when a new development, program, or policy engages with human populations and communities [79]. In it, we describe how the use of an artificial intelligence system might change people's ways of life, their culture, their community, their political systems, their environment, their health and well-being, their personal and property rights, and their experiences (positive or negative) [79].

The social impact assessment includes two primary steps: an assessment of the severity of the risks, and an identification of the relevant social, economic, and cultural impacts and harms that an artificial intelligence system applied in context may create. The severity of risk captures the degree to which the specific context of the use case may amplify potential harms. The severity assessment proceeds from the analysis of impacts and harms to give a sense of the relative severity of the harms and impacts depending on the sensitivity, constraints, and context of the use case.

4.3 The Mapping Stage
The mapping stage is not a step in which testing is actively done, but rather a review of what is already in place and of the perspectives involved in the audited system. This is also the time to map internal stakeholders, identify key collaborators for the execution of the audit, and orchestrate the appropriate stakeholder buy-in required for execution. At this stage, the FMEA (Section 3.1.3) should begin and risks should be prioritized for later testing.

As Company X is a consultancy, this stage mainly requires identifying the stakeholders across product and engineering teams anchored to this particular client project, and recording the nature of their involvement and contribution. This enables an internal record of individual accountability with respect to participation towards the final outcome, and enables the tracing of relevant contacts for future inquiry.

For the child abuse detection algorithm, the initial identification of failure modes reveals the high stakes of the application, and immediate threats to the "Safety & Non-Maleficence" principle. False positives overwhelm staff and may lead to the separation of families that could have recovered. False negatives may result in a dead or injured child that could have been rescued. For the smile detector, failures disproportionately impact those with alternative emotional expressions, such as people with autism, different cultural norms on the formality of smiling, or different expectations for the photograph, who are then excluded from the product by design.

The key artifacts from this stage include a stakeholder map and collaborator contact list, a system map of the product development lifecycle, and the engineering system overview, especially in cases where multiple models inform the end product. Additionally, this stage includes a design history file review of all existing documentation of the development process or historical artifacts on past versions of the product. Finally, it includes a report or interview transcripts on key findings from internal ethnographic fieldwork involving the stakeholders and engineers.

4.3.1 Artifact: Stakeholder Map. Who was involved in the system audit, and who collaborated in its execution, should be outlined.
Clarifying participant dynamics ensures a more transparent representation of the provided information, giving further context to the intended interpretation of the final audit report.

4.3.2 Artifact: Ethnographic Field Study. As Leveson points out, bottom-up decentralized decision making can lead to failures in complex sociotechnical systems [42]. Each local decision may be correct in the limited context in which it was made, but can lead to problems when these decisions and organizational behaviors interact. With modern large-scale artificial intelligence projects and API development, it can be difficult to gain a shared understanding at the right level of system description to understand how local decisions, such as the choice of dataset or model architecture, will impact final system behavior.

Therefore, ethnography-inspired fieldwork methodology based on how audits are conducted in other industries, such as finance [74] and healthcare [64], is useful to get a deeper and qualitative understanding of the engineering and product development process. As in internal financial auditing, access to key people in the organization is important. This access involves semi-structured interviews with a range of individuals close to the development process and documentation gathering to gain an understanding of possible gaps that need to be examined more closely.

Traditional metrics for artificial intelligence like loss may conceal fairness concerns, social impact risks or abstraction errors [69]. A key challenge is to assess how the numerical metrics specified in the design of an artificial intelligence system reflect or conform with these values. Metrics and measurement are important parts of the auditing process, but should not become aims and ends in themselves when weighing whether an algorithmic system under audit is ethically acceptable for release. Taking metrics measured in isolation risks recapitulating the abstraction error that [69] point out: "To treat fairness and justice as terms that have meaningful application to technology separate from a social context is therefore to make a category error, or as we posit here, an abstraction error." What we consider data is already an interpretation, highly subjective and contested [23]. Metrics must be understood in relation to the engineering context in which they were developed and the social context into which they will be deployed. During the interviews, auditors should capture and pay attention to what falls outside the measurements and metrics, and render explicit the assumptions and values the metrics apprehend [75]. For example, the decision about whether to prioritize the false positive rate over the false negative rate (precision/recall) is a question about values and cannot be answered without stating the values of the organization, team or even engineer within the given development context.

4.4 The Artifact Collection Stage
In this stage, we identify and collect all the required documentation from the product development process, in order to prioritize opportunities for testing. Note that the collection of these artifacts advances adherence to the declared AI principles of the organization on "Responsibility & Accountability" and "Transparency".
Often this implies a record of data and model dynamics, though application-based systems can include other product development artifacts such as design documents and reviews, in addition to systems architecture diagrams and other implementation planning documents and retrospectives. At times documentation can be distributed across different teams and stakeholders, or is missing altogether. In certain cases, the auditor is in a position to enforce retroactive documentation requirements on the product team, or craft documents themselves.

The model card for the smile detection model is the template model card from the original paper [48]. A hypothetical datasheet for this system is filled out using studies on the CelebA dataset, with which the smile detector is built [44, 47]. In the model card, we identify potential for misuse if smiling is confused for positive affect. From the datasheet for the CelebA dataset, we see that although the provided binary gender labels seem balanced for this dataset (58.1% female, 42% male), other demographic details are quite skewed (77.8% aged 0-45, 22.1% aged over 46, and 14.2% lighter-skinned, 85.8% darker-skinned) [47].

The key artifact from auditors during this stage is the audit checklist, one method of verifying that all documentation pre-requisites are provided in order to commence the audit. Those pre-requisites can include model and data transparency documentation.

4.4.1 Artifact: Design Checklist. This checklist is a method of taking inventory of all the expected documentation to have been generated from the product development cycle. It ensures that the full scope of expected product processes was followed and that the corresponding documentation required before the audit review can begin is complete. This is also a procedural evaluation of the development process for the system, to ensure that appropriate actions were pursued throughout system development ahead of the evaluation of the final system outcome.

4.4.2 Artifacts: Datasheets and Model Cards. Two recent standards can be leveraged to create auditable documentation: model cards and datasheets [24, 48]. Both model cards and datasheets are important tools toward making algorithmic development and the algorithms themselves more auditable, with the aim of anticipating risks and harms of using artificial intelligence systems. Ideally, these artifacts should be developed and/or collected by product stakeholders during the course of system development.

To clarify the intended use cases of artificial intelligence models and minimize their usage in contexts for which they are not well suited, Mitchell et al. recommend that released models be accompanied by documentation detailing their performance characteristics [48], called a model card. This should include information about how the model was built, what assumptions were made during development, and what type of model behavior might be experienced by different cultural, demographic or phenotypic groups. A model card is also extremely useful for internal development purposes to make clear to stakeholders details about trained models that are included in larger software pipelines, which are parts of internal organizational dynamics, which are then parts of larger sociotechnical logics and processes.
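To make the documentation requirement concrete, the following is a minimal sketch of how a model card for the hypothetical Happy-Go-Lucky smile detector could be captured as structured, machine-readable data that an audit checklist can verify programmatically. The schema loosely follows the sections proposed by Mitchell et al. [48]; every field name and value below is invented for illustration and is not taken from the worked example accompanying the paper.

from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class ModelCard:
    """Minimal machine-readable model card (fields loosely follow Mitchell et al. [48])."""
    model_name: str
    intended_use: str
    out_of_scope_uses: List[str]
    training_data: str
    evaluation_data: str
    # Disaggregated metrics, e.g. accuracy per demographic subgroup.
    subgroup_metrics: Dict[str, float] = field(default_factory=dict)
    ethical_considerations: List[str] = field(default_factory=list)
    caveats: List[str] = field(default_factory=list)

    def missing_sections(self) -> List[str]:
        """Return empty sections so the audit checklist can flag incomplete documentation."""
        return [name for name, value in vars(self).items() if not value]

# Hypothetical values for the smile-detection use case.
smile_detector_card = ModelCard(
    model_name="smile-detector-v1",
    intended_use="Trigger a photo-booth camera when all subjects are smiling.",
    out_of_scope_uses=["Inferring affect or emotional state", "Surveillance"],
    training_data="CelebA 'smiling' attribute subset",
    evaluation_data="Held-out CelebA split, disaggregated by age and skin type",
    subgroup_metrics={"age_0_45": 0.93, "age_over_46": 0.88},
    ethical_considerations=["Smiling is treated as a photo pose, not a proxy for positive affect"],
)

print(smile_detector_card.missing_sections())  # e.g. ['caveats']

Representing the card as data rather than free text is one way an auditor could automatically check, during artifact collection, that no required section is missing before testing begins.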
A robust model card is key to documenting the intended use of the model as well as information about the evaluation data, model scope and risks, and what might be affecting model performance. Model cards are intended to complement "Datasheets for Datasets" [24]. Datasheets for machine learning datasets are derived by analogy from the electronics hardware industry, where a datasheet for an electronics component describes its operating characteristics, test results, and recommended uses. A critical part of the datasheet covers the data collection process. This set of questions is intended to provide consumers of the dataset with the information they need to make informed decisions about using the dataset: what mechanisms or procedures were used to collect the data? Was any ethical review process conducted? Does the dataset relate to people? This documentation feeds into the auditors' assessment process.

4.5 The Testing Stage
This stage is where the majority of the auditing team's testing activity is done: the auditors execute a series of tests to gauge the compliance of the system with the prioritized ethical values of the organization. Auditors engage with the system in various ways, and produce a series of artifacts to demonstrate the performance of the analyzed system at the time of the audit. Additionally, auditors review the documentation collected from the previous stage and begin to make assessments of the likelihood of system failures to comply with declared principles. High variability in approach is likely during this stage, as the tests that need to be executed change dramatically depending on organizational and system context. Testing should be based on a risk prioritization from the FMEA.

For the smile detector, we might employ counterfactual adversarial examples designed to confuse the model and find problematic failure modes derived from the FMEA. For the child abuse prediction model, we test performance on a selection of diverse user profiles. These profiles can also be treated for variables that correlate with vulnerable groups to test whether the model has learned biased associations with race or socioeconomic status (SES).

For the ethical risk analysis chart, we look at the principles and realize that there are immediate risks to the "Privacy" principle: one case involves juvenile data, which is sensitive, and the other involves face data, a biometric. This is also when it becomes clear that in the smiling booth case, there is disproportionate performance for certain underrepresented user subgroups, thus jeopardizing the "Justice, Fairness & Non-Discrimination" principle.

The main artifacts from this stage of the auditing process are the results of tests such as adversarial probing of the system and an ethical risk analysis chart.

4.5.1 Artifact: Adversarial Testing. Adversarial testing is a common approach to finding vulnerabilities in both pre-release and post-launch technology, for example in privacy and security testing [6]. In general, adversarial testing attempts to simulate what a hostile actor might do to gain access to a system, or to push the limits of the system into edge case or unstable behavior to elicit very-low-probability but high-severity failures. In this process, direct non-statistical testing uses tailored inputs to the model to see if they result in undesirable outputs.
These inputs can be motivated by an intersectional analysis, for example where an ML system might produce unfair outputs based on demographic and phenotypic groups that might combine in non-additive ways to produce harm, or over time recapitulate harmful stereotypes or reinforce unjust social dynamics (for example, in the form of opportunity denial). This is distinct from adversarially attacking a model with human-imperceptible pixel manipulations to trick the model into misidentifying previously learned outputs [28], but these approaches can be complementary. This approach is more generally defined, encompassing a range of input options to try in an active attempt to fool the system and incite identified failure modes from the FMEA.

Internal adversarial testing prior to launch can reveal unexpected product failures before they can impact the real world. Additionally, proactive adversarial testing of already-launched products can be a best practice for lifecycle management of released systems. The FMEA should be updated with these results, and the relative changes to risks assessed.

4.5.2 Artifact: Ethical Risk Analysis Chart. The ethical risk analysis chart considers the combination of the likelihood of a failure and the severity of a failure to define the importance of the risk. Highly likely and dangerous risks are considered the most high-priority threats. Each risk is assigned a severity indication of "high", "mid" or "low" depending on the combination of these features. Failure likelihood is estimated by considering the occurrence of certain failures during the adversarial testing of the system, and the severity of the risk is identified in earlier stages, from informative processes such as the social impact assessment and ethnographic interviews.

4.6 The Reflection Stage
This phase of the audit is the more reflective stage, when the results of the tests at the execution stage are analyzed in juxtaposition with the ethical expectations clarified in the audit scoping. Auditors update and formalize the final risk analysis in the context of test results, outlining specific principles that may be jeopardized by the AI system upon deployment. This phase will reflect on product decisions and design recommendations that could be made following the audit results. Additionally, key artifacts at this stage may include a mitigation plan or action plan, jointly developed by the audit and engineering teams, that outlines prioritized risks and test failures that the engineering team is in a position to mitigate for future deployments or for a future version of the audited system.

For the smile detection algorithm, the decision could be to train a new version of the model on more diverse data before considering deployment, and add more samples of underrepresented populations in CelebA to the training data. It could be decided that the use case does not necessarily define affect, but treats smiling as a favourable photo pose. Design choices for other parts of the product outside the model should be considered; for instance, an opt-in functionality with user permissions required on the screen before applying the model-controlled function, and the default being that the model-controlled trigger is disabled. There could also be an included disclaimer on privacy, assuring users of safe practices for face data storage and consent. Once these conditions are met, Company X could be confident to greenlight developing this product for the client.
For the child abuse detection model, this is a more complex decision. Given the ethical considerations involved, the project may be stalled or even cancelled, requiring further inquiry into the ethics of the use case, and the capability of the team to complete the mitigation plan required to deploy an algorithm in such a high-risk scenario.

4.6.1 Artifact: Algorithmic Use-related Risk Analysis and FMEA. The risk analysis should be informed by the social impact assessment and known issues with similar models. Following Leveson's work on safety engineering [42], we stress that careful attention must be paid to the distinction between the designers' mental models of the artificial intelligence system and the user's mental model. The designers' mental models are an idealization of the artificial intelligence system before the model is released. Significant differences exist between this ideal model and how the actual system will behave or be used once deployed. This may be due to many factors, such as distributional drift [41], where the training and test set distributions differ from the real-world distribution, or intentional or unintentional misuse of the model for purposes other than those for which it was designed. Reasonable and foreseeable misuse of the model should be anticipated by the designer. Therefore, the user's mental model of the system should be anticipated and taken into consideration. Large gaps between the intended and actual uses of algorithms have been found in contexts such as criminal justice and web journalism [12].

This adds complexity to anticipated hazards and risks; nevertheless, these should be documented where possible. Christin points out "the importance of studying the practices, uses, and implementations surrounding algorithmic technologies. Intellectually, this involves establishing new exchanges between literatures that may not usually interact, such as critical data studies, the sociology of work, and organizational analysis" [12]. We propose that known use-related issues with deployed systems be taken into account during the design stage. The format of the risk analysis can be variable depending on context, and there are many valuable templates to be found in the Failure Modes and Effects Analysis (Section 3.1.3) framing and other risk analysis tools in finance and medical deployments.

4.6.2 Artifact: Remediation and Risk Mitigation Plan. After the audit is completed and findings are presented to the leadership and product teams, it is important to develop a plan for remediating these problems. The goal is to drive down the risk of ethical concerns or potential negative social impacts to the extent reasonably practicable. This plan can be reviewed by the audit team and leadership to better inform deployment decisions.

For the concerns raised in any audit against ethical values, a technical team will want to know: what is the threshold for acceptable performance? If auditors discover, for example, unequal classifier performance across subgroups, how close to parity is necessary to say the classifier is acceptable? In safety engineering, a risk threshold is usually defined under which the risk is considered tolerable. Though a challenging problem, similar standards could be established and developed in the ethics space as well.
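One way a remediation plan could make such a threshold explicit is to encode it as an automated check over disaggregated evaluation results, flagging subgroups whose gap to the best-performing subgroup exceeds an agreed tolerance. The sketch below is illustrative only: the 0.05 tolerance and the subgroup accuracies are hypothetical placeholders, not values prescribed by the framework, and the choice of tolerance remains the value-laden decision discussed above.

from typing import Dict, List, Tuple

def parity_violations(subgroup_accuracy: Dict[str, float],
                      tolerance: float) -> List[Tuple[str, float]]:
    """Return subgroups whose gap to the best-performing subgroup exceeds the tolerance."""
    best = max(subgroup_accuracy.values())
    return [(group, best - acc)
            for group, acc in subgroup_accuracy.items()
            if best - acc > tolerance]

# Hypothetical disaggregated evaluation results for the smile detector.
results = {"lighter_skin": 0.95, "darker_skin": 0.87, "age_over_46": 0.90}

for group, gap in parity_violations(results, tolerance=0.05):
    print(f"Remediation required: accuracy gap of {gap:.2f} for subgroup '{group}'")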
4.6.3 Artifact: Algorithmic Design History File. Inspired by the concept of the design history file from the medical device industry [77], we propose an algorithmic design history file (ADHF) which would collect all the documentation from the activities outlined above related to the development of the algorithm. It should point to the documents necessary to demonstrate that the product or model was developed in accordance with an organization's ethical values, and that the benefits of the product outweigh any risks identified in the risk analysis process. This design history file would form the basis of the final audit report, which is a written evaluation by the organization's audit team. The ADHF should assist with an audit trail, enabling the reconstruction of key decisions and events during the development of the product. The algorithmic report would then be a distillation and summary of the ADHF.

4.6.4 Artifact: Algorithmic Audit Summary Report. The report aggregates all key audit artifacts, technical analyses and documentation, putting this in one accessible location for review. This audit report should be compared qualitatively and quantitatively to the expectations outlined in the given ethical objectives and any corresponding engineering requirements.

5 LIMITATIONS OF INTERNAL AUDITS
Internal auditors necessarily share an organizational interest with the target of the audit. While it is important to maintain an independent and objective viewpoint during the execution of an audit, we acknowledge that this is challenging. The audit is never isolated from the practices and people conducting the audit, just as artificial intelligence systems are not independent of their developers or of the larger sociotechnical system. Audits are not unified or monolithic processes with an objective "view from nowhere", but must be understood as a "patchwork of coupled procedures, tools and calculative processes" [74]. To avoid audits becoming simply acts of reputation management for an organization, the auditors should be mindful of their own and the organization's biases and viewpoints. Although long-standing internal auditing practices for quality assurance in the financial, aviation, chemical, food, and pharmaceutical industries have been shown to be an effective means of controlling risk in these industries [76], the regulatory dynamics in these industries suggest that internal audits are only one important aspect of a broader system of required quality checks and balances.

6 CONCLUSION
AI has the potential to benefit the whole of society; however, there is currently an inequitable risk distribution such that those who already face patterns of structural vulnerability or bias disproportionately bear the costs and harms of many of these systems. Fairness, justice and ethics require that those bearing these risks are given due attention and that organizations that build and deploy artificial intelligence systems internalize and proactively address these social risks as well, being seriously held to account for system compliance with declared ethical principles.

REFERENCES
[1] Omar Y Al-Jarrah, Paul D Yoo, Sami Muhaidat, George K Karagiannidis, and Kamal Taha. 2015. Efficient machine learning for big data: A review. Big Data Research 2, 3 (2015), 87–93.
[2] Amel Bennaceur, Thein Than Tun, Yijun Yu, and Bashar Nuseibeh. 2019. Requirements Engineering. In Handbook of Software Engineering. Springer, 51–92.
[3] Li Bing, Akintola Akintoye, Peter J Edwards, and Cliff Hardcastle. 2005. The allocation of risk in PPP/PFI construction projects in the UK. International Journal of project management 23, 1 (2005), 25–35. [4] Eric Breck, Shanqing Cai, Eric Nielsen, Michael Salib, and D Sculley. 2017. The ml test score: A rubric for ml production readiness and technical debt reduction. In 2017 IEEE International Conference on Big Data (Big Data). IEEE, 1123–1132. [5] Shona L Brown and Kathleen M Eisenhardt. 1995. Product development: Past research, present findings, and future directions. Academy of management review 20, 2 (1995), 343–378. [6] Chad Brubaker, Suman Jana, Baishakhi Ray, Sarfraz Khurshid, and Vitaly Shmatikov. 2014. Using Frankencerts for Automated Adversarial Testing of Certificate Validation. In in SSL/TLS Implementations,âĂİ IEEE Symposium on Security and Privacy. Citeseer. [7] Joanna J Bryson, Mihailis E Diamantis, and Thomas D Grant. 2017. Of, for, and by the people: the legal lacuna of synthetic persons. Artificial Intelligence and Law 25, 3 (2017), 273–291. [8] Joy Buolamwini and Timnit Gebru. 2018. Gender shades: Intersectional accu- racy disparities in commercial gender classification. In Conference on Fairness, Accountability and Transparency. 77–91. [9] Jenna Burrell. 2016. How the machine “thinks“: Understanding opacity in machine learning algorithms. Big Data & Society 3, 1 (2016), 2053951715622512. [10] Paul Eric Byrnes, Abdullah Al-Awadhi, Benita Gullvist, Helen Brown-Liburd, Ryan Teeter, J Donald Warren Jr, and Miklos Vasarhelyi. 2018. Evolution of Auditing: From the Traditional Approach to the Future Audit 1. In Continuous Auditing: Theory and Application. Emerald Publishing Limited, 285–297. [11] Alexandra Chouldechova, Diana Benavides-Prado, Oleksandr Fialko, and Rhema Vaithianathan. 2018. A case study of algorithm-assisted decision making in child maltreatment hotline screening decisions. In Conference on Fairness, Accountabil- ity and Transparency. 134–148. [12] Angèle Christin. 2017. Algorithms in practice: Comparing web journalism and criminal justice. Big Data & Society 4, 2 (2017), 2053951717718855. [13] Kai Lai Chung and Paul Erdös. 1952. On the application of the Borel-Cantelli lemma. Trans. Amer. Math. Soc. 72, 1 (1952), 179–186. [14] Rachel Courtland. 2018. Bias detectives: the researchers striving to make algo- rithms fair. Nature 558, 7710 (2018), 357–357. [15] Stephanie Cuccaro-Alamin, Regan Foust, Rhema Vaithianathan, and Emily Putnam-Hornstein. 2017. Risk assessment and decision making in child pro- tective services: Predictive risk modeling in context. Children and Youth Services Review 79 (2017), 291–298. [16] Michael A Cusumano and Stanley A Smith. 1995. Beyond the waterfall: Software development at Microsoft. (1995). [17] Nicholas Diakopoulos. 2014. Algorithmic accountability reporting: On the inves- tigation of black boxes. (2014). [18] Roel Dobbe, Sarah Dean, Thomas Gilbert, and Nitin Kohli. 2018. A Broader View on Bias in Automated Decision-Making: Reflecting on Epistemology and Dynamics. arXiv preprint arXiv:1807.00553 (2018). [19] Kevin Driscoll, Brendan Hall, HÃ¥kan Sivencrona, and Phil Zumsteg. 2003. Byzan- tine fault tolerance, from theory to reality. In International Conference on Computer Safety, Reliability, and Security. Springer, 235–248. [20] Danielle Ensign, Sorelle A Friedler, Scott Neville, Carlos Scheidegger, and Suresh Venkatasubramanian. 2017. Runaway feedback loops in predictive policing. arXiv preprint arXiv:1706.09847 (2017). 
[21] Virginia Eubanks. 2018. A child abuse prediction model fails poor families. Wired Magazine (2018). [22] Sellywati Mohd Faizal, Mohd Rizal Palil, Ruhanita Maelah, and Rosiati Ramli. 2017. Perception on justice, trust and tax compliance behavior in Malaysia. Kasetsart Journal of Social Sciences 38, 3 (2017), 226–232. [23] Jonathan Furner. 2016. “Data“: The data. In Information Cultures in the Digital Age. Springer, 287–306. [24] Timnit Gebru, Jamie Morgenstern, Briana Vecchione, Jennifer Wortman Vaughan, Hanna Wallach, Hal Daumeé III, and Kate Crawford. 2018. Datasheets for datasets. arXiv preprint arXiv:1803.09010 (2018). [25] Jeremy Goldhaber-Fiebert and Lea Prince. 2019. Impact Evaluation of a Predictive Risk Modeling Tool for Allegheny CountyâĂŹs Child Welfare Office. Pittsburgh: Allegheny County.[Google Scholar] (2019). [26] Ben Green and Yiling Chen. 2019. Disparate interactions: An algorithm-in-the- loop analysis of fairness in risk assessments. In Proceedings of the Conference on Fairness, Accountability, and Transparency. ACM, 90–99. [27] Daniel Greene, Anna Lauren Hoffmann, and Luke Stark. 2019. Better, nicer, clearer, fairer: A critical assessment of the movement for ethical artificial intelligence and machine learning. In Proceedings of the 52nd Hawaii International Conference FAT* ’20, January 27–30, 2020, Barcelona, Spain on System Sciences. [28] Shixiang Gu and Luca Rigazio. 2014. Towards deep neural network architectures robust to adversarial examples. arXiv preprint arXiv:1412.5068 (2014). [29] John Haigh. 2012. Probability: A very short introduction. Vol. 310. Oxford Univer- sity Press. [30] Brendan Hall and Kevin Driscoll. 2014. Distributed System Design Checklist. (2014). [31] Kenneth Holstein, Jennifer Wortman Vaughan, Hal Daumé III, Miro Dudík, and Hanna Wallach. 2018. Improving fairness in machine learning systems: What do industry practitioners need? arXiv preprint arXiv:1812.05239 (2018). [32] IEEE. 2008. IEEE Standard for Software Reviews and Audits. IEEE Std 1028-2008 (Aug 2008), 1–53. https://doi.org/10.1109/IEEESTD.2008.4601584 [33] Kristen Intemann. 2010. 25 years of feminist empiricism and standpoint theory: Where are we now? Hypatia 25, 4 (2010), 778–796. [34] Anna Jobin, Marcello Ienca, and Effy Vayena. 2019. Artificial Intelligence: the global landscape of ethics guidelines. arXiv preprint arXiv:1906.11668 (2019). [35] Paul A Judas and Lorraine E Prokop. 2011. A historical compilation of software metrics with applicability to NASA‘s Orion spacecraft flight software sizing. Innovations in Systems and Software Engineering 7, 3 (2011), 161–170. [36] Emily Keddell. 2019. Algorithmic Justice in Child Protection: Statistical Fairness, Social Justice and the Implications for Practice. Social Sciences 8, 10 (2019), 281. [37] Svetlana Kiritchenko and Saif M Mohammad. 2018. Examining gender and race bias in two hundred sentiment analysis systems. arXiv preprint arXiv:1805.04508 (2018). [38] Nitin Kohli, Renata Barreto, and Joshua A Kroll. 2018. Translation Tutorial: A Shared Lexicon for Research and Practice in Human-Centered Software Systems. In 1st Conference on Fairness, Accountability, and Transparancy. New York, NY, USA. 7. [39] Joshua A Kroll, Solon Barocas, Edward W Felten, Joel R Reidenberg, David G Robinson, and Harlan Yu. 2016. Accountable algorithms. U. Pa. L. Rev. 165 (2016), 633. [40] Arie W Kruglanski. 1996. Motivated social cognition: Principles of the interface. (1996). [41] Joel Lehman. 2019. 
Evolutionary Computation and AI Safety: Research Problems Impeding Routine and Safe Real-world Application of Evolution. arXiv preprint arXiv:1906.10189 (2019). [42] Nancy Leveson. 2011. Engineering a safer world: Systems thinking applied to safety. MIT press. [43] Jie Liu. 2012. The enterprise risk management and the risk oriented internal audit. Ibusiness 4, 03 (2012), 287. [44] Ziwei Liu, Ping Luo, Xiaogang Wang, and Xiaoou Tang. 2015. Deep learning face attributes in the wild. In Proceedings of the IEEE international conference on computer vision. 3730–3738. [45] Amanda H Lynch and Siri Veland. 2018. Urgency in the Anthropocene. MIT Press. [46] Thomas Maillart, Mingyi Zhao, Jens Grossklags, and John Chuang. 2017. Given enough eyeballs, all bugs are shallow? Revisiting Eric Raymond with bug bounty programs. Journal of Cybersecurity 3, 2 (2017), 81–90. [47] Michele Merler, Nalini Ratha, Rogerio S Feris, and John R Smith. 2019. Diversity in faces. arXiv preprint arXiv:1901.10436 (2019). [48] Margaret Mitchell, Simone Wu, Andrew Zaldivar, Parker Barnes, Lucy Vasserman, Ben Hutchinson, Elena Spitzer, Inioluwa Deborah Raji, and Timnit Gebru. 2019. Model cards for model reporting. In Proceedings of the Conference on Fairness, Accountability, and Transparency. ACM, 220–229. [49] Brent Mittelstadt. 2019. AI Ethics: Too Principled to Fail? SSRN (2019). [50] Brent Daniel Mittelstadt and Luciano Floridi. 2016. The ethics of big data: current and foreseeable issues in biomedical contexts. Science and engineering ethics 22, 2 (2016), 303–341. [51] Laura Moy. 2019. How Police Technology Aggravates Racial Inequity: A Taxon- omy of Problems and a Path Forward. Available at SSRN 3340898 (2019). [52] Fabian Muniesa, Marc Lenglet, et al. 2013. Responsible innovation in finance: directions and implications. Responsible Innova-tion: Managing the Responsible Emergence of Science and Innovation in Society. Wiley, London (2013), 185–198. [53] Kristina Murphy. 2003. Procedural justice and tax compliance. Australian Journal of Social Issues (Australian Council of Social Service) 38, 3 (2003). [54] Safiya Umoja Noble. 2018. Algorithms of oppression: How search engines reinforce racism. nyu Press. [55] Institute of Internal Auditors. Research Foundation and Institute of Internal Au- ditors. 2007. The Professional Practices Framework. Inst of Internal Auditors. [56] General Assembly of the World Medical Association et al. 2014. World Medical Association Declaration of Helsinki: ethical principles for medical research in- volving human subjects. The Journal of the American College of Dentists 81, 3 (2014), 14. [57] Cathy O’neil. 2016. Weapons of math destruction: How big data increases inequality and threatens democracy. Broadway Books. [58] Charles Parker. 2012. Unexpected challenges in large scale machine learning. In Proceedings of the 1st International Workshop on Big Data, Streams and Heteroge- neous Source Mining: Algorithms, Systems, Programming Models and Applications. ACM, 1–6. FAT* ’20, January 27–30, 2020, Barcelona, Spain [59] Fiona D Patterson and Kevin Neailey. 2002. A risk register database system to aid the management of project risk. International Journal of Project Management 20, 5 (2002), 365–374. [60] W Price and II Nicholson. 2017. Regulating black-box medicine. Mich. L. Rev. 116 (2017), 421. [61] James Quesada, Laurie Kain Hart, and Philippe Bourgois. 2011. Structural vul- nerability and health: Latino migrant laborers in the United States. Medical anthropology 30, 4 (2011), 339–362. 
[62] Inioluwa Deborah Raji and Joy Buolamwini. 2019. Actionable auditing: Investi- gating the impact of publicly naming biased performance results of commercial ai products. In AAAI/ACM Conf. on AI Ethics and Society. [63] Clarence Rodrigues and Stephen Cusick. 2011. Commercial aviation safety 5/e. McGraw Hill Professional. [64] G Sirgo Rodríguez, M Olona Cabases, MC Martin Delgado, F Esteban Reboll, A Pobo Peris, M Bodí Saera, et al. 2014. Audits in real time for safety in critical care: definition and pilot study. Medicina intensiva 38, 8 (2014), 473–482. [65] Christian Sandvig, Kevin Hamilton, Karrie Karahalios, and Cedric Langbort. 2014. Auditing algorithms: Research methods for detecting discrimination on internet platforms. Data and discrimination: converting critical concerns into productive inquiry 22 (2014). [66] David Satava, Cam Caldwell, and Linda Richards. 2006. Ethics and the auditing culture: Rethinking the foundation of accounting and auditing. Journal of Business Ethics 64, 3 (2006), 271–284. [67] David Sculley, Gary Holt, Daniel Golovin, Eugene Davydov, Todd Phillips, Diet- mar Ebner, Vinay Chaudhary, and Michael Young. 2014. Machine learning: The high interest credit card of technical debt. (2014). [68] Andrew D Selbst and Solon Barocas. 2018. The intuitive appeal of explainable machines. Fordham L. Rev. 87 (2018), 1085. [69] Andrew D Selbst, Danah Boyd, Sorelle A Friedler, Suresh Venkatasubramanian, and Janet Vertesi. 2019. Fairness and abstraction in sociotechnical systems. In Proceedings of the Conference on Fairness, Accountability, and Transparency. ACM, 59–68. Raji & Smart, et al. [70] Hetan Shah. 2018. Algorithmic accountability. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 376, 2128 (2018), 20170362. [71] Dominic SB Soh and Nonna Martinov-Bennie. 2011. The internal audit function: Perceptions of internal audit roles, effectiveness and evaluation. Managerial Auditing Journal 26, 7 (2011), 605–622. [72] Diomidis H Stamatis. 2003. Failure mode and effect analysis: FMEA from theory to execution. ASQ Quality press. [73] Jack Stilgoe, Richard Owen, and Phil Macnaghten. 2013. Developing a framework for responsible innovation. Research Policy 42, 9 (2013), 1568–1580. [74] Alexander Styhre. 2015. The financialization of the firm: Managerial and social implications. Edward Elgar Publishing. [75] Alexander Styhre. 2018. The unfinished business of governance: towards new governance regimes. In The Unfinished Business of Governance. Edward Elgar Publishing. [76] JohnK Taylor. 2018. Quality assurance of chemical measurements. Routledge. [77] Marie B Teixeira, Marie Teixeira, and Richard Bradley. 2013. Design controls for the medical device industry. CRC press. [78] Manuel Trajtenberg. 2018. AI as the next GPT: a Political-Economy Perspective. Technical Report. National Bureau of Economic Research. [79] Frank Vanclay. 2003. International principles for social impact assessment. Impact assessment and project appraisal 21, 1 (2003), 5–12. [80] Tim Vanderveen. 2005. Averting highest-risk errors is first priority. Patient Safety and Quality Healthcare 2 (2005), 16–21. [81] Ajit Kumar Verma, Srividya Ajit, Durga Rao Karanki, et al. 2010. Reliability and safety engineering. Vol. 43. Springer. [82] Jess Whittlestone, Rune Nyrup, Anna Alexandrova, and Stephen Cave. 2019. The Role and Limits of Principles in AI Ethics: Towards a Focus on Tensions. In Proceedings of the AAAI/ACM Conference on AI Ethics and Society, Honolulu, HI, USA. 
27–28. [83] Yi Zeng, Enmeng Lu, and Cunqing Huangfu. 2018. Linking Artificial Intelligence Principles. arXiv preprint arXiv:1812.04814 (2018).
{ "id": "1901.10436" }
1912.13283
oLMpics -- On what Language Model Pre-training Captures
Recent success of pre-trained language models (LMs) has spurred widespread interest in the language capabilities that they possess. However, efforts to understand whether LM representations are useful for symbolic reasoning tasks have been limited and scattered. In this work, we propose eight reasoning tasks, which conceptually require operations such as comparison, conjunction, and composition. A fundamental challenge is to understand whether the performance of a LM on a task should be attributed to the pre-trained representations or to the process of fine-tuning on the task data. To address this, we propose an evaluation protocol that includes both zero-shot evaluation (no fine-tuning), as well as comparing the learning curve of a fine-tuned LM to the learning curve of multiple controls, which paints a rich picture of the LM capabilities. Our main findings are that: (a) different LMs exhibit qualitatively different reasoning abilities, e.g., RoBERTa succeeds in reasoning tasks where BERT fails completely; (b) LMs do not reason in an abstract manner and are context-dependent, e.g., while RoBERTa can compare ages, it can do so only when the ages are in the typical range of human ages; (c) On half of our reasoning tasks all models fail completely. Our findings and infrastructure can help future work on designing new datasets, models and objective functions for pre-training.
http://arxiv.org/pdf/1912.13283
Alon Talmor, Yanai Elazar, Yoav Goldberg, Jonathan Berant
cs.CL, cs.AI, cs.LG
TACL 2020
null
cs.CL
20191231
20201119
# oLMpics - On what Language Model Pre-training Captures

# Alon Talmor1,2 Yanai Elazar1,3 Yoav Goldberg1,3 Jonathan Berant1,2

# 1The Allen Institute for AI 2Tel-Aviv University 3Bar-Ilan University {alontalmor@mail,joberant@cs}.tau.ac.il {yanaiela,yoav.goldberg}@gmail.com

# Abstract

Recent success of pre-trained language models (LMs) has spurred widespread interest in the language capabilities that they possess. However, efforts to understand whether LM representations are useful for symbolic reasoning tasks have been limited and scattered. In this work, we propose eight reasoning tasks, which conceptually require operations such as comparison, conjunction, and composition. A fundamental challenge is to understand whether the performance of a LM on a task should be attributed to the pre-trained representations or to the process of fine-tuning on the task data. To address this, we propose an evaluation protocol that includes both zero-shot evaluation (no fine-tuning), as well as comparing the learning curve of a fine-tuned LM to the learning curve of multiple controls, which paints a rich picture of the LM capabilities. Our main findings are that: (a) different LMs exhibit qualitatively different reasoning abilities, e.g., ROBERTA succeeds in reasoning tasks where BERT fails completely; (b) LMs do not reason in an abstract manner and are context-dependent, e.g., while ROBERTA can compare ages, it can do so only when the ages are in the typical range of human ages; (c) on half of our reasoning tasks all models fail completely. Our findings and infrastructure can help future work on designing new datasets, models and objective functions for pre-training.

# Introduction

Large pre-trained language models (LM) have revolutionized the field of natural language processing in the last few years (Dai and Le, 2015; Peters et al., 2018a; Yang et al., 2019; Radford et al., 2019; Devlin et al., 2019). This has instigated research exploring what is captured by the contextualized representations that these LMs compute, revealing that they encode substantial amounts of syntax and semantics (Linzen et al., 2016b; Tenney et al., 2019b,a; Shwartz and Dagan, 2019; Lin et al., 2019; Coenen et al., 2019).

[Figure 1 shows two probes, A) Object Comparison ("The size of a cat is [larger/smaller] than the size of a mouse.") and B) Always-Never ("A cat [never/always/...] drinks coffee."), plotting dev-set accuracy against the number of training examples (0-4K) for ROBERTA-Large, the No-Language control ("cat [blah/ya] mouse.", "cat [blah/ya/...] coffee.") and MLM-Baseline.] Figure 1: Overview of our experimental design. Two probes are evaluated using learning curves (including zero-shot). ROBERTA-L's (red squares, upper text in black) accuracy is compared to a NO LANGUAGE (NO LANG.) control (red circles, lower text in black), and MLM-BASELINE, which is not pre-trained (green triangles). Here, we conclude that the LM representations are well-suited for task A), whereas in task B) the model is adapting to the task during fine-tuning.

Despite these efforts, it remains unclear which symbolic reasoning capabilities are difficult to learn from an LM objective only. In this paper, we propose a diverse set of probing tasks for types of symbolic reasoning that are potentially difficult to capture using a LM objective (see Table 1).
Our intuition is that since a LM objective focuses on word co-occurrence, it will struggle with tasks that are considered to involve symbolic reasoning, such as determining whether a conjunction of properties is held by an object, and comparing the sizes of different objects. Understanding what is missing from current LMs may help design datasets and objectives that will endow models with the missing capabilities.

Probe name (setup): example
TAXONOMY CONJUNCTION (MC-MLM): A ferry and a floatplane are both a type of [MASK]. A. vehicle B. airplane C. boat
ENCYC. COMPOSITION (MC-QA): When did the band where Junior Cony played first form? A. 1978 B. 1977 C. 1980
MULTI-HOP COMPOSITION (MC-MLM): When comparing a 23, a 38 and a 31 year old, the [MASK] is oldest. A. second B. first C. third
The remaining probes are ALWAYS-NEVER, AGE COMPARISON, OBJECTS COMPARISON, ANTONYM NEGATION and PROPERTY CONJUNCTION (MC-QA).
Table 1: Examples for our reasoning probes. We use two types of experimental setups, explained in §2. A. is the correct answer.

However, how does one verify whether pre-trained representations hold information that is useful for a particular task? Past work mostly resorted to fixing the representations and fine-tuning a simple, often linear, randomly initialized probe, to determine whether the representations hold relevant information (Ettinger et al., 2016; Adi et al., 2016; Belinkov and Glass, 2019; Hewitt and Manning, 2019; Wallace et al., 2019; Rozen et al., 2019; Peters et al., 2018b; Warstadt et al., 2019). However, it is difficult to determine whether success is due to the pre-trained representations or due to fine-tuning itself (Hewitt and Liang, 2019). To handle this challenge, we include multiple controls that improve our understanding of the results.

Our "purest" setup is zero-shot: we cast tasks in the masked LM format, and use a pre-trained LM without any fine-tuning. For example, given the statement "A cat is [MASK] than a mouse", an LM can decide if the probability of "larger" is higher than "smaller" for the masked word (Figure 1). If a model succeeds without any fine-tuning over many pairs of objects, then its representations are useful for this task. However, if it fails, it could be due to a mismatch between the language it was pre-trained on and the language of the probing task (which might be automatically generated, containing grammatical errors). Thus, we also compute the learning curve (Figure 1), by fine-tuning with increasing amounts of data on the already pre-trained masked language modeling (MLM) output "head", a 1-hidden layer MLP on top of the model's contextualized representations. A model that adapts from fewer examples arguably has better representations for it.

Moreover, to diagnose whether model performance is related to pre-training or fine-tuning, we add controls to every experiment (Figures 1, 2). First, we add a control that makes minimal use of language tokens, i.e., "cat [MASK] mouse" (NO LANG. control). If a model succeeds given minimal use of language, the performance can be mostly attributed to fine-tuning rather than to the pre-trained language representations. Similar logic is used to compare against baselines that are not pre-trained (except for non-contextualized word embeddings). Overall, our setup provides a rich picture of whether LM representations help in solving a wide range of tasks.

We introduce eight tasks that test different types of reasoning, as shown in Table 1.¹ We run experiments using several pre-trained LMs, based on BERT (Devlin et al., 2019) and ROBERTA (Liu et al., 2019).
We find that there are clear qualitative differences between different LMs with similar architecture. For example, ROBERTA-LARGE (ROBERTA-L) can perfectly solve some reasoning tasks, such as comparing numbers, even in a zero-shot setup, while other models' performance is close to random. However, good performance is highly context-dependent. Specifically, we repeatedly observe that even when a model solves a task, small changes to the input quickly derail it to low performance. For example, ROBERTA-L can almost perfectly compare people's ages when the numeric values are in the expected range (15-105), but miserably fails if the values are outside this range. Interestingly, it is able to reliably answer when ages are specified through the birth year in the range 1920-2000. This highlights that the LM's ability to solve this task is strongly tied to the specific values and linguistic context and does not generalize to arbitrary scenarios. Last, we find that in four out of eight tasks, all LMs perform poorly compared to the controls.

Our contributions are summarized as follows:
• A set of probes that test whether specific reasoning skills are captured by pre-trained LMs.
• An evaluation protocol for understanding whether a capability is encoded in pre-trained representations or is learned during fine-tuning.
• An analysis of skills that current LMs possess. We find that LMs with similar architectures are qualitatively different, that their success is context-dependent, and that often all LMs fail.
• Code and infrastructure for designing and testing new probes on a large set of pre-trained LMs. The code and models are available at http://github.com/alontalmor/oLMpics.

¹Average human accuracy was evaluated by two of the authors. Overall inter-annotator agreement accuracy was 92%.

# 2 Models

We now turn to the architectures and loss functions used throughout the different probing tasks.

# 2.1 Pre-trained Language Models

All models in this paper take a sequence of tokens x = (x1, . . . , xn), and compute contextualized representations with a pre-trained LM, that is, h = ENCODE(x) = (h1, . . . , hn). Specifically, we consider (a) BERT (Devlin et al., 2019), a pre-trained LM built using the Transformer (Vaswani et al., 2017) architecture, which consists of a stack of Transformer layers, where each layer includes a multi-head attention sub-layer and a feed-forward sub-layer. BERT is trained on large corpora using the masked language modeling objective (MLM), i.e., the model is trained to predict words that are masked from the input; this includes BERT-WHOLE-WORD-MASKING (BERT-WWM), which was trained using whole-word masking. (b) ROBERTA (Liu et al., 2019), which has the same architecture as BERT, but was trained on 10x more data and optimized carefully.
Then, the contextualized representation hi is passed through a MC-MLM head, where V is the vocabulary² and FF_MLM is a 1-hidden layer MLP:

l = FF_MLM(hi) ∈ R^|V|, p = softmax(m ⊕ l),

where ⊕ is element-wise addition and m ∈ {0, −∞}^|V| is a mask that guarantees that the support of the probability distribution will be over exactly K ∈ {2, 3, 4, 5} candidate tokens: the correct one and K − 1 distractors. Training minimizes cross-entropy loss given the gold masked token. An input, e.g. "[CLS] Cats [MASK] drink coffee [SEP]", is passed through the model, the contextualized representation of the masked token is passed through the MC-MLM head, and the final distribution is over the vocabulary words "always", "sometimes" and "never", where the gold token is "never" in this case. A compelling advantage of this setup is that reasonable performance can be obtained without training, using the original LM representations and the already pre-trained MLM head weights (Petroni et al., 2019).

²Vocabularies of LMs such as BERT and ROBERTA contain word-pieces, which are sub-word units that are frequent in the training corpus. For details see Sennrich et al. (2016).

MC-QA Constructing a MC-MLM probe limits the answer candidates to a single token from the word-piece vocabulary. To relax this, in two tasks we use the standard setup for answering multi-choice questions with pre-trained LMs (Talmor et al., 2019; Mihaylov et al., 2018). Given a question q and candidate answers a1, . . . , aK, we compute for each candidate answer ak representations h^(k) from the input tokens "[CLS] q [SEP] ak [SEP]". Then the probability over answers is obtained using the multi-choice QA head:

l^(k) = FF_QA(h^(k)_1), p = softmax(l^(1), . . . , l^(K)),

where FF_QA is a 1-hidden layer MLP that is run over the [CLS] (first) token of an answer candidate and outputs a single logit. Note that in this setup the parameters of FF_QA cannot be initialized using the original pre-trained LM.

# 2.3 Baseline Models

To provide a lower bound on the performance of pre-trained LMs, we introduce two baseline models with only non-contextualized representations.

MLM-BASELINE This serves as a lower-bound for the MC-MLM setup. The input to FF_MLM(·) is the hidden representation h ∈ R^1024 (for large models). To obtain a similar architecture with non-contextualized representations, we concatenate the first 20 tokens of each example, representing each token with a 50-dimensional GLOVE vector (Pennington et al., 2014), and pass this 1000-dimensional representation of the input through FF_MLM, exactly like in MC-MLM. In all probes, phrases are limited to 20 tokens. If there are fewer than 20 tokens in the input, we zero-pad the input.

MC-QA baseline This serves as a lower-bound for MC-QA. We use the ESIM architecture over GLOVE representations, which is known to provide a strong model when the input is a pair of text fragments (Chen et al., 2017). We adapt the architecture to the multi-choice setup using the procedure proposed by Zellers et al. (2018).

Each phrase and candidate answer are passed as a list of tokens '[CLS] phrase [SEP] answer [SEP]' to the LM. The contextualized representation of the [CLS] token is linearly projected to a single logit. The logits for candidate answers are passed through a softmax layer to obtain probabilities, and the argmax is selected as the model prediction.
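To make the two heads concrete, the following PyTorch sketch mirrors the computations described above (FF_MLM with its support restricted to the K candidate tokens, and FF_QA applied to the [CLS] representation of each answer). This is an illustration rather than the released oLMpics code: the hidden size, vocabulary size and activation functions are assumptions, and in the paper the MC-MLM head reuses the weights of the pre-trained MLM head rather than being randomly initialized as here.

import torch
import torch.nn as nn

class MCMLMHead(nn.Module):
    """Scores K candidate tokens at the masked position (MC-MLM setup)."""
    def __init__(self, hidden_size: int, vocab_size: int):
        super().__init__()
        # 1-hidden-layer MLP over the contextualized representation of [MASK].
        self.ff_mlm = nn.Sequential(
            nn.Linear(hidden_size, hidden_size), nn.GELU(),
            nn.Linear(hidden_size, vocab_size))

    def forward(self, h_mask, candidate_ids):
        logits = self.ff_mlm(h_mask)                        # l, shape (batch, |V|)
        m = torch.full_like(logits, float("-inf"))
        m.scatter_(1, candidate_ids, 0.0)                   # 0 for candidates, -inf elsewhere
        return torch.softmax(logits + m, dim=-1)            # p, supported on the K candidates

class MCQAHead(nn.Module):
    """Maps the [CLS] representation of each (question, answer) pair to one logit."""
    def __init__(self, hidden_size: int):
        super().__init__()
        self.ff_qa = nn.Sequential(
            nn.Linear(hidden_size, hidden_size), nn.Tanh(), nn.Linear(hidden_size, 1))

    def forward(self, h_cls_per_answer):                    # shape (batch, K, hidden)
        logits = self.ff_qa(h_cls_per_answer).squeeze(-1)   # shape (batch, K)
        return torch.softmax(logits, dim=-1)

# Toy usage with random stand-ins for contextualized representations.
head = MCMLMHead(hidden_size=1024, vocab_size=50265)
h_mask = torch.randn(2, 1024)
candidates = torch.tensor([[123, 456], [123, 456]])          # K = 2 candidate token ids
print(head(h_mask, candidates).shape)                        # torch.Size([2, 50265])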
We use the AGE-COMPARE task as a running example, where models need to compare the numeric value of ages.

# 3.1 Zero-shot Experiments with MC-MLM

Fine-tuning pre-trained LMs makes it hard to disentangle what is captured by the original representations and what was learned during fine-tuning. Thus, ideally, one should test LMs using the pre-trained weights without fine-tuning (Linzen et al., 2016a; Goldberg, 2019). The MC-MLM setup, which uses a pre-trained MLM head, achieves exactly that. One only needs to design the task as a statement with a single masked token and K possible output tokens. For example, in AGE-COMPARE, we chose the phrasing "A AGE-1 year old person is [MASK] than me in age, If I am a AGE-2 year old person.", where AGE-1 and AGE-2 are replaced with different integers, and possible answers are "younger" and "older". Otherwise, no training is needed, and the original representations are tested.

Figure 2A provides an example of such zero-shot evaluation. Different values are assigned to AGE-1 and AGE-2, and the pixel is colored when the model predicts "younger". Accuracy (acc.) is measured as the proportion of cases when the model output is correct. The performance of BERT-WWM is on the left (blue), and of ROBERTA-L on the right (green). The results in Figure 2A and Table 2 show that ROBERTA-L compares numbers correctly (98% acc.), BERT-WWM achieves higher than random acc. (70% acc.), while BERT-L is random (50% acc.). The performance of MLM-BASELINE is also random, as the MLPMLM weights are randomly initialized.

Model       Zero-shot   MLPMLM WS/MAX   LINEAR WS/MAX   LANGSENSE pert/nolang
RoBERTa-L   98          98 / 100        97 / 100        31 / 51
BERT-WWM    70          82 / 100        69 / 85         13 / 15
BERT-L      50          52 / 57         50 / 51          1 / 0
RoBERTa-B   68          75 / 91         69 / 84         24 / 25
BERT-B      49          49 / 50         50 / 50          0 / 0
Baseline    49          58 / 79          - / -           0 / 0

Table 2: AGE-COMPARE results. Accuracy over two answer candidates (random is 50%). LANGSENSE are the Language Sensitivity controls, pert is PERTURBED LANG. and nolang is NO LANG. The Baseline row is MLM-BASELINE.

We note that picking the statement for each task was done through manual experimentation. We tried multiple phrasings (Jiang et al., 2019) and chose the one that achieves the highest average zero-shot accuracy across all tested LMs. A case in point ... Thus, if a model performs well, one can infer that it has the tested reasoning skill. However, failure does not entail that the reasoning skill is missing, as it is possible that there is a problem with the lexical-syntactic construction we picked.

# 3.2 Learning Curves

Despite the advantages of zero-shot evaluation, the performance of a model might be adversely affected by mismatches between the language the pre-trained LM was trained on and the language of the examples in our tasks (Jiang et al., 2019). To tackle this, we fine-tune models with a small number of examples. We assume that if the LM representations are useful for a task, it will require few examples to overcome the language mismatch and achieve high performance. In most cases, we train with N ∈ {62, 125, 250, 500, 1K, 2K, 4K} examples. To account for optimization instabilities, we fine-tune several times with different seeds, and report average accuracy across seeds. The representations h are fixed during fine-tuning, and we only fine-tune the parameters of MLPMLM.

Evaluation and learning-curve metrics Learning curves are informative, but inspecting many learning curves can be difficult. Thus, we summarize them using two aggregate statistics.
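Before defining those statistics, the following hedged sketch illustrates the zero-shot MC-MLM evaluation just described for AGE-COMPARE: fill the template for a pair of ages and compare the masked-LM scores of "younger" vs. "older". The paper does not prescribe a specific toolkit; the HuggingFace `transformers` interface below and the token handling are our assumptions, and multi-piece candidates would need extra care.

```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tok = AutoTokenizer.from_pretrained("roberta-large")
model = AutoModelForMaskedLM.from_pretrained("roberta-large").eval()

TEMPLATE = ("A {a1} year old person is {mask} than me in age, "
            "If I am a {a2} year old person.")

def predict(a1, a2, candidates=(" younger", " older")):
    text = TEMPLATE.format(a1=a1, a2=a2, mask=tok.mask_token)
    enc = tok(text, return_tensors="pt")
    mask_pos = (enc.input_ids[0] == tok.mask_token_id).nonzero(as_tuple=True)[0]
    with torch.no_grad():
        logits = model(**enc).logits[0, mask_pos].squeeze(0)
    # Leading spaces matter for RoBERTa's BPE vocabulary; we assume each
    # candidate maps to a single word piece and take its first piece otherwise.
    cand_ids = [tok.convert_tokens_to_ids(tok.tokenize(c)[0]) for c in candidates]
    return candidates[int(torch.argmax(logits[cand_ids]))].strip()

print(predict(25, 38))   # a correct model should answer "younger"
print(predict(38, 25))   # a correct model should answer "older"
```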
We report: (a) MAX, i.e., the maximal accuracy on the learning curve, used to estimate how well the model can handle the task given the limited amount of examples. (b) The metric WS, which is a weighted average of accuracies across the learning curve, where higher weights are given to points where N is small.3 WS is related to the area under the accuracy curve, and to the online code metric proposed by Yogatama et al. (2019); Blier and Ollivier (2018). The linearly decreasing weights emphasize our focus on performance given little training data, as it highlights what was encoded by the model before fine-tuning.

For AGE-COMPARE, Figure 2B illustrates the learning curves of ROBERTA-L and BERT-WWM, and Table 2 shows the aggregate statistics. We fine-tune the model by replacing AGE-1 and AGE-2 with values between 43 and 120, but test with values between 15 and 38, to guarantee that the model generalizes to values unseen at training time. Again, we see that the representations learned by ROBERTA-L are already equipped with the knowledge necessary for solving this task.

# 3.3 Controls

Comparing learning curves tells us which model learns from fewer examples. However, since highly-parameterized MLPs, as used in LMs, can approximate a wide range of functions, it is difficult to determine whether performance is tied to the knowledge acquired at pre-training time, or to the process of fine-tuning itself. We present controls that attempt to disentangle these two factors.

Are LMs sensitive to the language input? We are interested in whether pre-trained representations reason over language examples. Thus, a natural control is to present the reasoning task without language and inspect performance. If the learning curve of a model does not change when the input is perturbed or even mostly deleted, then the model shows low language sensitivity and the pre-trained representations do not explain the probe performance. This approach is related to work by Hewitt and Liang (2019), who proposed a control task, where the learning curve of a model is compared to a learning curve when words are associated with random behaviour. We propose two control tasks:

NO LANGUAGE control We remove all input tokens, except for [MASK] and the arguments of the task, i.e., the tokens that are necessary for computing the output. In AGE-COMPARE, an example is reduced to the phrase "24 [MASK] 55", where the candidate answers are the words "blah", for "older", and "ya", for "younger".

3We use the decreasing weights W = (0.23, 0.2, 0.17, 0.14, 0.11, 0.08, 0.07).

Figure 2: An illustration of our evaluation protocol. We compare ROBERTA-L (green) and BERT-WWM (blue); controls are in dashed lines and markers are described in the legends. Zero-shot evaluation is on the top left; AGE-1 is "younger" (in color) vs. "older" (in white) than AGE-2. Panel prompts: (A) Zero-shot evaluation, "A 28 year old person is [MASK] than me in age, If I am a 38 year old person." (A. younger, B. older); (B) No-Language control, "25 [MASK] 38" (A. blah, B. ya); (C) Perturbed-language control, "A 25 year old person is [MASK] boo me in blah, If I am a 38 year old person." (A. younger, B. older).
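The WS statistic described above is straightforward to compute; the small sketch below uses the decreasing weights from footnote 3. The accuracy values in the example are placeholders, not results from the paper.

```python
TRAIN_SIZES = [62, 125, 250, 500, 1000, 2000, 4000]
WEIGHTS = [0.23, 0.20, 0.17, 0.14, 0.11, 0.08, 0.07]   # decreasing weights W

def ws(accuracies):
    """Weighted average over the learning curve; accuracies aligned with TRAIN_SIZES."""
    assert len(accuracies) == len(WEIGHTS)
    return sum(w * a for w, a in zip(WEIGHTS, accuracies))

def max_metric(accuracies):
    """MAX: the best accuracy observed anywhere on the learning curve."""
    return max(accuracies)

# Placeholder learning curve (accuracy per training-set size, averaged over seeds):
curve = [0.55, 0.62, 0.71, 0.78, 0.84, 0.88, 0.90]
print(round(ws(curve), 3), max_metric(curve))
```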
If the learn- ing curve is similar to when the full example is given (low language sensitivity), then the LM is not strongly using the language input. The dashed lines in Figure 2B illustrate the learning curves in NO LANG.: ROBERTA-L (green) shows high language sensitivity, while BERT-WWM (blue) has lower language sensitiv- ity. This suggests it handles this task partially dur- ing fine-tuning. Table 2 paints a similar picture, where the metric we use is identical to WS, ex- cept that instead of averaging accuracies, we aver- age the difference in accuracies between the stan- dard model and NO LANG. (rounding negative numbers to zero). For ROBERTA-L the value is 51, because ROBERTA-L gets almost 100% acc. in the presence of language, and is random (50% acc.) without language. PERTURBED LANGUAGE control A more tar- geted language control, is to replace words that are central for the reasoning task with nonsense words. Specifically, we pick key words in each probe template, and replace these words by ran- domly sampling from a list of 10 words that carry relatively limited meaning.4 For example, in PROPERTY CONJUNCTION, we can replace the word “and” with the word “blah” to get the ex- ample “What is located at hand blah used for writing?”. If the learning curve of PERTURBED LANG. is similar to the original example, then the model does not utilize the pre-trained representa- tion of “and” to solve the task, and may not cap- ture its effect on the semantics of the statement. Targeted words change from probe to probe. For example, the targeted in AGE-COMPARE, words are “age” and “than”, resulting in exam- ples like “A AGE-1 year old person is [MASK] blah me in da, If i am a AGE-2 year old per- son.”. Figure 2C shows the learning curves for ROBERTA-L and BERT-WWM, where solid lines corresponds to the original examples and dashed lines are the PERTURBED LANG. control. Despite this minor perturbation, the performance of ROBERTA-L substantially decreases, imply- ing that the model needs the input. Conversely, BERT-WWM performance decreases only mod- erately. Does a linear transformation suffice? In MC- MLM, the representations h are fixed, and only the pre-trained parameters of MLPMLM are fine- tuned. As a proxy for measuring “how far" the representations are from solving a task, we fix the weights of the first layer of MLPMLM, and only train the final layer. Succeeding in this setup means that only a linear transformation of h is required. Table 2 shows the performance of this setup (LINEAR), compared to MLPMLM. Why is MC-MLM preferred over MC-QA? Constructing a MC-MLM probe limits the answer candidates to a single token from the word-piece vocabulary. To relax this setup we also explore the In MC-QA, we phrase MC-QA setup from §2. the task as a question, letting answer candidates be arbitrary strings, which provides ample expres- sivity (Gardner et al., 2019b) and facilitates prob- ing question involving complex and commonsense reasoning (Talmor et al., 2019; Gardner et al., 2019a; Talmor and Berant, 2018). In Table 1, PROPERTY CONJUNCTION and ENCYC. COM- 4The list of substitutions is: “blah”, “ya”, “foo”, “snap”, “woo”, “boo”, “da”, “wee”, “foe” and “fee”. PARISON serve as examples for this setup. For AGE-COMPARE we use the same task in MC-QA setup. Figure 2D compares the learning curves of MC-MLM and MC-QA in AGE-COMPARE. 
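To make the two language-sensitivity controls concrete for AGE-COMPARE, the sketch below generates a PERTURBED LANG. example by swapping the targeted words ("age", "than") for nonsense words from the list in footnote 4, and a NO LANG. example that keeps only the arguments and the mask. The exact templating code is our assumption; only the word lists and templates come from the description above.

```python
import random

NONSENSE = ["blah", "ya", "foo", "snap", "woo", "boo", "da", "wee", "foe", "fee"]
TARGETED = ["age", "than"]   # probe-specific targeted words for AGE-COMPARE

def original(a1, a2):
    return (f"A {a1} year old person is [MASK] than me in age, "
            f"If I am a {a2} year old person.")

def perturbed_lang(a1, a2, rng=random):
    """Replace targeted words with nonsense words; everything else stays intact."""
    words = original(a1, a2).split()
    return " ".join(rng.choice(NONSENSE) if w.strip(",.").lower() in TARGETED else w
                    for w in words)

def no_lang(a1, a2):
    """Keep only the task arguments and [MASK]; answers map to e.g. 'blah'/'ya'."""
    return f"{a1} [MASK] {a2}"

print(perturbed_lang(25, 38))
print(no_lang(25, 38))
```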
Because in MC-QA the network MLPQA cannot be initialized by pre-trained weights, zero-shot evaluation is not meaningful, and more training examples are needed to train MLPQA. Still, the trends observed in MC-MLM remain, with ROBERTA-L achieving the best performance with the fewest examples.

# 4 The oLMpic Games

We now move to describe the research questions and various probes used to answer these questions. For each task we describe how it was constructed, show results via a table as described in the controls section, and present an analysis.

Our probes are mostly targeted towards symbolic reasoning skills (Table 1). We examine the ability of language models to compare numbers, to understand whether an object has a conjunction of properties, to perform multi-hop composition of facts, among others. However, since we generate examples automatically from existing resources, some probes also require background knowledge, such as sizes of objects. Moreover, as explained in §3.1, we test models on a manually-picked phrasing that might interact with the language abilities of the model. Thus, when a model succeeds this is evidence that it has the necessary skill, but failure could be attributed to issues with background knowledge and linguistic abilities as well. In each probe, we will explicitly mention what knowledge and language abilities are necessary.

# 4.1 Can LMs perform robust comparison?

Comparing two numeric values requires representing the values and performing the comparison operations. In §3 we saw the AGE-COMPARE task, in which ages of two people were compared. We found that ROBERTA-L and to some extent BERT-WWM were able to handle this task, performing well under the controls. We expand on this to related comparison tasks and perturbations that assess the sensitivity of LMs to the particular context and to the numerical value.

Is ROBERTA-L comparing numbers or ages? ROBERTA-L obtained zero-shot acc. of 98% in AGE-COMPARE. But is it robust? We test this using perturbations to the task and present the results in Figure 3. Figure 3A corresponds to the experiment from §3, where we observed that ROBERTA-L predicts "younger" (blue pixels) and "older" (white pixels) almost perfectly. To test whether ROBERTA-L can compare ages given the birth year rather than the age, we use the statement "A person born in YEAR-1 is [MASK] than me in age, If I was born in YEAR-2." Figure 3B shows that it correctly flips "younger" to "older" (76% acc.), reasoning that a person born in 1980 is older than one born in 2000.

However, when evaluated on the exact same statement, but with values corresponding to typical ages instead of years (Figure 3D), ROBERTA-L obtains an acc. of 12%, consistently outputting the opposite prediction. With ages as values and not years, it seems to disregard the language, performing the comparison based on the values only. We will revisit this tendency in §4.4. Symmetrically, Figure 3C shows results when numeric values of ages are swapped with typical years of birth. ROBERTA-L is unable to handle this, always predicting "older".5 This emphasizes that the model is sensitive to the argument values.

5We observed that in neutral contexts models have a slight preference for "older" over "younger", which could potentially explain this result.

Figure 3: AGE COMPARISON perturbations. Left side graphs are age comparison, right side graphs are age comparison by birth-year. In the bottom row, the values of ages are swapped with birth-years and vice versa. In blue pixels the model predicts "older", in white "younger". (A) is the correct answer. Panel prompts: (A) age comparison, "A 25 year old person is [MASK] than me in age, If I am a 38 year old person." (A. younger, B. older); (B) birth-year comparison, "A person born in 1984 is [MASK] than me in age, If I was born in 1992." (A. older, B. younger); (C) "A 1980 year old person is [MASK] than me in age, If I am a 1988 year old person." (A. younger, B. older); (D) "A person born in 20 is [MASK] than me in age, If I was born in 30." (A. older, B. younger).

Can Language Models compare object sizes? Comparing physical properties of objects requires knowledge of the numeric value of the property and the ability to perform comparison. Previous work has shown that such knowledge can be extracted from text and images (Bagherinezhad et al., 2016; Forbes and Choi, 2017; Yang et al., 2018a; Elazar et al., 2019; Pezzelle and Fernández, 2019). Can LMs do the same?

Probe Construction We construct statements of the form "The size of a OBJ-1 is usually much [MASK] than the size of a OBJ-2.", where the candidate answers are "larger" and "smaller". To instantiate the two objects, we manually sample from a list of objects from two domains: animals (e.g., "camel") and general objects (e.g., "sun"), and use the first domain for training and the second for evaluation. We bucket objects based on the numerical value of their size, using their median value in DOQ (Elazar et al., 2019), and then manually fix any errors. This probe requires prior knowledge of object sizes and understanding of a comparative language construction. Overall, we collected 127 and 35 objects for training and development respectively. We automatically instantiate object slots using objects that are in the same bucket.

Results ROBERTA-L excels in this task, starting from 84% acc. in the zero-shot setup and reaching a MAX of 91% (Table 3). Other models start with random performance and are roughly on par with MLM-BASELINE. ROBERTA-L shows sensitivity to the language, suggesting that the ability to compare object sizes is encoded in it.

Model       Zero-shot   MLPMLM WS/MAX   LINEAR WS/MAX   LANGSENSE pert/nolang
RoBERTa-L   84          88 / 91         86 / 90         22 / 26
BERT-WWM    55          65 / 81         63 / 77          9 / 9
BERT-L      52          56 / 66         53 / 56          5 / 4
BERT-B      56          55 / 72         53 / 56          2 / 3
RoBERTa-B   50          61 / 74         57 / 66          8 / 0
Baseline    46          57 / 74          - / -           2 / 1

Table 3: Results for the OBJECTS COMPARISON probe. Accuracy over two answer candidates (random is 50%).

Analysis Table 4 shows the result of running ROBERTA-L in the zero-shot setup over pairs of objects, where we sampled a single object from each bucket. Objects are ordered by their size from small to large. Overall, ROBERTA-L correctly predicts "larger" below the diagonal, and "smaller" above it. Interestingly, errors are concentrated around the diagonal, due to the more fine-grained differences in sizes, and when we compare objects to "sun", mostly emitting "larger", ignoring the rest of the statement.

# 4.2 Do LMs know "always" from "often"?

Adverbial modifiers such as "always", "sometimes" or "never" tell us about the quantity or frequency of events (Lewis, 1975; Barwise and Cooper, 1981). Anecdotally, when ROBERTA-L predicts a completion for the phrase "Cats usually drink [MASK].", the top completion is "coffee",
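As a hedged sketch of how OBJECTS COMPARISON statements could be instantiated, the code below pairs objects whose size buckets differ and fills the template above. The bucket contents are illustrative placeholders; the paper derives its buckets from DoQ medians with manual corrections and a specific pairing rule, which we do not reproduce here.

```python
from itertools import product

TEMPLATE = "The size of a {o1} is usually much [MASK] than the size of a {o2}."
ANSWERS = ("larger", "smaller")

# Placeholder buckets: larger index = larger object (illustrative only).
SIZE_BUCKETS = {"nail": 0, "pen": 1, "laptop": 2, "table": 3,
                "house": 4, "airplane": 5, "city": 6, "sun": 7}

def build_examples(objects):
    examples = []
    for o1, o2 in product(objects, repeat=2):
        if SIZE_BUCKETS[o1] == SIZE_BUCKETS[o2]:
            continue   # skip pairs whose sizes are not clearly distinguishable
        gold = "larger" if SIZE_BUCKETS[o1] > SIZE_BUCKETS[o2] else "smaller"
        examples.append({"text": TEMPLATE.format(o1=o1, o2=o2),
                         "answers": ANSWERS, "gold": gold})
    return examples

examples = build_examples(list(SIZE_BUCKETS))
print(len(examples), examples[0])
```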
nail pen laptop table house airplane city sun nail pen laptop table house airplane city sun - smaller larger larger larger larger larger larger smaller - larger larger larger larger larger larger smaller smaller - larger larger larger larger larger smaller smaller larger - larger larger larger larger smaller smaller smaller smaller - larger larger larger smaller smaller smaller larger larger - larger larger smaller smaller smaller smaller smaller larger - larger smaller smaller smaller larger larger larger larger - Table 4: ROBERTA-L Zero-shot SIZE COMP. predictions. a frequent drink in the literature it was trained on, rather then “water”. However, humans know that“Cats NEVER drink coffee”. Prior work ex- plored retrieving the correct quantifier for a state- ment (Herbelot and Vecchi, 2015; Wang et al., 2017). Here we adapt this task to a masked lan- guage model. The “Always-Never” task We present statements, such as “rhinoceros [MASK] have fur”, with an- swer candidates, such as “never” or “always”. To succeed, the model must know the frequency of an event, and map the appropriate adverbial mod- ifier to that representation. Linguistically, the task tests how well the model predicts frequency quan- tifiers (or adverbs) modifying predicates in differ- ent statements (Lepore and Ludwig, 2007). Probe Construction We manually craft templates that contain one slot for a subject and an- other for an object, e.g. “FOOD-TYPE is [MASK] part of a ANIMAL’s diet.” (more exam- ples available in Table 6). The subject slot is instantiated with concepts of the correct seman- tic type, according to the isA predicate in CON- CEPTNET. In the example above we will find con- cepts that are of type FOOD-TYPE and ANIMAL. The object slot is then instantiated by forming masked templates of the form “meat is part of a [MASK]’s diet.” and “cats have [MASK].” and letting BERT-L produce the top-20 completions. We filter out completions that do not have the cor- rect semantic type according to the isA predi- cate. Finally, we crowdsource gold answers us- ing Amazon Mechanical Turk. Annotators were presented with an instantiated template (with the masked token removed), such as “Chickens have horns.” and chose the correct answer from 5 can- didates: “never”, “rarely”, “sometimes”, “often” and “always”.6 We collected 1,300 examples with 1,000 used for training and 300 for evaluation. We note that some examples in this probe are similar to OBJECTS COMPARISON (line 4 in Ta- ble 5). However, the model must also determine if sizes can be overlapping, which is the case in 56% of the examples. Results Table 5 shows the results, where ran- dom accuracy is 20%, and majority vote accu- racy is 35.5%. In the zero-shot setup, acc. is less than random. In the MLPMLM and LINEAR setup acc. reaches a maximum of 57% in BERT-L, but MLM-BASELINE obtains similar acc., imply- ing that the task was mostly tackled at fine-tuning time, and the pre-trained representations did not contribute much. Language controls strengthen this hypothesis, where performance hardly drops in the PERTURBED LANG. control and slightly drops in the NO LANG. control. Figure 1B com- pares the learning curve of ROBERTA-L with controls. MLM-BASELINE consistently outper- forms ROBERTA-L, which display only minor language sensitivity, suggesting that pre-training is not effective for solving this task. 
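The following simplified sketch shows how ALWAYS-NEVER statements are assembled in the MC-MLM format; the two instantiations are drawn from the error analysis in Table 6 below. In the actual probe, subject/object slots are filled via ConceptNet's isA relation, object candidates are filtered with BERT completions, and gold answers are crowdsourced, none of which is reproduced here (article handling such as "a" vs. "an" is also ignored).

```python
ANSWERS = ["never", "rarely", "sometimes", "often", "always"]

TEMPLATES = {
    "diet": "{food} is [MASK] part of a {animal}'s diet.",
    "body": "A {animal} [MASK] has {part}.",
}

examples = [
    {"text": TEMPLATES["diet"].format(food="meat", animal="elephant"), "gold": "never"},
    {"text": TEMPLATES["body"].format(animal="lizard", part="a wing"), "gold": "never"},
]

for ex in examples:
    assert ex["gold"] in ANSWERS
    print(ex["text"], "| gold:", ex["gold"], "| candidates:", ANSWERS)
```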
Model       Zero-shot   MLPMLM WS/MAX   LINEAR WS/MAX   LANGSENSE pert/nolang
RoBERTa-L   14          44 / 55         26 / 41          3 / 5
BERT-WWM    10          46 / 57         32 / 52          2 / 3
BERT-L      22          45 / 55         36 / 50          3 / 8
BERT-B      11          44 / 56         30 / 52          3 / 8
RoBERTa-B   15          43 / 53         25 / 44          2 / 6
Baseline    20          46 / 56          - / -           1 / 2

Table 5: Results for the ALWAYS-NEVER probe. Accuracy over five answer candidates (random is 20%).

Analysis We generated predictions from the best model, BERT-WWM, and show analysis results in Table 6. For reference, we only selected examples where human majority vote led to the correct answer, and thus the majority vote is near 100% on these examples. Although the answers "often" and "rarely" are the gold answer in 19% of the training data, the LMs predict these answers in less than 1% of examples. In the template "A dish with FOOD-TYPE [MASK] contains FOOD-TYPE." the LM always predicts "sometimes". Overall we find models do not perform well. Reporting bias (Gordon and Van Durme, 2013) may play a role in the inability to correctly determine that "A rhinoceros NEVER has fur." Interestingly, behavioral research conducted on blind humans shows they exhibit a similar bias (Kim et al., 2019).

6The class distribution over the answers is "never": 24%, "rarely": 10%, "sometimes": 34%, "often": 7% and "always": 23%.

Question                                     Answer      Distractor   Acc.
A dish with pasta [MASK] contains pork.      sometimes   sometimes    75
stool is [MASK] placed in the box.           never       sometimes    68
A lizard [MASK] has a wing.                  never       always       61
A pig is [MASK] smaller than a cat.          rarely      always       47
meat is [MASK] part of a elephant's diet.    never       sometimes    41
A calf is [MASK] larger than a dog.          sometimes   often        30

Table 6: Error analysis for ALWAYS-NEVER. Model predictions are in bold, and Acc. shows acc. per template.

# 4.3 Do LMs Capture Negation?

Ideally, the presence of the word "not" should affect the prediction of a masked token. However, several recent works have shown that LMs do not take into account the presence of negation in sentences (Ettinger, 2019; Nie et al., 2020; Kassner and Schütze, 2020). Here, we add to this literature by probing whether LMs can properly use negation in the context of synonyms vs. antonyms.

Do LMs Capture the Semantics of Antonyms? In the statement "He was [MASK] fast, he was very slow.", [MASK] should be replaced with "not", since "fast" and "slow" are antonyms. Conversely, in "He was [MASK] fast, he was very rapid", the LM should choose a word like "very" in the presence of the synonyms "fast" and "rapid". An LM that correctly distinguishes between "not" and "very" demonstrates knowledge of the taxonomic relations as well as the ability to reason about the usage of negation in this context.

Probe Construction We sample synonym and antonym pairs from CONCEPTNET (Speer et al., 2017) and WORDNET (Fellbaum, 1998), and use the Google Books Corpus to choose pairs that occur frequently in language. We make use of the statements introduced above. Half of the examples are synonym pairs and half antonyms, generating 4,000 training examples and 500 for evaluation. Linguistically, we test whether the model appropriately predicts a negation vs. intensification adverb based on synonymy/antonymy relations between nouns, adjectives and verbs.

Results ROBERTA-L shows higher than chance acc. of 75% in the zero-shot setting, as well as high Language Sensitivity (Table 7). MLM-BASELINE, equipped with GloVe word embeddings, is able to reach a comparable WS of 67 and MAX of 80%, suggesting LMs do not have a large advantage on this task.

Model       Zero-shot   MLPMLM WS/MAX   LINEAR WS/MAX   LANGSENSE pert/nolang
RoBERTa-L   75          85 / 91         77 / 84         14 / 21
BERT-WWM    57          70 / 81         61 / 73          5 / 6
BERT-L      51          70 / 82         58 / 74          5 / 9
BERT-B      52          68 / 81         59 / 74          2 / 9
RoBERTa-B   57          74 / 87         63 / 78         10 / 16
Baseline    47          67 / 80          - / -           0 / 0

Table 7: Results for the ANTONYM NEGATION probe. Accuracy over two answer candidates (random is 50%).

Figure 4: Learning curves in two tasks. For each task the best performing LM is shown alongside the NO LANG. control and baseline model. (A) is the correct answer. Panel prompts: (A) Antonym Negation, "it was [MASK] fast, it was really rapid." (A. really, B. not); (B) Encyclopedic Composition, "Where is the headquarters of the company that Toshio Suzuki established located?" (A. Koganei, B. Fukuoka, C. Iwata).

# 4.4 Can LMs handle conjunctions of facts?

We present two probes where a model should understand the reasoning expressed by the word and.

Property conjunction CONCEPTNET is a Knowledge-Base that describes the properties of millions of concepts through its (subject, predicate, object) triples. We use CONCEPTNET to test whether LMs can find concepts for which a conjunction of properties holds. For example, we will create a question like "What is located in a street and is related to octagon?", where the correct answer is "street sign". Because answers are drawn from CONCEPTNET, they often consist of more than one word piece, thus examples are generated in the MC-QA setup.

Probe Construction To construct an example, we first choose a concept that has two properties in CONCEPTNET, where a property is a (predicate, object) pair. For example, stop sign has the properties (atLocation, street) and (relatedTo, octagon). Then, we create two distractor concepts, for which only one property holds: car has the property (atLocation, street), and math has the property (relatedTo, octagon). Given the answer concept, the distractors and the properties, we can automatically generate pseudo-language questions and answers by mapping 15 CONCEPTNET predicates to natural language questions. We split examples such that concepts in training and evaluation are disjoint. This linguistic structure tests whether the LM can answer questions with conjoined predicates, requiring world knowledge of objects and relations.

Results In MC-QA, we fine-tune the entire network and do not freeze any representations. Zero-shot evaluation cannot be applied since the weights of MLPQA are untrained. All LMs consistently improve as the number of examples increases, reaching a MAX of 57-87% (Table 8). The high MAX results suggest that the LMs generally have the required pre-existing knowledge. The WS of most models is slightly higher than the baselines (49% MAX and 39 WS). Language Sensitivity is slightly higher than zero in some models. Overall, results suggest the LMs do have some capability in this task, but proximity to baseline results and low language selectivity make it hard to clearly determine if it existed before fine-tuning.

To further validate our findings, we construct a parallel version of our data, where we replace the word "and" by the phrase "but not". In this version, the correct answer is the first distractor in the original experiment, where one property holds and the other does not.
Overall, we observe a similar trend (with an increase in performance across all models): MAX results are high (79-96%), pointing that the LMs hold the relevant information, but im- provement over ESIM-Baseline and language sen- sitivity are low. For brevity, we omit the detailed numerical results. Model LEARNCURVE LANGSENSE WS MAX pert nolang RoBERTa-L 49 BERT-WWM 46 48 BERT-L 47 BERT-B 40 RoBERTa-B 39 Baseline 87 80 75 71 57 49 2 0 2 2 0 0 4 1 5 1 0 0 Table 8: Results for the PROPERTY CONJUNCTION probe. Accuracy over three answer candidates (random is 33%). Taxonomy conjunction A different operation is to find properties that are shared by two concepts. Specifically, we test whether LMs can find the mutual hypernym of a pair of concepts. For ex- ample, “A germ and a human are both a type of [MASK].”, where the answer is “organism”. Probe Construction We use CONCEPTNET and WORDNET to find pairs of concepts and their hy- pernyms, keeping only pairs that frequently ap- pear in the GOOGLE BOOK CORPUS. The exam- ple template is “A ENT-1 and a ENT-2 are both a type of [MASK].”, where ENT-1 and ENT-2 are replaced with entities that have a common hy- pernym, which is the gold answer. Distractors are concepts that are hypernyms of ENT-1, but not ENT-2, or vice versa. For evaluation, we keep all examples related to food and animal tax- onomies, e.g., “A beer and a ricotta are both a type of [MASK].”, where the answer is “food” and the distractors are “cheese” and “alcohol”. This phrasing requires the model to handle conjoined co-hyponyms in the subject position, based on lex- ical relations of hyponymy / hypernymy between nouns. For training, we use examples from differ- ent taxonomic trees, such that the concepts in the training and evaluation sets are disjoint. Results Table 9 shows that models’ zero-shot acc. is substantially higher than random (33%), but overall even after fine-tuning acc. is at most 59%. However, the NO LANG. control shows some language sensitivity, suggesting that some models have pre-existing capabilities. Model Zero MLPMLM shot WS MAX WS MAX pert LINEAR LANGSENSE nolang RoBERTa-L 45 BERT-WWM 46 53 BERT-L 47 BERT-B 46 RoBERTa-B 33 Baseline 50 48 54 48 50 33 56 52 57 50 59 47 45 46 53 47 47 - 46 46 54 47 49 - 0 0 0 0 0 1 3 7 15 12 18 2 Table 9: Results for the TAXONOMY CONJUNCTION probe. Accuracy over three answer candidates (random is 33%). Analysis Analyzing the errors of ROBERTA-L, we found that a typical error is predicting for “A crow and a horse are both a type of [MASK].” that the answer is “bird”, rather than “animal”. Specifically, LMs prefer hypernyms that are closer in terms of edge distance on the taxonomy tree. Thus, a crow is first a bird, and then an animal. We find that when distractors are closer to one of the entities in the statement than the gold answer, the models will consistently (80%) choose the dis- tractor, ignoring the second entity in the phrase. # 4.5 Can LMs do multi-hop reasoning? Questions that require multi-hop reasoning, such as “Who is the director of the movie about a WW2 pacific medic?”, have recently drawn attention (Yang et al., 2018b; Welbl et al., 2018; Talmor and Berant, 2018) as a challenging task for contempo- rary models. But do pre-trained LMs have some internal mechanism to handle such questions? To address this question, we create two probes, one for compositional question answering, and the other uses a multi-hop setup, building upon our observation (§3) that some LMs can compare ages. 
Encyclopedic composition We construct ques- tions such as “When did the band where John Lennon played first form?”. Here answers require multiple tokens, thus we use the MC-QA setup. Probe Construction We use the following three (1) “when did the band where ENT templates: played first form?”, (2) “who is the spouse of the actor that played in ENT?” and (3) “where is the headquarters of the company that ENT established located?”. We instantiate ENT using information from WIKIDATA (Vrandeˇci´c and Kr˝otzsch, 2014), choosing challenging distractors. For example, for template 1, the distractor will be a year close to the gold answer, and for template 3, it will be a city in the same country as the gold answer city. This lin- guistic structure introduces a (restrictive) relative clauses that requires a) Correctly resolving the ref- erence of the noun modified by the relative clause, b) Answering the full question subsequently. To solve the question, the model must have knowledge of all single-hop encyclopedic facts re- quired for answering it. Thus, we first fine-tune the model on all such facts (e.g. “What company did Bill Gates establish? Microsoft”) from the training and evaluation set, and then fine-tune on multi-hop composition. Results Results are summarized in Table 10. All models achieve low acc. in this task, and the base- line performs best with a MAX of 54%. Language sensitivity of all models is small, and MLM- BASELINE performs slightly better (Figure 4B), suggesting that the LMs are unable to resolve com- positional questions, but also struggle to learn it with some supervision. Multi-hop Comparison Multi-hop reasoning can be found in many common structures in nat- WS MAX pert nolang RoBERTa-L BERT-WWM BERT-L BERT-B RoBERTa-B ESIM-Baseline 42 47 45 43 41 49 50 53 51 48 46 54 0 1 1 0 0 3 2 4 4 3 0 0 Table 10: Results for ENCYCLOPEDIC COMPOSITION. Ac- curacy over three answer candidates (random is 33%). ural language. In the phrase “When comparing a 83 year old, a 63 year old and a 56 year old, the [MASK] is oldest” one must find the oldest person, then refer to its ordering: first, second or third. Probe Construction We use the template above, treating the ages as arguments, and “first”, “sec- ond”, and “third” as answers. Age arguments are in the same ranges as in AGE-COMPARE. Linguis- tically, the task requires predicting the subject of sentences whose predicate is in a superlative form, where the relevant information is contained in a “when”-clause. The sentence also contains nomi- nal ellipsis, also known as fused-heads (Elazar and Goldberg, 2019). Model Zero MLPMLM shot WS MAX WS MAX pert LINEAR LANGSENSE nolang RoBERTa-L 29 BERT-WWM 33 33 BERT-L 32 BERT-B 33 RoBERTa-B 34 Baseline 36 41 32 33 32 35 49 65 35 35 40 48 31 32 31 33 29 - 41 36 34 35 33 - 2 6 0 0 0 1 2 4 3 2 0 0 Table 11: Results for COMPOSITIONAL COMPARISON. Ac- curacy over three answer candidates (random is 33%). Results All three possible answers appear in ROBERTA-L’s top-10 zero-shot predictions, in- dicating that the model sees the answers as viable choices. Although successful in AGE-COMPARE, the performance of ROBERTA-L is poor in this probe (Table 11), With zero-shot acc. that is al- most random, WS slightly above random, MAX lower than MLM-BASELINE (48%), and close to zero language sensitivity. All LMs seem to be learning the task during probing. 
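The MULTI-HOP COMPARISON probe just described is easy to generate from its template; the sketch below does so with random age arguments. The sampling details (and the exact age range, taken to match AGE-COMPARE) are our simplification.

```python
import random

TEMPLATE = ("When comparing a {a} year old, a {b} year old and a {c} year old, "
            "the [MASK] is oldest")
ANSWERS = ["first", "second", "third"]

def make_example(rng, low=15, high=105):
    ages = rng.sample(range(low, high + 1), 3)          # three distinct ages
    gold = ANSWERS[max(range(3), key=lambda i: ages[i])] # position of the oldest person
    return {"text": TEMPLATE.format(a=ages[0], b=ages[1], c=ages[2]),
            "answers": ANSWERS, "gold": gold}

rng = random.Random(0)
print(make_example(rng))
```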
Although BERT-WWM was able to partially solve the task with a MAX of 65% when approaching 4,000 training examples, the models do not appear to show multi-step capability in this task. RoBERTa | BERT Large | WWM BERT Large RoBERTa Base BERT Base ALWAYS-NEVER PROPERTY CONJ. TAXONOMY CON). ENCYC. COMP. MULTI-HOP COMP. Table 12: The oLMpic games medals’, summarizing per-task success. ¥ indicate the LM has achieved high accuracy con- sidering controls and baselines, X indicates partial success. # 5 Medals We summarize the results of the oLMpic Games in Table 12. Generally, the LMs did not demon- strate strong pre-training capabilities in these sym- bolic reasoning tasks. BERT-WWM showed par- tial success in a few tasks, whereas ROBERTA- L showed high performance in ALWAYS-NEVER, OBJECTS COMPARISON and ANTONYM NEGA- TION, and emerges as the most promising LM. However, when perturbed, ROBERTA-L has failed to demonstrates consistent generalization and abstraction. Analysis of correlation with pre-training data A possible hypothesis for why a particular model is successful in a particular task might be that the language of a probe is more common in the corpus it was pre-trained on. To check that, we compute the unigram distribution over the training corpus of both BERT and ROBERTA. We then compute the average log probability of the development set under these two unigram distributions for each task (taking into account only content words). Fi- nally, we compute the correlation between which model performs better on a probe (ROBERTA-L vs. BERT-WWM) and which training corpus in- duces higher average log probability on that probe. We find that the Spearman correlation is 0.22, hint- ing that the unigram distributions do not fully ex- plain the difference in performance. # 6 Discussion We presented eight different tasks for evaluating the reasoning abilities of models, alongside an evaluation protocol for disentangling pre-training from fine-tuning. We found that even models that have identical structure and objective func- tions differ not only quantitatively but also qual- itatively. Specifically, ROBERTA-L has shown reasoning abilities that are absent from other mod- els. Thus, with appropriate data and optimization, models can acquire from an LM objective skills that might be surprising intuitively. However, when current LMs succeed in a rea- soning task, they do not do so through abstraction and composition as humans perceive it. The abili- ties are context-dependent, if ages are compared – then the numbers should be typical ages. Discrep- ancies from the training distribution lead to large drops in performance. Last, the performance of LM in many reasoning tasks is poor. Our work sheds light on some of the blind spots of current LMs. We will release our code and data to help researchers evaluate the reasoning abili- ties of models, aid the design of new probes, and guide future work on pre-training, objective func- tions and model design for endowing models with capabilities they are currently lacking. Acknowledgements This work was completed in partial fulfillment for the PhD degree of the first author. We thank our colleagues at The Allen Institute of AI, especially Kyle Richardson, Asaf Amrami, Mor Pipek, Myle Ott, Hillel Taub- Tabib and Reut Tsarfaty. 
This research was par- tially supported by The Israel Science Founda- tion grant 942/16, The Blavatnik Computer Sci- ence Research Fund and The Yandex Initiative for Machine Learning, and the European Union’s Seventh Framework Programme (FP7) under grant agreement no. 802774-ERC-iEXTRACT and no. 802800-DELPHI. # References Yossi Adi, Einat Kermany, Yonatan Belinkov, Ofer Lavi, and Yoav Goldberg. 2016. Fine- grained analysis of sentence embeddings us- ing auxiliary prediction tasks. arXiv preprint arXiv:1608.04207. Hessam Bagherinezhad, Hannaneh Hajishirzi, Yejin Choi, and Ali Farhadi. 2016. Are ele- phants bigger than butterflies? reasoning about sizes of objects. In Thirtieth AAAI Conference on Artificial Intelligence. Jon Barwise and Robin Cooper. 1981. General- In Phi- ized quantifiers and natural language. losophy, language, and artificial intelligence, pages 241–301. Springer. Yonatan Belinkov and James Glass. 2019. Analy- sis methods in neural language processing: A Transactions of the Association for survey. Computational Linguistics, 7:49–72. Léonard Blier and Yann Ollivier. 2018. The de- scription length of deep learning models. In Ad- vances in Neural Information Processing Sys- tems, pages 2216–2226. Qian Chen, Xiaodan Zhu, Zhen-Hua Ling, Si Wei, Hui Jiang, and Diana Inkpen. 2017. Enhanced LSTM for natural language inference. In Pro- ceedings of the 55th Annual Meeting of the As- sociation for Computational Linguistics (Vol- ume 1: Long Papers), pages 1657–1668, Van- couver, Canada. Association for Computational Linguistics. Andy Coenen, Emily Reif, Ann Yuan, Been Kim, Adam Pearce, Fernanda Viégas, and Mar- tin Wattenberg. 2019. Visualizing and mea- arXiv preprint suring the geometry of bert. arXiv:1906.02715. Andrew M Dai and Quoc V Le. 2015. Semi- supervised sequence learning. In C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, and R. Garnett, editors, Advances in Neural Infor- mation Processing Systems 28, pages 3079– 3087. Curran Associates, Inc. J. Devlin, M. Chang, K. Lee, and K. Toutanova. 2019. Bert: Pre-training of deep bidirectional In transformers for language understanding. North American Association for Computational Linguistics (NAACL). Yanai Elazar and Yoav Goldberg. 2019. Where’s my head? definition, data set, and models for numeric fused-head identification and resolu- tion. Transactions of the Association for Com- putational Linguistics, 7:519–535. Yanai Elazar, Abhijit Mahabal, Deepak Ra- machandran, Tania Bedrax-Weiss, and Dan Roth. 2019. How large are lions? inducing distributions over quantitative attributes. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3973–3983, Florence, Italy. Association for Computational Linguistics. Allyson Ettinger. 2019. What bert is not: Lessons from a new suite of psycholinguistic diag- arXiv preprint nostics for language models. arXiv:1907.13528. Allyson Ettinger, Ahmed Elgohary, and Philip Resnik. 2016. Probing for semantic evidence of composition by means of simple classifica- tion tasks. In Proceedings of the 1st Workshop on Evaluating Vector-Space Representations for NLP, pages 134–139. C. Fellbaum. 1998. WordNet: An Electronic Lexi- cal Database. MIT Press. Maxwell Forbes and Yejin Choi. 2017. Verb physics: Relative physical knowledge of ac- In Proceedings of the 55th tions and objects. Annual Meeting of the Association for Compu- tational Linguistics (Volume 1: Long Papers), pages 266–276. 
Matt Gardner, Jonathan Berant, Hannaneh Ha- jishirzi, Alon Talmor, and Sewon Min. 2019a. On making reading comprehension more com- In Proceedings of the 2nd Work- prehensive. shop on Machine Reading for Question Answer- ing, pages 105–112. Matt Gardner, Jonathan Berant, Hannaneh Ha- jishirzi, Alon Talmor, and Sewon Min. 2019b. Question answering is a format; when is it use- ful? arXiv preprint arXiv:1909.11291. Yoav Goldberg. 2019. Assessing bert’s syntactic abilities. arXiv preprint arXiv:1901.05287. Jonathan Gordon and Benjamin Van Durme. 2013. Reporting bias and knowledge acquisition. In Proceedings of the 2013 workshop on Auto- mated knowledge base construction, pages 25– 30. ACM. Aurélie Herbelot and Eva Maria Vecchi. 2015. Building a shared world: Mapping distribu- tional to model-theoretic semantic spaces. In Proceedings of the 2015 Conference on Empir- ical Methods in Natural Language Processing, pages 22–32. John Hewitt and Percy Liang. 2019. Designing and interpreting probes with control tasks. In Proceedings of the 2019 Conference on Em- pirical Methods in Natural Language Process- ing and the 9th International Joint Conference on Natural Language Processing (EMNLP- IJCNLP), pages 2733–2743. John Hewitt and Christopher D. Manning. 2019. A structural probe for finding syntax in word In Proceedings of the Con- representations. ference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT, pages 4129–4138. Zhengbao Jiang, Frank F. Xu, Jun Araki, and Graham Neubig. 2019. How can we know arXiv preprint what language models know? arXiv:1911.12543. and Hinrich Schütze. 2020. Negated and misprimed probes for pretrained language models: Birds can talk, but cannot fly. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7811–7818, Online. Association for Computational Linguistics. Judy S Kim, Giulia V Elli, and Marina Bedny. 2019. Knowledge of animal appearance among sighted and blind adults. Proceedings of the National Academy of Sciences, 116(23):11213– 11222. Ernest Lepore and Kirk Ludwig. 2007. Donald Davidson’s truth-theoretic semantics. Oxford University Press. Adverbs of quantifica- tion. Formal semantics-the essential readings, 178:188. Yongjie Lin, Yi Chern Tan, and Robert Frank. 2019. Open sesame: Getting inside bert’s lin- guistic knowledge. In Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 241–253. Tal Linzen, Emmanuel Dupoux, and Yoav Gold- berg. 2016a. Assessing the ability of LSTMs to learn syntax-sensitive dependencies. TACL, 4:521–535. Tal Linzen, D. Emmanuel, and G. Yoav. 2016b. Assessing the ability of LSTMs to learn syntax-sensitive dependencies. Transactions of the Association for Computational Linguistics (TACL), 4. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly opti- mized bert pretraining approach. arXiv preprint arXiv:1907.11692. Todor Mihaylov, Peter Clark, Tushar Khot, and Ashish Sabharwal. 2018. Can a suit of armor conduct electricity? a new dataset for open book question answering. In EMNLP. Yixin Nie, Adina Williams, Emily Dinan, Mohit Bansal, Jason Weston, and Douwe Kiela. 2020. Adversarial NLI: A new benchmark for natu- ral language understanding. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4885–4901, Online. 
Association for Computational Linguis- tics. J. Pennington, R. Socher, and C. D. Manning. 2014. GloVe: Global vectors for word rep- In Empirical Methods in Natural resentation. Language Processing (EMNLP), pages 1532– 1543. M. E. Peters, M. Neumann, M. Iyyer, M. Gardner, C. Clark, K. Lee, and L. Zettlemoyer. 2018a. Deep contextualized word representations. In North American Association for Computational Linguistics (NAACL). Matthew Peters, Mark Neumann, Luke Zettle- moyer, and Wen-tau Yih. 2018b. Dissecting contextual word embeddings: Architecture and In Proceedings of the 2018 representation. Conference on Empirical Methods in Natural Language Processing, pages 1499–1509. Fabio Petroni, Tim Rocktäschel, Sebastian Riedel, Patrick Lewis, Anton Bakhtin, Yuxiang Wu, and Alexander Miller. 2019. Language models as knowledge bases? In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Pro- cessing (EMNLP-IJCNLP), pages 2463–2473. Sandro Pezzelle and Raquel Fernández. 2019. Is the red square big? malevic: Modeling ad- In Pro- jectives leveraging visual contexts. ceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Nat- ural Language Processing (EMNLP-IJCNLP), pages 2858–2869. Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. OpenAI Blog, 1(8). Ohad Rozen, Vered Shwartz, Roee Aharoni, and Ido Dagan. 2019. Diversify your datasets: Analyzing generalization via controlled vari- ance in adversarial datasets. In Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL), pages 196–205. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare In Proceedings of words with subword units. the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Pa- pers), pages 1715–1725. Vered Shwartz and Ido Dagan. 2019. Still a pain in the neck: Evaluating text representations on lex- ical composition. In Transactions of the Asso- ciation for Computational Linguistics (TACL). Robyn Speer, Joshua Chin, and Catherine Havasi. 2017. Conceptnet 5.5: An open multilingual In Thirty-First graph of general knowledge. AAAI Conference on Artificial Intelligence. The web as knowledge-base for answering complex ques- tions. In North American Association for Com- putational Linguistics (NAACL). A. Talmor, J. Herzig, N. Lourie, and J. Berant. 2019. Commonsenseqa: A question answering challenge targeting commonsense knowledge. In North American Association for Computa- tional Linguistics (NAACL). Ian Tenney, Dipanjan Das, and Ellie Pavlick. 2019a. BERT rediscovers the classical NLP In Proceedings of the 57th Annual pipeline. Meeting of the Association for Computational Linguistics, pages 4593–4601, Florence, Italy. Association for Computational Linguistics. Ian Tenney, Patrick Xia, Berlin Chen, Alex Wang, Adam Poliak, R Thomas McCoy, Najoung Kim, Benjamin Van Durme, Sam Bowman, Dipanjan Das, and Ellie Pavlick. 2019b. What do you learn from context? probing for sentence struc- ture in contextualized word representations. In International Conference on Learning Repre- sentations. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. At- tention is all you need. 
In Advances in neural information processing systems, pages 5998– 6008. D. Vrandeˇci´c and M. Kr˝otzsch. 2014. Wikidata: A free collaborative knowledgebase. Commu- nications of the ACM, 57. Eric Wallace, Yizhong Wang, Sujian Li, Sameer Singh, and Matt Gardner. 2019. Do nlp mod- els know numbers? probing numeracy in em- In Proceedings of the 2019 Con- beddings. ference on Empirical Methods in Natural Lan- guage Processing and the 9th International Joint Conference on Natural Language Pro- cessing (EMNLP-IJCNLP), pages 5310–5318. Mingzhe Wang, Yihe Tang, Jian Wang, and Jia Deng. 2017. Premise selection for theorem proving by deep graph embedding. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fer- gus, S. Vishwanathan, and R. Garnett, edi- tors, Advances in Neural Information Process- ing Systems 30, pages 2786–2796. Curran As- sociates, Inc. Alex Warstadt, Yu Cao, Ioana Grosu, Wei Peng, Hagen Blix, Yining Nie, Anna Alsop, Shikha Bordia, Haokun Liu, Alicia Parrish, et al. 2019. Investigating bert’s knowledge of lan- guage: Five analysis methods with npis. In Pro- ceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Nat- ural Language Processing (EMNLP-IJCNLP), pages 2870–2880. Johannes Welbl, Pontus Stenetorp, and Sebastian Riedel. 2018. Constructing datasets for multi- hop reading comprehension across documents. Transactions of the Association for Computa- tional Linguistics, 6:287–302. Yiben Yang, Larry Birnbaum, Ji-Ping Wang, and Doug Downey. 2018a. Extracting common- sense properties from embeddings with limited In Proceedings of the 56th human guidance. Annual Meeting of the Association for Compu- tational Linguistics (Volume 2: Short Papers), pages 644–649. Z. Yang, P. Qi, S. Zhang, Y. Bengio, W. W. Co- hen, R. Salakhutdinov, and C. D. Manning. 2018b. HotpotQA: A dataset for diverse, ex- plainable multi-hop question answering. In Em- pirical Methods in Natural Language Process- ing (EMNLP). Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Russ R Salakhutdinov, and Quoc V Le. 2019. Xlnet: Generalized autoregressive pretraining for language understanding. In Ad- vances in neural information processing sys- tems, pages 5753–5763. D. Yogatama, C. de M. d’Autume, J. Con- nor, T. Kocisky, M. Chrzanowski, L. Kong, A. Lazaridou, W. Ling, L. Yu, C. Dyer, Learning and evaluating gen- et al. 2019. arXiv preprint eral arXiv:1901.11373. Rowan Zellers, Yonatan Bisk, Roy Schwartz, and Yejin Choi. 2018. Swag: A large-scale adver- sarial dataset for grounded commonsense infer- ence. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Pro- cessing (EMNLP).
{ "id": "1906.02715" }
2001.04830
Sequential Recommender Systems: Challenges, Progress and Prospects
The emerging topic of sequential recommender systems (SRSs) has attracted increasing attention in recent years. Different from conventional recommender systems, including collaborative filtering and content-based filtering, SRSs try to understand and model the sequential user behaviors, the interactions between users and items, and the evolution of users' preferences and item popularity over time. SRSs involve the above aspects for a more precise characterization of user contexts, intent and goals, and item consumption trends, leading to more accurate, customized and dynamic recommendations. In this paper, we provide a systematic review on SRSs. We first present the characteristics of SRSs, and then summarize and categorize the key challenges in this research area, followed by the corresponding research progress consisting of the most recent and representative developments on this topic. Finally, we discuss the important research directions in this vibrant area.
http://arxiv.org/pdf/2001.04830
Shoujin Wang, Liang Hu, Yan Wang, Longbing Cao, Quan Z. Sheng, Mehmet Orgun
cs.IR, cs.LG, Recommender systems, machine learning, information retrieval
IJCAI-2019 Survey Track Paper
null
cs.IR
20191228
20191228
# Sequential Recommender Systems: Challenges, Progress and Prospects ∗ Shoujin Wang1, Liang Hu2, Yan Wang1, Longbing Cao2, Quan Z. Sheng1 , Mehmet Orgun1 1Department of Computing, Macquarie University 2Advanced Analytics Institute, University of Technology Sydney {shoujin.wang, yan.wang}@mq.edu.au, [email protected], Abstract The emerging topic of sequential recommender systems (SRSs) has attracted increasing attention in recent years. Different from the conventional recommender including collaborative filtering and content-based filtering, SRSs try to understand and model the sequential user behaviors, the interactions between users and items, and the evolution of users’ preferences and item popularity over time. SRSs involve the above aspects for more precise characterization of item user contexts, consumption trend, leading to more accurate, customized and dynamic recommendations. In this paper, we provide a systematic review on SRSs. We first present the characteristics of SRSs, and then summarize and categorize the key challenges in the corresponding research progress consisting of the most recent and representative developments on important this topic. Finally, we discuss the research directions in this vibrant area. # 1 Introduction Sequential recommender systems (SRSs) suggest items which may be of interest to a user by mainly modelling the sequential dependencies over the user-item interactions (e.g., view or purchase items on an online shopping platform) in a sequence [27]. The traditional recommender systems (RSs), including the content-based and collaborative filtering RSs, model the user-item interactions in a static way and can only capture the users’ general preferences. In contrast, SRSs treat the user-item interactions as a dynamic sequence and take the sequential dependencies into account to capture the current and recent preference of a user for more accurate recommendation [1]. In order to enhance the understanding of SRSs, next we present the motivation and formalization of SRSs. o Jimmy ? Jimmy 6 8 2 # Tina Tina Figure 1: Two examples of SRSs: (1) After Jimmy has booked a flight, a hotel and rented a car, what will be his next action? (2) After Tina has bought an iPhone, an iWatch and a pair of AirPods, what would she buy next? Motivation: Why Sequential Recommender Systems? The user-item interactions are essentially sequentially dependent. In the real world, users’ shopping behaviours usually happen successively in a sequence, rather than in an isolated manner. Taking the shopping events of Jimmy depicted in Figure 1 as an example, before Jimmy started holiday, he booked a flight and a hotel and rented a car successively, and his next action may be visiting a tourist attraction via selfdriving. In such a case, the hotel may be close to the destination airport of the flight and the location for picking up the rented car may be not far away from the hotel. In this scenario, each of Jimmy’s next actions depends on the prior ones and thus all the four consumption actions are sequentially dependent. Likewise, we can see the sequential dependencies in Tina’s case. Such kind of sequential dependencies commonly exist in transaction data but cannot be well captured by the conventional content- based RSs or collaborative filtering RSs [12], which essentially motivates the development of SRSs. Both the users’ preference and items’ popularity are dynamic rather than static over time. In fact, a user’s preference and taste may change over time. 
For instance, many young people who used to be iPhone fans have now switched to become fans of the phones manufactured by Huawei or Samsung, and the popularity of the iPhone has been dropping in recent years. Such dynamics are of great significance for precisely profiling a user or an item for more accurate recommendations, and they can only be captured by SRSs. A comprehensive survey on session-based recommender systems can be found at: https://arxiv.org/abs/1902.04864. User-item interactions usually happen under a certain sequential context. Different contexts usually lead to different users' interactions with items, which is, however, often ignored by traditional RSs like collaborative filtering. In contrast, an SRS takes the prior sequential interactions as a context to predict which items would be interacted with in the near future. As a result, it is much easier to diversify the recommendations and avoid repeatedly recommending items identical or similar to those already chosen.

Formalization: What are Sequential Recommender Systems? Generally, an SRS takes a sequence of user-item interactions as the input and tries to predict the subsequent user-item interactions that may happen in the near future through modelling the complex sequential dependencies embedded in the sequence of user-item interactions. More specifically, given a sequence of user-item interactions, a recommendation list consisting of top-ranked candidate items is generated by maximizing a utility function value (e.g., the likelihood):

R = arg max f(S) (1)

where f is a utility function that outputs a ranking score for the candidate items, and it could be of various forms, like a conditional probability [19], or an interaction score [11]. S = {i1, i2, ..., i|S|} is a sequence of user-item interactions where each interaction ij = <u, a, v> is a triple consisting of a user u, the user's action a, and the corresponding item v. In some cases, users and items are associated with some metadata (e.g., demographics or features), while the actions may have different types (e.g., click, add to the cart, purchase) and happen under various contexts (e.g., the time, location, weather). The output R is a list of items ordered by the ranking score. Different from general sequence modelling, in which the sequence structure is much simpler since a sequence is often composed of atomic elements (e.g., real values, genes), the learning task in SRSs is much more challenging because of the more complex sequence structure (e.g., each element is a triple). This motivates us to systematically analyze the challenges in SRSs and summarize the corresponding progress.

Contributions. The main contributions of this work are summarized below:

• We systematically analyze a number of key challenges caused by different data characteristics in SRSs and categorize them from a data-driven perspective, which provides a new view to deeply understand the characteristics of SRSs.

• We summarize the current research progress in SRSs by systematically categorizing the state-of-the-art works from a technical perspective.

• We share and discuss some prospects of SRSs for the reference of the community.

# 2 Data Characteristics and Challenges

Due to the diversity and complexity of the customers' shopping behaviours, item characteristics and the specific shopping contexts in the real world, the generated user-item interaction data often has different characteristics. Different data characteristics essentially bring different challenges for SRSs, which require different solutions, as presented in Table 1.
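To ground the formalization in Eq. (1) above, the toy sketch below ranks candidate items with a plug-in utility function f over the interaction sequence S. The co-occurrence scorer is only an illustrative placeholder for a real sequential model (e.g., a Markov chain or an RNN), and the item names are made up.

```python
from collections import Counter

def recommend(sequence, candidates, f, k=5):
    """Return the top-k candidate items ranked by the utility score f(S, v)."""
    scores = {v: f(sequence, v) for v in candidates}
    return sorted(scores, key=scores.get, reverse=True)[:k]

def cooccurrence_scorer(history_sequences):
    """Build a naive f: score a candidate by how often it follows the last item."""
    follow_counts = Counter()
    for seq in history_sequences:
        for prev, nxt in zip(seq, seq[1:]):
            follow_counts[(prev, nxt)] += 1
    def f(sequence, candidate):
        return follow_counts[(sequence[-1], candidate)]
    return f

history = [["flight", "hotel", "car_rental", "attraction"],
           ["flight", "hotel", "restaurant"]]
f = cooccurrence_scorer(history)
print(recommend(["flight", "hotel"], ["car_rental", "restaurant", "attraction"], f, k=2))
```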
Contributions. The main contributions of this work are summarized below:

• We systematically analyze a number of key challenges caused by different data characteristics in SRSs and categorize them from a data-driven perspective, which provides a new view to deeply understand the characteristics of SRSs.
• We summarize the current research progress in SRSs by systematically categorizing the state-of-the-art works from a technical perspective.
• We share and discuss some prospects of SRSs for the reference of the community.

2 Data Characteristics and Challenges

Due to the diversity and complexity of the customers' shopping behaviours, item characteristics and the specific shopping contexts in the real world, the generated user-item interaction data often has different characteristics. Different data characteristics essentially bring different challenges for SRSs, which require different solutions, as presented in Table 1. In the following five subsections, we discuss the five key challenges in SRSs caused by different data characteristics. In each subsection, we first introduce the particular data characteristics and then illustrate the corresponding challenges.

2.1 Handling Long User-Item Interaction Sequences

A long user-item interaction sequence consists of a relatively large number of user-item interactions. As a result, it has a much higher chance to have more complex and comprehensive dependencies over the multiple interactions inside it, which makes the sequential recommendations much more challenging. Specifically, the two most critical challenges in long user-item interaction sequences are learning higher-order sequential dependencies and learning long-term sequential dependencies, which will be presented respectively below.

Learning higher-order sequential dependencies. Higher-order sequential dependencies commonly exist in user-item interaction sequences, especially in long ones. Compared to lower-order sequential dependencies, which are relatively simple and can be easily modeled by Markov chain models [3] or factorization machines [14; 10], higher-order sequential dependencies are much more complex and harder to capture because of their complicated multi-level cascading dependencies crossing multiple user-item interactions. So far, there have been mainly two basic approaches reported that can address this challenge in SRSs to some extent: higher-order Markov-chain models [6] and recurrent neural networks (RNN) [7], as shown in Table 1. However, each approach has its own limitations: for example, the historical states that can be involved in a higher-order Markov-chain model are quite limited, as the number of model parameters to be estimated grows exponentially with the order, while the overly strong order assumption employed in RNN limits the application of RNN in sequences with a flexible order. The technical progress achieved in both approaches will be presented in Sections 3.1 and 3.3 respectively in more detail.
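As a minimal illustration of the Markov-chain approach mentioned above, and of why its parameter count explodes with the order, here is a hedged Python sketch of an order-k transition-count model; the function names and the toy sequences are ours, not taken from any cited system.

```python
from collections import defaultdict

def fit_markov(sequences, order=1):
    """Count transitions from the last `order` items to the next item.
    The number of possible states grows as |items|**order, which is why
    high-order Markov chains quickly become impractical to estimate."""
    counts = defaultdict(lambda: defaultdict(int))
    for seq in sequences:
        for t in range(order, len(seq)):
            state = tuple(seq[t - order:t])
            counts[state][seq[t]] += 1
    return counts

def next_item_probs(counts, recent_items):
    """Normalize the counts for the current state into next-item probabilities."""
    state = tuple(recent_items)
    total = sum(counts[state].values())
    if total == 0:
        return {}
    return {item: c / total for item, c in counts[state].items()}

if __name__ == "__main__":
    train = [["flight", "hotel", "car", "attraction"],
             ["flight", "hotel", "restaurant"]]
    model = fit_markov(train, order=1)
    print(next_item_probs(model, ["hotel"]))  # {'car': 0.5, 'restaurant': 0.5}
```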
Learning long-term sequential dependencies. Long-term sequential dependencies refer to the dependencies between interactions that are far from each other in a sequence. For instance, consider a shopping sequence S1 = {a rose, eggs, bread, a bottle of milk, a vase}, which consists of a basket of items that are purchased successively by a user, Janet. Obviously, the vase and the rose are highly dependent even though they are far from each other. Such cases are not uncommon in the real world as users' behaviours are usually highly uncertain and thus they may put any items into the cart. To address such a critical issue, Long Short-Term Memory (LSTM)-based [21] and Gated Recurrent Unit (GRU)-based [7] RNN have been applied in SRSs to capture the long-term dependencies among the user-item interactions in a sequence. However, it is easy for RNN models to generate false dependencies by overly assuming that any adjacent items in a sequence are highly dependent. In the above example of Janet's shopping sequence, an RNN usually models S1 by assuming the milk and the vase are dependent due to the close distance between them, but actually they are not. Some other efforts have been made to solve this issue by utilizing the advantage of mixture models to combine multiple sub-models with different temporal ranges to capture both short- and long-term dependencies in a unified model [15]. Overall, the works that are able to tackle this challenge are quite limited and more investigations are required to bridge this gap. The technical progress achieved in RNN and mixture models will be presented in Section 3.3.

2.2 Handling User-Item Interaction Sequences with a Flexible Order

In the real world, some user-item interaction sequences are strictly ordered while others may not be; namely, not all adjacent interactions are sequentially dependent in a sequence [4]. For instance, in a shopping sequence S2 = {milk, butter, flour}, it does not matter whether to buy milk or butter first, but the purchase of both items leads to a higher probability of buying flour next; namely, there is no strict order between milk and butter, but flour sequentially depends on the union of them. Therefore, for a sequence with a flexible order, it is much better to capture the collective sequential dependencies, rather than the point-wise ones, as the former are fuzzy and do not assume a strict order over user-item interactions. As a result, how to capture collective sequential dependencies under an assumption of flexible order becomes the key challenge in handling sequences with a flexible order in SRSs. Although common and important, reported studies in SRSs have not paid much attention to this issue yet. Existing SRSs built on Markov chains, factorization machines or RNN can only handle the point-wise dependencies but are not good at modelling and capturing collective dependencies. Only a few works like [17; 26] have attempted to address such a challenge by employing the strength of convolutional neural networks (CNN) to model the local and global dependencies between different areas in an "image", i.e., the embedding matrix of a sequence of interactions. The technical progress achieved in CNN-based SRSs will be presented in Section 3.3.

2.3 Handling User-Item Interaction Sequences with Noise

Due to the uncertainty of user shopping behaviours, most user-item interaction sequences are not clean, meaning that they may contain some noisy and irrelevant interactions that interfere with the next interaction prediction. In practice, in a user-item interaction sequence, some historical interactions are strongly relevant to the next interaction, while others may be weakly relevant or even irrelevant. For example, in another shopping sequence S3 = {bacon, a rose, eggs, bread}, the item "rose" may be a noisy item as it is quite different from the others and has no correlation to them. The next item may be a bottle of milk with a high probability, and it only sequentially depends on bacon, eggs and bread while having nothing to do with the rose. Therefore, another key challenge in SRSs is to learn sequential dependencies attentively and discriminatively over user-item interaction sequences with noise. Quite a few works have attempted to solve such a typical issue by employing attention models [19] or memory networks [1] to selectively retain and utilize information from those interactions that are truly relevant to the next interaction prediction. The technical progress achieved in these solutions will be presented in Section 3.3.
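The following sketch illustrates, under simplifying assumptions, how attention weights can downplay a noisy item such as the "rose" in S3 when aggregating an interaction history; it is a toy dot-product attention in NumPy, not the actual model of [19] or [1].

```python
import numpy as np

def attention_pool(history_emb, query_emb):
    """Aggregate a (possibly noisy) interaction history into one context vector.
    Items whose embeddings align poorly with the query (e.g., the 'rose' in S3)
    receive small attention weights and contribute little to the context."""
    scores = history_emb @ query_emb               # (n,) relevance scores
    weights = np.exp(scores - scores.max())
    weights = weights / weights.sum()              # softmax over history items
    return weights, weights @ history_emb          # weighted sum -> context vector

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    d = 8
    grocery = rng.normal(size=d)
    bacon, eggs, bread = (grocery + 0.1 * rng.normal(size=d) for _ in range(3))
    rose = rng.normal(size=d)                      # unrelated, noisy item
    history = np.stack([bacon, rose, eggs, bread])
    weights, context = attention_pool(history, grocery)
    print(np.round(weights, 3))                    # the rose typically gets the lowest weight
```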
# 2.4 Handling User-Item Interaction Sequences with Heterogeneous Relations

Heterogeneous relations refer to different types of relations which deliver different kinds of information and should be modelled differently in SRSs. For instance, in a user-item interaction sequence, in addition to the widespread occurrence-based sequential dependencies over user-item interactions, there are also similarity-based relations between the interacted items in terms of their features. Furthermore, among the sequential dependencies, long-term sequential dependencies are quite different from short-term ones and they cannot be modelled in the same way. Therefore, another key challenge in SRSs is how to effectively capture these heterogeneous relations embedded in sequences and to make them work collaboratively for the sequential recommendations when handling user-item interaction sequences associated with heterogeneous relations. There are quite limited works reported in the literature to solve this challenge in SRSs. Mixture models [12; 15; 20] are the only solution to address such a challenge so far. A mixture model integrates different types of relations modelled by different sub-models to collaboratively generate sequential recommendations. The specific technical progress will be presented in Section 3.3.

Table 1: A summary of challenges driven by data characteristics in SRSs

| Data characteristics | Challenges | Existing solutions |
| --- | --- | --- |
| Long user-item interaction sequences | Learning higher-order sequential dependencies | Higher-order Markov chain [He and McAuley, 2016], RNN [Hidasi et al., 2016a] |
| Long user-item interaction sequences | Learning long-term sequential dependencies | LSTM-based [Wu et al., 2017] and GRU-based [Hidasi et al., 2016a] RNN, mixture models [Tang et al., 2019] |
| User-item interaction sequences with a flexible order | Learning collective sequential dependencies under the assumption of a flexible order | CNN [Tang and Wang, 2018; Yuan et al., 2019] |
| User-item interaction sequences with noise | Learning sequential dependencies attentively and discriminatively | Attention models [Wang et al., 2018], memory networks [Chen et al., 2018] |
| User-item interaction sequences with heterogeneous relations | Learning heterogeneous relations discriminatively and integrating them effectively | Mixture models [Kang et al., 2018; Tang et al., 2019] |
| User-item interaction sequences with hierarchical structures | Learning hierarchical dependencies | Feature-enriched RNN [Hidasi et al., 2016b], hierarchical embedding [Wang et al., 2015], hierarchical RNN [Quadrana et al., 2017], hierarchical attention [Ying et al., 2018] |

# 2.5 Handling User-Item Interaction Sequences with Hierarchical Structures

Generally, there are mainly two kinds of hierarchical structures that may be associated with a user-item interaction sequence: (1) the hierarchical structure between meta data and user-item interactions. To be specific, the users' demographics can determine the users' preferences to some degree and can further affect their interactions with the items. Similarly, the features of items often have some effects on whether they will be liked and interacted with by users [9]; and (2) the hierarchical structure between sub-sequences and user-item interactions. More specifically, in some SRSs, one user-item interaction sequence includes multiple sub-sequences (also called sessions). In such a case, in addition to the prior interactions within the current sub-sequence, the historical sub-sequences may also influence the next user-item interaction to be predicted in the current sub-sequence [25]. Therefore, one more key challenge in SRSs is how to incorporate the hierarchical dependencies embedded in these two kinds of hierarchical structures into sequential dependency learning to generate more accurate sequential recommendations.
Although quite a few works have attempted to address this challenge from certain aspects, some other aspects have been less studied. On the one hand, to take the influences of items' features on the user-item interactions into account, a series of feature-enriched neural models including [9] have been proposed for SRSs. In comparison, the influences of users' demographics have been rarely considered in existing SRSs and more efforts should be devoted to this direction. On the other hand, some hierarchical models including hierarchical embedding models [18], hierarchical RNN [13] and hierarchical attention networks [25] have been devised to incorporate the historical sub-sequences into sequential dependency learning to build more powerful SRSs. Particularly, the technical progress achieved to address this challenge will be presented in Sections 3.2 and 3.3.

3 Research Progress

To provide an overview of the technical progress in SRSs and to give more technical details of the solutions to the aforementioned challenges, we summarize and briefly discuss the research progress in SRSs from a technical perspective in this section. Particularly, we first present a categorization of all the approaches for SRSs from the technical perspective and then briefly highlight the recent progress in each category. The categorization of SRS approaches is presented in Figure 2. We observe that the various approaches for SRSs are first categorized into 11 atomic classes (e.g., sequential pattern mining, factorization machines, and recurrent neural networks) from the technical perspective. All these atomic classes are then further categorized into three taxonomies, including traditional sequence models, latent representation models, and deep neural network models. Generally speaking, these three taxonomies change from simple to complex and are reported successively. Next we summarize the research progress in each of the three taxonomies.

3.1 Traditional Sequence Models for SRSs

Traditional sequence models, including sequential pattern mining and Markov chain models, are intuitive solutions to SRSs that take advantage of their natural strength in modelling sequential dependencies among the user-item interactions in a sequence.

Sequential pattern mining. Sequential pattern-based RSs first mine frequent patterns on sequence data and then utilize the mined patterns to guide the subsequent recommendations. Although simple and straightforward, sequential pattern mining usually generates a large number of redundant patterns, which increases unnecessary cost w.r.t. time and space. Another obvious shortcoming is that it often loses those infrequent patterns and items due to the frequency constraint, which limits the recommendation results to those popular items. Therefore, quite few works have been reported in this class, except a few representative ones [24; 16].
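A deliberately simplified sketch of the sequential-pattern idea is given below: it counts ordered item pairs as "patterns" and recommends from the frequent ones, which also makes the popularity bias noted above visible; real sequential pattern miners (e.g., PrefixSpan-style algorithms) are considerably more sophisticated, so treat this purely as an illustration.

```python
from collections import Counter

def mine_patterns(sequences, min_support=2):
    """Count (earlier_item, later_item) patterns and keep the frequent ones.
    Infrequent but meaningful patterns are dropped, which is exactly the
    popularity bias discussed above."""
    counts = Counter()
    for seq in sequences:
        for i in range(len(seq) - 1):
            for j in range(i + 1, len(seq)):
                counts[(seq[i], seq[j])] += 1
    return {p: c for p, c in counts.items() if c >= min_support}

def recommend_from_patterns(patterns, history, top_k=3):
    """Score items that frequently follow items already in the history."""
    scores = Counter()
    for (antecedent, nxt), support in patterns.items():
        if antecedent in history and nxt not in history:
            scores[nxt] += support
    return [item for item, _ in scores.most_common(top_k)]

if __name__ == "__main__":
    train = [["milk", "bread", "butter"], ["milk", "butter"], ["milk", "bread"]]
    patterns = mine_patterns(train, min_support=2)
    print(recommend_from_patterns(patterns, history=["milk"]))
```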
Markov chain models. Markov chain-based RSs adopt Markov chain models to model the transitions over user-item interactions in a sequence, for the prediction of the next interaction. According to the specific technique used, Markov chain-based RSs are divided into basic Markov chain-based approaches and latent Markov embedding-based approaches. The former directly calculates the transition probability based on the explicit observations [3], while the latter first embeds the Markov chains into a Euclidean space and then calculates the transition probabilities between interactions based on their Euclidean distance [2]. The shortcomings of Markov chain-based RSs are obvious: on the one hand, they can only capture the short-term dependencies while ignoring long-term ones, due to the Markov property which assumes that the current interaction depends on one or several most recent interactions only; on the other hand, they can only capture the point-wise dependencies while ignoring the collective dependencies over user-item interactions. Consequently, they are less and less employed in SRSs in recent years.

Figure 2: A categorization of SRS approaches from the technical perspective

3.2 Latent Representation Models for SRSs

Latent representation models first learn a latent representation of each user or item, and then predict the subsequent user-item interactions by utilizing the learned representations. As a result, more implicit and complex dependencies are captured in a latent space, which greatly benefits the recommendations. Next, we introduce two representative models falling into this taxonomy.

Factorization machines. Factorization machine-based SRSs usually utilize matrix factorization or tensor factorization to factorize the observed user-item interactions into latent factors of users and items for recommendations [14; 10]. Different from collaborative filtering (CF), the matrix or tensor to be factorized is composed of interactions rather than the ratings in CF. Such a model is easily affected by the sparsity of the observed data and thus cannot achieve ideal recommendations.

Embedding. Embedding-based SRSs learn latent representations for each user and item for the subsequent recommendations by encoding all the user-item interactions in a sequence into a latent space. Specifically, some works take the learned latent representations as the input of a network to further calculate an interaction score between users and items, or successive users' actions [18; 19], while other works directly utilize them to calculate a metric like the Euclidean distance as the interaction score [5]. This model has shown great potential in recent years due to its simplicity, efficiency and efficacy.
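The embedding-based scoring described above can be illustrated with the following hedged sketch, which contrasts an inner-product interaction score with a Euclidean-distance metric; the mean-pooled sequence encoder and all names are illustrative, not the models of [18; 19] or [5].

```python
import numpy as np

def interaction_score(seq_item_embs, candidate_emb, metric="dot"):
    """Score a candidate item against a sequence representation obtained by
    averaging the embeddings of the interacted items (a deliberately simple
    encoder). 'dot' mimics inner-product scoring; 'euclidean' mimics
    distance-based metric models (closer is better, hence the negation)."""
    context = seq_item_embs.mean(axis=0)
    if metric == "dot":
        return float(context @ candidate_emb)
    if metric == "euclidean":
        return -float(np.linalg.norm(context - candidate_emb))
    raise ValueError(f"unknown metric: {metric}")

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    item_embs = {name: rng.normal(size=16)
                 for name in ["milk", "bread", "butter", "flour", "vase"]}
    history = np.stack([item_embs["milk"], item_embs["butter"]])
    for cand in ["flour", "vase"]:
        print(cand, round(interaction_score(history, item_embs[cand], metric="euclidean"), 3))
```

In practice the embeddings would be learned end-to-end from the interaction data rather than drawn at random as in this toy usage example.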
# 3.3 Deep Neural Network Models for SRSs

Deep neural networks [23] have a natural strength to model and capture the comprehensive relations over different entities (e.g., users, items, interactions) in a sequence, and thus they have nearly dominated SRSs in the past few years. The latest progress achieved in SRSs also belongs to this taxonomy. Generally, this taxonomy can be divided into two sub-classes: SRSs built on basic deep neural networks and SRSs built on deep neural networks with some advanced models incorporated.

Basic Deep Neural Networks

The most commonly used deep neural networks for SRSs are recurrent neural networks (RNN) due to their natural strength in sequence modelling, but they also have defects. Recently, convolutional neural networks (CNN) and graph neural networks (GNN) have also been applied in SRSs to make up for the defects of RNN. Next, we introduce the SRSs built on top of these three deep neural networks respectively.

RNN-based SRSs. Given a sequence of historical user-item interactions, an RNN-based SRS tries to predict the next possible interaction by modelling the sequential dependencies over the given interactions. Except for the basic RNN, long short-term memory (LSTM)-based [21] and gated recurrent unit (GRU)-based [7] RNN have also been developed to capture the long-term dependencies in a sequence. Recent years have witnessed the prosperity of RNN-based SRSs and they dominate the research on deep learning-based SRSs or even the whole SRS area. Besides the basic RNN structure, some variants have been proposed to capture more complex dependencies in a sequence, like hierarchical RNN [13]. However, RNN is not flawless for SRSs, with shortcomings in two aspects: (1) it is easy to generate fake dependencies due to the overly strong assumption that any adjacent interactions in a sequence must be dependent, which may not be the case in the real world because there are usually irrelevant or noisy interactions inside a sequence; and (2) it is likely to capture the point-wise dependencies only while ignoring the collective dependencies (e.g., several interactions collaboratively affect the next one).
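For illustration, a minimal GRU-based next-item scorer in the spirit of the RNN-based SRSs above is sketched below, assuming PyTorch is available; the layer sizes and the training snippet are arbitrary choices, not the configuration of [7] or any other cited work.

```python
import torch
import torch.nn as nn

class GRURecommender(nn.Module):
    """Minimal GRU-based next-item model: embed the interaction sequence,
    encode it with a GRU, and score every item with a linear output layer."""
    def __init__(self, num_items, emb_dim=64, hidden_dim=64):
        super().__init__()
        self.item_emb = nn.Embedding(num_items, emb_dim, padding_idx=0)
        self.gru = nn.GRU(emb_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, num_items)

    def forward(self, item_ids):               # item_ids: (batch, seq_len)
        x = self.item_emb(item_ids)            # (batch, seq_len, emb_dim)
        _, h = self.gru(x)                     # h: (1, batch, hidden_dim)
        return self.out(h.squeeze(0))          # (batch, num_items) next-item scores

if __name__ == "__main__":
    model = GRURecommender(num_items=1000)
    batch = torch.randint(1, 1000, (4, 10))    # 4 sequences of 10 item ids
    scores = model(batch)
    loss = nn.functional.cross_entropy(scores, torch.randint(1, 1000, (4,)))
    loss.backward()
```

For large item catalogues, the full softmax over all items is usually replaced by negative sampling or a sampled loss; that is omitted here for brevity.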
CNN-based SRSs. Different from RNN, given a sequence of user-item interactions, a CNN first puts all the embeddings of these interactions into a matrix, and then treats such a matrix as an "image" in the time and latent spaces. Finally, a CNN learns sequential patterns as local features of the "image" using convolutional filters for the subsequent recommendations. Since a CNN does not have strong order assumptions over the interactions in a sequence, and it learns patterns between areas in an "image" rather than over interactions, CNN-based SRSs can make up for the aforementioned drawbacks of RNN-based SRSs to some degree. However, CNN-based SRSs cannot effectively capture long-term dependencies due to the limited sizes of the filters used in CNN, which limits their applications. The typical works include [17; 26].

GNN-based SRSs. Recently, with the fast development of GNN, GNN-based SRSs have been devised to leverage GNN to model and capture the complex transitions over user-item interactions in a sequence. Typically, a directed graph is first built on the sequence data by taking each interaction as a node in the graph while each sequence is mapped to a path. Then, the embeddings of users or items are learned on the graph to embed more complex relations over the whole graph [22]. Such an approach makes full use of the advantage of GNN to capture the complex relations in structured relational datasets. GNN-based SRSs have shown a great potential to provide explainable recommendations by revealing the complex relations between the recommended items and the corresponding sequential context. This kind of SRS is still in its early stages.

Advanced Models

To address the limitations of SRSs built on basic neural network structures, some advanced models are usually combined together with a certain kind of basic deep neural network (e.g., RNN, CNN) to build more powerful SRSs which are able to address particular challenges. Next, we introduce three advanced models that are commonly used in SRSs.

Attention models. Attention models are commonly employed in SRSs to emphasize those really relevant and important interactions in a sequence while downplaying those irrelevant to the next interaction. They are widely incorporated into shallow networks [19] and RNN [25] to handle interaction sequences with noise.

Memory networks. Memory networks are introduced into SRSs to capture the dependencies between any historical user-item interaction and the next one directly by incorporating an external memory matrix. Such a matrix makes it possible to store and update the historical interactions in a sequence more explicitly and dynamically, to improve the expressiveness of the model and reduce the interference of irrelevant interactions [1]. Furthermore, some works incorporate a key-value memory network to store and update the corresponding knowledge base information of the interacted items in a sequence to learn the attribute-level preferences for the enhancement of recommendations [11]. Generally, memory networks have shown their potential in SRSs, but are not sufficiently studied yet.

Mixture models. A mixture model-based SRS combines different models that excel at capturing different kinds of dependencies to enhance the capability of the whole model for better recommendations. A typical example is [15], which combines different kinds of encoders that are suitable for short- and long-term dependencies respectively to learn a more precise sequence representation for the subsequent recommendations, and has been demonstrated to be quite effective. However, such models are still in their early stages.
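As a toy illustration of the mixture idea above, the sketch below blends the candidate scores of a short-term and a long-term sub-model with a single gate weight; the numbers and the fixed gate are illustrative stand-ins for the learned combination used in models such as [15].

```python
import numpy as np

def mixture_scores(short_scores, long_scores, gate):
    """Blend the predictions of two sub-models: one focused on the most recent
    interactions (short-term) and one covering the whole history (long-term).
    `gate` in [0, 1] plays the role of a (possibly learned) mixing weight."""
    return gate * np.asarray(short_scores) + (1.0 - gate) * np.asarray(long_scores)

if __name__ == "__main__":
    # Scores over 4 candidate items from two hypothetical encoders.
    short_term = [0.9, 0.1, 0.3, 0.2]   # dominated by the last couple of clicks
    long_term = [0.2, 0.1, 0.8, 0.4]    # reflects stable, older preferences
    blended = mixture_scores(short_term, long_term, gate=0.6)
    print(blended, "-> recommend item", int(np.argmax(blended)))
```

In a full model the gate would itself be predicted from the sequence rather than fixed by hand.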
4 Open Research Directions

Recent years, particularly the last three years, have witnessed the fast development of sequential recommender systems, along with the prosperity of deep learning, especially that of recurrent neural networks. While categorizing and summarizing the research practices in this field, we have identified further open research directions, discussed below.

Context-aware sequential recommender systems. The current context in which a user or an item is situated could greatly influence the user's choice of the item, and this should be considered when conducting recommendations. This is even more necessary in SRSs as the context may change over time. However, most existing SRSs ignore such a significant aspect. Therefore, context-aware SRSs would be an important direction for future work.

Social-aware sequential recommender systems. Users live in a society and are connected with various people both online and offline. Others' behaviours or opinions often affect the users' choices greatly. Therefore, the social influence needs to be taken into account in SRSs, which is usually ignored in the existing works.

Interactive sequential recommender systems. Most shopping behaviours in the real world are continuous rather than isolated events. In other words, there are actually sequential interactions between a user and the shopping platform (e.g., Amazon). However, the existing SRSs often neglect such interactions and only generate recommendations for one action at a single time step. How to incorporate the user-seller interactions and thus generate multi-time-step recommendations is a promising research direction.

Cross-domain sequential recommender systems. In the real world, the items purchased by a user during a certain time period are often from multiple domains rather than one domain. There are sequential dependencies between items from different domains, such as the purchase of car insurance after the purchase of a car. Such cross-domain sequential dependencies are ignored in most SRSs. Therefore, cross-domain SRSs are another promising research direction to generate more accurate recommendations by leveraging information from other domains and more diverse recommendations from different domains.

5 Conclusions

Recommender systems (RSs) are one of the most direct and practical applications of artificial intelligence in our daily lives. Sequential recommender systems (SRSs) have been at the core of the RS field in the past three to five years as they provide more intelligent and favorable recommendations to satisfy our daily requirements. It is our hope that this summary provides an overview of the challenges and the recent progress, as well as some future directions, in SRSs to the RS research community.

Acknowledgements

This work was partially supported by Australian Research Council Discovery Project DP180102378.

References

[1] Xu Chen, Hongteng Xu, Yongfeng Zhang, et al. Sequential recommendation with user memory networks. In Proceedings of the International Conference on Web Search and Data Mining, pages 108–116, 2018.

[2] Shanshan Feng, Xutao Li, Yifeng Zeng, Gao Cong, and Yeow Meng Chee. Personalized ranking metric embedding for next new POI recommendation. In Proceedings of the 24th International Joint Conference on Artificial Intelligence, pages 2069–2075, 2015.

[3] Florent Garcin, Christos Dimitrakakis, and Boi Faltings. Personalized news recommendation with context trees. In Proceedings of the 7th ACM Conference on Recommender Systems, pages 105–112, 2013.

[4] Liang Hu, Longbing Cao, Shoujin Wang, Guandong Xu, Jian Cao, and Zhiping Gu. Diversifying personalized recommendation with user-session context. In Proceedings of the 26th International Joint Conference on Artificial Intelligence, pages 1858–1864, 2017.

[5] Ruining He, Wang-Cheng Kang, and Julian McAuley. Translation-based recommendation: A scalable method for modeling sequential behavior. In Proceedings of the 27th International Joint Conference on Artificial Intelligence, pages 5264–5268, 2018.

[6] Ruining He and Julian McAuley. Fusing similarity models with Markov chains for sparse sequential recommendation. In Proceedings of the 16th IEEE International Conference on Data Mining, pages 191–200, 2016.

[7] Balazs Hidasi, Alexandros Karatzoglou, et al. Session-based recommendations with recurrent neural networks. In Proceedings of the 4th International Conference on Learning Representations, pages 1–10, 2016.

[8] Shoujin Wang, Liang Hu, and Longbing Cao. Perceiving the next choice with comprehensive transaction embeddings for online recommendation. In Proceedings of the 15th Joint European Conference on Machine Learning and Knowledge Discovery in Databases, pages 285–302, 2017.

[9] Balazs Hidasi, Massimo Quadrana, Alexandros Karatzoglou, and Domonkos Tikk. Parallel recurrent neural network architectures for feature-rich session-based recommendations. In Proceedings of the 10th ACM Conference on Recommender Systems, pages 241–248, 2016.

[10] Balazs Hidasi and Domonkos Tikk. General factorization framework for context-aware recommendations. Data Mining and Knowledge Discovery, 30(2):342–371, 2016.
[11] Jin Huang, Wayne Xin Zhao, Hongjian Dou, Ji-Rong Wen, and Edward Y Chang. Improving sequential recommendation with knowledge-enhanced memory networks. In Proceedings of the 41st ACM SIGIR Conference on Research & Development in Information Retrieval, pages 505–514, 2018.

[12] Wang-Cheng Kang, Mengting Wan, and Julian McAuley. Recommendation through mixtures of heterogeneous item relationships. In Proceedings of the 27th ACM International Conference on Information and Knowledge Management, pages 1143–1152, 2018.

[13] Massimo Quadrana, Alexandros Karatzoglou, et al. Personalizing session-based recommendations with hierarchical recurrent neural networks. In Proceedings of the 11th ACM Conference on Recommender Systems, pages 130–137, 2017.

[14] Steffen Rendle, Christoph Freudenthaler, and Lars Schmidt-Thieme. Factorizing personalized Markov chains for next-basket recommendation. In Proceedings of the 19th International Conference on World Wide Web, pages 811–820, 2010.

[15] Jiaxi Tang, Francois Belletti, Sagar Jain, Minmin Chen, Alex Beutel, et al. Towards neural mixture recommender for long range dependent user sequences. In Proceedings of the 28th International Conference on World Wide Web, pages 811–820, 2019.

[16] Shoujin Wang and Longbing Cao. Inferring implicit rules by learning explicit and hidden item dependency. IEEE Transactions on Systems, Man, and Cybernetics: Systems, pages 1–12, 2017.

[17] Jiaxi Tang and Ke Wang. Personalized top-n sequential recommendation via convolutional sequence embedding. In Proceedings of the 11th ACM International Conference on Web Search and Data Mining, pages 565–573, 2018.

[18] Pengfei Wang, Jiafeng Guo, Yanyan Lan, Jun Xu, Shengxian Wan, and Xueqi Cheng. Learning hierarchical representation model for next basket recommendation. In Proceedings of the 38th ACM SIGIR Conference on Research and Development in Information Retrieval, pages 403–412, 2015.

[19] Shoujin Wang, Liang Hu, Longbing Cao, Xiaoshui Huang, Defu Lian, and Wei Liu. Attention-based transactional context embedding for next-item recommendation. In Proceedings of the 32nd AAAI Conference on Artificial Intelligence, pages 2532–2539, 2018.

[20] Shoujin Wang, Liang Hu, Yang Wang, et al. Modeling multi-purpose sessions for next-item recommendations via mixture-channel purpose routing networks. In Proceedings of the 28th International Joint Conference on Artificial Intelligence, pages 1–7, 2019.

[21] Chao-Yuan Wu, Amr Ahmed, Alex Beutel, Alexander J Smola, and How Jing. Recurrent recommender networks. In Proceedings of the 10th ACM International Conference on Web Search and Data Mining, pages 495–503, 2017.

[22] Shu Wu, Yuyuan Tang, et al. Session-based recommendation with graph neural networks. In Proceedings of the 33rd AAAI Conference on Artificial Intelligence, pages 1–9, 2019.

[23] Shoujin Wang, Wei Liu, Jia Wu, Longbing Cao, Qinxue Meng, and Paul J Kennedy. Training deep neural networks on imbalanced data sets. In 2016 International Joint Conference on Neural Networks, pages 4368–4374, 2016.

[24] Ghim-Eng Yap, Xiao-Li Li, and Philip Yu. Effective next-items recommendation via personalized sequential pattern mining. In Database Systems for Advanced Applications, pages 48–64, 2012.

[25] Haochao Ying, Fuzhen Zhuang, Fuzheng Zhang, Yanchi Liu, Guandong Xu, and Xing Xie. Sequential recommender system based on hierarchical attention networks. In Proceedings of the 27th International Joint Conference on Artificial Intelligence, 2018.

[26] Fajie Yuan, Alexandros Karatzoglou, Ioannis Arapakis, Joemon M Jose, and Xiangnan He.
A simple convolutional generative network for next item recommendation. In Proceedings of the 12th ACM International Conference on Web Search and Data Mining, pages 582–590, 2019.

[27] Shoujin Wang, Longbing Cao, and Yan Wang. A survey on session-based recommender systems. arXiv preprint arXiv:1902.04864, pages 1–35, 2019.
{ "id": "1902.04864" }
1912.12394
All-in-One Image-Grounded Conversational Agents
As single-task accuracy on individual language and image tasks has improved substantially in the last few years, the long-term goal of a generally skilled agent that can both see and talk becomes more feasible to explore. In this work, we focus on leveraging individual language and image tasks, along with resources that incorporate both vision and language towards that objective. We design an architecture that combines state-of-the-art Transformer and ResNeXt modules fed into a novel attentive multimodal module to produce a combined model trained on many tasks. We provide a thorough analysis of the components of the model, and transfer performance when training on one, some, or all of the tasks. Our final models provide a single system that obtains good results on all vision and language tasks considered, and improves the state-of-the-art in image-grounded conversational applications.
http://arxiv.org/pdf/1912.12394
Da Ju, Kurt Shuster, Y-Lan Boureau, Jason Weston
cs.CL, cs.CV, cs.LG
null
null
cs.CL
20191228
20200115
0 2 0 2 n a J 5 1 ] L C . s c [ 2 v 4 9 3 2 1 . 2 1 9 1 : v i X r a # All-in-One Image-Grounded Conversational Agents Da Ju∗ , Kurt Shuster , Y-Lan Boureau and Jason Weston Facebook AI Research {daju, kshuster, ylan, jase}@fb.com # Abstract As single-task accuracy on individual language and image tasks has improved substantially in the last few years, the long-term goal of a generally skilled agent that can both see and talk becomes more fea- sible to explore. In this work, we focus on lever- aging individual language and image tasks, along with resources that incorporate both vision and lan- guage towards that objective. We design an archi- tecture that combines state-of-the-art Transformer and ResNeXt modules fed into a novel attentive multimodal module to produce a combined model trained on many tasks. We provide a thorough anal- ysis of the components of the model, and transfer performance when training on one, some, or all of the tasks. Our final models provide a single system that obtains good results on all vision and language tasks considered, and improves the state of the art in image-grounded conversational applications. In this work, we design an architecture that leverages ex- isting state-of-the-art vision and language modules, and com- bines them with a novel attentive multimodal combiner mod- ule. The module can learn when and how to attend between the two modalities, depending on the particular inputs, and improves over a standard attention mechanism. Our work also provides a detailed analysis of what works and what does not. We perform multiple ablation experiments to compare what types of architectures, training objectives, and optimiza- tion strategies work best for what tasks, or for achieving the most balanced performance across the board. We thus obtain models that improve the state of the art over several individ- ual image-grounded conversational tasks, and a single system that is capable of doing well on all the image-grounded lan- guage tasks we consider. # 2 Tasks 1 Introduction A picture may be worth a thousand words, but combining pictures and words is even better. There are many ways to marry vision and language: an image can be a great conversa- tion starter, or discussion point; text accompanying an image can be a mere descriptive caption, or some witty commen- tary. Humans can seamlessly blend these skills and use them for interaction, depending on the given setting and need. In order to probe this range of skills, a large set of image- and-text tasks have been devised by researchers, covering image captioning [Young et al., 2014; Chen et al., 2015; Shuster et al., 2019a], visual question answering [Goyal et al., 2017; Das et al., 2017], and dialogue based on an im- age [Mostafazadeh et al., 2017; Shuster et al., 2018]. Re- cent years have seen tremendous progress in both vision [He et al., 2016; Girshick et al., 2018; Mahajan et al., 2018] and language [Vaswani et al., 2017a; Radford et al., 2019; Devlin et al., 2019] applications, and in all these individual tasks [Goyal et al., 2017; Shuster et al., 2018; Shuster et al., 2019a] as well, so the time is ripe for exploring the possibility of a multi-tasking agent that would do well on all of them. # ∗Contact Author We first detail separate language and vision tasks that are con- sidered from prior work, and then describe the combined vi- sion and language tasks we consider for training an entire architecture for building an image-grounded conversational agent. 
A summary of these tasks is also provided in Table 1. # 2.1 Language-based Large-scale text corpora are commonly used to pre-train text encoders; we use these methods that have been developed In particular we first consider BERT-based in prior work. representations [Devlin et al., 2019] from [Humeau et al., 2019], which use 150 million (context, response) pairs ex- tracted from Wikipedia and Toronto Books. To make use of data potentially more related to dialogue and of a more col- loquial nature, we also use pre-training based on pushshift.io Reddit [Mazar´e et al., 2018; Humeau et al., 2019], consisting of 174 million (context, response) pairs. # 2.2 Vision-based Similarly, large-scale image datasets are commonly used to pre-train image encoders, in particular ImageNet [Deng et al., 2009] (1.28 million images), Instagram images [Mahajan et al., 2018] (3.5 billion images), and the Visual Genome (108k Images with 2.8 Million attributes) [Krishna et al., 2017]. Modalities Task # Images Train # Utterances # Images Valid # Utterances # Images Test # Utterances # Cands Language Vision Vision + Language Wiki. + Tor. Books pushshift.io Reddit ImageNet Instagram Visual Genome COCO Flickr30k Personality-Captions Image-Chat Image-Chat QA IGC VQA - - 1.28m 3.5b 108,077 82,783 29,000 186,858 186,782 19,702 - 82,783 150m 174m - - - 414,113 145,000 186,858 355,862 19,702 - 443,757 - - - - - 5,000 1014 5,000 5,000 1,129 1,613 40,504 - - - - - 25,000 5,070 5,000 15,000 1,129 4,839 214,354 - - - - - 5,000 1,000 10,000 9,997 2,224 2,591 81,834 - - - - - 25,000 5,000 50,000 29,991 2,224 7,773 447,793 - - - - - 5,000 1,000 500 100 100 100 3,129 Table 1: Dataset statistics for all relevant datasets. During evaluation, gold responses are scored against other candidates (#Cands). # 2.3 Vision + Language In the combined tasks we consider, images and language are possible inputs, and the output is a text response from the agent. The goal is that the tasks, when multi-tasked, can teach an agent how to respond appropriately in different situations using different skills. COCO Captions The COCO Captions dataset [Chen et al., 2015] requires that a model, given an image, predicts a cap- tion that factually summarizes the scene, for example “a large bus sitting next to a very tall building”. In the dataset used for the 2015 challenge, there are about 83k training images and 414k captions, as images are captioned multiple times, and a large validation set of about 40k images. Some works have merged some or all images from that validation set into the training set (we indicate this with an asterisk in Table 9). In this work, we only train on the 83k images of the original train set, to avoid training on images that also appear in the VQA validation set, and use the validation and test sets of 5k images each from [Karpathy and Fei-Fei, 2017]. Flickr30k Flickr30k [Young et al., 2014] is also a caption- ing dataset with factual summaries, although it is smaller with 29k training images and 145k captions. Personality Captions (PC) In contrast to the previous two datasets, Personality Captions [Shuster et al., 2019a] attempts to model human style when people speak about images. While the training set also consists of (image, response) pairs, each one also has a given style label out of 215 possible styles, such as “Sympathetic”, “Optimistic” or “Dramatic”. The cap- tions authored by humans then tend to be less factual in tone, and rather than simply stating what is in the image they are more conversational, e.g. 
“This sandwich looks so delicious! My goodness!”. It consists of about 187k training images, with one caption each. Image Chat [Shuster et al., 2018] is an Image Chat (IC) extension of the Personality Captions dataset to full dialogue. It also uses the same 215 style traits and images as input, but human-human conversations have been collected based on the images and traits instead, with each speaker pair in a given chat being assigned a possibly different random trait. The training set consists of the same 187k images with 356k total conversation turns. Image Chat QA (ICQA) Image Chat QA is the extraction of all the question-answer pairs that appear in the Image Chat dataset, to evaluate performance in answering such conver- sational image-grounded questions. The questions have been extracted heuristically, by assuming a question contains a ? or starts with who, what, when, where, why or how. This ex- tracts about 20k such training questions. Image-Grounded Conversations (IGCQ and IGCQA) Image-Grounded Conversations (IGC) [Mostafazadeh et al., 2017] is also a conversational dataset between pairs of hu- mans given an image. It does not contain a training set, but only validation and test portions. The conversations are three turns each, in the format of (context, question, response) tu- ples. We refer to the task of forming a question given the context as IGCQ, and the task of responding to the question as IGCQA. VQA Visual QA [Goyal et al., 2017] is a task involving open-ended questions about images which require an under- standing of vision, language, and commonsense knowledge to answer, such as “where is the child sitting?”’ or “who is wearing the glasses?”. It contains 83k training images and 444k QA pairs. Note this line of work has also been extended to multiple questions in sequence [Das et al., 2017] but we do not consider that task here. 3 Related Work Separately in the NLP field, and in the vision field, large ad- vancements have been recently made in terms of the quality of learnt representations. In NLP, word embedding representations [Bengio et al., 2003; Collobert and Weston, 2008; Mikolov et al., 2013; Joulin et al., 2017] have given way to multi-sentence, multi- layer, self-attentive representations through Transformers, with pre-training on large corpora such as Wikipedia and Toronto books [Vaswani et al., 2017a; Radford et al., 2018; Devlin et al., 2019; Bakhtin et al., 2019; Radford et al., 2019]. In dialogue, it has been shown that pre-training on ut- terances from large-scale conversational corpora such as from pushshift.io Reddit improves over large pre-training over re- sources like Wikipedia because they are more related to the task [Mazar´e et al., 2018; Humeau et al., 2019; Shuster et al., 2019b]. When training on downstream tasks, multi-tasking language tasks is also starting to become a more explored area [Collobert and Weston, 2008; McCann et al., 2018; Raffel et al., 2019]. In vision, conventional convolutional neural networks [Le- Cun et al., 1990; Krizhevsky et al., 2012] have been upgraded and improved by deeper ResNet architectures that incorporate skip connections [He et al., 2016], trained through ImageNet [Deng et al., 2009]. On tasks such as VQA which explic- itly ask questions about object properties, Faster R-CNN fea- tures [Girshick et al., 2018], which incorporate object detec- tion algorithms, have been shown to perform well. 
On tasks with large coverage of everyday images and commonsense knowledge about them, Instagram training has been shown to perform well [Mahajan et al., 2018; Shuster et al., 2018; Shuster et al., 2019a]. Given this improved performance across different modal- ities, a natural next step is methods that combine these ap- proaches for multimodal tasks involving language and vision. Several recent approaches have been built with this goal, in- cluding Vilbert [Lu et al., 2019a], VisualBERT [Li et al., 2019b], LXMERT [Tan and Bansal, 2019], Unicoder-vl [Li et al., 2019a], Vl-bert [Su et al., 2019] and UNITER [Chen et al., 2019]. A common theme is to borrow some of the pre-training ideas from BERT, but apply them to pre-training both language and vision, and then fine-tune these models on downstream tasks. Another recent work multi-tasks 12 vision and language tasks at once [Lu et al., 2019b]. Some- what differing from our work, the end tasks considered are not to aimed to build a unified conversational agent where the output is dialogue, but include any task including language and vision of some form, for example caption-based image- retrieval, referring expressions and region to phrase ground- ing, most of which we do not consider here. Recently, [Shus- ter et al., 2019b] proposed to multi-task to build a conversa- tional agent, but using mostly language-only tasks (10 tasks), although it does include two of the image tasks we consider here. # 4 Methods Our model is a retrieval architecture that outputs a candidate response from the training set. Like most multimodal archi- tectures, it comprises a text encoder, an image encoder, and a way to combine the two. However, unlike recent models that use various cross attention mechanisms to get the joint rep- resentation of the final context, our model simply uses a so- called multimodal combiner. An extra style encoder is also added to represent the different style traits inside the Person- ality Captions and Image Chat tasks, and to differentiate them from the other tasks. The model finally scores possible out- put candidates, using either a ranking or a classification head, depending on the task. An overview of the model architecture is given in Figure 1. Image Style 3224x224 1218 Dialog History IBpe tokenization| Response Candidate Bpe tokenization Y Linear | _ _ | Linear Share Weights | | Embedings Style Encoder Multimodal Combiner Dot Product Classifier | Classifier Head Trained z= Pretrained ~ Ranking Head Figure 1: Overview of our model, in its TransResNet-3AMMC (At- tentive Multimodal combiner) variant. The non-attentive variant (TransResNet-MMC) has a single Transformer combiner, instead of the three split Transformers followed by a weighted sum shown here. 4.1 Text Encoders We use two text encoders, one for the input context and one for the output candidates. The output candidate encoder en- codes the candidate utterances that will be scored, and the context encoder encodes the text input. Depending on the task, the context can be the previous dialogue history or a question posed about the image, or a combination of the two. Both text encoders are pre-trained Transformers. The final output of the candidate encoder is a single vector per candi- date, obtained by taking the mean of the per-token encodings. For the context encoder we retain the per-token output encod- ings, thus comprising the same length as the input sequence, for input into the multimodal combiner. During multimodal training we fine-tune both text encoders. 
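As an illustration of the retrieval-style scoring described in Section 4.1, the sketch below mean-pools per-token candidate encodings and scores them by dot product against a fused context vector, assuming PyTorch; the tensor names and shapes are illustrative, not the exact implementation of the model.

```python
import torch

def mean_pool(token_encodings, mask):
    """Mean of per-token encodings (ignoring padding) -> one vector per candidate."""
    mask = mask.unsqueeze(-1).float()                       # (num_cands, seq_len, 1)
    summed = (token_encodings * mask).sum(dim=1)
    return summed / mask.sum(dim=1).clamp(min=1.0)          # (num_cands, dim)

def rank_candidates(context_vec, cand_token_encodings, cand_mask):
    """Dot-product scores between the fused multimodal context vector and each
    candidate response, as in a retrieval (ranking) head."""
    cand_vecs = mean_pool(cand_token_encodings, cand_mask)  # (num_cands, dim)
    return cand_vecs @ context_vec                          # (num_cands,)

if __name__ == "__main__":
    dim, num_cands, seq_len = 256, 100, 20
    context_vec = torch.randn(dim)                          # output of the multimodal combiner
    cand_tokens = torch.randn(num_cands, seq_len, dim)      # candidate encoder outputs
    cand_mask = torch.ones(num_cands, seq_len, dtype=torch.bool)
    scores = rank_candidates(context_vec, cand_tokens, cand_mask)
    print(scores.argmax().item())                           # index of the selected response
```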
4.2 We consider two possible image encoders in our model. The first is a ResNeXt-based model trained on 3.5 billion In- stagram images [Mahajan et al., 2018], which we refer to as ResNeXt-IG-3.5B. This encodes the image into a 2048- dimensional vector. The weights are fixed during the sub- sequent training process. The second is an improved Faster R-CNN model [Girshick et al., 2018] trained on the Visual Genome dataset [Krishna et al., 2017]. We fix the network up to the fc6 layer and fine-tune the fc7 weights as in [Jiang et al., 2018]. We extract 100 2048-dimensional vectors (100- channel Faster R-CNN features). In addition to trying these models independently, we also investigate using both of their features concatenated together as input to the multimodal combiner. # 4.3 Multimodal Combiner The Multimodal Combiner (MMC) is used to combine the encodings from different components of the model. This in- cludes the 100-channel Faster R-CNN features, the ResNeXt- IG-3.5B features, the sequence-based context features (which depend on the text length), and the encoding from the style encoder. Prior to combination, the individual encodings are normalized with their own layer-norm layer; each is then fed into the MMC with a positional embedding to indicate which feature type it is. The Multimodal Combiner is simply a Transformer [Vaswani et al., 2017b] encoder without its em- bedding layer; thus, self-attention is applied to all features that then go through linear layers. A mean operation is per- formed in the end to get a single vectorial representation of the whole multimodal context. This joint representation is then either used for a dot product with the candidate encod- ings (for ranking) or sent to an additional linear layer (for classification), as detailed in Sec. 4.5. # 4.4 Attentive Multimodal Combiner We further propose an Attentive Multimodal Combiner (AMMC), shown in Fig. 1, where multiple Transformers are used and then combined through an attention mechanism, po- tentially allowing them to focus on different skills. We use the style encoding as the query, forward it to a linear layer of the same output dimension as the number of Transformers in the multimodal combiner (i.e., 2 to 4, denoted as 2AMMC, 3AMMC and 4AMMC), followed by a softmax. We hence use those outputs to perform a weighted sum of the outputs from all the multimodal Transformers. This attention mech- anism thus learns which Transformers to rely on more for which inputs, allowing the model to switch between skills. # 4.5 Output Heads and Loss For tasks like VQA where there is one factually correct an- swer and many wrong answers, it has been shown that strong performance can be achieved using a classification head, con- sidering all the possible most frequent answers as classes, and using a binary cross entropy loss1. We thus also consider this approach for VQA. For open-ended problems in which there may be multiple right answers, e.g. those in Image Chat, we consider an alternative approach of using a ranking head. In this approach, the gold label is contrasted with a subsample of negative candidates during training (chosen as the labels of other examples in the batch) but still using the binary cross entropy loss, which scales well to huge candidate sets. We compare these two methods in this work, and also consider- ing training both at the same time. We use batch sizes of 256 / 512 and adam for optimization. 
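To make the attentive multimodal combiner of Section 4.4 more concrete, here is a hedged PyTorch sketch in which several Transformer combiners process the fused multimodal feature sequence and a softmax over a linear projection of the style encoding weights their pooled outputs; the module sizes and layer counts are illustrative assumptions, not the paper's exact configuration, and PyTorch's built-in TransformerEncoder is used as a stand-in combiner.

```python
import torch
import torch.nn as nn

class AttentiveCombiner(nn.Module):
    """Sketch of an attentive multimodal combiner: N parallel Transformer
    encoders process the concatenated multimodal features; a softmax over a
    linear projection of the style encoding weights their pooled outputs."""
    def __init__(self, dim=256, num_combiners=3, num_layers=2, num_heads=4):
        super().__init__()
        make_combiner = lambda: nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=dim, nhead=num_heads, batch_first=True),
            num_layers=num_layers)
        self.combiners = nn.ModuleList([make_combiner() for _ in range(num_combiners)])
        self.attn = nn.Linear(dim, num_combiners)   # style encoding -> combiner weights

    def forward(self, features, style_enc):         # features: (B, T, dim), style_enc: (B, dim)
        weights = torch.softmax(self.attn(style_enc), dim=-1)                           # (B, N)
        pooled = torch.stack([c(features).mean(dim=1) for c in self.combiners], dim=1)  # (B, N, dim)
        return (weights.unsqueeze(-1) * pooled).sum(dim=1)                              # (B, dim)

if __name__ == "__main__":
    combiner = AttentiveCombiner()
    feats = torch.randn(2, 30, 256)      # image + text + style feature sequence
    style = torch.randn(2, 256)
    print(combiner(feats, style).shape)  # torch.Size([2, 256])
```

The mean over positions mirrors the mean operation described for the multimodal combiner; the resulting joint representation can then feed either the ranking or the classification head.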
For multi-tasking we exper- imented with various kinds of dataset weighting schemes, but in the end we went for simplicity and report results of sam- pling from tasks equally, so that the same number of updates are done per task in an epoch, which was difficult to improve upon. 1As used in Pythia [Singh et al., 2019; Singh et al., 2018] (https: //github.com/facebookresearch/pythia). 5 Experiments We now describe our experiments, in which we perform anal- ysis and ablations of the different kinds of modules and inputs we use for training, and final results on our full architecture. For all models we choose hyperparameters on the validation set(s), and report on the test set; for VQA the numbers are reported on the test-dev set. All experiments were conducted in ParlAI [Miller et al., 2017], and we plan to make the code publicly available. Text Encoding We consider different possible text en- codings with different pre-training schemes: starting from random weights before training on our language + vision tasks; starting from initialized word embeddings from fast- Text [Joulin et al., 2017] only; starting from BERT weights [Devlin et al., 2019]; and starting from two versions of i.e., Transformers with 79M pushshift.io Reddit training, parameters from [Mazar´e et al., 2018] and 128M param- eters from [Humeau et al., 2019]. After initialization we then fine-tune the entire TransResNet-MMC, using ResNeXt- IG-3.5B image features, on three tasks separately: COCO, Flickr30k and Image Chat. The results are given in Table 3. We observe large improvements in accuracy with more text pre-training, for example on COCO going from 40.7% with no pre-training to 50.1% with BERT. BERT outperforms pushshift.io Reddit-128M slightly on COCO and Flickr30k, whereas pushshift.io Reddit-128M outperforms BERT on Im- age Chat. We hypothesize this is because the language is more formal on COCO and Flickr, matching BERT that is trained with Wikipedia, whereas Image Chat is more collo- quial, matching pushshift.io Reddit. However, the results are close and on average pushshift.io Reddit-128M does slightly better. We thus use the latter in all subsequent experiments2. Image Encoding We next consider different possible im- age encodings via different architectures and pre-training schemes: ResNeXt-IG-3.5B [Mahajan et al., 2018], Faster R- CNN features [Girshick et al., 2018], and finally a combina- tion of ResNeXt-IG-3.5B and Faster R-CNN features. After initialization we then fine-tune the entire TransResNet-MMC on four tasks: COCO, Flickr30k, Image Chat and VQA. We evaluate these settings both with single task fine-tuning, and with multi-task training. The results are given in Table 2. Faster R-CNN features are superior on VQA, which re- quires fine-grained localization of objects in order to answer questions, while ResNeXt-IG-3.5B features are superior on Flickr30k and Image Chat, which require a wide array of commonsense knowledge of different scenes. On average across the tasks (last column), however, they provide simi- lar performance. As they provide different qualities, they are a good candidate for combination. We thus provide both as input to our model and obtain superior single-task results on COCO, Flickr30k and VQA, with results on Image Chat as good as with ResNeXt-IG-3.5B and better than with Faster R-CNN. Multi-tasking performance also improves over pre- vious results. We thus adopt this combination strategy in sub- sequent experiments. 
2Another choice would have been to combine them, but we did not do that here. Image Encoder COCO Flickr30k Image Chat VQA Avg ResNeXt-IG-3.5B ST MT 50.7 48.0 75.3 77.0 56.4 56.2 61.9 62.0 61.1 60.8 Faster R-CNN ST MT 49.3 52.1 68.2 72.4 54.2 53.2 66.3 66.3 59.5 61.0 ResNeXt-IG-3.5B+ Faster R-CNN ST MT 57.3 51.2 79.7 81.7 56.4 55.2 67.0 66.4 65.1 63.7 Table 2: Comparison between image representations as part of our TransResNet-MMC architecture with either single-task (ST) or multi-task (MT) training, evaluating on COCO, Flickr30k, Image Chat and VQA, and reporting average (Avg.) performance across the tasks. Text Encoder from scratch fastText init BERT Reddit-79M Reddit-128M COCO Flickr30k 40.7 44.9 50.1 44.3 48.8 65.5 69.0 72.0 68.4 71.8 Image Chat Avg. 37.6 45.6 52.1 50.3 55.2 48.0 53.2 58.1 54.3 58.6 ResNeXt-IG-3.5B ResNeXt-IG-3.5B + Faster R-CNN w/o MMC with MMC w/o MMC with MMC COCO Fl30k 48.8 50.7 71.8 75.3 53.6 57.3 75.6 79.7 IC 55.2 56.6 46.9 56.4 Table 3: Comparison between text encoding Transformer pre- training methods when used as part of TransResNet-MMC, report- ing accuracy on the respective test sets of three tasks, as well as the average (Avg.). Table 7: Comparison of with and without (w/o) the multimodal com- biner (MMC) as part of our TransResNet architecture, for COCO, Flickr30k (Fl30k) and Image Chat (IC), using either ResNeXt-IG- 3.5B (ResNeXt-IG) features alone or in combination with Faster R- CNN features. The MMC provides gains in all cases. Early Stop COCO Fl30k PC IC ICQA VQA COCO Fl30k IC VQA Avg. 54.0 51.4 52.4 53.4 51.2 83.4 83.0 81.3 81.9 81.7 55.0 55.9 58.8 58.0 58.0 50.5 53.1 55.9 54.0 55.2 43.9 47.2 51.4 30.6 49.9 66.1 60.3 66.5 66.6 66.4 Table 4: Training TransResNet-MMC on all tasks but only perform- ing early stopping on one specific dataset compared to stopping on the average accuracy across all datasets (“Avg.”). Fine Tune COCO Fl30k PC IC ICQA VQA COCO Flickr30k IC VQA All 59.6 50.7 52.4 36.6 51.2 76.5 84.0 81.3 65.6 81.7 34.0 54.2 58.8 47.1 58.0 31.8 52.1 55.9 38.6 55.2 30.0 47.1 51.4 30.7 49.9 58.2 60.8 66.5 66.2 66.4 Table 5: Training TransResNet-MMC on all tasks and then fine- tuning on each of the tasks, compared to the original best performing multi-task model (called “All”). VQA Model training class. head ranking head Classification head 67.0 n/a Ranking head n/a 54.0 Multi-head training 66.1 63.5 Multimodal Combiner We next assess the impact of the multimodal combiner module in our architecture; we first an- alyze the non-attentive version. We consider either using it, or replacing it with a simple sum over feature type represen- tations, see Section 4.3. We compare these alternatives on three tasks: COCO, Flickr30k and Image Chat, and exam- ine performance both for our best performing combination features as well as for ResNeXt-IG-3.5B alone. The results are given in Table 7. We see that without this component of the architecture, the model can still give somewhat reason- able performance. However, by combining modalities with a Transformer architecture we do see improvements across all tasks. The MMC module takes as input a sequence-based rep- resentation of the context history (token-level representation). We also experimented with giving the mean sequence repre- sentation as input to the MMC instead, which gave worse re- sults (5% regression on IC). We thus report subsequent exper- iments using the full combiner using sequence-based inputs. 
Freezing versus Fine-Tuning Encoders We compare the performance of our models when either freezing the image and text encoders after pre-training, or fine-tuning them in addition to the multimodal combiner of the language and vi- sion tasks. If they are frozen, only the multimodal combiner is trained. Table 8 presents the results, comparing multi- task performance across several of our tasks. There are clear wins from fine-tuning the encoders on the multimodal train- ing data. Ranking vs. Classification Head We compare the perfor- mance of training VQA with the classification and ranking heads, or training both at the same time. The results are shown in Table 6. Table 6: Training VQA with either a classification head, a ranking head, or multi-tasking both. Multi-tasking both helps the ranking head improve. Training with a classification head alone (first row) pro- vides the best performance on VQA. However, transfer to COCO Fl30k PC IC ICQA VQA Freeze Fine-tune 27.9 51.2 57.2 81.7 40.6 58.0 40.6 55.2 37.6 49.9 64.5 66.4 Table 8: Training TransResNet-MMC on all tasks with freezing or not the text and image encoders. other tasks, shown by evaluating them using the ranking head, gives poor results, understandably as that has not been trained for (see Table 10, row 4). Using a ranking head to train VQA gives far worse performance on VQA. We attribute this to the subsampling of negative candidates in the loss function, rather than considering ∼3k possible candidates at once in the classification head. The last row of Table 6 shows the per- formance of training both heads at once, and then evaluating the two heads. This dramatically improves the performance of the ranking head on VQA, as the classification head helps the model attain good weights. Single Task Results Using our best approach, we now re- port final results fine-tuned on each task independently. The results are given in Table 10. We report results across all evaluation sets for a given training target. E.g., the first row shows the model performance when training with COCO, evaluated on the test sets of COCO, Flickr30k, Personality Captions, Image Chat, Image Chat QA, VQA, IGCQ, IGCQA and VQA. As expected, we observe better results on the test set of the task being trained on than on other test sets. How- ever, there is some transfer between some of the tasks. For example, training on COCO gives non-trivial performance on Flickr30k, and vice-versa, although Flickr30k training helps less, probably due to its smaller size. Multi-Task Results Results of our Multi-Task models are given in Table 10, last four rows. We first assess the per- formance of the MMC MT model (without an attentive mul- timodal combiner). We achieve marginally superior perfor- mance on Personality Captions and Flickr30k, and marginally inferior performance on the other tasks compared to our best single-task (ST) models, but in a single conversational agent. The final column showing average performance across tasks makes this point clear, as those numbers are vastly superior for the multi-task models. Like our single-task counterparts, many of these results are still well above previous state of the art, e.g. on Personality Captions and Image Chat, and within range of the state of the art on COCO and Flickr30k. Multi-Task Results with Attentive Multimodal Combiner We next assess the effect of Multi-Task training with multi- ple Transformers in the multimodal combiner (2, 3 or 4 in- stead of the single Transformer in our base architecture). 
The bottom four rows in Table 10 show that using the attentive multimodal combiner leads to improvements in average per- formance over all tasks, with 2AMMC achieving the best re- sults on PC and IC tasks of all methods, and 3AMMC being slightly better on average. Note that the early stopping cri- terion for these experiments is the average performance over all tasks, which leads to performance gains shifting between tasks among architectures, while the average itself is con- trolled. This could be altered by selecting a different stop- ping criterion, as detailed further below and in Table 4. Ta- ble 11 breaks down the performance obtained on all tasks by each of the Transformers in the 3AMMC. There are striking differences between the tasks as to how performance is split among the three MMCs: on VQA, MMC-1 and MMC-2 have near 0 performance while MMC-3 performs as well as the full system, but this comes at the expense of much worse perfor- mance on all the conversational tasks compared to MMC-1 and MMC-2. On PC, MMC-1 performs nearly as well as the full system and much better than MMC-2 and MMC-3. The overall best performance on all other tasks requires com- bining all three MMCs. To see if the performance gains of AMMC come just from the network being larger, we com- pare to MMC modules with more layers up to an equivalent size, see Table 12. The results show that making standard MMC larger only hurts performance. Increasing the number of MMC heads similarly degrades performance (results not shown). These results highlight the benefits of the AMMC design. Multi-Tasking Small vs. Large Tasks The tasks we do see a performance gain in when multi-tasking, Flickr30k and Personality-Captions, tend to be smaller tasks where there is a larger related task also in the multi-tasking set, in this case COCO and Image Chat. To investigate the effects of train- ing set size on multi-tasking transfer we thus conducted ex- periments to see if we observe the same effects of improve- ment on another dataset if we downsampled it. We thus con- sider adjusting the training set size of COCO to be the same size as Flickr30k, and then consider multiples of that size, and observe the change in performance with changing size. We compare single-task training on that subset to multi-task training with all other tasks and that subset. For these ex- periments we considered a smaller hyperparameter sweep for simplicity, with a multimodal combiner of 2 layers and sweep across different number of heads for the multi-head attention, explaining the slightly lower results. We perform early stop- ping on COCO. The results are given in Table 14. We observe for single-task training a drop from 54% accuracy to 42.1% as we go from 83k examples down to 29k. Multi-tasking with the full COCO dataset also yields the same 54% accuracy, which makes it appear that multi-tasking is not useful for gen- eralization. However, subsampling COCO reveals a different story – the smaller the training set, the more the multi-tasking helps, with a gap of 42.1% to 49.3% in the 29k training exam- ple case. As researchers who construct new tasks often collect large scale datasets, this means multi-tasking will often have less effect than is observed in a few-shot setup. 
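A minimal sketch of the downsampling protocol used for this comparison is given below. The random sampling, the seed handling, and the stand-in dataset are assumptions, since only the target subset sizes are specified in the text.

```python
import random

FLICKR30K_TRAIN = 29000  # size of the Flickr30k training set used as the unit

def subsample(examples, target_size, seed=0):
    """Randomly keep `target_size` training examples (the exact sampling
    procedure used for the experiments is not specified here)."""
    rng = random.Random(seed)
    return rng.sample(examples, min(target_size, len(examples)))

# Stand-in for the 82,783 COCO training examples; in practice these would be
# the actual (image, caption) pairs.
coco_train = list(range(82783))

for multiple in (1.0, 1.5, 2.0, 2.5):
    subset = subsample(coco_train, int(multiple * FLICKR30K_TRAIN))
    # Train one single-task model on `subset`, and one multi-task model on
    # `subset` plus all other task training sets; evaluate both on the COCO test set.
```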
Multi-Tasking + Single-Task Fine-Tuning While our goal is to make a single agent that is good at all our tasks, we also investigate if multi-tasking can help improve performance on a single task, by either multi-tasking and early stopping on a particular task, or multi-tasking and then fine-tuning on a particular task. Early stopping test results are shown in Table 4. We re- port for each task out of COCO, Flickr30k, Image Chat and VQA the performance on the test set of that task itself, as well as transfer performance to the other tasks. These results can be compared to the results of optimizing multi-task perfor- Training data SCAN [Lee et al., 2018] SCG [Shi et al., 2019] Unicoder-VL [Li et al., 2019a] Unicoder-VL w/o pre-training UNITER Base UNITER Large HDC [Nguyen and Okatani, 2019] VisualBERT (ST) [Li et al., 2019b] VisualBERT (ST) w/o pre-training ViLBERT (ST) [Lu et al., 2019a] ViLBERT (ST) w/o pre-training Pythia [Jiang et al., 2018]3 Pythia 3 ViLBERT (MT) [Lu et al., 2019c] ViLBERT (MT + FT) TransResNet [Shuster et al., 2018] COCO Flickr30k 50.4* 56.6* 62.3* - 63.3* 66.6* 42.2 - - - - - - - - 44.3* 67.4 71.8 86.2 73.0 84.7 88.2 71.6 - - - - - - - - 68.4 PC - - - - - - - - - - - - - - - 53.5 IC - - - - - - - - - - - - - - - 50.3 ICQA IGCQ IGCQA VQA - - - - - - - - - - - - - - - 49.2 - - - - - - - - - - - - - - - 21.7 - - - - - - - - - - - - - - - 22.4 - - - - 72.3* 73.2* 69.3* 70.8 * 70.2 * 70.6* 69.0* 66.7 69.2* 72.6* 73.2* - # Existing Models Table 9: Previous state-of-the-art results. * indicates results achieved by training with some or all of the validation set added to the train set, whereas we only train on the train set. Note that the results in this table correspond to a different model for each column, as the architectures are fine-tuned on each task separately rather than training a single architecture in a multi-task way. The ViLBERT (MT) model is a multi-task model, but uses image retrieval settings on COCO and Flickr30k that are not comparable to the results presented here. The Pythia models on rows 9 and 10 are the same except they are trained with the VQA train set and VQA train + valid set respectively, thus we list both numbers. Arch. MMC MMC MMC MMC MMC MMC 57.2 27.7 20.0 0.0 59.6 51.2 54.2 52.7 53.2 69.4 79.7 40.5 0.3 84.0 81.7 82.0 82.9 81.8 PC 24.0 23.0 57.3 1.2 58.8 58.0 59.5 58.5 58.7 IC 16.5 16.3 56.3 1.3 55.9 55.2 56.9 56.1 56.2 ICQA IGCQ IGCQA VQA Avg 25.5 13.1 23.6 13.8 55.2 38.6 9.3 1.2 56.0 51.4 53.3 49.9 54.6 52.3 55.1 52.4 54.8 54.5 13.4 15.8 35.8 1.6 30.2 25.7 28.1 31.4 31.8 10.3 12.2 43.3 1.9 41.1 38.4 38.1 39.8 35.8 0.3 0.2 0.4 67.0 66.5 66.4 65.6 66.9 65.9 Table 10: Multi-tasking test results of our models. The first four rows show the transfer performance of our TransResNet-MMC model trained on a single task (ST), indicated in the Training data column. The fifth row shows a multi-task model which is then fine-tuned (MT+FT) on each single task separately (each column corresponds to a separate model, we hence report average performance in gray italics). The bottom four rows compare performance of single multi-task models with different types of multimodal combiners. The multi-task performance is close to single-task performance, and in some cases better across several tasks. The attentive multimodal combiner (AMMC) obtains the best overall average performance. Arch. 
3AMMC (MMC-1) MT 3AMMC (MMC-2) MT 3AMMC (MMC-3) MT COCO Flickr30k 24.7 31.1 31.0 61.6 50.6 61.9 PC 48.9 19.5 21.3 IC 45.1 26.0 13.0 ICQA IGCQ IGCQA VQA Avg 32.0 27.8 27.0 27.8 28.2 9.5 24.0 27.4 8.9 22.1 33.5 13.0 1.1 0.0 66.9 Table 11: Results on each dataset when we evaluate our 3AMMC model by only taking a single MMC output as the context representation. The first MMC alone already gives good performance on PC and IC, and the third on VQA. All three are needed for some of the tasks. MMC Arch. 4 Layers 6 Layers 8 Layers (Compare to) (2 AMMC) MT (3 AMMC) MT (4 AMMC) MT COCO Flickr30k 51.4 48.6 35.6 81.0 78.1 65.6 PC 56.0 57.1 36.2 IC 53.3 53.4 33.6 ICQA IGCQ IGCQA VQA Avg 53.3 48.5 52.6 49.7 39.4 31.8 29.1 30.8 26.6 39.9 36.8 26.8 66.7 66.1 59.0 Table 12: Test results when we train our MMC models with varying numbers of layers, which we compare to our AMMC model sizes. Increasing the number of MMC layers only hurts performance. # Image # Output # Task # Coco TransResNet MMC there is a broken tree log on the ground # Task # Coco TransResNet MMC A large grass covered field under a mountain. # Task # Flickr30k TransResNet MMC A chaparral landscape scene void of human residence. Task Flickr30k TransResNet MMC A plane flying sideways. Task Context VQA What is the color of the mountain? TransResNet MMC gray Task Context VQA Does it appear to be rainy? TransResNet MMC no Task Personality Captions (Style: Happy) TransResNet MMC Wow what a beautiful and perfect shade of pink and red! I am so captivated! Task Personality Captions (Style: Attractive) TransResNet MMC Wow I would love for someone to create this setting in the sand for me. Task Image Chat (Style: Compassionate) Context Round 1: Something about the pattern calms me. TransResNet MMC The architecture calms you. Task Context Image Chat (Style: Emotional) Round 1: Airplanes are scary to get on, you never know if it will crash or not. Round 2: But these are professional pilots though. TransResNet MMC They are, and for many people they mean a lot. My grandfather loved planes! Table 13: Example output from our TransResNet MMC multi-task model for different tasks. mance in the last row ”Avg.”, see also Table 10 (sixth row). There are clear gains for each task that is early stopped, but at large expense for the other tasks. For example fine-tuning on COCO gives 54.0% compared to 51.2% when multi-tasking, but is still worse than the 57.2% when training as a single- task. Transfer to Flickr30k is still good, likely as they are sim- ilar tasks, but Image Chat results are then poor. On Flickr, the early stopping result of 83.0% is superior to both the multi- task result of 81.7% and the single-task result of 79.7%. This can be explained by Flickr30k being smaller than COCO, and thus benefiting from multi-tasking more, as we explained in the previous section. Multi-tasking followed by single task fine-tuning test re- sults are shown in Table 5 (also summarized in Table 10). Generally, these are superior to the multi-tasking per-task early stopping results. For example fine-tuning on COCO gives 59.6% compared to 54.0% when multi-tasking and early stopping, or even 57.2% when training as a single-task, COCO Train Size 1.0x Flickr30k (29000) 1.5x Flickr30k (43500) 2.0x Flickr30k (58000) 2.5x Flickr30k (72500) Full Size (82783) Multi-Task 49.3 51.6 53.7 53.8 54.0 Single-Task 42.1 50.3 51.9 53.6 54.0 Table 14: Accuracy on COCO test set when downsampling COCO during training to the same size as the Flickr30k training set, or multiples thereof. 
Smaller training sets are clearly helped by multi- tasking. Eventually there is enough data of the single task. so it is the best result we obtain over all methods. We also achieve our best results in this fashion on Flickr30k. For VQA the validation results were higher (not shown) in this setting, but resulted in slightly inferior test numbers, so a small amount of overfitting occurs. Comparison to Existing Results We give results from pre- vious work in Table 9. Our results compare favorably on the conversational tasks, i.e. PC, IC, ICQA, IGCQ and IGCQA. For the COCO, Flickr30k and VQA tasks, our results are within range of the state of the art, but are surpassed by some of the methods. We note that on COCO others used the vali- dation set for training whereas we did not (see Sec. 2.3, we do not want multi-task experiment train and valid data to over- lap). For VQA we report the number from Pythia3 as a com- parison point, as that method uses the train set only without VQA data augmentation from the Visual Genome, VisDial or other data augmentations (similar to us) and we used their setup as a starting point for our implementation. Our numer- ical results are comparable to theirs. Larger-Scale Cross-Module Pre-training Some of the best-performing methods on a subset of tasks rely on large- scale cross-module pre-training [Chen et al., 2019; Li et al., 2019a; Lu et al., 2019a], which leads to better perfor- mance but requires gigantic multimodal datasets like Con- ceptual Captions [Sharma et al., 2018] or Visual Genome [Krishna et al., 2017], as shown in Table 15, as well as high computing resources (e.g., 882 and 3645 V100 GPU hours for UNITER-base and UNITER-large, respectively). Pre-training on COCO alone as done in [Li et al., 2019b] gives more limited improvement (see Table 9). Our approach combines vision and text encoders with minimal additional multimodal training resources required. Even counting all the multi-task datasets used for training adds up to only 1M image-sentence (I-S) pairs, resulting in training that takes around 40 V100 GPU hours. We expect larger-scale cross- module pre-training would also improve the performance of our models, but this is beyond the scope of this work. Example Predictions We show example predictions of our MMC multi-task model in Table 13. We take test images, and for COCO, Flickr30k, Personality Captions and Image Chat we show the top ranked candidate using the ranking head, ranking all utterances from the given training set. For VQA 3https://learnpythia.readthedocs.io/en/latest/tutorials/ pretrained models.html#pretrained-models Model UNITER ViLBERT Unicoder-VL Dataset COCO,VG,CC,SBUC CC CC, SBUC Size (I-S Pair) 9.6 M 3.0 M 3.8 M Table 15: Sizes of multimodal pre-training datasets in terms of image-sentence pairs. Our model obtains comparable results on all tasks without any cross-module pre-training on large datasets such as Visual Genome (VG), Conceptual Captions (CC), or SBU Cap- tions (SBUC). Thus, multi-tasking can be viewed as a strong alter- native to large-scale pre-trainining, considering its simplicity and effectiveness in terms of computation power. we show the output of the classification head. We observe that the same underlying model can produce a diverse range of outputs depending on the task, ranging from factual cap- tioning to conversations grounded on the image. 
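As a rough illustration of how the ranking head produces these top-ranked candidates, the sketch below scores pre-encoded candidate utterances against the pooled multimodal context embedding with a dot product and returns the highest-scoring ones. The function name and the exact scoring function are assumptions; the same mechanism applies whether the candidates are captions, dialogue turns, or VQA answers.

```python
import torch

def rank_candidates(context_emb, candidate_embs, top_k=1):
    """Rank candidate responses against a pooled multimodal context embedding.

    context_emb:    (d,) tensor produced by the multimodal combiner.
    candidate_embs: (n_candidates, d) tensor of pre-encoded candidate utterances.
    Returns the indices of the top_k highest-scoring candidates under a
    dot-product score, as is typical for retrieval-style ranking heads.
    """
    scores = candidate_embs @ context_emb          # (n_candidates,)
    return torch.topk(scores, k=top_k).indices

# Hypothetical usage: rank all training-set utterances for an Image Chat turn.
context_emb = torch.randn(512)
candidate_embs = torch.randn(10000, 512)
best = rank_candidates(context_emb, candidate_embs, top_k=1)
```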
6 Conclusion In order to build an image-grounded conversational agent, we have assembled disparate multimodal tasks and built a sin- gle architecture for multi-task training on them, incorporating a novel attentive multimodal combination module. Through detailed analysis and ablations, we have shown that our ap- proach can obtain strong performance across a number of tasks. Future work could investigate further how these skills are blended during interaction, rather than evaluate them as stand-alone tasks, and consider more tasks. 7 Acknowledgements We are grateful to Amanpreet Singh and Vedanuj Goswami for providing help and advice, comparison results and faster R-CNN image features. We also thank Mary Williamson and Eric Smith for very useful discussions. References [Bakhtin et al., 2019] Anton Bakhtin, Sam Gross, Myle Ott, Yuntian Deng, Marc’Aurelio Ranzato, and Arthur Szlam. Real or fake? learning to discriminate machine from hu- arXiv preprint arXiv:1906.03351, man generated text. 2019. [Bengio et al., 2003] Yoshua Bengio, R´ejean Ducharme, Pascal Vincent, and Christian Jauvin. A neural probabilis- tic language model. Journal of machine learning research, 3(Feb):1137–1155, 2003. [Chen et al., 2015] Xinlei Chen, Hao Fang, Tsung-Yi Lin, Ramakrishna Vedantam, Saurabh Gupta, Piotr Doll´ar, and C Lawrence Zitnick. Microsoft coco captions: arXiv preprint Data collection and evaluation server. arXiv:1504.00325, 2015. [Chen et al., 2019] Yen-Chun Chen, Linjie Li, Licheng Yu, Ahmed El Kholy, Faisal Ahmed, Zhe Gan, Yu Cheng, and Jingjing Liu. UNITER: Learning UNiversal Image-TExt Representations. arXiv e-prints, page arXiv:1909.11740, Sep 2019. [Collobert and Weston, 2008] Ronan Collobert and Jason Weston. A unified architecture for natural language pro- cessing: Deep neural networks with multitask learning. In Proceedings of the 25th international conference on Ma- chine learning, pages 160–167. ACM, 2008. [Das et al., 2017] Abhishek Das, Satwik Kottur, Khushi Gupta, Avi Singh, Deshraj Yadav, Jos´e MF Moura, Devi Parikh, and Dhruv Batra. Visual dialog. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 326–335, 2017. [Deng et al., 2009] Jia Deng, Wei Dong, Richard Socher, Li- Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In 2009 IEEE conference on computer vision and pattern recognition, pages 248–255. Ieee, 2009. [Devlin et al., 2019] Jacob Devlin, Ming-Wei Chang, Ken- ton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understand- ing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapo- lis, Minnesota, June 2019. Association for Computational Linguistics. Ilija Radosavovic, Georgia Gkioxari, Piotr Doll´ar, and Kaiming He. De- tectron. https://github.com/facebookresearch/detectron, 2018. [Goyal et al., 2017] Yash Goyal, Tejas Khot, Douglas Summers-Stay, Dhruv Batra, and Devi Parikh. Making the V in VQA matter: Elevating the role of image under- standing in Visual Question Answering. In Conference on Computer Vision and Pattern Recognition (CVPR), 2017. [He et al., 2016] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recog- nition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770–778, 2016. 
[Humeau et al., 2019] Samuel Humeau, Kurt Shuster, Marie-Anne Lachaux, and Jason Weston. Poly-encoders: Transformer architectures and pre-training strategies for fast and accurate multi-sentence scoring. arXiv preprint arXiv:1905.01969, 2019. [Jiang et al., 2018] Yu Jiang, Vivek Natarajan, Xinlei Chen, Marcus Rohrbach, Dhruv Batra, and Devi Parikh. Pythia the Winning Entry to the VQA Challenge 2018. v0.1: arXiv e-prints, page arXiv:1807.09956, Jul 2018. [Joulin et al., 2017] Armand Joulin, Edouard Grave, Piotr Bojanowski, and Tomas Mikolov. Bag of tricks for effi- cient text classification. In Proceedings of the 15th Con- ference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers, pages 427–431, Valencia, Spain, April 2017. Association for Computational Linguistics. [Karpathy and Fei-Fei, 2017] Andrej Karpathy and Li Fei- Fei. Deep visual-semantic alignments for generating im- age descriptions. IEEE Trans. Pattern Anal. Mach. Intell., 39(4):664–676, April 2017. [Krishna et al., 2017] Ranjay Krishna, Yuke Zhu, Oliver Groth, Joshua Kravitz, Stephanie Chen, Yannis Kalantidis, Li-Jia Li, David A Shamma, et al. Visual genome: Connecting language and vision using crowdsourced dense image annotations. Inter- national Journal of Computer Vision, 123(1):32–73, 2017. [Krizhevsky et al., 2012] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. In Advances in neural in- formation processing systems, pages 1097–1105, 2012. [LeCun et al., 1990] Yann LeCun, Bernhard E Boser, John S Denker, Donnie Henderson, Richard E Howard, Wayne E Hubbard, and Lawrence D Jackel. Handwritten digit recognition with a back-propagation network. In Advances in neural information processing systems, pages 396–404, 1990. [Lee et al., 2018] Kuang-Huei Lee, Xi Chen, Gang Hua, Houdong Hu, and Xiaodong He. Stacked Cross At- tention for Image-Text Matching. arXiv e-prints, page arXiv:1803.08024, Mar 2018. [Li et al., 2019a] Gen Li, Nan Duan, Yuejian Fang, Ming Gong, Daxin Jiang, and Ming Zhou. Unicoder-VL: A Uni- versal Encoder for Vision and Language by Cross-modal Pre-training. arXiv e-prints, page arXiv:1908.06066, Aug 2019. [Li et al., 2019b] Liunian Harold Li, Mark Yatskar, Da Yin, Cho-Jui Hsieh, and Kai-Wei Chang. VisualBERT: A Sim- ple and Performant Baseline for Vision and Language. arXiv e-prints, page arXiv:1908.03557, Aug 2019. [Lu et al., 2019a] Jiasen Lu, Dhruv Batra, Devi Parikh, and Stefan Lee. ViLBERT: Pretraining task-agnostic visi- olinguistic representations for vision-and-language tasks. arXiv preprint arXiv:1908.02265, 2019. [Lu et al., 2019b] Jiasen Lu, Vedanuj Goswami, Marcus Rohrbach, Devi Parikh, and Stefan Lee. 12-in-1: Multi- task vision and language representation learning. arXiv preprint arXiv:1912.02315, 2019. [Lu et al., 2019c] Jiasen Lu, Vedanuj Goswami, Marcus Rohrbach, Devi Parikh, and Stefan Lee. 12-in-1: Multi- Task Vision and Language Representation Learning. arXiv e-prints, page arXiv:1912.02315, Dec 2019. [Mahajan et al., 2018] Dhruv Mahajan, Ross Girshick, Vig- nesh Ramanathan, Kaiming He, Manohar Paluri, Yixuan Li, Ashwin Bharambe, and Laurens van der Maaten. Ex- ploring the limits of weakly supervised pretraining. In Vit- torio Ferrari, Martial Hebert, Cristian Sminchisescu, and Yair Weiss, editors, Proceedings of the European Confer- ence on Computer Vision, pages 185–201, Cham, 2018. Springer International Publishing. Samuel Humeau, Martin Raison, and Antoine Bordes. 
Training millions of personalized dialogue agents. In Proceedings of the 2018 Conference on Empirical Methods in Natu- ral Language Processing, pages 2775–2779, Brussels, Belgium, October-November 2018. Association for Computational Linguistics. Shirish Keskar, Caiming Xiong, and Richard Socher. The natural language decathlon: Multitask learning as question answering. arXiv preprint arXiv:1806.08730, 2018. [Mikolov et al., 2013] Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. Distributed repre- sentations of words and phrases and their compositional- ity. In Advances in neural information processing systems, pages 3111–3119, 2013. [Miller et al., 2017] Alexander Miller, Will Feng, Dhruv Ba- tra, Antoine Bordes, Adam Fisch, Jiasen Lu, Devi Parikh, and Jason Weston. ParlAI: A dialog research software platform. In Proceedings of the 2017 Conference on Em- pirical Methods in Natural Language Processing: System Demonstrations, pages 79–84, Copenhagen, Denmark, September 2017. Association for Computational Linguis- tics. [Mostafazadeh et al., 2017] Nasrin Mostafazadeh, Chris Brockett, Bill Dolan, Michel Galley, Jianfeng Gao, Image- Georgios Spithourakis, and Lucy Vanderwende. grounded conversations: Multimodal context for natural In Proceedings of question and response generation. the Eighth International Joint Conference on Natu- ral Language Processing (Volume 1: Long Papers), pages 462–472, Taipei, Taiwan, November 2017. Asian Federation of Natural Language Processing. and Takayuki Okatani. Multi-task learning of hierarchi- In Proceedings of cal vision-language representation. the IEEE Conference on Computer Vision and Pattern Recognition, pages 10492–10501, 2019. [Radford et al., 2018] Alec Radford, Karthik Narasimhan, Improving language Tim Salimans, and Ilya Sutskever. understanding by generative pre-training. 2018. [Radford et al., 2019] Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. Language models are unsupervised multitask learners. OpenAI Blog, 1(8), 2019. [Raffel et al., 2019] Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. Exploring the limits of transfer learning with a unified text-to-text transformer. arXiv preprint arXiv:1910.10683, 2019. [Sharma et al., 2018] Piyush Sharma, Nan Ding, Sebastian Goodman, and Radu Soricut. Conceptual captions: A cleaned, hypernymed, image alt-text dataset for automatic image captioning. In Proceedings of ACL, 2018. [Shi et al., 2019] Botian Shi, Lei Ji, Pan Lu, Zhendong Niu, and Nan Duan. Knowledge aware semantic concept ex- In Proceedings of the pansion for image-text matching. Twenty-Eighth International Joint Conference on Artifi- cial Intelligence, IJCAI-19, pages 5182–5189. Interna- tional Joint Conferences on Artificial Intelligence Orga- nization, 7 2019. [Shuster et al., 2018] Kurt Shuster, Samuel Humeau, An- toine Bordes, and Jason Weston. Engaging image chat: Modeling personality in grounded dialogue. arXiv preprint arXiv:1811.00945, 2018. [Shuster et al., 2019a] Kurt Shuster, Samuel Humeau, Hexi- ang Hu, Antoine Bordes, and Jason Weston. Engaging im- age captioning via personality. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 12516–12526, 2019. [Shuster et al., 2019b] Kurt Shuster, Da Ju, Stephen Roller, Emily Dinan, Y-Lan Boureau, and Jason Weston. The dialogue dodecathlon: Open-domain knowledge and im- arXiv preprint age grounded conversational agents. 
arXiv:1911.03768, 2019. [Singh et al., 2018] Amanpreet Singh, Vedanuj Goswami, Vivek Natarajan, Yu Jiang, Xinlei Chen, Meet Shah, Mar- cus Rohrbach, Dhruv Batra, and Devi Parikh. Pythia-a platform for vision & language research. In SysML Work- shop, NeurIPS, volume 2018, 2018. [Singh et al., 2019] Amanpreet Singh, Vivek Natarajan, Meet Shah, Yu Jiang, Xinlei Chen, Dhruv Batra, Devi Parikh, and Marcus Rohrbach. Towards vqa models that can read. In Proceedings of the IEEE Conference on Com- puter Vision and Pattern Recognition, 2019. [Su et al., 2019] Weijie Su, Xizhou Zhu, Yue Cao, Bin Li, Lewei Lu, Furu Wei, and Jifeng Dai. Vl-bert: Pre-training of generic visual-linguistic representations. arXiv preprint arXiv:1908.08530, 2019. and Mohit Bansal. LXMERT: Learning cross-modality encoder represen- In Proceedings of the 2019 tations from transformers. Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5099–5110, Hong Kong, China, November 2019. Association for Computational Linguistics. [Vaswani et al., 2017a] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Ł ukasz Kaiser, and Illia Polosukhin. Attention is all you need. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Advances in Neural Information Processing Sys- tems 30, pages 5998–6008. Curran Associates, Inc., 2017. [Vaswani et al., 2017b] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention Is All You Need. arXiv e-prints, page arXiv:1706.03762, Jun 2017. [Young et al., 2014] Peter Young, Alice Lai, Micah Hodosh, and Julia Hockenmaier. From image descriptions to visual denotations: New similarity metrics for semantic inference over event descriptions. Transactions of the Association for Computational Linguistics, 2:67–78, 2014.
{ "id": "1807.09956" }
2001.00059
Learning and Evaluating Contextual Embedding of Source Code
Recent research has achieved impressive results on understanding and improving source code by building up on machine-learning techniques developed for natural languages. A significant advancement in natural-language understanding has come with the development of pre-trained contextual embeddings, such as BERT, which can be fine-tuned for downstream tasks with less labeled data and training budget, while achieving better accuracies. However, there is no attempt yet to obtain a high-quality contextual embedding of source code, and to evaluate it on multiple program-understanding tasks simultaneously; that is the gap that this paper aims to mitigate. Specifically, first, we curate a massive, deduplicated corpus of 7.4M Python files from GitHub, which we use to pre-train CuBERT, an open-sourced code-understanding BERT model; and, second, we create an open-sourced benchmark that comprises five classification tasks and one program-repair task, akin to code-understanding tasks proposed in the literature before. We fine-tune CuBERT on our benchmark tasks, and compare the resulting models to different variants of Word2Vec token embeddings, BiLSTM and Transformer models, as well as published state-of-the-art models, showing that CuBERT outperforms them all, even with shorter training, and with fewer labeled examples. Future work on source-code embedding can benefit from reusing our benchmark, and from comparing against CuBERT models as a strong baseline.
http://arxiv.org/pdf/2001.00059
Aditya Kanade, Petros Maniatis, Gogul Balakrishnan, Kensen Shi
cs.SE, cs.CL, cs.LG, cs.PL
Published in ICML 2020. This version (v3) is the final camera-ready version of the paper. It contains the results re-computed on the open-sourced datasets.
null
cs.SE
20191221
20200817
0 2 0 2 g u A 7 1 ] E S . s c [ 3 v 9 5 0 0 0 . 1 0 0 2 : v i X r a # Learning and Evaluating Contextual Embedding of Source Code # Aditya Kanade * 1 2 Petros Maniatis * 2 Gogul Balakrishnan 2 Kensen Shi 2 # Abstract # 1. Introduction Recent research has achieved impressive results on understanding and improving source code by building up on machine-learning techniques de- veloped for natural languages. A significant ad- vancement in natural-language understanding has come with the development of pre-trained con- textual embeddings, such as BERT, which can be fine-tuned for downstream tasks with less la- beled data and training budget, while achieving better accuracies. However, there is no attempt yet to obtain a high-quality contextual embed- ding of source code, and to evaluate it on multiple program-understanding tasks simultaneously; that is the gap that this paper aims to mitigate. Specifi- cally, first, we curate a massive, deduplicated cor- pus of 7.4M Python files from GitHub, which we use to pre-train CuBERT, an open-sourced code- understanding BERT model; and, second, we cre- ate an open-sourced benchmark that comprises five classification tasks and one program-repair task, akin to code-understanding tasks proposed in the literature before. We fine-tune CuBERT on our benchmark tasks, and compare the resulting models to different variants of Word2Vec token embeddings, BiLSTM and Transformer models, as well as published state-of-the-art models, show- ing that CuBERT outperforms them all, even with shorter training, and with fewer labeled exam- ples. Future work on source-code embedding can benefit from reusing our benchmark, and from comparing against CuBERT models as a strong baseline. Modern software engineering places a high value on writing clean and readable code. This helps other developers under- stand the author’s intent so that they can maintain and extend the code. Developers use meaningful identifier names and natural-language documentation to make this happen (Mar- tin, 2008). As a result, source code contains substantial information that can be exploited by machine-learning algo- rithms. Indeed, sequence modeling on source code has been shown to be successful in a variety of software-engineering tasks, such as code completion (Hindle et al., 2012; Raychev et al., 2014), source code to pseudo-code mapping (Oda et al., 2015), API-sequence prediction (Gu et al., 2016), pro- gram repair (Pu et al., 2016; Gupta et al., 2017), and natural language to code mapping (Iyer et al., 2018), among others. The distributed vector representations of tokens, called to- ken (or word) embeddings, are a crucial component of neural methods for sequence modeling. Learning useful embeddings in a supervised setting with limited data is often difficult. Therefore, many unsupervised learning ap- proaches have been proposed to take advantage of large amounts of unlabeled data that are more readily available. This has resulted in ever more useful pre-trained token em- beddings (Mikolov et al., 2013a; Pennington et al., 2014; Bojanowski et al., 2017). However, the subtle differences in the meaning of a token in varying contexts are lost when each word is associated with a single representation. 
Recent techniques for learning contextual embeddings (McCann et al., 2017; Peters et al., 2018; Radford et al., 2018; 2019; Devlin et al., 2019; Yang et al., 2019) provide ways to com- pute representations of tokens based on their surrounding context, and have shown significant accuracy improvements in downstream tasks, even with only a small number of task-specific parameters. *Equal contribution 1Indian Institute of Science, Bangalore, India 2Google Brain, Mountain View, USA. Correspondence to: Aditya Kanade <[email protected]>, Petros Maniatis <mani- [email protected]>. Proceedings of the 37 th International Conference on Machine Learning, Online, PMLR 119, 2020. Copyright 2020 by the au- thor(s). Inspired by the success of pre-trained contextual embed- dings for natural languages, we present the first attempt to apply the underlying techniques to source code. In partic- ular, BERT (Devlin et al., 2019) produces a bidirectional Transformer encoder (Vaswani et al., 2017) by training it to predict values of masked tokens, and whether two sentences follow each other in a natural discourse. The pre-trained model can be fine-tuned for downstream supervised tasks and has been shown to produce state-of-the-art results on Learning and Evaluating Contextual Embedding of Source Code a number of natural-language understanding benchmarks. In this work, we derive a contextual embedding of source code by training a BERT model on source code. We call our model CuBERT, short for Code Understanding BERT. In order to achieve this, we curate a massive corpus of Python programs collected from GitHub. GitHub projects are known to contain a large amount of duplicate code. To avoid biasing the model to such duplicated code, we perform deduplication using the method of Allamanis (2018). The resulting corpus has 7.4 million files with a total of 9.3 billion tokens (16 million unique). For comparison, we also train Word2Vec embeddings (Mikolov et al., 2013a;b), namely, continuous bag-of-words (CBOW) and Skipgram embeddings, on the same corpus. • We show the efficacy of the pre-trained contextual em- bedding on five classification tasks. Our fine-tuned models outperform baseline LSTM models (with/with- out Word2Vec embeddings), as well as Transformers trained from scratch, even with reduced training data. • We evaluate CuBERT on a pointer prediction task and show that it outperforms state-of-the-art results signifi- cantly. • We make the models and datasets publicly available.1 We hope that future work benefits from our contribu- tions, by reusing our benchmark tasks, and by compar- ing against our strong baseline models. For evaluating CuBERT, we create a benchmark of five clas- sification tasks, and a sixth localization and repair task. The classification tasks range from classification of source code according to presence or absence of certain classes of bugs, to mismatch between a function’s natural language descrip- tion and its body, to predicting the right kind of exception to catch for a given code fragment. The localization and repair task, defined for variable-misuse bugs (Vasic et al., 2019), is a pointer-prediction task. Although similar tasks have appeared in prior work, the associated datasets come from different languages and varied sources; instead we create a cohesive multiple-task benchmark dataset in this work. To produce a high-quality dataset, we ensure that there is no overlap between pre-training and fine-tuning examples, and that all of the tasks are defined on Python code. # 2. 
Related Work Given the abundance of natural-language text, and the rel- ative difficulty of obtaining labeled data, much effort has been devoted to using large corpora to learn about language in an unsupervised fashion, before trying to focus on tasks with small labeled training datasets. Word2Vec (Mikolov et al., 2013a;b) computed word embeddings based on word co-occurrence and proximity, but the same embedding is used regardless of the context. The continued advances in word (Pennington et al., 2014) and subword (Bojanowski et al., 2017) embeddings led to publicly released pre-trained embeddings, used in a variety of tasks. We fine-tune CuBERT on each of the classification tasks and compare the results to multi-layered bidirectional LSTM (Hochreiter & Schmidhuber, 1997) models, as well as Transformers (Vaswani et al., 2017). We train the LSTM models from scratch and also using pre-trainined Word2Vec embeddings. Our results show that CuBERT consistently outperforms these baseline models by 3.2 % to 14.7 % across the classification tasks. We perform a number of additional studies by varying the sampling strategies used for training Word2Vec models, and by varying program lengths. In addition, we also show that CuBERT can be fine-tuned effectively using only 33% of the task-specific labeled data and with only 2 epochs, and that, even then, it attains results competitive to the baseline models trained with the full datasets and many more epochs. CuBERT, when fine-tuned on the variable-misuse localization and repair task, produces high classification, localization and localization+repair accuracies and outperforms published state-of-the-art models (Hellendoorn et al., 2020; Vasic et al., 2019). Our contributions are as follows: To deal with varying word context, contextual word embed- dings were developed (McCann et al., 2017; Peters et al., 2018; Radford et al., 2018; 2019), in which an embedding is learned for the context of a word in a particular sentence, namely the sequence of words preceding it and possibly following it. BERT (Devlin et al., 2019) improved natural- language pre-training by using a de-noising autoencoder. Instead of learning a language model, which is inherently sequential, BERT optimizes for predicting a noised word within a sentence. Such prediction instances are gener- ated by choosing a word position and either keeping it un- changed, removing the word, or replacing the word with a random wrong word. It also pre-trains with the objective of predicting whether two sentences can be next to each other. These pre-training objectives, along with the use of a Transformer-based architecture, gave BERT an accuracy boost in a number of NLP tasks over the state-of-the-art. BERT has been improved upon in various ways, including modifying training objectives, utilizing ensembles, combin- ing attention with autoregression (Yang et al., 2019), and expanding pre-training corpora and time (Liu et al., 2019). However, the main architecture of BERT seems to hold up as the state-of-the-art, as of this writing. • We present the first attempt at pre-training a BERT contextual embedding of source code. 1https://github.com/google-research/ google-research/tree/master/cubert Learning and Evaluating Contextual Embedding of Source Code In the space of programming languages, embeddings have been learned for specific software-engineering tasks (Chen & Monperrus, 2019). 
These include embeddings of variable and method identifiers using local and global context (Al- lamanis et al., 2015), abstract syntax trees (ASTs) (Mou et al., 2016; Zhang et al., 2019), AST paths (Alon et al., 2019), memory heap graphs (Li et al., 2016), and ASTs enriched with data-flow information (Allamanis et al., 2018; Hellendoorn et al., 2020). These approaches require an- alyzing source code beyond simple tokenization. In this work, we derive a pre-trained contextual embedding of tok- enized source code without explicitly modeling source-code- specific information, and show that the resulting embedding can be effectively fine-tuned for downstream tasks. CodeBERT (Feng et al., 2020) targets paired natural- language (NL) and multi-lingual programming-language (PL) tasks, such as code search and generation of code doc- umentation. It pre-trains a Transformer encoder by treating a natural-language description of a function and its body as separate sentences in the sentence-pair representation of BERT. We also handle natural language directly, but do not require such a separation. Natural-language tokens can be mixed with source-code tokens both within and across sentences in our encoding. One of our benchmark tasks, function-docstring mismatch, illustrates the ability of Cu- BERT to handle NL-PL tasks. # 3. Experimental Setup We now outline our benchmarks and experimental study. The supplementary material contains deeper detail aimed at reproducing our results. # 3.1. Code Corpus for Fine-Tuning Tasks We use the ETH Py150 corpus (Raychev et al., 2016) to gen- erate datasets for the fine-tuning tasks. This corpus consists of 150K Python files from GitHub, and is partitioned into a training split (100K files) and a test split (50K files). We held out 10K files from the training split as a validation split. We deduplicated the dataset in the fashion of Allamanis (2018). Finally, we drop from this corpus those projects for which licensing information was not available or whose licenses restrict use or redistribution. We call the resulting corpus the ETH Py150 Open corpus.2 This is our Python fine-tuning code corpus, and it consists of 74,749 training files, 8,302 validation files, and 41,457 test files. # 3.2. The GitHub Python Pre-Training Code Corpus Query’s public-data project, bigquery-public-data). We extracted all files ending in .py, under open-source, re- distributable licenses, removed symbolic links, and retained only files reported to be in the refs/heads/master branch. This resulted in about 16.2 million files. To avoid duplication between pre-training and fine-tuning data, we removed files that had high similarity to the files in the ETH Py150 Open corpus, using the method of Allamanis (2018). In particular, two files are considered similar to each other if the Jaccard similarity between the sets of tokens (identifiers and string literals) is above 0.8 and in addition, it is above 0.7 for multi-sets of tokens. This brought the dataset to 14.3 million files. We then further deduplicated the remaining files, by clustering them into equivalence classes holding similar files according to the same similarity metric, and keeping only one exemplar per equivalence class. This helps avoid biasing the pre-trained embedding. Finally, we removed files that could not be parsed. In the end, we were left with 7.4 million Python files containing over 9.3 billion tokens. This is our Python pre-training code corpus. # 3.3. 
Source-Code Modeling We first tokenize a Python program using the standard Python tokenizer (the tokenize package). We leave lan- guage keywords intact and produce special tokens for syn- tactic elements that have either no string representation (e.g., DEDENT tokens, which occur when a nested program scope concludes), or ambiguous interpretation (e.g., new-line char- acters inside string literals, at the logical end of a Python statement, or in the middle of a Python statement result in distinct special tokens). We split identifiers according to common heuristic rules (e.g., snake or Camel case). Finally, we split string literals using heuristic rules, on white-space characters, and on special characters. We limit all thus pro- duced tokens to a maximum length of 15 characters. We call this the program vocabulary. Our Python pre-training code corpus contained 16 million unique tokens. We greedily compress the program vocabulary into a subword vocabulary (Schuster & Nakajima, 2012) us- ing the SubwordTextEncoder from the Tensor2Tensor project (Vaswani et al., 2018)3, resulting in about 50K to- kens. All words in the program vocabulary can be losslessly encoded using one or more of the subword tokens. We tokenize programs first into program tokens, as de- scribed above, and then encode those tokens one by one in the subword vocabulary. The objective of this encod- ing scheme is to preserve syntactically meaningful bound- aries of tokens. For example, the identifier “snake case” We used the public GitHub repository hosted on Google’s BigQuery platform (the github repos dataset under Big- 2https://github.com/ google-research-datasets/eth_py150_open 3https://github.com/tensorflow/ tensor2tensor/blob/master/tensor2tensor/ data_generators/text_encoder.py Learning and Evaluating Contextual Embedding of Source Code could be encoded as “sna ke ca se”, preserving the snake case split of its characters, even if the subtoken “e c” were very popular in the corpus; the latter encoding might result in a smaller representation but would lose the intent of the programmer in using a snake-case identifier. Similarly, “i=0” may be very frequent in the corpus, but we still force it to be encoded as separate tokens i, =, and 0, ensuring that we preserve the distinction between operators and operands. Both the BERT model and the Word2Vec embeddings are built on the subword vocabulary. Swapped Operand Pradel & Sen (2018) propose the wrong binary operand task where a variable or constant is used incorrectly in an expression, but that task is quite similar to the variable-misuse task we already use. We therefore define another class of operand errors where the operands of non-commutative binary operators are swapped. The operands can be arbitrary subexpressions, and are not restricted to be just variables or constants. To simplify ex- ample generation, we restrict this task to examples in which the operator and operands all fit within a single line. # 3.4. Fine-Tuning Tasks To evaluate CuBERT, we design five classification tasks and a multi-headed pointer task. These are motivated by prior work, but unfortunately, the associated datasets come from different languages and varied sources. We want the tasks to be on Python code, and for accurate results, we ensure that there is no overlap between pre-training and fine-tuning datasets. We therefore create all the tasks on the ETH Py150 Open corpus (see Section 3.1). 
As discussed in Section 3.2, we ensure that there is no duplication between this and the pre-training corpus. We hope that our datasets for these tasks will be useful to others as well. The fine-tuning tasks are described below. A more detailed discussion is presented in the supplementary material. Function-Docstring Mismatch Developers are encour- aged to write descriptive docstrings to explain the function- ality and usage of functions. This provides parallel corpora between code and natural language sentences that have been used for machine translation (Barone & Sennrich, 2017), detecting uninformative docstrings (Louis et al., 2018) and to evaluate their utility to provide supervision in neural code search (Cambronero et al., 2019). We prepare a sentence- pair classification problem where the function and its doc- string form two distinct sentences. The positive examples come from the correct function-docstring pairs. We create negative examples by replacing correct docstrings with doc- strings of other functions, randomly chosen from the dataset. For this task, the existing docstring is removed from the function body. Variable-Misuse Classification Allamanis et al. (2018) observed that developers may mistakenly use an incorrect variable in the place of a correct one. These mistakes may occur when developers copy-paste similar code but forget to rename all occurrences of variables from the original fragment, or when there are similar variable names that can be confused with each other. These can be subtle errors that remain undetected during compilation. The task by Allamanis et al. (2018) is to choose the correct variable name at a location within a C# function. We take the classification version restated by Vasic et al. (2019), wherein, given a function, the task is to predict whether there is a variable misuse at any location in the function, without specifying a particular location to consider. Here, the classifier has to consider all variables and their usages to make the decision. In order to create negative (buggy) examples, we replace a variable use at some location with another variable that is defined within the function. Wrong Binary Operator Pradel & Sen (2018) proposed the task of detecting whether a binary operator in a given expression is correct. They use features extracted from limited surrounding context. We use the entire function with the goal of detecting whether any binary operator in the function is incorrect. The negative examples are created by randomly replacing some binary operator with another type-compatible operator. Exception Type While it is possible to write generic exception handlers (e.g., “except Exception” in Python), it is considered a good coding practice to catch and handle the precise exceptions that can be raised by a code fragment.4 We identified the 20 most common excep- tion types from the GitHub dataset, excluding the catch-all Exception (full list in Table 1 in the supplementary ma- terial). Given a function with an except clause for one of these exception types, we replace the exception with a spe- cial “hole” token. The task is the multi-class classification problem of predicting the original exception type. Variable-Misuse Localization and Repair As an in- stance of a non-classification task, we consider the joint classification, localization, and repair version of the variable- misuse task from Vasic et al. (2019). 
Given a function, the task is to predict one pointer (called the localization pointer) to identify a variable-misuse location, and another pointer (called the repair pointer) to identify a variable from the same function that is the right one to use at the faulty loca- tion. The model is also trained to classify functions that do not contain any variable misuse as bug-free by making the localization pointer point to a special location in the func- tion. We create negative examples using the same method 4https://google.github.io/styleguide/ pyguide.html#24-exceptions Learning and Evaluating Contextual Embedding of Source Code Train Validation Test Variable-Misuse Classification Wrong Binary Operator Swapped Operand Function-Docstring Exception Type Variable-Misuse Localization and Repair 700,708 459,400 236,246 340,846 18,480 700,708 8,192 (75,478) 8,192 (49,804) 8,192 (26,118) 8,192 (37,592) 2,088 (2,088) 8,192 (75,478) 378,440 251,804 130,972 186,698 10,348 378,440 Table 1. Benchmark fine-tuning datasets. Note that for validation, we have subsampled the original datasets (in parentheses) down to 8,192 examples, except for exception classification, which only had 2,088 validation examples, all of which are included. # as used in the Variable-Misuse Classification task. Table 1 lists the sizes of the resulting benchmark datasets extracted from the fine-tuning corpus. The Exception Type task contains significantly fewer examples than the other tasks, since examples for this task only come from functions that catch one of the chosen 20 exception types. by Vasic et al. (2019); whereas in that work, the pointers are computed from the output of an LSTM layer, in our model, they are computed from the last-layer hiddens of BERT. # 3.6. Baselines 3.6.1. WORD2VEC # 3.5. BERT for Source Code The BERT model (Devlin et al., 2019) consists of a multi- layered Transformer encoder. It is trained with two tasks: (1) to predict the correct tokens in a fraction of all positions, some of which have been replaced with incorrect tokens or the special [MASK] token (the Masked Language Model task, or MLM) and (2) to predict whether the two sentences separated by the special [SEP] token follow each other in some natural discourse (the Next-Sentence Prediction task, or NSP). Thus, each example consists of one or two sentences, where a sentence is the concatenation of con- tiguous lines from the source corpus, sized to fit the target example length. To ensure that every sentence is treated in multiple instances of both MLM and NSP, BERT by default duplicates the corpus 10 times, and generates independently derived examples from each duplicate. With 50 % proba- bility, the second example sentence comes from a random document (for NSP). A token is chosen at random for an MLM prediction (up to 20 per example), and from those chosen, 80 % are masked, 10 % are left undisturbed, and 10 % are replaced with a random token. CuBERT is similarly formulated, but a CuBERT line is a log- ical code line, as defined by the Python standard. Intuitively, a logical code line is the shortest sequence of consecutive lines that constitutes a legal statement, e.g., it has correctly matching parentheses. We count example lengths by count- ing the subword tokens of both sentences (see Section 3.3). We train Word2Vec models using the same pre-training corpus as the BERT model. To maintain parity, we gen- erate the dataset for Word2Vec using the same pipeline as BERT but by disabling masking and generation of negative examples for NSP. 
The dataset is generated without any duplication. We train both CBOW and Skipgram models using GenSim ( ˇReh˚uˇrek & Sojka, 2010). To deal with the large vocabulary, we use negative sampling and hierarchical softmax (Mikolov et al., 2013a;b) to train the two versions. In all, we obtain four types of Word2Vec embeddings. 3.6.2. BIDIRECTIONAL LSTM AND TRANSFORMER In order to obtain context-sensitive encodings of input se- quences for the fine-tuning tasks, we use multi-layered bidi- rectional LSTMs (Hochreiter & Schmidhuber, 1997) (BiL- STMs). These are initialized with the pre-trained Word2Vec embeddings. To further evaluate whether LSTMs alone are sufficient without pre-training, we also train BiLSTMs with an embedding matrix that is initialized from scratch with Xavier initialization (Glorot & Bengio, 2010). We also trained Transformer models (Vaswani et al., 2017) for our fine-tuning tasks. We used BERT’s own Transformer implementation, to ensure comparability of results. For com- parison with prior work, we use the unidirectional LSTM and pointer model from Vasic et al. (2019) for the Variable- Misuse Localization and Repair task. # 4. Experimental Results We train the BERT Large model having 24 layers with 16 attention heads and 1024 hidden units. Sentences are cre- ated from our pre-training dataset. Task-specific classifiers pass the embedding of a special start-of-example [CLS] token through feed-forward and softmax layers. For the pointer prediction task, the pointers are computed exactly as # 4.1. Training Details CuBERT’s dataset generation duplicates the corpus 10 times, whereas Word2Vec is trained without duplication. To com- pensate for this difference, we trained Word2Vec for 10 Learning and Evaluating Contextual Embedding of Source Code epochs and CuBERT for 1 epoch. We chose models by validation accuracy, both during hyperparameter searches, and during model selection within an experiment. We pre-train CuBERT with the default configuration of the BERT Large model, one model per example length (128, 256, 512, and 1,024 subword tokens) with batch sizes of 8,192, 4,096, 2,048, and 1,024 respectively, and the default BERT learning rate of 1 × 10−4. Fine-tuned models also used the same batch sizes as for pre-training, and BERT’s default learning rate (5 × 10−5). For both, we gradually warm up the learning rate for the first 10 % of examples, which is BERT’s default value. For Word2Vec, when training with negative samples, we choose 5 negative samples. The embedding size for all the Word2Vec pre-trained models is set at 1,024. For the baseline BiLSTM models, we performed a hyperparameter search on each task and pre-training configuration separately (5 tasks, each trained with the four Word2Vec embeddings, plus the randomly initialized embeddings), for the 512 ex- ample length. For each of these 25 task configurations, we varied the number of layers (1 to 3), the number of hid- den units (128, 256 and 512), the LSTM output dropout probability (0.1 and 0.5), and the learning rate (1 × 10−3, 1 × 10−4 and 1 × 10−5). We used the Adam (Kingma & Ba, 2014) optimizer throughout, and batch size 8,192 for all tasks except the Exception-Type task, for which we used batch size 64. Invariably, the best hyperparameter selection had 512 hidden units per layer and learning rate of 1 × 10−3, but the number of layers (mostly 2 or 3) and dropout prob- ability varied across best task configurations. 
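For reference, a minimal sketch of such a BiLSTM baseline classifier is shown below, assuming a PyTorch-style implementation. The embedding size, hidden size, and the option to initialize from pre-trained Word2Vec vectors follow the settings described above; the pooling of the LSTM outputs and the exact placement of dropout are assumptions.

```python
import torch
import torch.nn as nn

class BiLSTMClassifier(nn.Module):
    """Sketch of the baseline: a multi-layer bidirectional LSTM over (optionally
    pre-trained) subword embeddings, followed by a softmax classifier."""

    def __init__(self, vocab_size=50000, emb_dim=1024, hidden=512,
                 layers=2, n_classes=2, dropout=0.5, pretrained_emb=None):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        if pretrained_emb is not None:
            # e.g. CBOW or Skipgram vectors trained on the pre-training corpus
            self.embed.weight.data.copy_(pretrained_emb)
        self.lstm = nn.LSTM(emb_dim, hidden, num_layers=layers,
                            bidirectional=True, batch_first=True)
        self.dropout = nn.Dropout(dropout)
        self.out = nn.Linear(2 * hidden, n_classes)

    def forward(self, token_ids):
        h, _ = self.lstm(self.embed(token_ids))   # (batch, seq_len, 2 * hidden)
        pooled = self.dropout(h.mean(dim=1))      # pooling strategy is an assumption
        return self.out(pooled)                   # logits over the task's classes
```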
Though no single Word2Vec configuration is the best, CBOW trained with negative sampling gives the most consistent results overall. timizer, a batch size of 256, and example length 512. In contrast to the original work (Vasic et al., 2019), we gen- erated one pair of buggy/bug-free examples per function (rather than one per variable use, per function, which would bias towards longer functions), and use CuBERT’s subword- tokenized vocabulary of 50K subtokens (rather than a lim- ited full-token vocabulary, which leaves many tokens out of vocabulary). We used TPUs for training our models, except for pre- training Word2Vec embeddings, and the pointer model by Vasic et al. (2019). For the rest, and for all evaluations, we used P100 or V100 GPUs. All experiments using pre- trained word or contextual embeddings continued to fine- tune weights throughout training. # 4.2. Research Questions We set out to answer the following research questions. We will address each with our results. 1. Do contextual embeddings help with source-code anal- ysis tasks, when pre-trained on an unlabeled code cor- pus? We compare CuBERT to BiLSTM models with and without pre-trained Word2Vec embeddings on the classification tasks (Section 4.3). 2. Does fine-tuning actually help, or is the Transformer model by itself sufficient? We compare fine-tuned CuBERT models to Transformer-based models trained from scratch on the classification tasks (Section 4.4). For the baseline Transformer models, we originally at- tempted to train a model of the same configuration as Cu- BERT. However, the sizes of our fine-tuning datasets seemed too small to train that large a Transformer. Instead, we performed a hyperparameter search for each task individ- ually, for the 512 example length. We varied the num- ber of transformer layers (1 to 6), hidden units (128, 256 and 512), learning rates (1 × 10−3, 5 × 10−4, 1 × 10−4, 5 × 10−5 and 1 × 10−5) and batch sizes (512, 1,024, 2,048 and 4,096). The best architecture varied across the tasks: for example, 5 layers with 128 hiddens and the highest learning rate worked best for the Function-Docstring task, whereas for the Exception-Type task, 2 layers, 512 hiddens, and the second lowest learning rate worked best. 3. How does the performance of CuBERT on the classifi- cation tasks scale with the amount of labeled training data? We compare the performance of fine-tuned Cu- BERT models when fine-tuning with 33 %, 66 % and 100 % of the task training data (Section 4.5). 4. How does context size affect CuBERT? We compare fine-tuning performance for different example lengths on the classification tasks (Section 4.6). 5. How does CuBERT perform on complex tasks, against state-of-the-art methods? We implemented and fine- tuned a model for a multi-headed pointer prediction task, namely, the Variable-Misuse Localization and Repair task (Section 4.7). We compare it to the models from (Vasic et al., 2019) and (Hellendoorn et al., 2020). Finally, for our baseline pointer model (referred to as LSTM+pointer below) we searched over the following hy- perparameter choices: hidden sizes of 512 and 1,024, token embedding sizes of 512 and 1,024, and learning rates of 1 × 10−1, 1 × 10−2 and 1 × 10−3. We used the Adam op- Except for Section 4.6, all the results are presented for se- quences of length 512. We give examples of classification instances in the supplementary material and include visual- izations of attention weights for them. 
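Before turning to the results, the following is a minimal sketch of the task-specific classification head used during fine-tuning, assuming the pre-trained encoder returns last-layer hiddens for every position. As described in Section 3.5, the [CLS] embedding is passed through a feed-forward layer and a softmax over the task's classes; the encoder interface and all names are illustrative.

```python
import torch
import torch.nn as nn

class ClassificationHead(nn.Module):
    """Sketch of the fine-tuning head: a linear classifier on top of the [CLS]
    embedding of a pre-trained CuBERT-style encoder (2 classes for the defect
    tasks, 20 for Exception Type)."""

    def __init__(self, encoder, hidden=1024, n_classes=2):
        super().__init__()
        self.encoder = encoder                    # pre-trained Transformer encoder
        self.classifier = nn.Linear(hidden, n_classes)

    def forward(self, token_ids):
        hiddens = self.encoder(token_ids)         # (batch, seq_len, hidden), an assumed interface
        cls_emb = hiddens[:, 0, :]                # embedding of the [CLS] token
        return self.classifier(cls_emb)           # logits; trained with cross-entropy
```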
Learning and Evaluating Contextual Embedding of Source Code Setting Misuse Operator Operand Docstring Exception BiLSTM (100 epochs) From scratch CBOW Skipgram ns hs ns hs 76.29 % 88.07 % 83.65 % 80.33 % 86.82 % 89.80 % 85.85 % 90.14 % 87.69 % 78.00 % 83.81 % 89.31 % 85.14 % 77.06 % 88.80 % 89.75 % 80.53 % 86.34 % 76.01 % 52.79 % 89.08 % 67.01 % 60.31 % 60.07 % 65.06 % CuBERT 2 epochs 10 epochs 20 epochs 94.04 % 95.14 % 95.21 % 89.90 % 92.15 % 92.46 % 92.20 % 93.62 % 93.36 % 97.21 % 98.08 % 98.09 % 61.04 % 77.97 % 79.12 % Transformer 100 epochs 78.28 % 76.55 % 87.83 % 91.02 % 49.56 % Table 2. Test accuracies of fine-tuned CuBERT against BiLSTM (with and without Word2Vec embeddings) and Transformer trained from scratch on the classification tasks. “ns” and “hs” respectively refer to negative sampling and hierarchical softmax settings used for training CBOW and Skipgram models. “From scratch” refers to training with freshly initialized token embeddings, without pre-training. # 4.3. Contextual vs. Word Embeddings The purpose of this analysis is to understand how much pre- trained contextual embeddings help, compared to word em- beddings. For each classification task, we trained BiLSTM models starting with each of the Word2Vec embeddings, namely, continuous bag of words (CBOW) and Skipgram trained with negative sampling or hierarchical softmax. We trained the BiLSTM models for 100 epochs and the Cu- BERT models for 20 epochs, and all models stopped im- proving by the end. The resulting test accuracies are shown in Table 2 (first 5 rows and next-to-last row). CuBERT consistently outper- forms BiLSTM (with the best task-wise Word2Vec configu- ration) on all tasks, by a margin of 3.2 % to 14.7 %. Thus, the pre-trained contextual embedding provides superior re- sults even with a smaller budget of 20 epochs, compared to the 100 epochs used for BiLSTMs. The Exception-Type classification task has an order of magnitude less training data than the other tasks (see Table 1). The difference be- tween the performance of BiLSTM and CuBERT is substan- tially higher for this task. Thus, fine-tuning is of much value for tasks with limited labeled training data. We analyzed the performance of CuBERT with the reduced fine-tuning budget of only 2 and 10 epochs (see the remain- ing rows of the CuBERT section in Table 2). Except for the Exception Type task, CuBERT outperforms the best 100-epoch BiLSTM within 2 fine-tuning epochs. On the Exception-Type task, CuBERT with 2 fine-tuning epochs outperforms all but two configurations of the BiLSTM base- line. This shows that, even when restricted to just a few fine-tuning epochs, CuBERT can reach accuracies that are comparable to or better than those of BiLSTMs trained with Word2Vec embeddings. trained embeddings. The results are shown in the first row of Table 2. Compared to those, the use of Word2Vec embed- dings performs better by a margin of 2.7 % to 14.2 %. # 4.4. Is Transformer All You Need? One may wonder if CuBERT’s promising results derive more from using a Transformer-based model for its classi- fication tasks, and less from the actual, unsupervised pre- training. Here we compare our results on the classification tasks to a Transformer-based model trained from scratch, i.e., without the benefit of a pre-trained embedding. As discussed in Section 4.1, the size of the training data limited us to try out Transformers that were substantially smaller than the CuBERT model (BERT Large architecture). 
All the Transformer models were trained for 100 epochs during which their performance stopped improving. We selected the best model within the chosen hyperparameters for each task based on best validation accuracy. As seen from the last row of Table 2, the performance of Cu- BERT is substantially higher than the Transformer models trained from scratch. Thus, for the same choice of archi- tecture (i.e., Transformer) pre-training seems to help by enabling training of a larger and better model. # 4.5. The Effects of Little Supervision The big draw of unsupervised pre-training followed by fine-tuning is that some tasks have small labeled datasets. We study here how CuBERT fares with reduced training data. We sampled uniformly the fine-tuning dataset to 33 % and 66 % of its size, and produced corresponding training datasets for each classification task. We then fine-tuned the pre-trained CuBERT model with each of the 3 different training splits. Validation and testing were done with the same original datasets. Table 3 shows the results. To sanity-check our findings about BiLSTMs, we also trained the BiLSTM models from scratch, without pre- The Function Docstring task seems robust to the reduction Learning and Evaluating Contextual Embedding of Source Code Best of # Epochs Train Fraction Misuse Operator Operand Docstring Exception 2 100 % 66 % 33 % 94.04 % 89.90 % 93.11 % 88.76 % 91.40 % 86.42 % 92.20 % 91.61 % 90.52 % 97.21 % 97.04 % 96.38 % 61.04 % 19.49 % 20.09 % 10 100 % 66 % 33 % 95.14 % 92.15 % 94.78 % 91.51 % 94.28 % 90.66 % 93.62 % 93.37 % 92.58 % 98.08 % 97.93 % 97.36 % 77.97 % 75.24 % 67.34 % 20 100 % 66 % 33 % 95.21 % 92.46 % 94.90 % 91.79 % 94.45 % 91.09 % 93.36 % 93.39 % 92.82 % 98.09 % 97.99 % 97.63 % 79.12 % 77.31 % 74.98 % Table 3. Effects of reducing training-split size on fine-tuning performance on the classification tasks. 83.97 % 79.29 % 92.02 % 88.19 % 95.21 % 92.46 % 95.83 % 93.38 % 78.02 % 88.03 % 93.36 % 95.62 % 98.19 % 98.14 % 98.09 % 97.90 % 62.03 % 72.80 % 79.12 % 81.27 % 74.32 % 78.47 % 80.33 % 81.92 % # Length Misuse Operator Operand Docstring Exception Misuse on BiLSTM Table 4. Best out of 20 epochs of fine-tuning, for four example lengths, on the classification tasks. For contrast, we also include results for Variable Misuse using the BiLSTM Word2Vec (CBOW + ns) classifier as length varies. of the training dataset, both early and late in the fine-tuning process (that is, within 2 vs. 20 epochs), whereas the Excep- tion Classification task is heavily impacted by the dataset reduction, given that it has relatively few training exam- ples to begin with. Interestingly enough, for some tasks, even fine-tuning for only 2 epochs and only using a third of the training data outperforms the baselines. For example, for Variable Misuse and Function Docstring, CuBERT at 2 epochs and 33 % of training data substantially outperforms the BiLSTM with Word2Vec and the Transformer baselines. comparison between the docstring and the function signa- ture, and including more context dilutes the model’s focus. For comparison, we also evaluated the BiLSTM model on varying example lengths for the Variable-Misuse task with CBOW and negative sampling (last column of Table 4). More context does seem to benefit the BiLSTM Variable- Misuse classifier as well. However, the improvement offered by CuBERT with increasing context is significantly greater. # 4.7. Evaluation on a Multi-Headed Pointer Task # 4.6. 
The Effects of Context Context size is especially useful in code tasks, given that some relevant information may lie many “sentences” away from its locus of interest. Here we study how reducing the context length (i.e., the length of the examples used to pre-train and fine-tune) affects performance. We produce data with shorter example lengths, by first pre-training a model on a given example length, and then fine-tuning that model on the corresponding task with examples of that same example length.5 Table 4 shows the results. Although context seems to be important to most tasks, the Function Docstring task paradoxically improves with less context. This may be because the task primarily depends on We now discuss the results of fine-tuning CuBERT to predict the localization and repair pointers for the variable-misuse task. For this task, we implement the multi-headed pointer model from Vasic et al. (2019) on top of CuBERT. The baseline consists of the same pointer model on a unidirec- tional LSTM as used by Vasic et al. (2019). We refer to these models as CuBERT+pointer and LSTM+pointer, re- spectively. Due to limitations of space, we omit the details of the pointer model and refer the reader to the above pa- per. However, the two implementations are identical above the sequence encoding layer; the difference is the BERT encoder versus an LSTM encoder. As reported in Section 4 of that work, to enable comparison with an enumerative approach, the evaluation was performed only on 12K test examples. Instead, here we report the numbers on all 378K of our test examples for both models. 5Note that we did not attempt to, say, pre-train on length 1,024 and then fine-tune that model on length 256-examples, which may also be a practical scenario. We trained the baseline model for 100 epochs and fine-tuned Learning and Evaluating Contextual Embedding of Source Code Model Test Data Setting True Positive Classification Localization Loc+Repair Accuracy Accuracy Accuracy LSTM C 100 epochs 82.41 % 79.30 % 64.39 % 56.89 % CuBERT C 2 epochs 10 epochs 20 epochs 96.90 % 97.23 % 97.27 % 94.87 % 95.49 % 95.40 % 91.14 % 92.33 % 92.12 % 89.41 % 90.84 % 90.61 % CuBERT H 2 epochs 10 epochs 20 epochs 95.63 % 96.07 % 96.14 % 90.71 % 91.71 % 91.49 % 83.50 % 85.37 % 84.85 % 80.77 % 82.91 % 82.30 % Hellendoorn et al. (2020) H 81.90 % 73.80 % Table 5. Variable-misuse localization and repair task. Comparison of the LSTM+pointer model (Vasic et al., 2019) to our fine-tuned CuBERT+pointer model. We also show results on the test data by Hellendoorn et al. (2020) computed by us and reported by the authors in their Table 1. In the Test Data column, C means our CuBERT test dataset, and H means the test dataset used by Hellendoorn et al. (2020). CuBERT for 2, 10, and 20 epochs. Table 5 gives the results along the same metrics as Vasic et al. (2019). The metrics are defined as follows: 1) True Positive is the percentage of bug-free functions classified as bug-free. 2) Classification Accuracy is the percentage of correctly classified examples (between bug-free and buggy). 3) Localization Accuracy is the percentage of buggy examples for which the localization pointer correctly identifies the bug location. 4) Localiza- tion+Repair Accuracy is the percentage of buggy examples for which both the localization and repair pointers make correct predictions. As seen from Table 5 (top 4 rows), CuBERT+pointer outperforms LSTM+pointer consistently across all the metrics, and even within 2 and 10 epochs. sive code corpus. 
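The four variable-misuse metrics defined above can be computed from per-example predictions as in the following sketch; the record fields are hypothetical names, not those of the released evaluation code.

```python
# Sketch of the four variable-misuse metrics defined above. The field names
# (`is_buggy`, `localization_correct`, ...) are hypothetical.
from dataclasses import dataclass
from typing import List

@dataclass
class PointerPrediction:
    is_buggy: bool              # ground truth
    predicted_buggy: bool       # model classifies the example as buggy
    localization_correct: bool  # localization pointer hits the true bug location
    repair_correct: bool        # repair pointer selects the correct variable

def pointer_metrics(preds: List[PointerPrediction]):
    bug_free = [p for p in preds if not p.is_buggy]
    buggy = [p for p in preds if p.is_buggy]
    return {
        "true_positive": sum(not p.predicted_buggy for p in bug_free) / len(bug_free),
        "classification": sum(p.predicted_buggy == p.is_buggy for p in preds) / len(preds),
        "localization": sum(p.localization_correct for p in buggy) / len(buggy),
        "loc_repair": sum(p.localization_correct and p.repair_correct
                          for p in buggy) / len(buggy),
    }
```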
We use only source-code tokens and leave it to the underly- ing Transformer model to infer any structural interactions between them through self-attention. Prior work (Allamanis et al., 2018; Hellendoorn et al., 2020) has argued for ex- plicitly using structural program information (e.g., control flow and data flow). It is an interesting avenue of future work to incorporate such information in pre-training using relation-aware Transformers (Shaw et al., 2018). However, our improved results in comparison to Hellendoorn et al. (2020) show that CuBERT is a simple yet powerful tech- nique and provides a strong baseline for future work on source-code representations. More recently, Hellendoorn et al. (2020) evaluated hybrid models for the same task, combining graph neural networks, Transformers, and RNNs, and greatly improving prior re- sults. To compare, we obtained the same test dataset from the authors, and evaluated our CuBERT fine-tuned model on it. The last four rows of Table 5 show our results and the results reported in that work. Interestingly, the models by Hellendoorn et al. (2020) make use of richer input rep- resentations, including syntax, data flow, and control flow. Nevertheless, CuBERT outperforms them while using only a lexical representation of the input program. While surpassing the accuracies achieved by CuBERT with newer models and pre-training/fine-tuning methods would be a natural extension to this work, we also envision other follow-up work. There is increasing interest in develop- ing pre-training methods that can produce smaller models more efficiently and that trade-off accuracy for reduced model size. Further, our benchmark could be valuable to techniques that explore other program representations (e.g., trees and graphs), in multi-task learning, and to develop related tasks such as program synthesis. # 5. Conclusions and Future Work # Acknowledgements We present the first attempt at pre-trained contextual em- bedding of source code by training a BERT model, called CuBERT, which we fine-tuned on five classification tasks, and compared against BiLSTM with Word2Vec embeddings and Transformer models. As a more challenging task, we also evaluated CuBERT on a multi-headed pointer predic- tion task. CuBERT outperformed the baseline models con- sistently. We evaluated CuBERT with less data and fewer epochs, highlighting the benefits of pre-training on a mas- We are indebted to Daniel Tarlow for his guidance and gen- erous advice throughout the development of this work. Our work has also improved thanks to feedback, use cases, help- ful libraries, and proofs of concept offered by David Bieber, Vincent Hellendoorn, Ben Lerner, Hyeontaek Lim, Rishabh Singh, Charles Sutton, and Manushree Vijayvergiya. Fi- nally, we are grateful to the anonymous reviewers, who gave useful, constructive comments and helped us improve our presentation and results. Learning and Evaluating Contextual Embedding of Source Code # References The adverse effects of code duplica- CoRR, tion in machine learning models of code. abs/1812.06469, 2018. URL http://arxiv.org/ abs/1812.06469. Allamanis, M., Barr, E. T., Bird, C., and Sutton, C. Sug- gesting accurate method and class names. In Proceed- ings of the 2015 10th Joint Meeting on Foundations of Software Engineering, ESEC/FSE 2015, pp. 38–49, New York, NY, USA, 2015. ACM. ISBN 978-1-4503- 3675-8. doi: 10.1145/2786805.2786849. URL http: //doi.acm.org/10.1145/2786805.2786849. Glorot, X. and Bengio, Y. 
Understanding the difficulty of training deep feedforward neural networks. In Pro- ceedings of the thirteenth international conference on artificial intelligence and statistics, pp. 249–256, 2010. Gu, X., Zhang, H., Zhang, D., and Kim, S. Deep api In Proceedings of the 2016 24th ACM SIG- learning. SOFT International Symposium on Foundations of Soft- ware Engineering, FSE 2016, pp. 631–642, New York, NY, USA, 2016. ACM. ISBN 978-1-4503-4218-6. doi: 10.1145/2950290.2950334. URL http://doi.acm. org/10.1145/2950290.2950334. Allamanis, M., Brockschmidt, M., and Khademi, M. Learn- ing to represent programs with graphs. In International Conference on Learning Representations, 2018. Alon, U., Zilberstein, M., Levy, O., and Yahav, E. Code2vec: Learning distributed representations of code. Proc. ACM Program. Lang., 3(POPL):40:1–40:29, January 2019. ISSN 2475-1421. doi: 10.1145/3290353. URL http: //doi.acm.org/10.1145/3290353. Barone, A. V. M. and Sennrich, R. A parallel corpus of python functions and documentation strings for auto- mated code documentation and code generation. arXiv preprint arXiv:1707.02275, 2017. Bojanowski, P., Grave, E., Joulin, A., and Mikolov, T. En- riching word vectors with subword information. Transac- tions of the Association for Computational Linguistics, 5: 135–146, 2017. Cambronero, J., Li, H., Kim, S., Sen, K., and Chandra, S. When deep learning met code search. arXiv preprint arXiv:1905.03813, 2019. Gupta, R., Pal, S., Kanade, A., and Shevade, S. Deep- fix: Fixing common c language errors by deep learn- In Proceedings of the Thirty-First AAAI Confer- ing. ence on Artificial Intelligence, AAAI’17, pp. 1345–1351. AAAI Press, 2017. URL http://dl.acm.org/ citation.cfm?id=3298239.3298436. Hellendoorn, V. J., Sutton, C., Singh, R., Maniatis, P., and Bieber, D. Global relational models of source code. In International Conference on Learning Representations, 2020. URL https://openreview.net/forum? id=B1lnbRNtwr. Hindle, A., Barr, E. T., Su, Z., Gabel, M., and Devanbu, P. On the naturalness of software. In 2012 34th International Conference on Software Engineering (ICSE), pp. 837– 847, June 2012. doi: 10.1109/ICSE.2012.6227135. Hochreiter, S. and Schmidhuber, J. Long short-term memory. Neural Comput., 9(8):1735–1780, Novem- ber 1997. ISSN 0899-7667. doi: 10.1162/neco.1997. 9.8.1735. URL http://dx.doi.org/10.1162/ neco.1997.9.8.1735. Chen, Z. and Monperrus, M. A literature study of embed- dings on source code. arXiv preprint arXiv:1904.03061, 2019. Devlin, J., Chang, M.-W., Lee, K., and Toutanova, K. BERT: Pre-training of deep bidirectional transformers for lan- guage understanding. In Proceedings of the 2019 Con- ference of the North American Chapter of the Associa- tion for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186, Minneapolis, Minnesota, June 2019. Asso- ciation for Computational Linguistics. doi: 10.18653/ v1/N19-1423. URL https://www.aclweb.org/ anthology/N19-1423. Feng, Z., Guo, D., Tang, D., Duan, N., Feng, X., Gong, M., Shou, L., Qin, B., Liu, T., Jiang, D., et al. Code- bert: A pre-trained model for programming and natural languages. arXiv preprint arXiv:2002.08155, 2020. Iyer, S., Konstas, I., Cheung, A., and Zettlemoyer, L. Mapping language to code in programmatic context. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, October 31 - November 4, 2018, pp. 1643– 1652, 2018. URL https://www.aclweb.org/ anthology/D18-1192/. Kingma, D. P. and Ba, J. 
Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014. Li, Y., Tarlow, D., Brockschmidt, M., and Zemel, R. S. Gated graph sequence neural networks. In 4th Interna- tional Conference on Learning Representations, ICLR 2016, San Juan, Puerto Rico, May 2-4, 2016, Conference Track Proceedings, 2016. URL http://arxiv.org/ abs/1511.05493. Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., and Stoyanov, V. Roberta: Learning and Evaluating Contextual Embedding of Source Code A robustly optimized BERT pretraining approach. CoRR, abs/1907.11692, 2019. URL http://arxiv.org/ abs/1907.11692. Louis, A., Dash, S. K., Barr, E. T., and Sutton, C. Deep learning to detect redundant method comments. arXiv preprint arXiv:1806.04616, 2018. Martin, R. C. Clean Code: A Handbook of Agile Soft- ware Craftsmanship. Prentice Hall PTR, Upper Saddle ISBN 0132350882, River, NJ, USA, 1 edition, 2008. 9780132350884. McCann, B., Bradbury, J., Xiong, C., and Socher, R. Learned in translation: Contextualized word vectors. In Guyon, I., Luxburg, U. V., Bengio, S., Wallach, H., Fer- gus, R., Vishwanathan, S., and Garnett, R. (eds.), Ad- vances in Neural Information Processing Systems 30, pp. 6294–6305. Curran Associates, Inc., 2017. Mikolov, T., Chen, K., Corrado, G., and Dean, J. Efficient estimation of word representations in vector space. In 1st International Conference on Learning Representations, ICLR 2013, Scottsdale, Arizona, USA, May 2-4, 2013, Workshop Track Proceedings, 2013a. URL http:// arxiv.org/abs/1301.3781. Pradel, M. and Sen, K. Deepbugs: A learning approach to name-based bug detection. Proc. ACM Program. Lang., 2(OOPSLA):147:1–147:25, October 2018. ISSN 2475- 1421. doi: 10.1145/3276517. URL http://doi.acm. org/10.1145/3276517. Pu, Y., Narasimhan, K., Solar-Lezama, A., and Barzilay, R. Sk p: A neural program corrector for moocs. In Com- panion Proceedings of the 2016 ACM SIGPLAN Interna- tional Conference on Systems, Programming, Languages and Applications: Software for Humanity, SPLASH Com- panion 2016, pp. 39–40, New York, NY, USA, 2016. ACM. ISBN 978-1-4503-4437-1. doi: 10.1145/2984043. 2989222. URL http://doi.acm.org/10.1145/ 2984043.2989222. T., Improving language un- and Sutskever, URL derstanding by generative pre-training. https://s3-us-west-2. com/openai- assets/researchcovers/languageunsupervised/language understanding paper. pdf, 2018. Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., and Sutskever, I. Language models are unsupervised multitask learners. OpenAI Blog, 1(8), 2019. Mikolov, T., Sutskever, I., Chen, K., Corrado, G. S., and Dean, J. Distributed representations of words and phrases and their compositionality. In Burges, C. J. C., Bottou, L., Welling, M., Ghahramani, Z., and Weinberger, K. Q. (eds.), Advances in Neural Information Processing Sys- tems 26, pp. 3111–3119. Curran Associates, Inc., 2013b. Mou, L., Li, G., Zhang, L., Wang, T., and Jin, Z. Convolutional neural networks over tree structures In Proceed- for programming language processing. ings of the Thirtieth AAAI Conference on Artificial Intelligence, AAAI’16, pp. 1287–1293. AAAI Press, URL http://dl.acm.org/citation. 2016. cfm?id=3015812.3016002. Raychev, V., Vechev, M., and Yahav, E. Code com- In Proceed- pletion with statistical language models. ings of the 35th ACM SIGPLAN Conference on Pro- gramming Language Design and Implementation, PLDI ’14, pp. 419–428, New York, NY, USA, 2014. ACM. doi: 10.1145/2594291. ISBN 978-1-4503-2784-8. 2594321. 
URL http://doi.acm.org/10.1145/ 2594291.2594321. Raychev, V., Bielik, P., and Vechev, M. T. Probabilistic model for code with decision trees. In Proceedings of the 2016 ACM SIGPLAN International Conference on Object-Oriented Programming, Systems, Languages, and Applications, OOPSLA 2016, part of SPLASH 2016, Ams- terdam, The Netherlands, October 30 - November 4, 2016, pp. 731–747, 2016. Oda, Y., Fudaba, H., Neubig, G., Hata, H., Sakti, S., Toda, T., and Nakamura, S. Learning to generate pseudo-code from source code using statistical machine translation (t). In 2015 30th IEEE/ACM International Conference on Automated Software Engineering (ASE), pp. 574–584. IEEE, 2015. ˇReh˚uˇrek, R. and Sojka, P. Software Framework for Topic Modelling with Large Corpora. In Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pp. 45–50, Valletta, Malta, May 2010. http://is.muni.cz/publication/ ELRA. 884893/en. Pennington, J., Socher, R., and Manning, C. D. Glove: Global vectors for word representation. In In EMNLP, 2014. Peters, M. E., Neumann, M., Iyyer, M., Gardner, M., Clark, C., Lee, K., and Zettlemoyer, L. Deep contextualized word representations. In Proceedings of NAACL-HLT, pp. 2227–2237, 2018. Schuster, M. and Nakajima, K. Japanese and korean voice search. In International Conference on Acoustics, Speech and Signal Processing, pp. 5149–5152, 2012. Shaw, P., Uszkoreit, J., and Vaswani, A. Self-attention with relative position representations. arXiv preprint arXiv:1803.02155, 2018. Learning and Evaluating Contextual Embedding of Source Code Vasic, M., Kanade, A., Maniatis, P., Bieber, D., and Singh, R. Neural program repair by jointly learning to localize and repair. CoRR, abs/1904.01720, 2019. URL http: //arxiv.org/abs/1904.01720. Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, L. u., and Polosukhin, I. Atten- tion is all you need. In Guyon, I., Luxburg, U. V., Bengio, S., Wallach, H., Fergus, R., Vishwanathan, S., and Gar- nett, R. (eds.), Advances in Neural Information Process- ing Systems 30, pp. 5998–6008. Curran Associates, Inc., 2017. URL http://papers.nips.cc/paper/ 7181-attention-is-all-you-need.pdf. Vaswani, A., Bengio, S., Brevdo, E., Chollet, F., Gomez, A. N., Gouws, S., Jones, L., Kaiser, L., Kalchbrenner, N., Parmar, N., Sepassi, R., Shazeer, N., and Uszko- reit, J. Tensor2tensor for neural machine translation. In Proceedings of the 13th Conference of the Associa- tion for Machine Translation in the Americas, AMTA 2018, Boston, MA, USA, March 17-21, 2018 - Volume 1: Research Papers, pp. 193–199, 2018. URL https: //www.aclweb.org/anthology/W18-1819/. Yang, Z., Dai, Z., Yang, Y., Carbonell, J. G., Salakhut- dinov, R., and Le, Q. V. Xlnet: Generalized autore- gressive pretraining for language understanding. CoRR, abs/1906.08237, 2019. URL http://arxiv.org/ abs/1906.08237. Zhang, J., Wang, X., Zhang, H., Sun, H., Wang, K., and Liu, X. A novel neural source code representation based on abstract syntax tree. In 2019 IEEE/ACM 41st Interna- tional Conference on Software Engineering (ICSE), pp. 783–794. IEEE, 2019. Learning and Evaluating Contextual Embedding of Source Code # A. Open-Sourced Artifacts We release data and some source-code utilities at https://github.com/google-research/ google-research/tree/master/cubert. The repository contains the following: Exception task, which is a multi-class classification task, had a different number of examples per class (i.e., exception types). 
For the Exception task, we show the breakdown of example counts per label for our fine-tuning dataset splits in Table 6. GitHub Manifest A list of all the file versions we included into our pre-training corpus, after removing files simi- lar to the fine-tuning corpus6, and after deduplication. The manifest can be used to retrieve the file contents from GitHub or Google’s BigQuery. This dataset was retrieved from Google’s BigQuery on June 21, 2020. Vocabulary Our subword vocabulary, computed from the pre-training corpus. Pre-trained Models Pre-trained models on the pre- training corpus, after 1 and 2 epochs, for examples of length 512, and the BERT Large architecture. Task Datasets Datasets containing training, validation, and testing examples for each of the 6 tasks. For the clas- sification tasks, we provide original source code and classification labels. For the localization and repair task, we provide subtokenized code, and masks speci- fying the targets. Fine-tuned Models Fine-tuned models for the 6 tasks. Fine-tuning was done on the 1-epoch pre-trained model. For each classification task, we provide the checkpoint with highest validation accuracy; for the localization and repair task, we provide the checkpoint with highest localization and repair accuracy. These are the check- points we used to evaluate on our test datasets, and to compute the numbers in the main paper. Code-encoding Library We provide code for tokenizing Python code, and for producing inputs to CuBERT’s pre-training and fine-tuning models. # B.2. Fine-Tuning Task Datasets In this section, we describe in detail how we produced our fine-tuning datasets (Section 3.4 of the main paper). A common primitive in all our data generation is splitting a Python module into functions. We do this by parsing the Python file and identifying function definitions in the Abstract Syntax Tree that have no other function definition between themselves and the root of the tree. The resulting functions include functions defined at module scope, but also methods of classes and subclasses. Not included are functions defined within other function and method bodies, or methods of classes that are, themselves, defined within other function or method bodies. We do not filter functions by length, although task-specific data generation may filter out some functions (see below). When generating examples for a fixed-length pre-training or fine-tuning model, we prune all examples to the maximum target sequence length (in this paper we consider 128, 256, 512, and 1,024 subtokenized sequence lengths). Note that if a synthetically generated buggy/bug-free example pair differs only at a location beyond the target length (say on the 2,000-th subtoken), we still retain both examples. For instance, for the Variable-Misuse Localization and Repair task, we retain both buggy and bug-free examples, even if the error and/or repair locations lie beyond the end of the maximum target length. During evaluation, if the error or repair locations fall beyond the length limit of the example, we count the example as a model failure. B.2.1. REPRODUCIBLE DATA GENERATION Localization-and-repair Fine-tuning Model We provide a library for constructing the localization-and-repair model, on top of CuBERT’s encoder layers. For the classification tasks, the model is identical to that of BERT’s classification fine-tuning model. Please see the README for details, file encoding and schema, and terms of use. 
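The module-to-function splitting described in Section B.2 above could look roughly like the following sketch: it keeps function definitions that have no enclosing function definition between themselves and the module root, so methods of top-level classes are kept while nested functions are not. It is illustrative only and ignores details of the actual pipeline.

```python
# Illustrative sketch of the function-splitting primitive described in B.2:
# keep defs with no enclosing def between them and the module root.
import ast

def outermost_functions(source: str):
    tree = ast.parse(source)
    found = []

    def visit(node, inside_function):
        for child in ast.iter_child_nodes(node):
            if isinstance(child, (ast.FunctionDef, ast.AsyncFunctionDef)):
                if not inside_function:
                    found.append(child)
                visit(child, True)   # anything nested below a def is excluded
            else:
                visit(child, inside_function)

    visit(tree, False)
    return found

example = (
    "def f():\n"
    "    def g():\n"
    "        pass\n"
    "class C:\n"
    "    def m(self):\n"
    "        pass\n"
)
print([fn.name for fn in outermost_functions(example)])  # ['f', 'm']
```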
We make pseudorandom choices at various stages in fine- tuning data generation. It was important to design a pseu- dorandomness mechanism that gave (a) reproducible data generation, (b) non-deterministic choices drawn from the uniform distribution, and (c) order independence. Order independence is important because our data generation is done in a distributed fashion (using Apache Beam), so dif- ferent pseudorandom number generator state machines are used by each distributed worker. # B. Data Preparation for Fine-Tuning Tasks # B.1. Label Frequencies All four of our binary-classification fine-tuning tasks had an equal number of buggy and bug-free examples. The # 6https://github.com/ Pseudorandomness is computed based on an experiment- wide seed, but is independent of the order in which exam- ples are generated. Specifically, to make a pseudorandom choice about a function, we hash (using MD5) the seed and the function data (its source code and metadata about its provenance), and use the resulting hash as a uniform pseudo- google-research-datasets/eth_py150_open Learning and Evaluating Contextual Embedding of Source Code Exception Type ASSERTION_ERROR ATTRIBUTE_ERROR DOES_NOT_EXIST HTTP_ERROR IMPORT_ERROR INDEX_ERROR IO_ERROR KEY_ERROR KEYBOARD_INTERRUPT NAME_ERROR NOT_IMPLEMENTED_ERROR OBJECT_DOES_NOT_EXIST OS_ERROR RUNTIME_ERROR STOP_ITERATION SYSTEM_EXIT TYPE_ERROR UNICODE_DECODE_ERROR VALIDATION_ERROR VALUE_ERROR Test 155 1,372 7 55 690 586 721 1,926 232 78 119 95 779 107 270 105 809 134 92 2,016 Validation 29 274 2 9 170 139 136 362 58 19 24 16 131 34 61 16 156 21 16 415 Train 100% 66% 33% 86 834 2 38 363 346 427 1,112 166 60 72 71 459 80 131 52 531 63 39 1,117 323 2,444 3 104 1,180 1,035 1,318 3,384 509 166 206 197 1,396 247 432 200 1,564 196 159 3,417 189 1,599 3 78 750 684 881 2,272 336 117 127 142 901 159 284 120 1,038 135 96 2,232 Table 6. Example counts per class for the Exception Type task, broken down into the dataset splits. We show separately the 100% train dataset, as well as its 33% and 66% subsamples used in the ablations. random value from the function, for whatever needs the data generator has (e.g., in choosing one of multiple choices). In that way, the same function will always result in the same choices given a seed, regardless of the order in which each function is processed, thereby ensuring reproducible dataset generation. To select among multiple choices, we hash the function’s pseudorandom value along with all choices (sorted in a canonical order) and use the digest to compute an index within the list of choices. Note that given two choices over different candidates but for the same function, inde- pendent decisions will be drawn. We also use such order- independent pseudorandomness when subsampling datasets (e.g., to generate the validation datasets). In those cases, we hash a sample with the seed, as above, and turn the resulting digest into a pseudorandom number in [0, 1], which can be used to decide given a target sampling rate. To decide whether to generate examples from a function, we parse it, and collect all variable-use locations, and all defined variables, as described above. We discard the function if it has no variable uses, or if it defines fewer than two variables; this is necessary, since if there is only one variable defined, the model has no choice to make but the default one. We also discard the function if it has more than 50 defined variables; such functions are few, and tend to be auto-generated. 
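A minimal sketch of the order-independent pseudorandomness of Section B.2.1 is given below; the exact key layout and digest handling in the released generator may differ.

```python
# Sketch (assumed details) of the MD5-based, order-independent pseudorandomness
# used for reproducible choices and dataset subsampling.
import hashlib

def _digest(*parts: str) -> int:
    h = hashlib.md5()
    for part in parts:
        h.update(part.encode("utf-8"))
    return int.from_bytes(h.digest(), "big")   # 128-bit integer

def uniform_value(seed: str, function_data: str) -> float:
    """Reproducible value in [0, 1) that depends only on (seed, function)."""
    return _digest(seed, function_data) / 2**128

def choose(seed: str, function_data: str, choices):
    """Pick one of `choices`, independent of worker/processing order."""
    ordered = sorted(map(str, choices))          # canonical order
    index = _digest(seed, function_data, *ordered) % len(ordered)
    return ordered[index]

def keep_in_subsample(seed: str, example: str, rate: float) -> bool:
    """Keep an example when subsampling a dataset to the given rate (e.g., 0.33)."""
    return uniform_value(seed, example) < rate
```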
For any function that we do not discard, i.e., an eligible function, we generate a buggy and a bug-free example, as described next. To generate a buggy example from an eligible function, we choose one variable use pseudorandomly (see above how multiple-choice decisions are done), and replace its current occupant with a different pseudorandomly-chosen variable defined in the function (with a separate multiple-choice decision). B.2.2. VARIABLE-MISUSE CLASSIFICATION A variable use is any mention of a variable in a load scope. This includes a variable that appears in the right-hand side of an assignment, or a field dereference. We regard as defined all variables mentioned either in the formal arguments of a function definition, or on the left-hand side of an assignment. We do not include in our defined variables those declared in module scope (i.e., globals). Note that in the work by Vasic et al. (2019), a buggy and bug-free example pair was generated for every variable use in an eligible function. In the work by Hellendoorn et al. (2020), a buggy and bug-free example pair was generated for up to three variable uses in an eligible function, i.e., some functions with one use would result in one example pair, whereas functions with many variable uses would result in three example pairs. In contrast, our work produces exactly one example pair for every eligible function. Eligibility was defined identically in all three projects. Learning and Evaluating Contextual Embedding of Source Code Arithmetic Comparison Membership Boolean Commutative +, * ==, !=, is, is not and, or Non-Commutative -, /, % <, <=, >, >= in, not in dataset, or functions that have an empty docstring. We split the rest into the function definition without the doc- string, and the docstring summary (i.e., the first line of text from its docstring), discarding the rest of the docstring. Table 7. Binary operators. We create bug-free examples by pairing a function with its own docstring summary. B.2.3. WRONG BINARY OPERATOR This task considers both commutative and non-commutative binary operators (unlike the Swapped-Argument Classifica- tion task). See Table 7 for the full list, and note that we have excluded relatively infrequent operators, e.g., the Python integer division operator //. To create buggy examples, we pair every function with an- other function’s docstring summary, according to a global pseudorandom permutation of all functions: for all i, we combine the i-th function (without its docstring) with the Pi-th function’s docstring summary, where P is a pseudoran- dom permutation, under a given seed. We discard pairings in which i == P [i], but for the seeds we chose, no such pathological permuted pairings occurred. If a function has no binary operators, it is discarded. Other- wise, it is used to generate a bug-free example, and a single buggy example as follows: one of the operators is chosen pseudorandomly (as described above), and a different oper- ator chosen to replace it from the same row of Table 7. So, for instance, a buggy example would only swap == with is, but not with not in, which would not type-check if we performed static type inference on Python. We take appropriate care to ensure the code parses after a bug is introduced. For instance, if we swap the operator in the expression 1==2 with is, we ensure that there is space between the tokens (i.e., 1 is 2 rather than the incorrect 1is2), even though the space was not needed before. B.2.4. SWAPPED OPERAND B.2.6. 
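For illustration, a toy version of the buggy-example generation described above is sketched below. It uses Python's ast module and a per-function seeded random.Random in place of the hash-based choices sketched earlier; the real pipeline operates on tokens and metadata, so details differ. Note that ast.unparse requires Python 3.9+.

```python
# Toy sketch of variable-misuse example creation: pick one variable use (a Name
# in load context) and replace it with a different variable defined in the
# function (a formal argument or an assignment target).
import ast
import random

def make_buggy(function_source: str, seed: int):
    tree = ast.parse(function_source)
    fn = tree.body[0]
    defined = {a.arg for a in fn.args.args}
    defined |= {n.id for n in ast.walk(fn)
                if isinstance(n, ast.Name) and isinstance(n.ctx, ast.Store)}
    uses = [n for n in ast.walk(fn)
            if isinstance(n, ast.Name) and isinstance(n.ctx, ast.Load)]
    if not uses or len(defined) < 2 or len(defined) > 50:
        return None                                  # ineligible function
    rng = random.Random(seed)
    use = rng.choice(uses)
    use.id = rng.choice(sorted(defined - {use.id}))  # introduce the misuse
    return ast.unparse(tree)                         # the bug-free example is the input

print(make_buggy("def f(a, b):\n    c = a + b\n    return c + a\n", seed=0))
```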
EXCEPTION TYPE Note that, unlike all other tasks, this task has no notion of buggy or bug-free examples. We discard functions that do not have any except clauses in them. For the rest, we collect all locations holding exception types within except clauses, and choose one of those locations to query the model for classification. Note that a single except clause may hold a comma-separated list of ex- ception types, and the same type may appear in multiple locations within a function. Once a location is chosen, we replace it with a special HOLE token, and create a clas- sification example that pairs the function (with the masked exception location) with the true label (the removed excep- tion type). Since this task targets swapping the arguments of binary operators, we only consider non-commutative operators from Table 7. The count of examples per exception type can be found in Table 6. Functions without eligible operators are discarded, and the choice of the operator to mutate in a function, as well as the choice of buggy operator to use, are done as above, but limiting choices only to non-commutative operators. To avoid complications due to format changes, we only consider expressions that fit in a single line (in contrast to the Wrong Binary Operator Classification task). We also do not consider expressions that look the same after swapping (e.g., a - a). B.2.7. VARIABLE MISUSE LOCALIZATION AND REPAIR The dataset for this task is identical to that for the Variable- Misuse Classification task (Section B.2.2). However, unlike the classification task, examples contain more features rele- vant to localization and repair. Specifically, in addition to the token sequence describing the program, we also extract a number of boolean input masks: B.2.5. FUNCTION-DOCSTRING MISMATCH In Python, a function docstring is a string literal that di- rectly follows the function signature and precedes the main function body. Whereas in other common programming languages, the function documentation is a comment, in Python it is an actual, semantically meaningful string literal. We discard functions that have no docstring from this • A candidates mask, which marks as True all tokens holding a variable, which can therefore be either the location of a bug, or the location of a repair. The first position is always a candidate, since it may be used to indicate a bug-free program. • A targets mask, which marks as True all tokens holding the correct variable, for buggy examples. Note that the correct variable may appear in multiple locations in a function, therefore this mask may have multiple True Learning and Evaluating Contextual Embedding of Source Code positions. Bug-free examples have an all-False targets mask. • An error-location mask, which marks as True the loca- tion where the bug occurs (for buggy examples) or the first location (for bug-free examples). All the masks mark as True some of the locations that hold variables. Because many variables are subtokenized into multiple tokens, if a variable is to be marked as True in the corresponding mask, we only mark as True its first subtoken, keeping trailing subtokens as False. # C. Attention Visualizations In this section, we provide sample code snippets used to test the different classification tasks. Further, Figures 1–5 show visualizations of the attention matrix of the last layer of the fine-tuned CuBERT model (?) for the code snippets. In the visualization, the Y-axis shows the query tokens and X-axis shows the tokens being attended to. 
The attention weight between a pair of tokens is the maximum of the weights assigned by the multi-head attention mechanism. The color changes from dark to light as the weight changes from 0 to 1.

[Figure 1: attention-map visualization over the tokens of the snippet def on_resize(self, event): event.apply_zoom().] Figure 1. Variable Misuse Example. In the code snippet, ‘event.apply_zoom’ should actually be ‘self.apply_zoom’. The CuBERT variable-misuse model correctly predicts that the code has an error. As seen from the attention map, the query tokens are attending to the second occurrence of the ‘event’ token in the snippet, which corresponds to the incorrect variable usage.

[Figure 2: attention-map visualization for a __gt__ method containing the expression ‘other is not self’.] Figure 2. Wrong Operator Example. In this code snippet, ‘other is not self’ should actually be ‘other < self’. The CuBERT wrong-binary-operator model correctly predicts that the code snippet has an error. As seen from the attention map, the query tokens are all attending to the incorrect operator ‘is’.

[Figure 3: attention-map visualization over the tokens of the snippet def __contains__(cls, model): return cls._registry in model.] Figure 3. Swapped Operand Example. In this code snippet, the return statement should be ‘model in cls._registry’. The swapped-operand model correctly predicts that the code snippet has an error. The query tokens are paying substantial attention to ‘in’ and the second occurrence of ‘model’ in the snippet.

[Figure 4: attention-map visualization for a function-docstring example pairing the docstring ‘Get form initial data.’ with an unrelated __add__ method.] Figure 4. Function Docstring Example. The CuBERT function-docstring model correctly predicts that the docstring is wrong for this code snippet. Note that most of the query tokens are attending to the tokens in the docstring.
[Figure 5: attention-map visualization for an exception-classification example in which the exception type of a try/except block around a subprocess call is masked with a __HOLE__ token.] Figure 5. Exception Classification Example. For this code snippet, the CuBERT exception-classification model correctly predicts ‘__HOLE__’ as ‘OSError’. The model’s attention matrix also shows that ‘__HOLE__’ is attending to ‘subprocess’, which is indicative of an OS-related error.
{ "id": "1806.04616" }
1912.09802
Taxonomy and Evaluation of Structured Compression of Convolutional Neural Networks
The success of deep neural networks in many real-world applications is leading to new challenges in building more efficient architectures. One effective way of making networks more efficient is neural network compression. We provide an overview of existing neural network compression methods that can be used to make neural networks more efficient by changing the architecture of the network. First, we introduce a new way to categorize all published compression methods, based on the amount of data and compute needed to make the methods work in practice. These are three 'levels of compression solutions'. Second, we provide a taxonomy of tensor factorization based and probabilistic compression methods. Finally, we perform an extensive evaluation of different compression techniques from the literature for models trained on ImageNet. We show that SVD and probabilistic compression or pruning methods are complementary and give the best results of all the considered methods. We also provide practical ways to combine them.
http://arxiv.org/pdf/1912.09802
Andrey Kuzmin, Markus Nagel, Saurabh Pitre, Sandeep Pendyam, Tijmen Blankevoort, Max Welling
cs.LG, cs.CV, stat.ML
null
null
cs.LG
20191220
20191220
9 1 0 2 c e D 0 2 ] G L . s c [ 1 v 2 0 8 9 0 . 2 1 9 1 : v i X r a # Taxonomy and Evaluation of Structured Compression of Convolutional Neural Networks # Andrey Kuzmin Qualcomm AI Research∗ Qualcomm Technologies Netherlands B.V. [email protected] # Markus Nagel Qualcomm AI Research∗ Qualcomm Technologies Netherlands B.V. [email protected] # Saurabh Pitre Qualcomm AI Research∗ Qualcomm Technologies, Inc. [email protected] # Sandeep Pendyam Qualcomm AI Research∗ Qualcomm Technologies, Inc. [email protected] # Tijmen Blankevoort Qualcomm AI Research∗ Qualcomm Technologies Netherlands B.V. [email protected] # Max Welling Qualcomm AI Research∗ Qualcomm Technologies Netherlands B.V. [email protected] # Abstract The success of deep neural networks in many real-world applications is leading to new challenges in building more efficient architectures. One effective way of making networks more efficient is neural network compression. We provide an overview of existing neural network compression methods that can be used to make neural networks more efficient by changing the architecture of the network. First, we introduce a new way to categorize all published compression methods, based on the amount of data and compute needed to make the methods work in practice. These are three ‘levels of compression solutions’. Second, we provide a taxonomy of tensor factorization based and probabilistic compression methods. Finally, we perform an extensive evaluation of different compression techniques from the literature for models trained on ImageNet. We show that SVD and probabilistic compression or pruning methods are complementary and give the best results of all the considered methods. We also provide practical ways to combine them. Keywords: Deep Learning, Convolutional Neural Networks, Model Compression, Struc- tured Pruning ∗. Qualcomm AI Research is an initiative of Qualcomm Technologies, Inc. # Kuzmin, Nagel, Pitre, Pendyam, Blankevoort and Welling # 1. Introduction Due to the tremendous success of deep learning, neural networks can now be found in applications everywhere. Running in the cloud, on-device, or even on dedicated chips, large deep learning networks now form the foundation for many real-world applications. They are found in voice assistants, medical image analyzers, automatic translation tools, software that enhances photographs, and many other applications. In these real-world applications, the performance of neural networks is an important topic. Well-performing deep neural networks are large and expensive to execute, restricting their use in, e.g., mobile applications with limited compute. Even for large-scale cloud-based solutions, such as services that process millions of images or translations, neural network efficiency directly impacts compute and power costs. Alongside quantization (Krishnamoorthi (2018)) and optimizing kernels for efficient deep learning execution (Chetlur et al. (2014)), neural network compression is an effective way to make the run-time of these models more efficient. With compression, we mean improving the run-time of models, as opposed to compressing the actual size of the network for storage purposes. In this paper, we will describe and compare several methods for compressing large deep-learning architectures for improved run-time. Even for architectures that were designed to be efficient, such as MobilenetV2 (Sandler et al. (2018) and EfficientNet (Tan and Le (2019)), it is still helpful to do neural network compression (Liu et al., 2019; He et al., 2017). 
There has been a debate in the deep-learning literature on the efficacy of compression. Liu et al. (2018) argues that network compression does not help, and one could have trained that similar architecture from scratch. However, the “The lottery-ticket hypothesis”, Frankle and Carbin (2018) provides arguments for the hypothesis that it’s better to train a large network and compress it, rather than training a smaller model from scratch. We will see in our result section more evidence for the latter, indicating it helps to compress networks after training, as opposed to starting with a more efficient architecture. In this paper, we systematically categorize the many different compression methods that have been published and test all of them on a large scale image classification task. We group methods by their practical usage into 3 different levels. Level 1: Methods that do not use data. Level 2: methods that do not use back-propagation and Level 3: methods that use a training procedure. Within these categories, we look at several different ways of doing neural network compressing, including tensor-decomposition, channel-pruning, and several Bayesian inspired approaches. Specifically, we look only at structured pruning approaches, where the size of the tensors of the network decreases in size. This is opposed to unstructured pruning methods, such as (Han et al. (2015)) and (Molchanov et al. (2017)), that remove individual weights from the network. These types of pruning methods require specific hardware to obtain speed-ups, whereas structured pruning methods more directly provide improved speed on most devices. # 2. Related work SVD-based methods. SVD decomposition was first used by Denil et al. (2013) to demonstrate redundancy in weight parameters in deep neural networks. Following this ap- 2 Review of Structured CNN compression proach, several works employ low-rank filter approximation (Jaderberg et al., 2014; Denton et al., 2014) to reduce inference time for pre-trained CNN models. One of the first methods for accelerating convolutional layers by applying low-rank approximation to the kernel ten- sors is Denton et al. (2014). The authors suggest several decompositions approaches applied to parts of the kernel tensor obtained by bi-clustering. The spatial decomposition method from Jaderberg et al. (2014) decomposes a k × k filter into a k × 1 and 1 × k while exploiting redundancy among multiple channels. Another notable improvement of SVD compression is reducing the error introduced by filter approximation based on input data. The approach suggested by Zhang et al. (2016) uses per-layer minimization of errors in activations for compressed layers. Tensor decomposition-based methods. Several approaches for structured CNN com- pression based on tensor decomposition applied to 4D convolutional kernels were suggested. An overview of tensor decomposition techniques is given in Kolda and Bader (2009). The authors of Lebedev et al. (2014) apply CP-decomposition to compress a kernel of a convo- lutional filter. The work of Kim et al. (2015) suggests a CNN compression approach based on the Tucker decomposition. The authors also suggest employing analytic solutions for variational Bayesian matrix factorization (VBMF) by Nakajima et al. (2013) for the rank selection. Another tensor decomposition approach that was applied to model compression is the tensor-train decomposition (Oseledets, 2011). It is used in Novikov et al. (2015) for compression of fully-connected layers, and Garipov et al. 
(2016) applies it for convolutional layers. Another direction in convolutional layer compression is to increase the dimensionality by reshaping a kernel into a higher-dimensional tensor (Su et al., 2018; Novikov et al., 2015). For example, a 3 × 3 convolutional kernel with 64 input, and 64 output channels represented as 6-dimensional 8×8×8×8×3×3 tensor instead 64×64×3×3, where 8×8 corresponds to one way of factorizing 64. Using any other way of factorizing 64 in combination with any of the three tensor decomposition techniques yields a new compression technique. An extensive study of applying CP-decomposition, Tucker decomposition, and tensor-train decomposition in combination with factorizing kernel dimensions were published by Su et al. (2018). They consider compression of both fully-connected and convolutional layers. Pruning methods. One of the ways to reduce inference time for pre-trained models is to prune redundant channels. The work of Li et al. (2016) is focused on using channel norm magnitude as a criterion for pruning. Another approach is to use a lasso feature selection framework for choosing redundant channels while minimizing reconstruction error for the output activation based on input data (He et al., 2017). Compression ratio selection methods. As every layer of a neural network has differ- ent sensitivity to compression, any SVD or tensor decomposition technique can be further improved by optimizing per layer compression ratios. The methods Kim et al. (2019); Kim and Kyung (2018) suggest efficient search strategies for the corresponding discrete opti- mization problem. A learning-based strategy based on reinforcement learning is suggested in He et al. (2018). Loss-aware compression. While compression and pruning methods reduce complexity, most of the methods assume equal importance of every model parameter for the accuracy 3 Kuzmin, Nagel, Pitre, Pendyam, Blankevoort and Welling Alimited portion — Full of training data data Level 1. Data-free xX x compression Level 2. Data-optimized VY xX x compression Level 3. Full data SA VY compression Backpropagation Figure 1: Levels used for comparison of the model compression methods. of the final model. One way to improve compression methods is to estimate the importance of each of the weights, and use this information while pruning. Several methods suggest introducing importance based on loss function increase (Wang et al., 2019; Gao et al., 2018; LeCun et al., 1990; Hassibi et al., 1993). The increase of the loss function is often estimated based on first or second-order linear approximations. Probabilistic compression. Another family of methods from the literature suggests adding a term to a loss function that controls the complexity of the model. Typically, a collection of stochastic gates is included in a network, which determines which weights are to be set to zero. Methods following this approach include Louizos et al. (2018); Neklyudov et al. (2017); Dai et al. (2018), and a recent survey is provided in Gale et al. (2019). Efficient architecture design. Several works aim at finding the optimal trade-off be- tween model efficiency and prediction accuracy. MobileNet V1 (Howard et al., 2017) is based on combining depth-wise separable convolutions and depth-wise convolutions to re- duce the number of FLOPs. MobileNet V2 (Sandler et al., 2018) is based on the linear bottleneck and inverted residual structure and further improves the efficiency of the model. 
MnasNet (Tan et al., 2019) is based on a combination of squeeze and excitation blocks. Another efficient architecture (Zhang et al., 2018) leverages group convolutions and channel shuffle operations. Some of the more recent architectures (Howard et al., 2019; Wu et al., 2019) are based on combining efficient handcrafted layers with neural architecture search.

# 2.1 Levels of compression solutions

To facilitate a comparison of the methods proposed in the literature, we refer to practical use cases of model compression. The following levels of compression solutions are introduced in a way similar to Nagel et al. (2019). The definition of each level depends on the amount of training data and computational resources available when using a compression method.

• Level 1. Data-free compression. No data or training pipeline is available in this case. Nevertheless, the goal is to produce an efficient model with predictions as close to those of the original model as possible.

• Level 2. Data-optimized compression. A limited number of batches of the training data are used to guide the compression method, with no ground-truth labels being used. In this case, layer-wise optimization of the parameters of the compressed model is used to improve the predictions. No back-propagation is used at this level.

• Level 3. Full data compression. This level corresponds to fine-tuning the compressed model using the full training set, or training an efficient model from scratch using the full amount of data. Full back-propagation is used in this case, so the computational complexity is comparable to that of the original training procedure.

Different from Nagel et al. (2019), in the current work we omit introducing one more level for methods that change the architecture, as compression is complementary to architecture search methods and allows further performance improvements to be obtained even when applied to handcrafted or learning-based efficient architectures (He et al., 2018; Liu et al., 2019). The compression levels are summarized in figure 1. Using the levels formulation, all the compression methods can be categorized and compared in a similar setting. The practical choice of compression level depends on the specific envisioned use case.

# 3. Structured compression methods overview

To define a quantitative measure of compression, we use the number of multiply-accumulate operations (MAC units, or MACs) used by a neural network at inference time. Given a network with L layers with c_i operations in each, the total computational complexity C is expressed as:

C = \sum_{i=1}^{L} c_i.   (1)

Assuming that a compression technique reduces the number of operations per layer to \hat{c}_i, the per-layer compression ratio \alpha_i can be computed as:

\alpha_i = 1 - \frac{\hat{c}_i}{c_i}.   (2)

The whole-model compression rate \alpha can be defined in a similar way:

\alpha = 1 - \frac{\hat{C}}{C},   (3)

where \hat{C} is the total number of operations in the compressed model. In practice, the model's accuracy has a different sensitivity to the compression of different layers. The problem of selecting an optimal compression ratio for each of the layers given a target whole-model compression ratio is considered in section 3.4.
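As a quick illustration of Eqns. (1)–(3), the helper below (not taken from the paper) computes per-layer and whole-model compression ratios from layer shapes and compressed MAC counts.

```python
# Illustrative helper for Eqns. (1)-(3): per-layer MACs c_i = k^2*s*t*h*w,
# per-layer ratios alpha_i, and the whole-model ratio alpha.
def conv_macs(k, s, t, h, w):
    return k * k * s * t * h * w

def compression_ratios(original_layers, compressed_macs):
    c = [conv_macs(*layer) for layer in original_layers]
    per_layer = [1.0 - c_hat / c_i for c_i, c_hat in zip(c, compressed_macs)]
    whole_model = 1.0 - sum(compressed_macs) / sum(c)
    return per_layer, whole_model

# Example: a single 3x3, 64->64 layer on a 56x56 feature map, compressed to 40%
# of its original MACs, gives alpha_1 = alpha = 0.6 (up to float rounding).
layers = [(3, 64, 64, 56, 56)]
print(compression_ratios(layers, [0.4 * conv_macs(*layers[0])]))
```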
# 3.1 Level 1. Data-free compression methods

A convolutional layer is specified by the kernel W ∈ R^{k×k×s×t}, where s is the number of input channels, t is the number of output channels, and the spatial size of the filter is k × k. The kernel is assumed to be square of odd size k for simplicity; δ = (k − 1)/2 denotes its "half-width". A convolution is a linear transformation of the feature map X ∈ R^{s×w×h} into an output tensor Y ∈ R^{t×w×h}. We assume the spatial dimensions of the input and output feature maps are equal in order to avoid notational clutter. The convolution is defined as follows:

Y(i_t, i_x, i_y) = \sum_{i_s=1}^{s} \sum_{i'_x=i_x-δ}^{i_x+δ} \sum_{i'_y=i_y-δ}^{i_y+δ} W(i'_x - i_x + δ, i'_y - i_y + δ, i_s, i_t) X(i_s, i'_x, i'_y). (4)

We omit the bias term for notational simplicity. The number of MACs in a convolutional layer is c = k^2 s t h w.

# 3.1.1 SVD methods

To leverage low-rank matrix approximation for the compression of a convolutional layer, the kernel tensor is transformed into a matrix. In this case, the dimensions of a tensor are referred to as modes. There are seven types of possible matricizations of a 4-dimensional tensor. Two of these are used in the compression methods that are introduced in the following paragraphs.

Weight SVD. This method is based on reshaping the kernel tensor into a matrix W ∈ R^{k^2 s×t} followed by a low-rank approximation. This type of matricization corresponds to merging three of the four original modes k × k × s × t into a single supermode. The approximate kernel \widetilde{W} of rank r is expressed as follows:

\widetilde{W}(i_x, i_y, i_s, i_t) = \sum_{i_r=1}^{r} W_1(i_x, i_y, i_s, i_r) W_2(i_r, i_t). (5)

The schematic diagram of the summation is given in figure 2(b). The factors can be obtained using the SVD decomposition W = USV^T and assigning W_1 = US^{1/2} and W_2 = S^{1/2}V^T. The first factor W_1 corresponds to a convolution with a filter size of k × k, s input channels, and r output channels, whereas the second factor corresponds to a 1 × 1 convolution with r input channels and t output channels. The total number of MACs in the decomposed layer equals c(r) = k^2 s r h w + r t h w. The compression ratio of the decomposed layer is fully determined by the rank r.

Spatial SVD. This method is based on reshaping the kernel to a matrix W ∈ R^{sk×tk}. The corresponding low-rank approximation of rank r can be expressed as (see figure 2(c)):

\widetilde{W}(i_x, i_y, i_s, i_t) = \sum_{i_r=1}^{r} W_v(i_y, i_s, i_r) W_h(i_x, i_r, i_t). (6)

The factor W_v(i_y, i_s, i_r) corresponds to a convolution with a vertical filter of size k × 1, and the factor W_h(i_x, i_r, i_t) corresponds to a horizontal 1 × k convolution. The total number of MACs is c(r) = k r s w h + k r t w h. The trade-off between the computational complexity and the approximation error is defined by the rank r. The decomposition was introduced in Jaderberg et al. (2014). In the original paper, an iterative optimization algorithm based on conjugate gradient descent was used to calculate the factorization. In a subsequent work by Tai et al. (2015), the iterative scheme was replaced by a closed-form solution based on an SVD decomposition.

# 3.1.2 Tensor decompositions

In addition to matricization of the convolutional kernel, several compression techniques based on tensor decompositions were suggested in Lebedev et al. (2014); Kim et al. (2015); Su et al. (2018), where the kernel is directly treated as a 4-dimensional tensor and decomposed.
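As a concrete illustration of the weight SVD factorization in Eqn. (5), the following PyTorch sketch splits a single convolutional layer into a k × k convolution with r output channels followed by a 1 × 1 convolution, in the data-free (level 1) setting. The function name and the choice of rank are ours; this is a minimal sketch rather than the exact implementation used in the experiments reported later.

```python
# Minimal PyTorch sketch of data-free weight SVD for one conv layer (Eqn. 5).
import torch
import torch.nn as nn

def weight_svd_decompose(conv: nn.Conv2d, rank: int):
    """Split a k x k conv (s -> t channels) into a k x k conv (s -> r) followed
    by a 1 x 1 conv (r -> t), using a truncated SVD of the reshaped kernel."""
    t, s, k, _ = conv.weight.shape
    # Matricize the kernel to (s*k*k) x t, merging the s, k, k modes into a supermode.
    W = conv.weight.detach().permute(1, 2, 3, 0).reshape(s * k * k, t)
    U, S, Vh = torch.linalg.svd(W, full_matrices=False)
    sqrt_S = torch.sqrt(S[:rank])
    W1 = U[:, :rank] * sqrt_S           # (s*k*k) x r, plays the role of U S^(1/2)
    W2 = sqrt_S[:, None] * Vh[:rank]    # r x t, plays the role of S^(1/2) V^T

    first = nn.Conv2d(s, rank, k, padding=conv.padding, stride=conv.stride, bias=False)
    second = nn.Conv2d(rank, t, 1, bias=True)
    first.weight.data = W1.reshape(s, k, k, rank).permute(3, 0, 1, 2).contiguous()
    second.weight.data = W2.t().reshape(t, rank, 1, 1).contiguous()
    second.bias.data = conv.bias.detach() if conv.bias is not None else torch.zeros(t)
    return nn.Sequential(first, second)

# Example: compress a single layer and check the approximation error on random input.
conv = nn.Conv2d(64, 64, 3, padding=1)
approx = weight_svd_decompose(conv, rank=16)
x = torch.randn(1, 64, 56, 56)
print((conv(x) - approx(x)).abs().mean())
```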
In this case, different choice of dimensions order in the kernel yields different factorizations. CP-decomposition. For a kernel W ∈ Rs×k×k×t, the CP-decomposition of rank r is defined as follows (Kolda and Bader (2009)): Wis, t,t, = owe ig, tr )WO (Hh, i) WO (U,, ip) W (it, iy). (7) ip=l1 7 # Kuzmin, Nagel, Pitre, Pendyam, Blankevoort and Welling (a) (b) is iy ly Ut (c) Figure 3: Schematic view of the considered tensor decomposition approaches. (a) CP- decomposition. All the four factors share the same rank r used in the summation in Eqn. 7 and have one index for each corresponding to the original indices 7/,, iy, is, and i of the convolution (Eqn. 4). (b) Tucker decomposition. The core factor Gi, t,t, tr.) which can be viewed as a compressed version of the original tensor shares indices i,, and i;, with the factors Wi(is,ir,) and We(it, ir), respectively. (c) Tensor-train decomposition. The original operation is decomposed into a chain of four factors WO(ig, ir,), W) (in, ies ing), WA (ip, t, ry), WA (ip5, i), each of which keeps one index of the original convolution kernel. Each of the ranks 1, rg, and r3 is shared between a pair of subsequent factors. 8 # Review of Structured CNN compression Given the factorization, the original convolution (Eqn. 4) can be approximately computed in four steps (Lebedev et al. (2014)): XO (60,0) = ow ig, tr) X(t, ty, ts), (8) is=1 # iy +6 XO tin) = YE WOK, iy + 6,1 )XO (Hts tn), (9) it, =iy-5 # itd Xin, iy, ir) = > we a, — tz + 6,ir), KOE, iy, tr), (10) i =ig—O i =ig—O ->> Y (it, ix, iy) = W (t)(it, ir)X (3)(ix, iy, ir), (11) ir=1 where X(i,, 7, y)> XAG iy,i,), and X) (iz, iy,%,) are tensors which are intermediate results after each step. The diagram of the decomposition and the indices used in the sum- mations is given in the figure 3(a). Computing X (i,, 7/,, i,) and Y (it, ix, iy) corresponds to convolutions with filter size 1 x 1. The steps of computing X)(i!,, i iy, ty) and XB (ip, i iy, tr) in turn corresponds to convolutions with vertical and horizontal filters, respectively. The total number of MACs in the decomposed layer is c(r) = (swh + 2kwh + twh)r. Thus, it depends solely on the value of the rank r which controls the approximation error and computational complexity. Tucker decomposition. For a kernel W ∈ Rk×k×s×t, a partial Tucker decomposition (Kolda and Bader (2009)) is defined as: Wit, ty.is, i) = x S> al Gil ty ins ing) W (is, in, JW (it, tng), 12) pal tpg=l where G(ix, iy, ir1, ir2) is a core tensor of size k ×k ×r1 ×r2 and W (1)(is, ir1) and W (2)(it, ir2) are the factor matrices. Computation of the convolution can be decomposed into the fol- lowing three steps (Kim et al. (2015)): X(( (tiny 8,4 iy) -S> wi (is, tr) X (a7, ty dys is), 13) is=1 int ytd ory XOVinsiysinn) = Yo SS SO GG, = in + 6,0, = ty + Bin ry) XO (HH iy), (14) U=tz— ty =ty—6 try =1 Y (tt, tz, ty) -> we (it, tpg )X (in, iy, tr), 15) ipg=l where the steps in Eqn. 13 and Eqn. 15 correspond to convolutions with filter size 1 × 1, and the step in Eqn. 14 corresponds to a convolution with the original filter size with r1 input 9 Kuzmin, Nagel, Pitre, Pendyam, Blankevoort and Welling channels and r2 output channels. The total number of MACs is c(r) = sr1wh + k2r1r2wh + tr2wh and defined by two ranks r1 and r2, so that there is one degree of freedom when selecting the ranks given a predefined compression ratio. The original work by Kim et al. (2015) suggests using variational Bayesian matrix factorization (Nakajima et al. (2013)) for rank selection. Tensor-train decomposition. 
After reordering the modes as W ∈ Rs×k×k×t, a tensor- train decomposition for the kernel is defined as the following sequence of matrix products (Oseledets (2011)): Wis, t,,@ byt -> 3 3 WO (ig, ing DW (ing th, ing) WO) ing, ty, ing WO (ing it). tp =litpg=l ipg=l The original convolution (Eqn. 4) can be computed in four stages (Su et al. (2018)): X(( (tiny 8,4 iy) -S> wi (is, in) X (i, Yas iy, is), 17) is=1 TL ta td XO Gg, ttm) = YO YO Win i, = te +4, 12) XO (UH in), 18) inp =1 i, 12-6 ry ty +6 XP (in, iy, ing) = > > WO (in, t, — ty +6, ing) XO (in, th, ing), 19) ing=1 tt, =iy—5 # ing=1 tt, =iy—5 Y (it, ix, iy) = W (4)(it, ir2)X (3)(ix, iy, ir3). (20) r3=1 The steps in Eqn. 17, and Eqn. 20 correspond to 1×1 convolutions and the steps in Eqn. 18, and Eqn. 19 correspond to a convolution with vertical and horizontal filters, respectively. The total number of MACs is c(r) = sr1wh+kr1r2wh+kr2r3wh+r3twh. The decomposition has three ranks r1, r2, and r3 that determine the approximation error and the computational complexity of the compressed layer. # 3.2 Level 2. Data driven compression methods # 3.2.1 Per-layer data-optimized SVD methods All level 1 compression methods minimize the kernel approximation error. This does not use any information of the actual data which is being processed by the layer of the network. One of the ways to improve level 1 methods is to formulate a method that minimizes the error in the activations produced by the compressed layer. A method based on minimizing the error of the output for the specific data allows one to significantly decrease the loss 10 (16) Review of Structured CNN compression Original Weight SVD Spatial SVD CP decomposition Tucker decomposition Tensor-train Num. parameters k2st (k2s + t)r (ksr + kt)r (2k + s + t)r sr1 + k2r1r2 + tr2 sr1 + kr1r2 + kr2r3 + r3t Comp. complexity k2whst (k2whs + wht)r (kwhs + kwht)r (swh + 2kwh + twh)r sr1wh + k2r1r2wh + tr2wh sr1wh + kr1r2wh+ kr2r3wh + r3twh Num. ranks - 1 1 1 2 3 Table 1: Comparison of SVD and tensor decomposition methods in terms of computational complexity and the number of parameters. in accuracy after compression. This section describes multiple approaches for per-layer data-optimized SVD compression. Data SVD. Given a kernel tensor W reshaped into a matrix of shape t × k2s, an input vector x ∈ Rk2s, the response y ∈ Rt is given by: y = Wx + b. (21) Given the output data, the optimal projection matrix M ∈ Rt×t is given as a solution of the following optimization problem: n argmin (vi -—¥) — M(yi - Vils M » : ee (22) i=1 rank M ≤ r, # s.t. where yj; are outputs sampled from the training set, y is the sample mean, and n is the number of samples. The solution is given by principal component analysis (PCA) as fol- lows (Golub and Van Loan, 1996). Let Y © R” be a matrix which concatenates the entries of (yj —y). Given the eigendecomposition of the covariance matrix YYT = USU’, the values of M are given by: M = UrUT r , (23) where Ur are the first r are eigenvectors. This solution for M can be used to approximate the original layer. Under the low rank assumption for vector y, the output can be expressed as: y = MWx + b, (24) where x is the input vector and b is the bias. Using Eqn. 23, the original kernel W can be approximated as W = W,We, where W,; = Up, and W2 = UtLw. This method corresponds to a data-optimized version of the weight SVD decomposition (Eqn. 5). Asymmetric data SVD. One of the main issues in neural network compression is the accumulation of error when compressing a deep model. 
Since every layer is compressed subsequently, compressed layers could take into account the error introduced by previous layers in their decomposition for better performance. An asymmetric formulation was intro- duced to do this in Zhang et al. (2016). As opposed to optimizing the reconstruction error 11 # Kuzmin, Nagel, Pitre, Pendyam, Blankevoort and Welling KuzMIN, NAGEL, PITRE, PENDYAM, BLANKEVOORT AND WELLING conv k x k convix1 sxhxw rxhxw —y txhxw # (a) Weight SVD convkx1 conv 1xk => 9} => La (b) Spatial SVD (c) CP-decomposition conv 1xk conv1x1 depth-wise separable conv 1 xk depth-wise separable convix1 sxhxw a rxhxw al rxhxw aly rxhxw —y txhxw (d) Tucker decomposition (e) Tensor-train decomposition convk xk conv1x1 conv 1x1 sxhxw —y roxXhxw mxhxw —y txhxw conv kx 1 convix1 conv 1xk convix1 sxhxw JA Pg rxhxw q rxhxw eel xhxw Se bxhxw Figure 4: Overview of different tensor decomposition approaches. for the approximated layer based on the original input data, the asymmetric formulation is based on the input data from the previous approximated layer. This approach allows one to significantly reduce the full-model accuracy drop in level 2 settings using a limited amount of input data at the cost of solving a more general optimization problem. Given the output of the previous compressed layer X, the activations are given by: (25) # z= WS +b. 12 # Review of Structured CNN compression In order to minimize the error introduced by compression, the following optimization prob- lem is solved: # h F = z\I2 argmin II(vi-Â¥) — Mui — 2) Ilo M 2X (26) # i=1 rank M ≤ r. # s.t. The problem is based on minimizing the same error as in Eqn. 22, but it depends on both the original layer outputs y;, and the compressed layer outputs z;. After combining responses (yi — y), and (z; — Z) into matrices Y and Z, the minimization can be written as: a ay2 argmin ly - Ma{| M F (27) # s.t. # rank M ≤ r. The problem has a closed form solution for M based on generalized SVD (Takane and Jung, 2006). The new bias for the compressed layer can be computed as bnew = z − My. The reconstruction error of the asymmetric data SVD can be further improved be in- corporating the activation function into the formulation in Eqn. 27. a al argmin Iw) — f(MZ+ b)| Ps F Mb (28) # s.t. rank M ≤ r, Where Y is a matrix concatenating the entries of yj with no mean subtracted, and b is a new bias. This problem is solved using the following relaxation: ~ AI|2 argmin || f(Y) — f(Z)||2. +A \z —~MZ— b| M,b,Z F (29) s.t. rankM <7, where Z is an auxiliary variable, and is a penalty parameter. The second term of the objective is equivalent to Eqn. 27. The first term can be minimized using SGD for any activation function, or in the case of the ReLU function, it can be solved analytically (Zhang et al. (2016)). Minimization of the objective Eqn. 29 is performed by using alternating minimization. The first sub-problem corresponds to fixing Z and solving for M, b, and vice versa for the second sub-problem. Increasing values of parameters A are used through the iterations of the method. Asym3D. The authors of Zhang et al. (2016) further propose to use the formulation in Eqn. 27 to perform a double decomposition based on the spatial and data SVD methods. Given two spatial SVD layers W,, W),, the formulation in Eqn. 27 can be applied in order to perform a further decomposition of the second layer W,. 
The trade-off between accuracy and the computational complexity in this case is determined by two ranks: r, is the rank of the original spatial SVD decomposition and rank rq is the rank of the data optimized decomposition applied to the factor W,. The final decomposed architecture consists of a k x 1 filter with r, output channels followed by a 1 x k filter with rg output channels and a 1 x 1 convolutional layer with t output channels. 13 # Kuzmin, Nagel, Pitre, Pendyam, Blankevoort and Welling KuzMIN, NAGEL, PITRE, PENDYAM, BLANKEVOORT AND WELLING Data optimized spatial SVD. In addition to Asym3D method, the framework for per- layer optimization (Eqn. 26) can be used to obtain a data-optimized version of the spatial SVD method. If we consider the optimization problem in Eqn. 26 without the constraint on the rank: # n argmin Ye ili-Â¥) -—M@-2)IB, (30) i=l the solution for M can be used to improve the predictions by refining weights of a com- pressed network layer based on some input and output data. Consider a convolutional layer decomposed using the spatial SVD decomposition (Eqn. 6). Given the original weights W, the layer can be decomposed into two layers: W = WvWh. (31) Given an input vector X, the output z is given by: Z= W,W)X. (32) After solving Eqn. 30 for z above and the reference output, the data-optimized version of the weights W is given as: W = MW = (MW,)W,,. (33) In practice, the refined value Ww. = MW, can be used instead W, for the second layer. # 3.2.2 Channel pruning Some compression methods introduced in the literature are based on pruning channels of a convolutional filter based on different channel importance criteria. In particular, the method suggested in Li et al. (2016) is based on the weight magnitudes. Another pruning method which is optimized for data was introduced in He et al. (2017). This method uses lasso feature selection to find the set of channels to prune. In order to formulate the pruning method as an optimization, the authors consider computing the output of a convolutional layer with a kernel W € R‘™*s****® on input volumes K € R”"*s**x* sampled from the feature map of the uncompressed model, where n is the number of samples. The corresponding output volume Y is a matrix of shape n x t. The original number of channels is reduced to s’ (0 < s’ < s) in a way that the reconstruction error for the output volume is minimized. The objective function is: 2 argmin fee aX, swt : (34) IBllo <8’, where Xi ∈ Rn×k2 is an i-th channel of the input concatenated for multiple data samples, and Wi ∈ Rt×k2 is i-th channel of the filter, both are reshaped into matrices. Vector β is the coefficient vector for channel selection. If the value βi = 0 then the corresponding 14 # Review of Structured CNN compression channel can be pruned. In order to solve the problem, the L0 norm is relaxed to L1 and the minimization is formulated as follows: 2 + A116 hy argmin fee aX, swt . (35) =1 # fee aX, Wile =1 # s.t. The minimization is performed in two steps by fixing β or W and solving the corresponding sub-problems. # 3.3 Level 3. Compression based on training Some compression methods require full training of the model. Either by fine-tuning an already trained model for a few training epochs, or training the model entirely from scratch. All of the procedures in the previous paragraphs can be extended this way into an iterative compression and fine-tuning scheme. Here we focus on probabilistic compression methods that need fine-tuning or training from scratch. 
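Before turning to probabilistic methods, the channel selection problem of Eqn. (35) can be illustrated with a small sketch. It reduces the alternating β/W optimization of He et al. (2017) to a single lasso fit over per-channel contributions on random toy data; the variable names, the toy data, and the regularization strength are ours.

```python
# Rough sketch of lasso-based channel selection (cf. Eqn. 35), simplified to one lasso fit.
import numpy as np
from sklearn.linear_model import Lasso

def select_channels(X, W, Y, alpha=1e-3):
    """X: (n, s, k*k) input patches, W: (t, s, k*k) kernel, Y: (n, t) reference outputs.
    Returns per-channel coefficients beta; channels with beta == 0 can be pruned."""
    n, s, _ = X.shape
    # Contribution of each input channel i to the output, one (n, t) block per channel.
    contrib = np.stack([X[:, i, :] @ W[:, i, :].T for i in range(s)], axis=-1)  # (n, t, s)
    features = contrib.reshape(-1, s)        # one row per (sample, output channel)
    target = Y.reshape(-1)
    lasso = Lasso(alpha=alpha, fit_intercept=False, positive=True)
    lasso.fit(features, target)
    return lasso.coef_                        # beta_i == 0 -> prune channel i

# Toy example with random data: 32 input channels, 3x3 kernel, 64 output channels.
rng = np.random.default_rng(0)
X = rng.normal(size=(512, 32, 9))
W = rng.normal(size=(64, 32, 9))
Y = np.einsum('nsk,tsk->nt', X, W)            # reference outputs of the uncompressed layer
beta = select_channels(X, W, Y, alpha=0.05)
print('channels kept:', int((beta > 0).sum()))
```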
# 3.3.1 Probabilistic compression Several methods have been proposed in the literature that add a, potentially probabilistic, multiplicative factor z to each channel in the convolutional network. Such that we have for a single layer with input x, weight matrix W and output y: y = z(α) · W ∗ x, (36) with z the same dimensionality as the output y, and α one or more learnable parameters that control the gate. The idea is that when z equals 0, the output channel is off and can be removed from the network. The factor z can also be interpreted as a gate that is on or off. Similarly, in the probabilistic setting, if the gate is sampled close to 0 with a high likelihood or has a very high variance, the channel can be removed. This multiplicative factor is regularized by a penalty term in the loss function, such that during training the network optimizes for the trade-off between the loss function and the model complexity as follows: ˆL(X, Y) = L(X, Y, α) + λF (α), (37) where L is the original loss function, F a differentiable function of the complexity of the network, parametrized by (learnable) parameters α that control the gates, and λ a trade-off factor between the two loss functions. In all methods, λ is a hyperparameter that is set by the user. L[o-regularization. The technique from Louizos et al. (2018) applies the Lo-norm to the amount of non-zero entries in a vector 6. Generally, this norm cannot be optimized directly, but the paper extends the continuous relaxation trick from Maddison et al. (2016); Jang et al. (2017) to optimize the gates. Louizos et al. (2018) introduces the hard-concrete distribution for the 15 # Kuzmin, Nagel, Pitre, Pendyam, Blankevoort and Welling gate, which is a clipped version of the concrete distribution: u ∼ U(0, 1), (38) s = Sigmoid((log(u) − log(1 − u) + log(α))/β), ¯s = s(ζ − γ) + γ, z = min(1, max(0, ¯s)) (39) (40) (41) by parameter α. In the forward pass, a sample is drawn from the hard-concrete distribution for each gate, creating a stochastic optimization procedure. β is the temperature parame- ter, set as β = 2/3 in the paper, which controls the skew of the sigmoid. Parameters ζ, γ are stretching factors for clipping some values to actual 0s and 1s, which are set to 1.1 and −0.1, respectively. The method penalizes the probability that each gate is sampled as 1. Channels corresponding to gates that have a low probability of being active can be removed from the network. This corresponds to a small parameter α. The regularization factor chosen here is: # Ng F(α) = j=1 Sigmoid(log(αj) − β log( −γ ζ )), (42) where Ng is the total number of gates in the network. Variational Information Bottleneck. Dai et al. (2018) introduces a Gaussian gate that is multiplied with each channel in the network. In the forward pass, a sample is drawn from the Gaussian N (µ, σ) by using the reparametrization trick from Kingma et al. (2015). This corresponds to gates z such that: e~N(0,1), (43) zZ=pte-o, (44) where µ and σ are learnable parameters, corresponding to the mean and standard deviation of the Gaussian. The corresponding regularization factor is derived to be Ng 2 F(,0) = Y log (: + 4) ; (45) j=l j where again, Ng is the number of gates in the network. The channels that have a small ratio µ2 h can be removed from the network, as they are either multiplied with a small mean value or have a very large variance. The methods from Louizos et al. (2017) and Neklyudov et al. (2017) are variants of this method with different regularization functions F. 
# 3.4 Compression ratio selection for whole-model compression Per layer compression ratio selection is one of the important aspects of neural network In this section we introduce two different methods for compression ratio compression. selection which we used for our experiments. 16 Review of Structured CNN compression # 3.4.1 Equal accuracy loss To compare different SVD and tensor decomposition based compression techniques in similar settings, we suggest using the following ratio selection method. The main advantage of this method is that it can be defined for any decomposition approach in a similar way. To introduce the rank selection method, we first define a layer-wise accuracy metric based on a verification set. The verification set is a subset of the training set used for the rank selection method to avoid using the validation set. For a layer l, the accuracy Pl(r) is obtained by compressing the layer l using a vector of ranks r, while the rest of the networks remains uncompressed. The network with the single compressed layer is evaluated on the verification set to calculate the value Pl(r). In order to avoid extra computational overhead, in practice the layer-wise accuracy metric is calculated only for some values of r, e.g., values of r that correspond to per-layer compression ratios {0.1, 0.2, . . . , 0.9}. We denote the combination of all rank values for all the layers as R = (ri,...,rz where each rank r; is a scalar in case of SVD decomposition, and a vector in case of high- dimensional tensor decomposition techniques. The set of ranks R can be calculated as the solution to the following optimization problem. The input consists of per-layer accuracy- based metric P;(r;), the full compressed model complexity C(R) = an c(rz), the original model accuracy Porig and the original model complexity Corig: R = (r1, . . . , rL)T = argmin τ Z=(z1,...,zL) s.t. Pl(zl) ≥ Porig − τ, C(Z) Corig ≤ α, (46) where τ is the tolerance in per-layer accuracy metric decrease. The tolerance value is iteratively adjusted to meet the desired full model compression ratio α. 3.4.2 Greedy algorithm based on singular values To facilitate comparison of data-optimized SVD methods, we use the following method introduced in Zhang et al. (2016). The method is based on the assumption that the whole- model performance is related to the following PCA energy: Lo" E(R) = Il > Fk» (47) l=1k=1 where σl,: are the singular values of layer l. To choose the ranks for SVD decomposition, the energy is maximized subject to the constraint on the total number of MACs in the compressed model: # max E(R) C(R) Corig s.t. ≤ α. (48) To optimize the objective, the greedy strategy of Zhang et al. (2016) is used. This approach has a relatively low computational cost and does not require using the validation set. 
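A simplified sketch of the greedy, energy-based rank selection of Eqns. (47)-(48) is given below. It assumes a constant per-rank MAC cost for each layer, which is our simplification; the actual method uses the MAC count of the chosen decomposition, but the greedy structure is the same.

```python
# Simplified greedy rank selection maximizing kept singular-value energy under a MAC budget.
import numpy as np

def greedy_rank_selection(singular_values, macs_per_rank, target_ratio):
    """singular_values[l]: descending singular values of layer l.
    macs_per_rank[l]: MACs added by one extra rank in layer l (assumed constant here).
    Returns per-layer ranks chosen greedily under the whole-model MAC budget."""
    total_macs = sum(len(s) * m for s, m in zip(singular_values, macs_per_rank))
    budget = (1.0 - target_ratio) * total_macs   # Eqn. 3: C_hat = (1 - alpha) * C
    ranks = [0] * len(singular_values)
    used = 0.0
    while True:
        # Pick the layer whose next singular value buys the most energy per MAC.
        best, best_gain = None, 0.0
        for l, (s, m) in enumerate(zip(singular_values, macs_per_rank)):
            if ranks[l] < len(s) and used + m <= budget:
                gain = s[ranks[l]] / m
                if gain > best_gain:
                    best, best_gain = l, gain
        if best is None:
            return ranks
        ranks[best] += 1
        used += macs_per_rank[best]

# Toy example: three layers with different spectra and per-rank costs, 2x compression.
svals = [np.sort(np.random.rand(64))[::-1] for _ in range(3)]
print(greedy_rank_selection(svals, macs_per_rank=[1.0, 2.0, 4.0], target_ratio=0.5))
```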
Figure 5: Level 1 compression. Comparison of different SVD and tensor decomposition methods for Resnet18, VGG16, Resnet50, and InceptionV3 pre-trained on ImageNet. Overall, the best performance is mostly achieved using CP-decomposition for every model. For most of the experiments, the second best method is the spatial SVD decomposition. The ranking of the other methods depends on the model.

# 4. Experiments

To evaluate the performance of different compression techniques at different levels, we used a set of models from the PyTorch (Paszke et al. (2017)) model zoo, including Resnet18, Resnet50, VGG16, InceptionV3, and MobileNetV2 trained on the ImageNet data set. For every model we used 1.33x, 2x, 3x, and 4x compression ratios in terms of MACs, which serves as a proxy for run-time.

# 4.1 Level 1 compression

To compare the performance of level 1 compression techniques, we used the Resnet18, VGG16, Resnet50, and InceptionV3 models; no fine-tuning or data-aware optimization was used. Five different compression techniques were evaluated, including spatial SVD, weight
In a similar way, for tensor-train decomposition we use the pair of constraints a ~] e & A to determine the set of three ranks based on the compression ratio value, where 71, 72, 73 are maximum values of the ranks r1, re, 73, respectively (the ranks for the tensor-train decomposition are defined in Eqn. 16). The results are shown on the figure 5. The best accuracy versus compression ratio is achieved by the method based on CP-decomposition (Lebedev et al. (2014)) across all four models. The second best method across all the considered models is the Spatial SVD decomposition (Jaderberg et al. (2014)). We conjecture that good performance of both methods is due to the highly efficient final architecture that is based on horizontal and vertical filters that require few MAC units. In the case of CP-decomposition, the resulting CNN architecture is based on depth-wise separable convolutions, which results in even more savings in computational complexity. The ranking of the other three methods depends on the model. Thus, choosing the optimal method requires empirical validation. The results show that using higher-level decomposition such as Tucker or tensor-train does not necessarily lead to better performance compared to approaches based on matricization such as weight SVD or spatial SVD. # 4.2 Level 2 compression In this section, we present the results of the ablation study for Level 2 methods from Zhang et al. (2016), and compare it to channel pruning suggested by He et al. (2017) for Resnet18, and VGG models pre-trained on ImageNet. For data-aware reconstruction, we use 5000 images. For each image, ten k × k feature map patches at random locations were sampled. For the Resnet18 model, five methods were evaluated, including data SVD, asymmetric data SVD, channel pruning, Asym3D, and data-optimized spatial SVD. The best perfor- mance for lower compression ratios such as 1.33x and 2x compression is achieved with data-optimized spatial SVD, whereas for higher compression ratios including 3x and 4x compression, better accuracy is achieved using Asym3D (see figure 6 on the left). The data-optimized spatial SVD method can be seen as three improvements on top of the most basic Level 1 weight SVD compression. The first step is using data for per-layer optimization of the compressed model (Eqn. 22), the second is asymmetric formulation (Eqn. 26), and finally, some improvement is obtained by using efficient spatial SVD archi- tecture (Eqn. 33). In order to compare improvements due to each step, we performed the following ablation study. As results in figure 6 suggest, all three steps are equally important for the compressed model performance. 19 # Kuzmin, Nagel, Pitre, Pendyam, Blankevoort and Welling Imagenet (Resnet18) 10 x 60 50 5 30 3 © uncompressed model “ © Weight svD 20 © Data SVD —e Asym3D 10 —+— Data optimized spatial SVD Se Asym —< Channel pruning 0 oa 06 08 10 12 14 16 18 20 emacs Imagenet (VGG16) 1 x 60 50 5 30 3 =< uncompressed model “ —< Weight svD 20 —<— Data SVD ee Asym3D 10 —<— Data optimized spatial SVD < Asym <= Channel pruning 0 4 6 @ 10 2 a 16 emacs Imagenet (Resnet18) Imagenet (VGG16) 10 x 1 x 60 60 50 50 30 5 30 © uncompressed model 3 =< uncompressed model © Weight svD “ —< Weight svD 20 © Data SVD 20 —<— Data SVD —e Asym3D ee Asym3D 10 —+— Data optimized spatial SVD 10 —<— Data optimized spatial SVD Se Asym < Asym —< Channel pruning <= Channel pruning 0 0 oa 06 08 10 12 14 16 18 20 4 6 @ 10 2 a 16 emacs emacs Figure 6: Level 2 compression. 
Comparison of different data-optimized SVD approaches for Resnet18 trained on ImageNet. The best performance for Resnet18 and VGG16 is achieved using data-optimized spatial SVD.

The accuracy of channel pruning is mostly on par or comparable with the data SVD method (figure 6), as both methods use data-aware reconstruction based on the same amount of data without leveraging the asymmetric formulation in Eqn. 27.

Figure 7: Level 2 compression of VGG16 with the ReLU activation function included in the formulation.

As the VGG16 model has many convolutional layers followed by ReLU non-linearities without batch normalization in between, this model allows adding the activation function to the data-aware reconstruction for methods such as asymmetric data SVD, Asym3D, and data-optimized spatial SVD. The results with the activation function included in the formulation are presented in figure 7. The most important component is using the ReLU function in the optimization, which is necessary for the performance of both methods. Overall, for the VGG16 model, similar to the Resnet18 results, the three improvements on top of level 1 compression, namely using data-aware optimization, the asymmetric formulation, and the efficient spatial SVD architecture, are equally crucial for the accuracy of the compressed model. For this model, channel pruning demonstrates poor performance, which is comparable to level 1 compression using the weight SVD method.

Figure 8: Level 2 compression of MobileNet V2. The best performance is obtained using channel pruning, which is applicable to both depth-wise separable and point-wise convolutions. As data-optimized SVD decomposition is only applicable to 1x1 point-wise layers, the performance of this family of methods is lower than that of channel pruning. However, using data-aware optimization still allows one to improve the results of level 1 compression with weight SVD.

The MobileNet V2 architecture is based on depth-wise separable convolutions; therefore, spatial SVD is not possible, and the set of applicable compression methods is restricted to variants of data-optimized SVD and channel pruning. As the SVD decomposition can only be used for 1x1 convolutional layers and is not applicable to depth-wise separable convolutions, data-optimized SVD methods, including data SVD and asymmetric data SVD, demonstrate poor performance (figure 8), which is still better than that of the data-free weight SVD method. In contrast, channel pruning is applicable to both types of layers, which leads to better accuracy of the compressed model.

# 4.3 Level 3 compression

# 4.3.1 Fine-tuned SVD and tensor decompositions

To recover the performance of compressed models, we used the same fine-tuning scheme for the different compression methods; the summary for each model is given in table 2. All the models were fine-tuned using SGD with 0.9 momentum for 20 epochs with the learning rate dropped at epochs 10 and 15. The different hyperparameters for each model, including the learning rate, batch size, and weight decay value, are given in table 2.
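For reference, the fine-tuning schedule of table 2 can be written down as a short PyTorch routine. The learning-rate drop factor of 0.1 and the data-loading and loss details are our assumptions and are not specified in the text; the optimizer, momentum, number of epochs, and drop epochs follow the schedule described above.

```python
# Minimal sketch of the level 3 fine-tuning schedule (table 2), assuming a 0.1 LR drop factor.
import torch
import torch.nn as nn

def finetune(model, train_loader, lr=0.01, weight_decay=1e-4, epochs=20, device='cuda'):
    model.to(device).train()
    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.SGD(model.parameters(), lr=lr,
                                momentum=0.9, weight_decay=weight_decay)
    # Drop the learning rate at epochs 10 and 15, as in table 2.
    scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[10, 15], gamma=0.1)
    for epoch in range(epochs):
        for images, labels in train_loader:
            images, labels = images.to(device), labels.to(device)
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()
        scheduler.step()
    return model
```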
In figure 9 we show the results for level 3 compression of Resnet18, Resnet50, VGG16, and InceptionV3. The best accuracy for all models is achieved using the spatial SVD decomposition. The CP-decomposition shows the best results before fine-tuning. However, the fine-tuning scheme used for all the other methods does not recover the accuracy after compression. We were not able to find any fine-tuning hyperparameters that would allow us to recover the model accuracy. This observation agrees with the results from the original paper by Lebedev et al. (2014).

Table 2: Fine-tuning schemes for the different models trained on the ImageNet dataset. All the models were fine-tuned using SGD with 0.9 momentum for 20 epochs with the learning rate dropped at epochs 10 and 15.

Model         learning rate   batch size   weight decay
Resnet-18     0.01            256          10^-4
Resnet-50     0.01            64           10^-4
VGG16         0.001           64           10^-4
InceptionV3   0.001           64           10^-4
MobileNetV2   0.007           256          2.0 × 10^-4

Figure 9: Level 3 compression. Comparison of different SVD and tensor decomposition methods for Resnet18, Resnet50, VGG16, and InceptionV3 trained on ImageNet. The best accuracy is achieved with the spatial SVD method across all four models. The ranking of the other methods is different for each specific model.

The results for level 3 compression of MobileNetV2 are given in figure 10. There are only two methods applicable, and channel pruning outperforms fine-tuned weight SVD across all the compression ratios.
22 # Review of Structured CNN compression Imagenet (MobileNetV2) 70 * 60 50 40 30 top1 accuracy (%) —— uncompressed model —— Weight SVD fine-tuned =< Channel pruning fine-tuned 0.10 0.15 0.20 0.25 0.30 macs Figure 10: Level 3 compression of MobileNetV2. Channel pruning gives better accuracy after fine-tuning compared to weight SVD for all the compression rates. Comparing the results for level 1 and level 3 (see figures 5 and 9, respectively) suggests that the ranking of compression methods depends on the level, i.e., the best level 1 com- pression method does not necessarily correspond to the best level 3 compression method. # 4.3.2 Fine-tuning data-optimized SVD The following experiment was performed to estimate the potential benefit of combining data-aware optimization with full data fine-tuning for SVD methods. We compressed the Resnet18 network using the level 1 spatial SVD method and level 2 data-optimized spatial SVD. The two methods used the same SVD rank values provided by the greedy method based on singular values so that the resulting network architectures are identical. We fine- tuned both models using the same fine-tuning scheme used for level 3 compression. The results are shown in figure 11. Despite the substantial difference in accuracy between level 1 and level 2 methods, the difference becomes negligible after the networks are fine- tuned. Therefore, we conclude that there is no benefit in using data-optimized compression if the network is fine-tuned after compression. # 4.3.3 Probabilistic compression Contrary to previously discussed methods, methods based on probabilistic compression usually train the network from scratch using a special regularization instead of starting from a pre-trained model1. Since the compression is indirectly enforced using a regularization term and it is not possible to target a specific compression rate directly. However, by line search over regularization strength λ we can achieve comparable compression targets than in the other experiments. Similar to the original model, we train models with probabilistic compression using SGD with a learning rate of 0.1, momentum of 0.9, and a weight decay of 10−4 for 120 epochs. We 1. Probabilistic compression can also be used in combination with a pre-trained model. However, in most cases, this results in lower performance than starting from a randomly initialized model, especially when targeting a high compression rate. 23 # Kuzmin, Nagel, Pitre, Pendyam, Blankevoort and Welling KuzMIN, NAGEL, PITRE, PENDYAM, BLANKEVOORT AND WELLING Imagenet (Resnet18) — w & g 2 x 8 & 8 g 3 x| topl accuracy (%) 8 = uncompressed model = Spatial SVD fine-tuned + Spatial SVD —+ Spatial SVD data optimized and fine-tuned + Spatial SVD data optimized 5 0.4 0.6 08 1.0 12 14 16 18 2.0 GMACs Figure 11: Results of performing fine-tuning for a data-optimized compression method. The data-optimized method advantage vanishes after fine-tuning as level 1 spatial SVD method and level 2 spatial method give similar accuracy if fine-tuning is applied after compression. The cyan curve (data-free spatial SVD after fine-tuning) is not visible as it coincides with the red curve (data-optimized spatial SVD after fine-tuning). drop the learning rate by a factor of 0.1 at epoch 30, 60, and 90. The regularization strength λ depends on the method applied; for the variational information bottleneck (VIBNet) we used values between 10−6 and 5 · 10−6 to achieve a compression of approximately 1.3x to 3x. 
For L0 based regularization we used 3 · 10−9 to 10−8 resulting in similar compression rates. In case the architecture has residual connections, we add a gate z to the input of the first convolution of each residual block. Thus we can prune the input and output channels of each convolution to achieve an optimal compression rate. Note, in a chain-like CNN pruning the input of a convolution is done implicitly since it depends only on the output of the previous convolution. The results for probabilistic compression of Resnet18 are shown in figure 12. We observe that VIBNet consistently outperforms L0 by a small margin. Compared to the previous best level 3 decomposition method, fine-tuned spatial SVD, VIBNets have a slight edge for lower compression rates but perform worse for very high compression rate. The latter might be due to the fact that spatially decomposing the convolutional filter can lead to a more efficient architecture than only pruning channels. Both VIBNets and L0 consistently outperform fine-tuned channel pruning, which can lead to the same architectures. 24 # Review of Structured CNN compression Imagenet (Resnetis) 70 68 66 top1 accuracy (%) 04 + uncompressed model Se Spatial SVD fine-tuned —+= Channel pruning fine-tuned —e VIBNet elo 62 60 0.4 6 = Og. 10 1.2 La 16 1s 20 macs Figure 12: Bayesian compression method versus spatial SVD. # 4.3.4 Combining probabilistic compression and channel pruning with SVD compression Imagenet (Resnetis) 70 68 66 top1 accuracy (%) = uncompressed model + Spatial SVD fine-tuned + Channel pruning fine-tuned —e viBNet —# Channel pruning + Spatial SVD —# VIBNet + Spatial SVD 62 60 0.4 6 = Og. 10 1.2 La 16 1s 20 macs Figure 13: Combinations of level 3 compression methods. The best model accuracy is achieved using combining VIBNets trained from scratch with spatial SVD compression. Another practically useful combination is channel pruning applied for the model compressed with spatial SVD. Both combinations allow to improve performance of level 3 compression. We found that different level 3 compression approaches are complementary. In fact, spatial SVD can be combined with channel pruning or probabilistic compression, which yields better model accuracy compared to compression using a single full-data method. The results for the combinations of the methods are given on the figure 13. In the first case, spatial SVD was applied after probabilistic compression with the VIBNet approach. The VIBNet compressed model was trained from scratch, then spatial SVD was applied for the resulting model, and finally, the compressed model was fine-tuned using the scheme from table 2. In the second case, channel pruning was applied after the spatial SVD. After each com- pression step, we fine-tuned the network with the scheme from table 2. The combination 25 Kuzmin, Nagel, Pitre, Pendyam, Blankevoort and Welling of the VIBNet approach and the spatial SVD achieves the best results, and allows to sig- nificantly improve the spatial SVD method. # 4.3.5 Compression versus training from scratch One of the important questions related to compression is whether a compressed model gives better performance than training the same architecture from scratch. In order to answer this question, we performed the following experiment. We compressed Resnet18 and VGG16 pre-trained on ImageNet using spatial SVD and channel pruning and then compared the accuracy of the fine-tuned models to the models trained from scratch. 
The architecture for the models trained from scratch was identical to the architecture obtained by applying the compression techniques. The level 3 fine-tuning schemes (table 2) were used for fine-tuning of the compressed models. Whereas for training from scratch, for Resnet18 we use 90 epochs with similar parameters including the starting learning rate 0.1 with dropping it at epochs 30, 60, 90, and for VGG16 62 epochs were used with a learning rate 0.01 dropped at epochs 30, and 60. Using these training parameters for training uncompressed models from scratch gives accuracy equal to the accuracy of the corresponding pre-trained models, which were used for compression. The results are shown in figure 14. For channel pruning, using compression always gives better results compared to training from scratch. For spatial SVD using compression out- performs training from scratch for lower compression rates, but training from scratch gives better performance for more aggressive compression. We conjecture that more aggressive compression effectively leaves little information in the pre-trained model. In such cases, training from scratch with random initialization is often better. Our results for lower com- pression rates agree with the lottery ticket hypothesis Frankle and Carbin (2018), which claims that better accuracy can be achieved by training and pruning a larger model than training a smaller model directly. # 4.4 Compression ratio selection One of the important aspects of compression methods is the per layer compression ratio selection. As layers of a network have different sensitivity to compression, different choice of compression ratios can improve or deteriorate the accuracy of the compressed model. The problem of the compression ratio selection can be regarded as a discrete optimization problem. Specifying the full model compression ratio beforehand results in a constraint imposed on the solution. The choice of the objective function for the optimization corresponds to several different practical use cases of model compression. Besides the obvious choice of maximizing the accuracy of the compressed model, compression ratio selection can be used to minimize the inference time of the compressed model on specific hardware leading to hardware-optimized compression. In addition to inference time, the objective function can be based on the memory footprint of the model at inference time as well as use any combination of the quantities mentioned above. Practical usage of the compression ratio optimization faces a challenge related to the need for time-consuming model fine-tuning to recover the compressed model accuracy. Using 26 # Review of Structured CNN compression Imagenet (Resnetis) 10 68 66 top1 accuracy (%) “ >< uncompressed model —< Spatial SVD fine-tuned >< Channel pruning fine-tuned 62 Spatial SVD from scratch + Channel pruning from scratch x Training from scratch (linear scaling) 60 0.4 6 = Og. 
10 1.2 La 16 1s 20 macs Imagenet (VGG16) 7) 70 68 66 top1 accuracy (%) 64 + uncompressed model + Spatial SVD fine-tuned — * * 10 Channel pruning fine-tuned ~ Spatial SVD from scratch + Channel pruning from scratch 62 4 6 8 GMacs 12 Fry 16 Imagenet (Resnetis) Imagenet (VGG16) 10 7) 70 68 68 66 66 top1 accuracy (%) “ >< uncompressed model —< Spatial SVD fine-tuned 64 + uncompressed model >< Channel pruning fine-tuned + Spatial SVD fine-tuned 62 Spatial SVD from scratch — + Channel pruning from scratch * x Training from scratch (linear scaling) * 10 Channel pruning fine-tuned ~ Spatial SVD from scratch + Channel pruning from scratch 62 60 0.4 6 = Og. 10 1.2 La 16 1s 20 4 6 8 macs GMacs 12 Fry 16 Figure 14: Full data compression compared to training from scratch for Resnet18, and VGG16 compressed with spatial SVD, and channel pruning. For spatial SVD, training from scratch achieves better accuracy for higher compression rates, and full data compression is more beneficial for moderate compression. For channel pruning a larger model always gives better results than training from scratch. Resnet-18 ImageNet (top-1) 68.5} * “te 5 ree ee te es ooyte 68.0 . . oe a . . ° @ 67.5 . . £ . fe & 67.0 ra . by % 66.5 fy 2 66.0 65.5 t) 10 20 30 40 Pre-FT accuracy Figure 15: Spatial SVD, pre-finetuning accuracy versus post-finetuning accuracy for dif- ferent sets of SVD ranks. The plot is based on the Resnet18 network compressed using spatial SVD with a 2x compression ratio. We fine-tuned 50 different compressed models with different values of SVD ranks to check whether the pre-finetuning accuracy for each model is a good proxy for its post-finetuning accuracy. All the 50 models have equal MAC count. The results of the experiment suggest that there is no correlation between the two accuracies so that it is not possible to use pre-finetuning accuracy to optimize per-layer compression ratios. model accuracy after fine-tuning as an objective function for optimization is prohibitively expensive in this case. One way to alleviate this problem used in the literature (e.g., He et al. (2018)) is the following: a model is compressed using a set of compression ratios and 27 # Kuzmin, Nagel, Pitre, Pendyam, Blankevoort and Welling evaluated on the validation set without fine-tuning. Then this accuracy value is being used in the optimization as a proxy of the accuracy of the compressed model after fine-tuning. In this case, it is assumed that a better compressed model accuracy before fine-tuning leads to a better compressed model accuracy after fine-tuning. To quantitatively validate this assumption, we performed the following experiment. First, we compressed the Resnet18 model with a 2x compression ratio; the compression ratios per layer were selected using the greedy method based on singular values. Second, we randomly perturbed the compression ratios in a way such that the full model complexity is preserved under the perturbations. This way, we obtained 50 different compressed Resnet- 18 models of the same computational complexity. To verify whether the model accuracy before fine-tuning is a suitable proxy for the model accuracy after fine-tuning, we fine-tuned all the models using the same fine-tuning scheme, which was used for level 3 compression (see table 2). The figure 15 shows the results as a scatter plot with the horizontal axis corresponding to the model accuracy before fine-tuning and vertical axis corresponding to the accuracy after fine-tuning. 
As the results suggest, there is no correlation between the two accuracy values. This does not agree with the assumption made above and leaves the problem of practical compression ratio optimization for architecture search methods wide open. # 5. Conclusion In this paper, we performed an extensive experimental evaluation of different neural network compression techniques. We considered several methods, including methods based on SVD, tensor factorization, channel pruning, and probabilistic compression methods. We introduced a methodology for the comparison of different compression techniques based on levels of compression solutions. Level 1 corresponds to data-free compression with no fine-tuning or optimization used to improve the compressed model. Level 2 corresponds to data-optimized compression based on a limited number of training data batches used to improve the predictions by performing layer-wise optimization of the parameters of the compressed model. No back-propagation is used at this level. Level 3 corresponds to fine- tuning the compressed model on the full training set using back-propagation. We hope these levels help distinguish between different types of compression methods more clearly, as the vocabulary is adopted. Experimental evaluation of the considered methods shows that the performance ranking of the considered methods depends on the level chosen for experiments. At level 1, CP- decomposition shows the best accuracy for most of the models. The ranking of the other methods depends on the model. The best results for Level 2 compression are achieved using per-layer optimization based on the combination of asymmetric formulation (Zhang et al. (2016)); however, our exper- iments show that applying the GSVD method for optimizing the second factor of spatial SVD decomposition yields better results than the original Asym3D double decomposition approach from the same paper. For level 3 compression, the best performance is given by VIBNet and L0 methods for moderate compression, and by the spatial SVD for higher compression ratios. In additional experiments, we show that SVD compression is complementary to channel pruning and 28 # Review of Structured CNN compression probabilistic pruning approaches so that using the combination of VIBnet and spatial SVD gives the best performance overall of any of the considered compression techniques. In further experiments, we demonstrate that level 3 compression of a larger network achieves better performance compared to training a smaller network from scratch, both for SVD-based compression, and pruning methods. These results are in agreement with the lottery ticket hypothesis (Frankle and Carbin (2018)) and indicate that compression methods should be applied after training, and are not just a way of doing neural architecture search. # Acknowledgments We would like to thank Arash Behboodi, Christos Louizos and Roberto Bondesan for their helpful discussions and valuable feedback. # References Sharan Chetlur, Cliff Woolley, Philippe Vandermersch, Jonathan Cohen, John Tran, Bryan Catanzaro, and Evan Shelhamer. cudnn: Efficient primitives for deep learning. CoRR, abs/1410.0759, 2014. URL http://arxiv.org/abs/1410.0759. Bin Dai, Chen Zhu, and David Wipf. Compressing neural networks using the variational information bottleneck. arXiv preprint arXiv:1802.10399, 2018. Misha Denil, Babak Shakibi, Laurent Dinh, Nando De Freitas, et al. Predicting parameters in deep learning. 
In Advances in neural information processing systems, pages 2148–2156, 2013. Emily L Denton, Wojciech Zaremba, Joan Bruna, Yann LeCun, and Rob Fergus. Exploiting In Advances in linear structure within convolutional networks for efficient evaluation. neural information processing systems, pages 1269–1277, 2014. Jonathan Frankle and Michael Carbin. The lottery ticket hypothesis: Training pruned neural networks. CoRR, abs/1803.03635, 2018. URL http://arxiv.org/abs/1803. 03635. Trevor Gale, Erich Elsen, and Sara Hooker. The state of sparsity in deep neural networks. arXiv preprint arXiv:1902.09574, 2019. Weihao Gao, Yu-Han Liu, Chong Wang, and Sewoong Oh. Rate distortion for model compression: From theory to practice. arXiv preprint arXiv:1810.06401, 2018. Timur Garipov, Dmitry Podoprikhin, Alexander Novikov, and Dmitry Vetrov. Ulti- arXiv preprint mate tensorization: arXiv:1611.03214, 2016. compressing convolutional and fc layers alike. Gene H Golub and Charles F Van Loan. Matrix computations the john hopkins university press. Baltimore and London, 1996. 29 Kuzmin, Nagel, Pitre, Pendyam, Blankevoort and Welling Song Han, Huizi Mao, and William J Dally. Deep compression: Compressing deep neu- ral networks with pruning, trained quantization and huffman coding. arXiv preprint arXiv:1510.00149, 2015. Babak Hassibi, David G Stork, and Gregory J Wolff. Optimal brain surgeon and general network pruning. In IEEE international conference on neural networks, pages 293–299. IEEE, 1993. Yihui He, Xiangyu Zhang, and Jian Sun. Channel pruning for accelerating very deep neural In Proceedings of the IEEE International Conference on Computer Vision, networks. pages 1389–1397, 2017. Yihui He, Ji Lin, Zhijian Liu, Hanrui Wang, Li-Jia Li, and Song Han. Amc: Automl for model compression and acceleration on mobile devices. In Proceedings of the European Conference on Computer Vision (ECCV), pages 784–800, 2018. Andrew Howard, Mark Sandler, Grace Chu, Liang-Chieh Chen, Bo Chen, Mingxing Tan, Weijun Wang, Yukun Zhu, Ruoming Pang, Vijay Vasudevan, Quoc V. Le, and Hartwig Adam. Searching for mobilenetv3. CoRR, abs/1905.02244, 2019. URL http://arxiv. org/abs/1905.02244. Andrew G Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, To- bias Weyand, Marco Andreetto, and Hartwig Adam. Mobilenets: Efficient convolutional neural networks for mobile vision applications. arXiv preprint arXiv:1704.04861, 2017. Max Jaderberg, Andrea Vedaldi, and Andrew Zisserman. Speeding up convolutional neural networks with low rank expansions. arXiv preprint arXiv:1405.3866, 2014. Eric Jang, Shixiang Gu, and Ben Poole. Categorical reparametrization with gumbel- In Proceedings International Conference on Learning Representations 2017. softmax. OpenReviews.net, April 2017. URL https://openreview.net/pdf?id=rkE3y85ee. Hyeji Kim and Chong-Min Kyung. Automatic rank selection for high-speed convolutional neural network. arXiv preprint arXiv:1806.10821, 2018. Hyeji Kim, Muhammad Umar Karim Khan, and Chong-Min Kyung. Efficient neural net- In Proceedings of the IEEE Conference on Computer Vision and work compression. Pattern Recognition, pages 12569–12577, 2019. Yong-Deok Kim, Eunhyeok Park, Sungjoo Yoo, Taelim Choi, Lu Yang, and Dongjun Shin. Compression of deep convolutional neural networks for fast and low power mobile appli- cations. arXiv preprint arXiv:1511.06530, 2015. Durk P Kingma, Tim Salimans, and Max Welling. Variational dropout and the local reparameterization trick. In C. Cortes, N. D. Lawrence, D. D. Lee, M. 
Sugiyama, and R. Garnett, editors, Advances in Neural Information Processing Systems 28, pages 2575–2583. Curran Associates, Inc., 2015. URL http://papers.nips.cc/paper/ 5666-variational-dropout-and-the-local-reparameterization-trick.pdf. 30 Review of Structured CNN compression Tamara G Kolda and Brett W Bader. Tensor decompositions and applications. SIAM review, 51(3):455–500, 2009. Raghuraman Krishnamoorthi. Quantizing deep convolutional networks for efficient infer- ence: A whitepaper. arXiv preprint arXiv:1806.08342, art. arXiv:1806.08342, Jun 2018. Vadim Lebedev, Yaroslav Ganin, Maksim Rakhuba, Ivan Oseledets, and Victor Lempitsky. Speeding-up convolutional neural networks using fine-tuned cp-decomposition. arXiv preprint arXiv:1412.6553, 2014. Yann LeCun, John S Denker, and Sara A Solla. Optimal brain damage. In Advances in neural information processing systems, pages 598–605, 1990. Hao Li, Asim Kadav, Igor Durdanovic, Hanan Samet, and Hans Peter Graf. Pruning filters for efficient convnets. arXiv preprint arXiv:1608.08710, 2016. Zechun Liu, Haoyuan Mu, Xiangyu Zhang, Zichao Guo, Xin Yang, Tim Kwang-Ting Cheng, and Jian Sun. Metapruning: Meta learning for automatic neural network channel pruning. arXiv preprint arXiv:1903.10258, 2019. Zhuang Liu, Mingjie Sun, Tinghui Zhou, Gao Huang, and Trevor Darrell. Rethinking the value of network pruning. CoRR, abs/1810.05270, 2018. URL http://arxiv.org/abs/ 1810.05270. Christos Louizos, Karen Ullrich, and Max Welling. Bayesian compression for deep learning. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vish- wanathan, and R. Garnett, editors, Advances in Neural Information Processing Systems 30, pages 3288–3298. Curran Associates, Inc., 2017. URL http://papers.nips.cc/ paper/6921-bayesian-compression-for-deep-learning.pdf. Christos Louizos, Max Welling, and Diederik P. Kingma. Learning sparse neural networks through l0 regularization. In International Conference on Learning Representations, 2018. URL https://openreview.net/forum?id=H1Y8hhg0b. Chris J Maddison, Andriy Mnih, and Yee Whye Teh. The concrete distribution: A contin- uous relaxation of discrete random variables. arXiv preprint arXiv:1611.00712, 2016. Dmitry Molchanov, Arsenii Ashukha, and Dmitry Vetrov. Variational dropout sparsifies deep neural networks. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pages 2498–2507. JMLR. org, 2017. Markus Nagel, Mart van Baalen, Tijmen Blankevoort, and Max Welling. Data-free quanti- zation through weight equalization and bias correction. arXiv preprint arXiv:1906.04721, 2019. Shinichi Nakajima, Masashi Sugiyama, S Derin Babacan, and Ryota Tomioka. Global analytic solution of fully-observed variational bayesian matrix factorization. Journal of Machine Learning Research, 14(Jan):1–37, 2013. 31 Kuzmin, Nagel, Pitre, Pendyam, Blankevoort and Welling Kirill Neklyudov, Dmitry Molchanov, Arsenii Ashukha, and Dmitry P Vetrov. Struc- tured bayesian pruning via log-normal multiplicative noise. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Gar- Information Processing Systems 30, pages nett, editors, Advances 6775–6784. Curran Associates, Inc., 2017. URL http://papers.nips.cc/paper/ 7254-structured-bayesian-pruning-via-log-normal-multiplicative-noise.pdf. Alexander Novikov, Dmitrii Podoprikhin, Anton Osokin, and Dmitry P Vetrov. Tensorizing neural networks. In Advances in neural information processing systems, pages 442–450, 2015. Ivan V Oseledets. 
Tensor-train decomposition. SIAM Journal on Scientific Computing, 33 (5):2295–2317, 2011. Adam Paszke, Sam Gross, Soumith Chintala, Gregory Chanan, Edward Yang, Zachary DeVito, Zeming Lin, Alban Desmaison, Luca Antiga, and Adam Lerer. Automatic dif- ferentiation in PyTorch. In NIPS Autodiff Workshop, 2017. Mark Sandler, Andrew Howard, Menglong Zhu, Andrey Zhmoginov, and Liang-Chieh Chen. Mobilenetv2: Inverted residuals and linear bottlenecks. In Proceedings of the IEEE Con- ference on Computer Vision and Pattern Recognition, pages 4510–4520, 2018. Jiahao Su, Jingling Li, Bobby Bhattacharjee, and Furong Huang. Tensorized spectrum preserving compression for neural networks. arXiv preprint arXiv:1805.10352, 2018. Cheng Tai, Tong Xiao, Yi Zhang, Xiaogang Wang, et al. Convolutional neural networks with low-rank regularization. arXiv preprint arXiv:1511.06067, 2015. Yoshio Takane and Sunho Jung. Generalized constrained redundancy analysis. Behav- iormetrika, 33(2):179–192, 2006. Mingxing Tan and Quoc V. Le. Efficientnet: Rethinking model scaling for convolutional neural networks. CoRR, abs/1905.11946, 2019. URL http://arxiv.org/abs/1905. 11946. Mingxing Tan, Bo Chen, Ruoming Pang, Vijay Vasudevan, Mark Sandler, Andrew Howard, and Quoc V Le. Mnasnet: Platform-aware neural architecture search for mobile. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2820–2828, 2019. Chaoqi Wang, Roger Grosse, Sanja Fidler, and Guodong Zhang. Eigendamage: Structured pruning in the kronecker-factored eigenbasis. arXiv preprint arXiv:1905.05934, 2019. Bichen Wu, Xiaoliang Dai, Peizhao Zhang, Yanghan Wang, Fei Sun, Yiming Wu, Yuandong Tian, Peter Vajda, Yangqing Jia, and Kurt Keutzer. Fbnet: Hardware-aware efficient convnet design via differentiable neural architecture search. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 10734–10742, 2019. 32 Review of Structured CNN compression Xiangyu Zhang, Jianhua Zou, Kaiming He, and Jian Sun. Accelerating very deep convo- lutional networks for classification and detection. IEEE transactions on pattern analysis and machine intelligence, 38(10):1943–1955, 2016. Xiangyu Zhang, Xinyu Zhou, Mengxiao Lin, and Jian Sun. Shufflenet: An extremely In Proceedings of the IEEE efficient convolutional neural network for mobile devices. Conference on Computer Vision and Pattern Recognition, pages 6848–6856, 2018. 33
{ "id": "1806.08342" }
1912.07242
More Data Can Hurt for Linear Regression: Sample-wise Double Descent
In this expository note we describe a surprising phenomenon in overparameterized linear regression, where the dimension exceeds the number of samples: there is a regime where the test risk of the estimator found by gradient descent increases with additional samples. In other words, more data actually hurts the estimator. This behavior is implicit in a recent line of theoretical works analyzing "double-descent" phenomenon in linear models. In this note, we isolate and understand this behavior in an extremely simple setting: linear regression with isotropic Gaussian covariates. In particular, this occurs due to an unconventional type of bias-variance tradeoff in the overparameterized regime: the bias decreases with more samples, but variance increases.
http://arxiv.org/pdf/1912.07242
Preetum Nakkiran
stat.ML, cs.LG, cs.NE, math.ST, stat.TH
null
null
stat.ML
20191216
20191216
# More Data Can Hurt for Linear Regression: Sample-wise Double Descent

# Preetum Nakkiran
Harvard University

# Abstract

In this expository note we describe a surprising phenomenon in overparameterized linear regression, where the dimension exceeds the number of samples: there is a regime where the test risk of the estimator found by gradient descent increases with additional samples. In other words, more data actually hurts the estimator. This behavior is implicit in a recent line of theoretical works analyzing "double descent" phenomena in linear models. In this note, we isolate and understand this behavior in an extremely simple setting: linear regression with isotropic Gaussian covariates. In particular, this occurs due to an unconventional type of bias-variance tradeoff in the overparameterized regime: the bias decreases with more samples, but variance increases.

# 1 Introduction

Common statistical intuition suggests that more data should never harm the performance of an estimator. It was recently highlighted in [Nakkiran et al., 2019] that this may not hold for overparameterized models: there are settings in modern deep learning where training on more data actually hurts. In this note, we analyze a simple setting to understand the mechanisms behind this behavior.

[Figure 1: two panels plotting test risk against the number of training samples (axes: Test MSE vs. Num. Samples); the left panel shows the empirical Test MSE, the right panel the theoretical Test MSE with its Bias and Variance components.]

(a) Test MSE for d = 1000, σ = 0.1. (b) Test MSE in theory for d = 1000, σ = 0.1.

Figure 1: Test MSE vs. Num. Train Samples for the min-norm ridgeless regression estimator in d = 1000 dimensions. The distribution is a linear model with noise: covariates x ∼ N(0, I_d) and response y = ⟨x, β⟩ + N(0, σ²), for d = 1000, σ = 0.1, and ||β||_2 = 1. The estimator is β̂ = X†y. Left: the solid line shows the mean over 50 trials, and individual points show a single trial. Right: theoretical predictions for the bias, variance, and risk from Claims 1 and 2.

We focus on well-specified linear regression with Gaussian covariates, and we analyze the test risk of the minimum-norm ridgeless regression estimator, or equivalently, the estimator found by gradient descent on the least squares objective. We show that as we increase the number of samples, performance is non-monotonic: the test risk first decreases, and then increases, before decreasing again. Such a "double-descent" behavior has been observed in the behavior of test risk as a function of the model size in a variety of machine learning settings [Opper, 1995, Opper, 2001, Advani and Saxe, 2017, Belkin et al., 2018, Spigler et al., 2018, Geiger et al., 2019, Nakkiran et al., 2019]. Many of these works are motivated by understanding the test risk as a function of model size, for a fixed number of samples. In this work, we take a complementary view and understand the test risk as a function of sample size, for a fixed model. We hope that understanding such simple settings can eventually lead to understanding the general phenomenon, and lead us to design learning algorithms which make the best use of data (and in particular, are monotonic in samples).
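The experiment behind Figure 1 is simple enough to reproduce in a few lines. The following minimal numpy sketch follows the caption's d = 1000 and σ = 0.1; the choice of β, the trial count, and the particular sample sizes are arbitrary illustration choices rather than the settings used for the figure.

```python
import numpy as np

def min_norm_test_mse(n, d=1000, sigma=0.1, trials=10, seed=0):
    """Monte Carlo estimate of the test MSE of the estimator beta_hat = pinv(X) @ y."""
    rng = np.random.default_rng(seed)
    beta = np.zeros(d)
    beta[0] = 1.0                                  # any fixed direction with ||beta|| = 1
    mses = []
    for _ in range(trials):
        X = rng.standard_normal((n, d))            # isotropic Gaussian covariates
        y = X @ beta + sigma * rng.standard_normal(n)
        beta_hat = np.linalg.pinv(X) @ y           # min-norm ridgeless regression estimator
        # for isotropic covariates, test MSE = ||beta_hat - beta||^2 + sigma^2
        mses.append(np.sum((beta_hat - beta) ** 2) + sigma ** 2)
    return float(np.mean(mses))

for n in (100, 500, 900, 1000, 1100, 2000):
    print(n, round(min_norm_test_mse(n), 3))       # the risk dips, spikes near n = d, then decreases
```

Running this shows the same qualitative shape as Figure 1: the test MSE first decreases with n, blows up as n approaches d = 1000, and then decays again in the underparameterized regime.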
We note that similar analyses appear in recent works, which we discuss in Section 1.1– our focus is to highlight the sample non-monotonicity implicit in these works, and give intuitions for the mechanisms behind it. We specifically refer the reader to [Hastie et al., 2019, Mei and Montanari, 2019] for analysis in a setting most similar to ours. Organization. We first define the linear regression setting in Section 2. Then in Section 3 we state the form of the estimator found by gradient descent, and give intuitions for why this estimator has a peak in test risk when the number of samples is equal to the ambient dimension. In Section 3.1, we decompose the expected excess risk into bias and variance contributions, and we state approximate expressions for the bias, variance, and excess risk as a function of samples. We show that these approximate theoretical predictions closely agree with practice, as in Figure 1. The peak in test risk turns out to be related to the conditioning of the data matrix, and in Section 3.2 we give intuitions for why this matrix is poorly conditioned in the “critical regime”, but well conditioned outside of it. We also analyze the marginal effect of adding a single sample to the test risk, in Section 3.3. We conclude with discussion and open questions in Section 4. # 1.1 Related Works This work was inspired by the long line of work studying “double descent” phenomena in deep and shallow models. The general principle is that as the model complexity increases, the test risk of trained models first decreases and then increases (the standard U-shape), and then decreases again. The peak in test risk occurs in the “critical regime” when the models are just barely able to fit the training set. The second descent occurs in the “overparameterized regime”, when the model capacity is large enough to contain several interpolants on the training data. This phenomenon appears to be fairly universal among natural learning algorithms, and is observed in simple settings such as linear regression, random features regres- sion, classification with random forests, as well as modern neural networks. Double descent of test risk with model size was introduced in generality by [Belkin et al., 2018], building on similar behavior observed as early as [Opper, 1995, Opper, 2001] and more recently by [Advani and Saxe, 2017, Neal et al., 2018, Spigler et al., 2018, Geiger et al., 2019]. A generalized double descent phenomenon was demonstrated on modern deep networks by [Nakkiran et al., 2019], which also highlighted “sample-wise nonmonotonicity” as a consequence of double descent – showing that more data can hurt for deep neural networks. A number of recent works theoretically analyze the double descent behavior in simplified settings, often for linear models [Belkin et al., 2019, Hastie et al., 2019, Bartlett et al., 2019, Muthukumar et al., 2019, Bibas et al., 2019, Mitra, 2019, Mei and Montanari, 2019, Liang and Rakhlin, 2018, Liang et al., 2019, Xu and Hsu, 2019, Dereziński et al., 2019, Lampinen and Ganguli, 2018, Deng et al., 2019]. At a high level, 2 these works analyze the test risk of estimators in overparameterized linear regression with different assump- tions on the covariates. We specifically refer the reader to [Hastie et al., 2019, Mei and Montanari, 2019] for rigorous analysis in a setting most similar to ours. 
In particular, [Hastie et al., 2019] considers the asymptotic risk of the minimum norm ridgeless regression estimator in the limit where dimension d and number of samples n are scaled as d → ∞, n = γd. We instead focus on the sample-wise perspective: a fixed large d, but varying n. In terms of technical content, the analysis technique is not novel to our work, and similar calculations appear in some of the prior works above. Our main contribution is highlighting the sample non-monotonic behavior in a simple setting, and elaborating on the mechanisms responsible. While many of the above theoretical results are qualitatively similar, we highlight one interesting distinction: our setting is well-specified, and the bias of the estimator is monotone nonincreasing in number of samples (see Equation 3, and also [Hastie et al., 2019, Section 3]). In contrast, for misspecified problems (e.g. when the ground-truth is nonlinear, but we learn a linear model), the bias can actually increase with number of samples in addition to the variance increasing (see [Mei and Montanari, 2019]). # 2 Problem Setup Consider the following learning problem: The ground-truth distribution D is (x, y) ∈ Rd × R with covariates x ∼ N (0, Id) and response y = hx, βi + N (0, σ2) for some unknown, arbitrary β such that ||β||2 ≤ 1. That is, the ground-truth is an isotropic Gaussian with observation noise. We are given n samples (xi, yi) from the distribution, and we want to learn a linear model f ˆβ(x) = hx, ˆβi for estimating y given x. That is, we want to find ˆβ with small test mean squared error R( ˆβ) := E [(hx, ˆβi − y)2] (x,y)∼D = || ˆβ − β||2 + σ2 (for isotropic x ∼ N (0, Id)) Suppose we do this by performing ridgeless linear regression. Specifically, we run gradient descent initialized at 0 on the following objective (the empirical risk). min ˆβ ||X ˆβ − y||2 (1) where X ∈ Rn×d is the data-matrix of samples xi, and y ∈ Rn are the observations. The solution found by gradient descent at convergence is ˆβ = X †y, where † denotes the Moore–Penrose pseudoinverse1. Figure 1a plots the expected test MSE of this estimator EX,y[R( ˆβ))] as we vary the number of train samples n. Note that it is non-monotonic, with a peak in test MSE at n = d. There are two surprising aspects of the test risk in Figure 1a, in the overparameterized regime (n < d): 1. The first descent: where test risk initially decreases even when we have less samples n than dimen- sions d. This occurs because the bias decreases. 2. The first ascent: where test risk increases, and peaks when n = d. This is because the variance increases, and diverges when n = d. When n > d, this is the classical underparameterized regime, and test risk is monotone decreasing with number of samples. Thus overparameterized linear regression exhibits a bias-variance tradeoff : bias decreases with more samples, but variance can increase. Below, we elaborate on the mechanisms and provide intuition for this non- monotonic behavior. 1To see this, notice that the iterates of gradient descent lie in the row-space of X. 3 # 3 Analysis The solution found by gradient descent, ˆβ = X †y, has different forms depending on the ratio n/d. When n ≥ d, we are in the “underparameterized” regime and there is a unique minimizer of the objective in Equation 1. When n < d, we are “overparameterized” and there are many minimizers of Equation 1. In fact, since X is full rank with probability 1, there are many minimizers which interpolate, i.e. X ˆβ = y. 
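Before separating the two regimes, the overparameterized case can be illustrated with a short numpy sketch of ours (the sizes, noise level, step size, and iteration count below are arbitrary): the pseudoinverse solution interpolates the training data, and gradient descent on the objective of Equation 1, started at zero, converges to the same point.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 50, 200                                    # overparameterized: n < d
X = rng.standard_normal((n, d))
beta_true = rng.standard_normal(d)
y = X @ beta_true + 0.1 * rng.standard_normal(n)

beta_pinv = np.linalg.pinv(X) @ y                 # closed-form estimator X^+ y

beta_gd = np.zeros(d)                             # gradient descent on ||X b - y||^2, initialized at 0
smax = np.linalg.norm(X, ord=2)                   # largest singular value of X
step = 1.0 / (4 * smax ** 2)                      # conservative step size for this quadratic
for _ in range(20000):
    beta_gd -= step * 2 * X.T @ (X @ beta_gd - y)

print(np.allclose(X @ beta_pinv, y))              # True: the solution interpolates the data
print(np.linalg.norm(beta_gd - beta_pinv))        # ~0: GD from zero converges to the pinv solution
```

In this overparameterized setting both routes land on the same interpolating solution, which is the minimum-norm one.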
In this regime, gradient descent finds the minimum with smallest ℓ2 norm ||β̂||_2. That is, the solution can be written as

$$\hat{\beta} = X^{\dagger} y = \begin{cases} \operatorname{argmin}_{\beta : X\beta = y} \|\beta\|_2 & \text{when } n \le d \quad (\text{``Overparameterized''}) \\ \operatorname{argmin}_{\beta} \|X\beta - y\|_2 & \text{when } n > d \quad (\text{``Underparameterized''}) \end{cases}$$

The overparameterized form yields insight into why the test MSE peaks at n = d. Recall that the observations are noisy, i.e. y = Xβ + η where η ∼ N(0, σ²I_n). When n < d, there are many interpolating estimators {β : Xβ = y}, and in particular there exist such β with small norm. In contrast, when n = d, there is exactly one interpolating estimator (Xβ = y), but this estimator must have high norm in order to fit the noise η. More precisely, consider

$$\hat{\beta} = X^{\dagger} y = X^{\dagger}(X\beta + \eta) = \underbrace{X^{\dagger} X \beta}_{\text{signal}} + \underbrace{X^{\dagger}\eta}_{\text{noise}}$$

The signal term X†Xβ is simply the orthogonal projection of β onto the rows of X. When we are "critically parameterized" and n ≈ d, the data matrix X is very poorly conditioned, and hence the noise term X†η has high norm, overwhelming the signal. This argument is made precise in Section 3.1, and in Section 3.2 we give intuition for why X becomes poorly conditioned when n ≈ d. The main point is that when n = d, forcing the estimator β̂ to interpolate the noise will force it to have very high norm, far from the ground-truth β. (See also Corollary 1 of [Hastie et al., 2019] for a quantification of this point).

# 3.1 Excess Risk and Bias-Variance Tradeoffs

For ground-truth parameter β, the excess risk² of an estimator β̂ is:

$$R(\hat{\beta}) := \mathbb{E}_{(x,y)\sim\mathcal{D}}\big[(\langle x, \hat{\beta}\rangle - y)^2\big] - \mathbb{E}_{(x,y)\sim\mathcal{D}}\big[(\langle x, \beta\rangle - y)^2\big] = \mathbb{E}_{x\sim N(0,I),\,\eta\sim N(0,\sigma^2)}\big[(\langle x, \hat{\beta}\rangle - \langle x, \beta\rangle + \eta)^2\big] - \sigma^2 = \|\hat{\beta} - \beta\|^2$$

For an estimator β̂_{X,y} that is derived from samples (X, y) ∼ D^n, we consider the expected excess risk of β̂ = β̂_{X,y} in expectation over samples (X, y):

$$\mathbb{E}_{X,y}\big[R(\hat{\beta}_{X,y})\big] = \mathbb{E}_{X,y}\big[\|\hat{\beta} - \beta\|^2\big] = \underbrace{\|\beta - \mathbb{E}[\hat{\beta}]\|^2}_{\text{Bias } B_n} + \underbrace{\mathbb{E}\big[\|\hat{\beta} - \mathbb{E}[\hat{\beta}]\|^2\big]}_{\text{Variance } V_n} \qquad (2)$$

where B_n, V_n are the bias and variance of the estimator on n samples.

²For clarity, we consider the excess risk, which omits the unavoidable additive σ² error in the true risk.

For the specific estimator β̂ = X†y in the regime n ≤ d, the bias and variance can be written as (see Appendix A.1):

$$B_n = \big\| \mathbb{E}_X[\mathrm{Proj}_{X^{\perp}}(\beta)] \big\|^2 \qquad (3)$$

$$V_n = \underbrace{\mathbb{E}_X\big[\|\mathrm{Proj}_X(\beta) - \mathbb{E}_X[\mathrm{Proj}_X(\beta)]\|^2\big]}_{(A)} + \underbrace{\sigma^2\,\mathbb{E}_X\big[\mathrm{Tr}\big((XX^T)^{-1}\big)\big]}_{(B)} \qquad (4)$$

where Proj_X is the orthogonal projector onto the rowspace of the data X ∈ R^{n×d}, and Proj_{X⊥} is the projector onto the orthogonal complement of the rowspace. From Equation 3, the bias is non-increasing with samples (B_{n+1} ≤ B_n), since an additional sample can only grow the rowspace: $X_{n+1}^{\perp} \subseteq X_n^{\perp}$. The variance in Equation 4 has two terms: the first term (A) is due to the randomness of X, and is bounded. But the second term (B) is due to the randomness in the noise of y, and diverges when n ≈ d since X becomes poorly conditioned. This trace term is responsible for the peak in test MSE at n = d.

We can also approximately compute the bias, variance, and excess risk.

Claim 1 (Overparameterized Risk). Let γ := n/d < 1 be the underparameterization ratio. The bias and variance are:

$$B_n = (1-\gamma)^2 \|\beta\|^2 \qquad (5)$$

$$V_n \approx \gamma(1-\gamma)\|\beta\|^2 + \sigma^2 \frac{\gamma}{1-\gamma} \qquad (6)$$

And thus the expected excess risk for γ < 1 is:

$$\mathbb{E}[R(\hat{\beta})] \approx (1-\gamma)\|\beta\|^2 + \sigma^2 \frac{\gamma}{1-\gamma} \qquad (7)$$

$$= \Big(1 - \frac{n}{d}\Big)\|\beta\|^2 + \sigma^2 \frac{n}{d-n} \qquad (8)$$

These approximations are not exact because they hold asymptotically in the limit of large d (when scaling n = γd), but may deviate for finite samples.
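Claim 1 is easy to sanity-check numerically. The sketch below is our own illustration; d, σ, the trial count, and the sample sizes are arbitrary. It compares a Monte Carlo estimate of the expected excess risk E||β̂ − β||² against the approximation of Equation (8).

```python
import numpy as np

d, sigma, trials = 400, 0.1, 200
beta = np.zeros(d)
beta[0] = 1.0                                        # ||beta|| = 1
rng = np.random.default_rng(2)

for n in (40, 120, 200, 280, 360):                   # overparameterized: n < d
    excess = []
    for _ in range(trials):
        X = rng.standard_normal((n, d))
        y = X @ beta + sigma * rng.standard_normal(n)
        beta_hat = np.linalg.pinv(X) @ y
        excess.append(np.sum((beta_hat - beta) ** 2))
    approx = (1 - n / d) * 1.0 + sigma ** 2 * n / (d - n)   # Claim 1, Equation (8)
    print(n, round(float(np.mean(excess)), 4), round(approx, 4))
```

The two columns track each other closely, with the σ²·n/(d − n) term driving the growth of the risk as n approaches d.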
In particular, the bias B_n and term (A) of the variance can be computed exactly for finite samples: Proj_X is simply a projector onto a uniformly random n-dimensional subspace, so E[Proj_X(β)] = γβ, and similarly E[||Proj_X(β)||²] = γ||β||². The trace term (B) is nontrivial to understand for finite samples, but converges³ to γ/(1 − γ) in the limit of large n, d (e.g. Lemma 3 of [Hastie et al., 2019]). In Section 3.3, we give intuitions for why the trace term converges to this.

For completeness, the bias, variance, and excess risk in the underparameterized regime are given in [Hastie et al., 2019, Theorem 1] as:

Claim 2 (Underparameterized Risk, [Hastie et al., 2019]). Let γ := n/d > 1 be the underparameterization ratio. The bias and variance are:

$$B_n = 0, \qquad V_n \approx \frac{\sigma^2}{\gamma - 1}$$

Figure 1 shows that Claims 1 and 2 agree with the excess risk experimentally even for finite d = 1000.

³For large d, the spectrum of XX^T is understood by the Marchenko–Pastur law [Marčenko and Pastur, 1967]. Lemma 3 of [Hastie et al., 2019] uses this to show that E[Tr((XX^T)^{-1})] → γ/(1 − γ).

# 3.2 Conditioning of the Data Matrix

Here we give intuitions for why the data matrix X ∈ R^{n×d} is well conditioned for n < d, but has small singular values for n = d.

# 3.2.1 Near Criticality

First, let us consider the effect of adding a single sample when n = (d − 1). For simplicity, assume the first (d − 1) samples x_i are just the standard basis vectors, scaled appropriately. That is, assume the data matrix X ∈ R^{(d−1)×d} is

$$X = \begin{bmatrix} \sqrt{d}\, I_{d-1} & 0 \end{bmatrix}.$$

This has all non-zero singular values equal to √d. Then, consider adding a new isotropic Gaussian sample x_{n+1} ∼ N(0, I_d). Split this into coordinates as x_{n+1} = (g_1, g_2) ∈ R^{d−1} × R. The new data matrix is

$$X_{n+1} = \begin{bmatrix} \sqrt{d}\, I_{d-1} & 0 \\ g_1^T & g_2 \end{bmatrix}$$

We claim that X_{n+1} has small singular values. Indeed, consider left-multiplication by $v^T := \begin{bmatrix} \sqrt{d}\, g_1^T & -d \end{bmatrix}$:

$$v^T X_{n+1} = \begin{bmatrix} \sqrt{d}\, g_1^T & -d \end{bmatrix} \begin{bmatrix} \sqrt{d}\, I_{d-1} & 0 \\ g_1^T & g_2 \end{bmatrix} = \begin{bmatrix} 0 & -d\, g_2 \end{bmatrix}$$

Thus, ||v^T X_{n+1}||² ≈ d², while ||v||² ≈ 2d². Since X_{n+1} is full-rank, it must have a singular value less than roughly 1/√2. That is, adding a new sample has shrunk the minimum non-zero singular value of X from √d to less than a constant.

The intuition here is: although the new sample x_{n+1} adds rank to the existing samples, it does so in a very fragile way. Most of the ℓ2 mass of x_{n+1} is contained in the span of existing samples, and x_{n+1} only contains a small component outside of this subspace. This causes X_{n+1} to have small singular values, which in turn causes the ridgeless regression estimator (which applies X†) to be sensitive to noise.

A more careful analysis shows that the singular values are actually even smaller than the above simplification suggests, since in the real setting the matrix X was already poorly conditioned even before the new sample x_{n+1}. In Section 3.3 we calculate the exact effect of adding a single sample to the excess risk.

# 3.2.2 Far from Criticality

When n < d, the data matrix X does not have singular values close to 0. One way to see this is to notice that since our data model treats features and samples symmetrically, X is well conditioned in the regime n < d for the same reason that standard linear regression works in the classical underparameterized regime n > d (by "transposing" the setting). More precisely, since X is full rank, its smallest non-zero singular value can be written as

$$\sigma_{\min}(X) = \min_{v \in \mathbb{R}^n : \|v\|_2 = 1} \|v^T X\|_2$$

Since X has entries i.i.d. N(0, 1), for every fixed vector v we have E_X[||v^T X||²] = d||v||² = d.
Moreover, for d = Ω(n) uniform convergence holds, and ||v^T X||² concentrates around its expectation for all vectors v in the ℓ2 ball. Thus:

$$\sigma_{\min}(X)^2 = \min_{v \in \mathbb{R}^n : \|v\|_2 = 1} \|v^T X\|_2^2 \approx \min_{v} \mathbb{E}_X\big[\|v^T X\|_2^2\big] = d$$

# 3.3 Effect of Adding a Single Sample

Here we show how the trace term of the variance in Equation 4 changes with increasing samples. Specifically, the following claim shows how Tr((XX^T)^{-1}) grows when we add a new sample to X.

Claim 3. Let X ∈ R^{n×d} be the data matrix after n samples, and let x ∈ R^d be the (n+1)th sample. The new data matrix is

$$X_{n+1} = \begin{bmatrix} X \\ x^T \end{bmatrix}, \qquad \text{and} \qquad \mathrm{Tr}\big((X_{n+1}X_{n+1}^T)^{-1}\big) = \mathrm{Tr}\big[(XX^T)^{-1}\big] + \frac{1 + \|(X^T)^{\dagger} x\|^2}{\|\mathrm{Proj}_{X^{\perp}}(x)\|^2}$$

Proof. By computation in Appendix A.2.

If we heuristically assume the denominator concentrates around its expectation, ||Proj_{X⊥}(x)||² ≈ d − n, then we can use Claim 3 to estimate the expected effect of a single sample:

$$\mathbb{E}_x\big[\mathrm{Tr}\big((X_{n+1}X_{n+1}^T)^{-1}\big)\big] = \mathrm{Tr}\big[(XX^T)^{-1}\big] + \frac{1 + \mathbb{E}_x\|(XX^T)^{-1}Xx\|^2}{d-n} \qquad (9)$$

$$= \mathrm{Tr}\big[(XX^T)^{-1}\big]\Big(1 + \frac{1}{d-n}\Big) + \frac{1}{d-n} \qquad (10)$$

We can further estimate the growth by taking a continuous limit for large d. Let F(γ) := E[Tr((X_n X_n^T)^{-1})]. Then for γ := n/d, Equation 10 yields the differential equation

$$\frac{dF(\gamma)}{d\gamma} = (1-\gamma)^{-1} F + (1-\gamma)^{-1}$$

which is solved by F(γ) = γ/(1 − γ). This heuristic derivation that E[Tr((XX^T)^{-1})] → γ/(1 − γ) is consistent with the rigorous asymptotics given in [Hastie et al., 2019, Lemma 3] and used in Claim 1.

# 4 Discussion

We hope that understanding such simple settings can eventually lead to understanding the general behavior of overparameterized models in machine learning. We consider it extremely unsatisfying that the most popular technique in modern machine learning (training an overparameterized neural network with SGD) can be nonmonotonic in samples [Nakkiran et al., 2019]. We hope that a greater understanding here could help develop learning algorithms which make the best use of data (and in particular, are monotonic in samples). In general, we believe it is interesting to understand when and why learning algorithms are monotonic – especially when we don't explicitly enforce them to be.

# Acknowledgements

We especially thank Jacob Steinhardt and Aditi Raghunathan for discussions and suggestions that motivated this work. We thank Jarosław Błasiok, Jonathan Shi, and Boaz Barak for useful discussions throughout this work, and we thank Gal Kaplun and Benjamin L. Edelman for feedback on an early draft. This work was supported in part by NSF awards CCF 1565264, CNS 1618026, and CCF 1715187, a Simons Investigator Fellowship, and a Simons Investigator Award.

# References

[Advani and Saxe, 2017] Advani, M. S. and Saxe, A. M. (2017). High-dimensional dynamics of generalization error in neural networks. arXiv preprint arXiv:1710.03667.

[Bartlett et al., 2019] Bartlett, P. L., Long, P. M., Lugosi, G., and Tsigler, A. (2019). Benign overfitting in linear regression. arXiv preprint arXiv:1906.11300.

[Belkin et al., 2018] Belkin, M., Hsu, D., Ma, S., and Mandal, S. (2018). Reconciling modern machine learning and the bias-variance trade-off. arXiv preprint arXiv:1812.11118.

[Belkin et al., 2019] Belkin, M., Hsu, D., and Xu, J. (2019). Two models of double descent for weak features. arXiv preprint arXiv:1903.07571.

[Bibas et al., 2019] Bibas, K., Fogel, Y., and Feder, M. (2019). A new look at an old problem: A universal learning approach to linear regression. arXiv preprint arXiv:1905.04708.
[Deng et al., 2019] Deng, Z., Kammoun, A., and Thrampoulidis, C. (2019). A model of double descent for high-dimensional binary linear classification. arXiv preprint arXiv:1911.05822. [Dereziński et al., 2019] Dereziński, M., Liang, F., and Mahoney, M. W. (2019). Exact expressions for double descent and implicit regularization via surrogate random design. [Geiger et al., 2019] Geiger, M., Spigler, S., d’Ascoli, S., Sagun, L., Baity-Jesi, M., Biroli, G., and Wyart, M. (2019). Jamming transition as a paradigm to understand the loss landscape of deep neural networks. Physical Review E, 100(1):012115. [Hastie et al., 2019] Hastie, T., Montanari, A., Rosset, S., and Tibshirani, R. J. (2019). Surprises in high- dimensional ridgeless least squares interpolation. [Lampinen and Ganguli, 2018] Lampinen, A. K. and Ganguli, S. (2018). An analytic theory of generalization dynamics and transfer learning in deep linear networks. arXiv preprint arXiv:1809.10374. [Liang and Rakhlin, 2018] Liang, T. and Rakhlin, A. (2018). Just interpolate: Kernel" ridgeless" regression can generalize. arXiv preprint arXiv:1808.00387. [Liang et al., 2019] Liang, T., Rakhlin, A., and Zhai, X. (2019). On the risk of minimum-norm interpolants and restricted lower isometry of kernels. arXiv preprint arXiv:1908.10292. [Marčenko and Pastur, 1967] Marčenko, V. A. and Pastur, L. A. (1967). Distribution of eigenvalues for some sets of random matrices. Mathematics of the USSR-Sbornik, 1(4):457. [Mei and Montanari, 2019] Mei, S. and Montanari, A. (2019). The generalization error of random features regression: Precise asymptotics and double descent curve. arXiv preprint arXiv:1908.05355. [Mitra, 2019] Mitra, P. P. (2019). Understanding overfitting peaks in generalization error: Analytical risk curves for l2 and l1 penalized interpolation. ArXiv, abs/1906.03667. [Muthukumar et al., 2019] Muthukumar, V., Vodrahalli, K., and Sahai, A. (2019). Harmless interpolation of noisy data in regression. arXiv preprint arXiv:1903.09139. [Nakkiran et al., 2019] Nakkiran, P., Kaplun, G., Bansal, Y., Yang, T., Barak, B., and Sutskever, I. (2019). Deep double descent: Where bigger models and more data hurt. arXiv preprint arXiv:1912.02292. [Neal et al., 2018] Neal, B., Mittal, S., Baratin, A., Tantia, V., Scicluna, M., Lacoste-Julien, S., and Mitliagkas, I. (2018). A modern take on the bias-variance tradeoff in neural networks. arXiv preprint arXiv:1810.08591. 8 [Opper, 1995] Opper, M. (1995). Statistical mechanics of learning: Generalization. The Handbook of Brain Theory and Neural Networks, 922-925. [Opper, 2001] Opper, M. (2001). Learning to generalize. Frontiers of Life, 3(part 2), pp.763-775. [Spigler et al., 2018] Spigler, S., Geiger, M., d’Ascoli, S., Sagun, L., Biroli, G., and Wyart, M. (2018). A jamming transition from under-to over-parametrization affects loss landscape and generalization. arXiv preprint arXiv:1810.09665. [Xu and Hsu, 2019] Xu, J. and Hsu, D. J. (2019). On the number of variables to use in principal component regression. In Advances in Neural Information Processing Systems, pages 5095–5104. # A Appendix: Computations # A.1 Bias and Variance The computations in this section are standard. Assume the data distribution and problem setting from Section 2. For samples (X, y), the estimator is: ( ˆβ = X †y = X T (XX T )−1y when n ≤ d (X T X)−1X T y when n > d (11) Lemma 1. For n ≤ d, the bias and variance of the estimator ˆβ = X †y is Bu = || E [Projx (III? Vi = Bill Proj (3) ~ [Proj (8)]IP?] +0? g[T+((XX7))] eS eS (A) (B) Proof. 
Bias. Note that β − E[ ˆβ] = β − E X,η [X T (XX T )−1(Xβ + η)] = E X = E X [(I − X T (XX T )−1X)β] [P rojX ⊥(β)] Thus the bias is Bn = ||β − E[ ˆβ]||2 = ||EXn [P rojX ⊥ n (β)]||2 Variance. 9 [|| ˆβ − E[ ˆβ]||2] Vn = E ˆβ = E X,η = E X,η [||X T (XX T )−1(Xβ + η) − E X [||(S − S)β + X T (XX T )−1η||2] [X T (XX T )−1Xβ]||2] (S := X T (XX T )−1X, S := E[S]) = E X = E X [||(S − S)β||2] + E X,η [||X T (XX T )−1η||2] [||(S − S)β||2] + σ2T r((XX T )−1) Notice that S is projection onto the rowspace of X, i.e. S = P rojX . Thus, Vn := E X [||P rojX (β) − E X [P rojX (β)]||2] + σ2T r((XX T )−1) # A.2 Trace Computations Proof of Claim [3 Let X € R"*@ be the data matrix after n samples, and let x € R¢ be the (n+1)th sample. The new data matrix is Xp41 = | ; and XXT Xx Xnyt Xs = [exe x] Now by Schur complements: _)_[XxX? Xa)" (Xnat Xn) ts [er x] ppl ¥T ext T ‘9 = [AXP A) SOX . (|]? — 27 XT(XX7T)-1Xa)-t Thus T r((Xn+1X T n+1)−1) = T r((XX T − = T r((XX T − = T r((XX T − XxxT X T ||x||2 XxxT X T ||x||2 XxxT X T ||x||2 )−1) + (||x||2 − xT X T (XX T )−1Xx)−1 )−1) + (xT (x − P rojX (x)))−1 )−1) + 1 ||P rojX ⊥ (x)||2 By Sherman-Morrison: (XX T − =⇒ T r(XX T − XxxT X T ||x||2 XxxT X T ||x||2 )−1 = (XX T )−1 + (XX T )−1XxxT X T (XX T )−1 ||x||2 − xT X T (XX T )−1Xx )−1 = T r[(XX T )−1] + ||(XX T )−1Xx||2 ||P rojX ⊥ (x)||2 10 Finally, we have T r((Xn+1X T n+1)−1) = T r[(XX T )−1] + 1 + ||(XX T )−1Xx||2 ||P rojX ⊥ (x)||2 or equivalently: T r((Xn+1X T n+1)−1) = T r[(XX T )−1] + 1 + ||(X T )†x||2 ||P rojX ⊥ (x)||2 or equivalently: T r((Xn+1X T n+1)−1) = T r[(XX T )−1] + 1 + ||γ||2 ||P rojX ⊥ (x)||2 where γ := argmin v ||X T v − x||2 11
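The trace identity established in this appendix is also easy to verify numerically. The following small check is our own sketch; the matrix sizes and the random seed are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(3)
n, d = 30, 100
X = rng.standard_normal((n, d))
x = rng.standard_normal(d)
X_new = np.vstack([X, x])                              # data matrix after adding the (n+1)th sample

lhs = np.trace(np.linalg.inv(X_new @ X_new.T))

# (X^T)^+ x and the component of x orthogonal to the rowspace of X
xt_pinv_x = np.linalg.pinv(X.T) @ x
proj_perp = x - X.T @ xt_pinv_x
rhs = np.trace(np.linalg.inv(X @ X.T)) \
      + (1 + np.linalg.norm(xt_pinv_x) ** 2) / np.linalg.norm(proj_perp) ** 2

print(lhs, rhs)                                        # the two values agree up to numerical error
```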
{ "id": "1906.11300" }
1912.07076
Multilingual is not enough: BERT for Finnish
Deep learning-based language models pretrained on large unannotated text corpora have been demonstrated to allow efficient transfer learning for natural language processing, with recent approaches such as the transformer-based BERT model advancing the state of the art across a variety of tasks. While most work on these models has focused on high-resource languages, in particular English, a number of recent efforts have introduced multilingual models that can be fine-tuned to address tasks in a large number of different languages. However, we still lack a thorough understanding of the capabilities of these models, in particular for lower-resourced languages. In this paper, we focus on Finnish and thoroughly evaluate the multilingual BERT model on a range of tasks, comparing it with a new Finnish BERT model trained from scratch. The new language-specific model is shown to systematically and clearly outperform the multilingual. While the multilingual model largely fails to reach the performance of previously proposed methods, the custom Finnish BERT model establishes new state-of-the-art results on all corpora for all reference tasks: part-of-speech tagging, named entity recognition, and dependency parsing. We release the model and all related resources created for this study with open licenses at https://turkunlp.org/finbert .
http://arxiv.org/pdf/1912.07076
Antti Virtanen, Jenna Kanerva, Rami Ilo, Jouni Luoma, Juhani Luotolahti, Tapio Salakoski, Filip Ginter, Sampo Pyysalo
cs.CL
null
null
cs.CL
20191215
20191215
9 1 0 2 c e D 5 1 ] L C . s c [ 1 v 6 7 0 7 0 . 2 1 9 1 : v i X r a # Multilingual is not enough: BERT for Finnish # Antti Virtanen1 Jenna Kanerva Rami Ilo2 Tapio Salakoski Jouni Luoma3 Juhani Luotolahti Sampo Pyysalo # Filip Ginter Turku NLP group, University of Turku [email protected], [email protected], [email protected], [email protected] # Abstract Deep learning-based language models pre- trained on large unannotated text corpora have been demonstrated to allow efficient transfer learning for natural language processing, with recent approaches such as the transformer- based BERT model advancing the state of the art across a variety of tasks. While most work on these models has focused on high- resource languages, in particular English, a number of recent efforts have introduced mul- tilingual models that can be fine-tuned to ad- dress tasks in a large number of different lan- guages. However, we still lack a thorough un- derstanding of the capabilities of these models, in particular for lower-resourced languages. In this paper, we focus on Finnish and thor- oughly evaluate the multilingual BERT model on a range of tasks, comparing it with a new Finnish BERT model trained from scratch. The new language-specific model is shown to systematically and clearly outperform the multilingual. While the multilingual model largely fails to reach the performance of pre- viously proposed methods, the custom Finnish BERT model establishes new state-of-the-art results on all corpora for all reference tasks: part-of-speech tagging, named entity recog- nition, and dependency parsing. We release the model and all related resources created for this study with open licenses at https: //turkunlp.org/finbert # Introduction Transfer learning approaches using deep neural network architectures have recently achieved sub- stantial advances in a range of natural language processing (NLP) tasks ranging from sequence la- beling tasks such as part-of-speech (POS) tagging and named entity recognition (NER) (Peters et al., 2018b) to dependency parsing (Kondratyuk and Straka, 2019) and natural language understanding (NLU) tasks (Devlin et al., 2018). While the great majority of this work has focused primarily on En- glish, a number of studies have also targeted other languages, typically through multilingual models. The BERT model of Devlin et al. (2018) has been particularly influential, establishing state-of- the-art results for English for a range of NLU tasks and NER when it was released. For most lan- guages, the only currently available BERT model is the multilingual model (M-BERT) trained on pooled data from 104 languages. While M-BERT has been shown to have a remarkable ability to generalize across languages (Pires et al., 2019), several studies have also demonstrated that mono- lingual BERT models, where available, can no- tably outperform M-BERT. Such results include the evaluation of the recently released French BERT model (Martin et al., 2019), the preliminary results accompanying the release of a German BERT model, and the evaluation of R¨onnqvist et al. (2019) comparing M-BERT with English and German monolingual models. In this paper, we study the application of language-specific and multilingual BERT models to Finnish NLP. We introduce a new Finnish BERT model trained from scratch and perform a com- prehensive evaluation comparing its performance to M-BERT on established datasets for POS tag- ging, NER, and dependency parsing as well as a range of diagnostic text classification tasks. 
The results show that 1) on most tasks the multilingual model does not represent an advance over previous state of the art, indicating that multilingual models may fail to deliver on the promise of deep transfer learning for lower-resourced languages, and 2) the custom Finnish BERT model systematically out- performs the multilingual as well as all previously proposed methods on all benchmark tasks, show- ing that language-specific deep transfer learning models can provide comparable advances to those reported for much higher-resourced languages. # 2 Related Work The current learning methods have evolved from word embedding techniques, such as word2vec (Mikolov et al., 2013), GLoVe (Pen- nington et al., 2014) and fastText (Joulin et al., 2016), to take into account the textual context of words. Crucially, incorporating the context avoids the obvious limitations stemming from the one-vector-per-unique-word assumption inherent to the previous word embedding methods. The current successful wave of work proposing and ap- plying different contextualized word embeddings was launched with ELMo (Peters et al., 2018b), a context embedding method based on bidirec- tional LSTM networks. Another notable example is the ULMFit model (Howard and Ruder, 2018), which specifically focuses on techniques for do- main adaptation of LSTM-based language models. Following the introduction of the attention-based (as opposed to recurrent) Transformer architecture (Vaswani et al., 2017), BERT was proposed by De- vlin et al. (2018), demonstrating superior perfor- mance on a broad array of tasks. The BERT model has been further refined in a number of follow-up studies (e.g. Liu et al., 2019; Sanh et al., 2019) and, presently, BERT and related models form the de facto standard approach to embedding text seg- ments as well as individual words in context. Unlike the previous generation of models, train- ing BERT is a computationally intensive task, re- quiring substantial resources. As of this writ- ing, Google has released English and Chinese monolingual BERT models and the multilingual M-BERT model covering 104 languages.1 Sub- sequently, monolingual BERT models have been published for German2 and French (Martin et al., 2019). In a separate line of work, a cross-lingual BERT model for 15 languages was published by Lample and Conneau (2019), leveraging also cross-lingual signals. Finally, a number of stud- ies have introduced monolingual models focus- ing on particular subdomains of English, such as BioBERT (Lee et al., 2019) and SciBERT (Belt- agy et al., 2019) for biomedical publications and scientific text. 1https://github.com/google-research/ bert # 2https://deepset.ai/german-bert News Discussion Crawl Total Sents Docs 0.9B 68M 4M 4.5B 351M 83M 11M 8.1B 591M 98M 1 010M 13.5B Tokens Chars 6B 28B 55B 89B Table 1: Pretraining text source statistics. Tokens are counted using BERT basic tokenization. # 3 Pretraining We next introduce the sources of unlabeled data used to pretrain FinBERT and present the data filtering and cleanup, vocabulary generation, and pretraining processes. # 3.1 Pretraining Data To provide a sufficiently large and varied unanno- tated corpus for pretraining, we compiled Finnish texts from three primary sources: news, online discussion, and an internet crawl. All of the unannotated texts were split into sentences, tok- enized, and parsed using the Turku Neural Parser pipeline (Kanerva et al., 2018). Table 1 summa- rizes the initial statistics of the three sources prior to cleanup and filtering. 
News We combine two major sources of Finnish news: the Yle corpus3, an archive of news pub- lished by Finland’s national public broadcasting company in the years 2011-2018, and The STT corpus4 of newswire articles sent to media out- lets by the Finnish News Agency (STT) between 1992 and 2018. The combined resources contain approx. 900 million tokens, with 20% originating from the Yle corpus and 80% from STT. Online discussion The Suomi24 corpus5 (ver- sion 2017H2) contains all posts to the Suomi24 online discussion website from 2001 to 2017. Suomi24 is one of the largest social networking fo- rums in Finland and covers a broad range of topics and levels of style and formality in language. The corpus is also roughly five times the size of the available news resources. Internet crawl Two primary sources were used to create pretraining data from unrestricted crawls. First, we compiled documents from the dedicated 3http://urn.fi/urn:nbn:fi: lb-2017070501 4http://urn.fi/urn:nbn:fi: lb-2019041501 5http://urn.fi/urn:nbn:fi: lb-2019010801 News Discussion Crawl Total Docs 3M Sents 36M 15M 118M 79M 21M 234M 3M Tokens Chars 4B 12B 8B 24B 0.5B 1.7B 1.1B 3.3B Table 2: Pretraining text statistics after cleanup and fil- tering internet crawl of the Finnish internet of Luoto- lahti et al. (2015) run between 2014 and 2016 using the SpiderLing crawler (Suchomel et al., 2012). Second, we selected texts from the Com- mon Crawl project6 by running a a map-reduce language detection job on the plain text material from Common Crawl. These sources were supple- mented with plain text extracted from the Finnish Wikipedia using the mwlib library. Following initial compilation, this text collection was ana- lyzed for using the Onion deduplication tool.7 Du- plicate documents were removed, and remaining documents grouped by their level of duplication. Cleanup and filtering As quality can be more important than quantity for pretraining data (Raf- fel et al., 2019), we applied a series of custom cleaning and filtering steps to the raw textual data. Initial cleaning removed header and tag material In the first filtering from newswire documents. step, machine translated and generated texts were removed using a simple support vector machine (SVM) classifier with lexical features trained on data from the FinCORE corpus (Laippala et al., 2019). The remaining documents were then ag- gressively filtered using language detection and hand-written heuristics, removing documents that e.g. had too high a ratio of digits, uppercase or non-Finnish alphabetic characters, or had low av- erage sentence length. A delexicalized SVM clas- sifier operating on parse-derived features was then trained on news (positives) and heuristically fil- tered documents (negatives) and applied to re- move documents that were morphosyntactically similar to the latter. Finally, all internet crawl- sourced documents featuring 25% or more dupli- cation were removed from the data. The statistics of the final pretraining data produced in this pro- cess are summarized in Table 2. We note that even with this aggressive filtering, this data is roughly 30 times the size of the Finnish Wikipedia in- cluded in M-BERT pretraining data. 
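To make the kind of heuristics described above concrete, the following is a rough sketch of a document-level filter in the spirit of this cleanup stage. The function name and all thresholds are invented for illustration; they are not the values used for FinBERT.

```python
import re

def keep_document(text, min_avg_sentence_len=5.0, max_digit_ratio=0.2,
                  max_upper_ratio=0.3, max_foreign_ratio=0.05):
    """Heuristic document filter: reject documents with too many digits, uppercase or
    non-Finnish alphabetic characters, or with low average sentence length."""
    letters = [c for c in text if c.isalpha()]
    if not letters:
        return False
    digit_ratio = sum(c.isdigit() for c in text) / len(text)
    upper_ratio = sum(c.isupper() for c in letters) / len(letters)
    finnish = set("abcdefghijklmnopqrstuvwxyzåäö")
    foreign_ratio = sum(c.lower() not in finnish for c in letters) / len(letters)
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    avg_sentence_len = sum(len(s.split()) for s in sentences) / max(len(sentences), 1)
    return (digit_ratio <= max_digit_ratio and upper_ratio <= max_upper_ratio and
            foreign_ratio <= max_foreign_ratio and avg_sentence_len >= min_avg_sentence_len)
```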
# 6https://commoncrawl.org 7http://corpus.tools/wiki/Onion Texts Vocabulary En BERT cased En BERT uncased En M-BERT cased M-BERT uncased En FinBERT cased Fi FinBERT uncased Fi Fi M-BERT cased Fi M-BERT uncased Table 3: Vocabulary statistics for tokenizing Wikipedia texts # 3.2 Vocabulary generation To generate dedicated BERT vocabularies for Finnish, a sample of cleaned and filtered sentences were first tokenized using BERT BasicTokenizer, generating both a cased version where punctua- tion is separated, and an uncased version where characters are additionally mapped to lowercase and accents stripped.8 We then used the Sentence- Piece (Kudo and Richardson, 2018) implementa- tion of byte-pair-encoding (BPE) (Sennrich et al., 2016) to generate cased and uncased vocabularies of 50,000 word pieces each. To assess the coverage of the generated cased and uncased vocabularies and compare these to previously introduced vocabularies, we sampled a random 1% of tokens extracted using WikiEx- tractor9 from the English and Finnish Wikipedias and tokenized the texts using various vocabularies to determine the number of word pieces and un- known pieces per basic token. Table 3 shows the results of this evaluation. For English, both BERT and M-BERT generate less than 1.2 WordPieces per token, meaning that the model will represent the great majority of words as a single piece. For Finnish, this ratio is nearly 2 for M-BERT. While some of this difference is explained by the mor- phological complexity of the language, it also re- flects that only a small part of the M-BERT vocab- ulary is dedicated to Finnish: using the language- specific FinBERT vocabularies, this ratio remains notably lower even though the size of these vocab- ularies is only half of the M-BERT vocabularies. 8We note that accent stripping makes two pairs of Finnish vowels ambiguous (a/ and o/), which may be perceived as detrimental to understanding text. This step is nevertheless required for compatibility with BERT implementations. 9https://github.com/attardi/ wikiextractor FinBERT cased FinBERT uncased M-BERT cased M-BERT uncased Suomessa vaihtuu kesn aikana sek pministeri ett valtiovarain ##ministeri . suomessa vaihtuu kesan aikana seka paaministeri etta valtiovarain ##ministeri . Suomessa vai ##htuu kes ##n aikana sek p ## ##minister ##i ett valt ##io ##vara ##in ##minister ##i . suomessa vai ##htuu kesan aikana seka paa ##minister ##i etta valt ##io ##vara ##in ##minister ##i . Table 4: Examples of tokenization with different vocabularies Sentences Tokens Train 12,217 162,827 TDT Dev 1,364 18,311 Test 1,555 21,070 Train 14,981 127,845 FTB Dev 1,875 15,754 PUD Train Dev Test 1,861 — 16,311 — Test — 1,000 — 15,812 Table 5: Statistics for the Turku Dependency Treebank, FinnTreeBank and Parallel UD treebank corpora Table 4 shows examples of tokenization using the FinBERT and M-BERT vocabularies. # 3.3 Pretraining example generation We used BERT tools to create pretraining exam- ples using the same masked language model and next sentence prediction tasks used for the orig- inal BERT. Separate duplication factors were set for news, discussion and crawl texts to create a roughly balanced number of examples from each source. We also used whole-word masking, where all pieces of a word are masked together rather than selecting masked word pieces independently. 
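As a simplified illustration of whole-word masking over WordPiece output (this is a sketch of ours, not the BERT tools implementation, and it omits details such as the per-sequence cap on masked tokens and the 80/10/10 replacement scheme), continuation pieces marked with "##" are grouped with the preceding piece and the mask decision is made per word:

```python
import random

def whole_word_mask(pieces, mask_prob=0.15, mask_token="[MASK]", seed=0):
    """Group WordPiece tokens into words (a piece starting with '##' continues the
    previous word) and mask all pieces of a selected word together."""
    rng = random.Random(seed)
    words, current = [], []
    for i, piece in enumerate(pieces):
        if piece.startswith("##") and current:
            current.append(i)
        else:
            if current:
                words.append(current)
            current = [i]
    if current:
        words.append(current)

    masked = list(pieces)
    for word in words:
        if rng.random() < mask_prob:
            for i in word:
                masked[i] = mask_token
    return masked

# example pieces taken from Table 4
print(whole_word_mask(["Suomessa", "vaihtuu", "valtiovarain", "##ministeri"], mask_prob=0.5))
```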
We otherwise matched the parameters and pro- cess used to create pretraining data for the origi- nal BERT, including generating separate examples with sequence lengths 128 and 512 and setting the maximum number of masked tokens per sequence separately for each (20 and 77, respectively). # 4 Evaluation We next present an evaluation of the M-BERT and FinBERT models on a series of Finnish datasets representing both downstream NLP tasks and di- agnostic evaluation tasks. Unless stated otherwise, all experiments follow the basic setup used in the experiments of Devlin et al. (2018), selecting the learning rate, batch size and the number of epochs11 used for fine-tuning separately for each model and dataset combination using a grid search with evaluation on the develop- ment data. Other model and optimizer parameters were kept at the BERT defaults. Excepting for the parsing experiments, we repeat each experiment 5- 10 times and report result mean and standard de- viation. # 4.1 Part of Speech Tagging # 3.4 Pretraining process We pretrained cased and uncased models config- ured similarly to the base variants of BERT, with 110M parameters for each. The models were trained using 8 Nvidia V100 GPUs across 2 nodes on the Puhti supercomputer of CSC, the Finnish IT Center for Science10. Following the approach of Devlin et al. (2018), each model was trained for 1M steps, where the initial 90% used a maxi- mum sequence length of 128 and the last 10% the full 512. A batch size of 140 per GPU was used for primary training, giving a global batch size of 1120. Due to memory constraints, the batch size was dropped to 20 per GPU for training with se- quence length 512. We used the LAMB optimizer (You et al., 2019) with warmup over the first 1% of steps to a peak learning rate of 1e-4 followed by decay. Pretraining took approximately 12 days to complete per model variant. Part of speech tagging is a standard sequence la- beling task and several Finnish resources are avail- able for the task. Data To assess POS tagging performance, we use the POS annotations of the three Finnish tree- banks included in the Universal Dependencies (UD) collection (Nivre et al., 2016): the Turku De- pendency Treebank (TDT) (Pyysalo et al., 2015), FinnTreeBank (FTB) (Voutilainen et al., 2012) and Parallel UD treebank (PUD) (Zeman et al., 2017). A broad range of methods were applied to tagging these resources as a subtask in the recent CoNLL shared tasks in 2017 and 2018 (Zeman et al., 2018a), and we use the CoNLL 2018 ver- sions (UD version 2.2) of these corpora to assure comparability with their results. The statistics of these resources are shown in Table 5. As the PUD corpus only provides a test set, we train and select parameters on the training and development sets 10https://research.csc.fi/csc-s-servers 11Learning rate {5e-5, 3e-5, 2e-5} and epochs {2, 3, 4}. Batch size 32 was not used due to memory limitations. 
FinBERT cased FinBERT uncased M-BERT cased M-BERT uncased (Che et al., 2018) (Lim et al., 2018) FTB TDT 98.39 (0.03) 98.23 (0.04) 98.28 (0.07) 98.12 (0.03) 95.87 (0.09) 96.97 (0.06) 96.00 (0.07) 96.59 (0.05) 97.30 — 96.70 — 97.60 — 97.12 — 96.20 — 97.65 — PUD 98.08 (0.04) 97.94 (0.03) 97.58 (0.03) 97.48 (0.03) Table 6: Results for POS tagging (standard deviation in parentheses) Sentences Tokens Entities Train 13,498 180,178 17,644 Dev 986 13,564 1,223 Test Wiki-test 3,360 49,752 5,831 3,512 46,363 4,124 Table 7: FiNER named entity recognition corpus statistics of the compatibly annotated TDT corpus for eval- uation on PUD. The CoNLL shared task proceeds from raw text and thus requires sentence splitting and tokenization in order to assign POS tags. To focus on tagging performance while maintaining comparability, we predict tags for the tokens pre- dicted by the Uppsala system (Smith et al., 2018a), distributed as part of the CoNLL’18 shared task system outputs (Zeman et al., 2018b). are modest in absolute terms, the relative reduc- tions in errors are notable: in particular, the Fin- BERT cased error rate on FTB is less than half of the best CoNLL’18 result (Che et al., 2018). We also note that the uncased models are surprisingly competitive with their cased equivalents for a task where capitalization has long been an important feature: for example, FinBERT uncased perfor- mance is within approx. 0.1% points of FinBERT cased for all corpora. Methods We implement the BERT POS tagger straightforwardly by attaching a time-distributed dense output layer over the top layer of BERT and using the first piece of each wordpiece-tokenized input word to represent the word. The implemen- tation and data processing tools are openly avail- able.12 We compare POS tagging results to the best-performing methods for each corpus in the CoNLL 2018 shared task, namely that of Che et al. (2018) for TDT and FTB and Lim et al. (2018) for PUD. We report performance for the UPOS metric as implemented by the official CoNLL 2018 eval- uation script. Results Table 6 summarizes the results for POS tagging. We find that neither M-BERT model im- proves on the previous state of the art for any of the three resources, with results ranging 0.1-0.8% points below the best previously published results. By contrast, both language-specific models out- perform the previous state of the art, with abso- lute improvements for FinBERT cased ranging be- tween 0.4 and 1.7% points. While these improve- ments over the already very high reference results # 4.2 Named Entity Recognition Like POS tagging, named entity recognition is conventionally cast as a sequence labeling task. During the development of FinBERT, only one corpus was available for Finnish NER. Data FiNER, a manually annotated NER corpus for Finnish, was recently introduced by Ruoko- lainen et al. (2019). The corpus annotations cover five types of named entities – person, organiza- tion, location, product and event – as well as dates. The primary corpus texts are drawn from a Finnish technology news publication, and it additionally contains an out-of-domain test set of documents drawn from the Finnish Wikipedia. In addition to conventional CoNLL-style named entity annota- tion, the corpus includes a small number of nested annotations (under 5% of the total). As Ruoko- lainen et al. 
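The tagging architecture described in the Methods paragraph of Section 4.1 is compact enough to sketch. The following PyTorch illustration is ours, not the released bert-pos implementation; the use of the transformers library and the model identifier are assumptions made for the example, while the 17 output classes correspond to the UD UPOS tag set.

```python
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class BertPosTagger(nn.Module):
    """BERT encoder with a time-distributed dense layer over the top hidden states.
    Each word is represented by the hidden state of its first WordPiece."""
    def __init__(self, model_name, num_tags):
        super().__init__()
        self.bert = AutoModel.from_pretrained(model_name)
        self.classifier = nn.Linear(self.bert.config.hidden_size, num_tags)

    def forward(self, input_ids, attention_mask, first_piece_index):
        # first_piece_index[b, w] holds the position of the first piece of word w in example b
        hidden = self.bert(input_ids=input_ids, attention_mask=attention_mask).last_hidden_state
        batch_index = torch.arange(hidden.size(0)).unsqueeze(-1)
        word_repr = hidden[batch_index, first_piece_index]      # (batch, words, hidden)
        return self.classifier(word_repr)                       # (batch, words, num_tags)

# "TurkuNLP/bert-base-finnish-cased-v1" is an assumed example identifier for FinBERT
tokenizer = AutoTokenizer.from_pretrained("TurkuNLP/bert-base-finnish-cased-v1")
model = BertPosTagger("TurkuNLP/bert-base-finnish-cased-v1", num_tags=17)
```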
(2019) report results also for top-level (non-nested) annotations and the recognition of nested entity mentions would complicate evalua- tion, we here consider only the top-level annota- tions of the corpus. Table 7 summarizes the statis- tics of these annotations. # 12https://github.com/spyysalo/bert-pos FinBERT cased FinBERT uncased M-BERT cased M-BERT uncased FiNER-tagger (G¨ung¨or et al., 2018) Rec. Prec. 93.52 (0.10) 91.30 (0.12) 92.67 (0.19) 90.37 (0.35) 91.25 (0.17) 89.35 (0.21) 90.07 (0.22) 88.07 (0.25) 90.41 — 83.51 — 86.82 — 83.59 — 85.62 — 84.59 — F1 92.40 (0.09) 91.50 (0.24) 90.29 (0.14) 89.06 (0.21) Table 8: NER results for in-domain test set (standard deviation in parentheses) FinBERT cased FinBERT uncased M-BERT cased M-BERT uncased FiNER-tagger (G¨ung¨or et al., 2018) Rec. Prec. 82.35 (0.33) 80.61 (0.61) 79.38 (0.68) 80.74 (0.31) 76.71 (0.61) 75.60 (0.49) 75.73 (0.73) 71.93 (1.01) 88.66 — 72.74 — 79.91 — 67.46 — 55.07 — 60.64 — F1 81.47 (0.46) 80.05 (0.42) 76.15 (0.50) 73.78 (0.81) Table 9: NER results for out of domain test set (standard deviation in parentheses) Methods Our NER implementation is based on the approach proposed for CoNLL English NER by Devlin et al. (2018). A dense layer is attached on top of the BERT model to predict IOB tags in- dependently, without a CRF layer. To include doc- ument context for each sentence, we simply con- catenate as many of the following sentences as can fit in the 512 wordpiece sequence. The FiNER data does not identify document boundaries, and therefore not all these sentences are necessarily from the same document. We make the our im- plementation available under an open licence.13 domain (Wikipedia) test set. We find that while M-BERT is able to outperform the best previously published results on the in-domain test set, it fails to reach the performance of FiNER-tagger on the out-of-domain test set. As for POS tagging, the language-specific FinBERT model again outper- forms both M-BERT as well as all previously pro- posed methods, establishing new state-of-the-art results for Finnish named entity recognition. # 4.3 Dependency Parsing We compare NER results to the rule-based FiNER-tagger (Kettunen and L¨ofberg, 2017) de- veloped together with the FiNER corpus and to the neural network-based model of G¨ung¨or et al. (2018) targeted specifically toward morphologi- cally rich languages. The former achieved the highest results on the corpus and the latter was the best-performing machine learning-based method in the experiments of Ruokolainen et al. (2019). Named entity recognition performance is evalu- ated in terms of exact mention-level precision, re- call and F-score as implemented by the standard conlleval script, and F-score is used to com- pare performance. Results The results for named entity recognition are summarized in Table 8 for the in-domain (tech- nology news) test set and Table 9 for the out-of- Dependency parsing involves the prediction of a directed labeled graph over tokens. Finnish de- pendency parsing has a long history and several established resources are available for the task. Data The CoNLL 2018 shared task addressed end-to-end parsing from raw text into dependency structures on 82 different corpora representing 57 languages (Zeman et al., 2018a). We evaluate the pre-trained BERT models on the dependency pars- ing task using the three Finnish UD corpora intro- duced in Section 4.1: the Turku Dependency Tree- bank (TDT), FinnTreeBank (FTB) and the Paral- lel UD treebank (PUD). 
To allow direct compar- ison with CoNLL 2018 results, we use the same versions of the corpora as used in the shared task (UD version 2.2) and evaluate performance using the official script provided by the task organizers. These corpora are the same used in the part-of- speech tagging experiments, and their key statis- tics were summarized above in Table 5. 13https://github.com/jouniluoma/ keras-bert-ner TDT FTB PUD Model FinBERT cased FinBERT uncased M-BERT cased M-BERT uncased (Che et al., 2018) (Kulmizev et al., 2019) — p.seg. 91.93 91.73 86.32 86.74 88.73 — g.seg 93.56 93.42 87.99 88.61 p.seg. 92.16 91.92 85.52 86.03 88.53 — — g.seg. 93.95 93.63 87.46 87.98 87.0* — p.seg 92.54 92.32 89.18 89.52 90.23 — — — g.seg. 93.10 92.86 89.75 89.95 Table 10: Labeled attachment score (LAS) parsing results for for predicted (p.seg) and gold (g.seg) segmentation. *Best performing combination in the TDT treebank (ELMo + transition-based parser). Methods We evaluate the models using the Ud- ify dependency parser recently introduced by Kon- dratyuk and Straka (2019). Udify is a multi-task model that support supporting multi- or monolin- gual fine-tuning of pre-trained BERT models on UD treebanks. Udify implements a multi-task net- work where a separate prediction layer for each task is added on top of the pre-trained BERT en- coder. Additionally, instead of using only the top encoder layer representation in prediction, Udify adds a layers-wise dot-product attention, which calculates a weighted sum of all intermediate rep- resentation of 12 BERT layers for each token. All prediction layers as well as layer-wise attention are trained simultaneously, while also fine-tuning the pre-trained BERT weights. We train separate Udify parsing models using monolingual fine-tuning for TDT and FTB. The TDT models are used to evaluate performance also on PUD, which does not include a training set. We report parser performance in terms of Labeled Attachment Score (LAS). Each parser model is fine-tuned for 160 epochs with BERT weights kept frozen during the first epoch and subsequently up- dated along with other weights. The learning rate scheduler warm-up period is defined to be approx- imately one epoch. Otherwise, parameters are the same as used in Kondratyuk and Straka (2019). As the Udify model does not implement sentence or token segmentation, we use UDPipe (Straka and Strakov´a, 2017) to pre-segment the text when re- porting LAS on predicted segmentation. pendency parser used in the HIT-SCIR system is the biaffine graph-based parser of Dozat et al. (2017) with deep contextualized word embeddings (ELMo) (Peters et al., 2018a) trained monolin- gually on web crawl and Wikipedia data provided by Ginter et al. (2017). The final HIT-SCIR model is an ensemble over three parser models trained with different parameter initializations, where the final prediction is calculated by averaging the soft- maxed output scores. We also compare results to the recent work of Kulmizev et al. (2019), where the merits of two parsing architectures, graph-based (Kiperwasser and Goldberg, 2016) and transition-based (Smith et al., 2018b), are studied with two different deep contextualized embeddings, ELMo and BERT. We include results for their best-performing combina- tion on the Finnish TDT corpus, the transition- based parser with monolingual ELMo embed- dings.14 Results Table 10 shows LAS results for pre- dicted and gold segmentation. 
While Udify initialized with M-BERT fails to outperform our strongest baseline (Che et al., 2018), Udify initialized with FinBERT achieves notably higher performance on all three treebanks, establishing new state-of-the-art parsing results for Finnish with a large margin. Depending on the treebank, Udify with cased FinBERT LAS results are 2.3–3.6% points above the previous state of the art, decreasing errors by 24%–31% relatively.

We compare our results to the best-performing system in the CoNLL 2018 shared task for the LAS metric, HIT-SCIR (Che et al., 2018). In addition to having the highest average score overall, the system also achieved the highest LAS among 26 participants for each of the three Finnish treebanks.

Casing seems to have only a moderate impact in parsing, as the performance of cased and uncased models falls within a 0.1–0.6% point range in each treebank. However, in each case the trend is that with FinBERT the cased version always outperforms the uncased one, while with M-BERT the story is the opposite, the uncased always outperforming the cased one.

14Note that although the UD version reported in Kulmizev et al. (2019) is version 2.3, the results are fully comparable as there were no changes in the Finnish TDT corpus between the version 2.2 used here and version 2.3.

To relate the high LAS of 93.56 achieved with the combination of the Udify parser and our pretrained FinBERT model to human performance, we refer to the original annotation of the TDT corpus (Haverinen et al., 2014), where individual annotators were measured against the double-annotated and resolved final annotations. The comparison is reported in terms of LAS. Here, one must take into account that the original TDT corpus was annotated in the Stanford Dependencies (SD) annotation scheme (De Marneffe and Manning, 2008), slightly modified to be suitable for the Finnish language, while the work reported in this paper uses the UD version of the corpus. Thus, the reported numbers are not directly comparable, but, keeping in mind the similarities of the SD and UD annotation schemes, they give a ballpark estimate of human performance in the task. Haverinen et al. (2014) report the average LAS of the five human annotators who participated in the treebank construction as 91.3, with individual LAS scores ranging from 95.9 to 71.8 (or 88.0 ignoring an annotator who only annotated 2% of the treebank and was still in the training phase). Based on these numbers, the achieved parser LAS of 93.56 seems to be on par with or even above average human level performance and approaching the level of a well-trained and skilled annotator.

# 4.4 Text classification

Finnish lacks the annotated language resources to construct a comprehensive collection of classification tasks such as those available for English (Rajpurkar et al., 2016; Wang et al., 2018; Zellers et al., 2018). To assess model performance at text classification, we create two datasets based on Finnish document collections with topic information, one representing formal language (news) and the other informal (online discussion).

Data Documents in the Yle news corpus (Section 3.1) are annotated using a controlled vocabulary to identify subjects such as sports, politics, and economy. We identified ten such upper-level topics that were largely non-overlapping in the data and sampled documents annotated with exactly one selected topic to create a ten-class classification dataset.
As the Yle corpus is avail- able for download under a license that does not allow redistribution, we release tools to recreate this dataset.15 The Ylilauta corpus16 consists of the text of discussions on the Finnish online dis- cussion forum Ylilauta from 2012 to 2014. Each posted message belongs to exactly one board, with topics such as games, fashion and television. We identified the ten most frequent topics and sampled messages consisting of at least ten tokens to cre- ate a text classification dataset from the Ylilauta data.17 To facilitate analysis and comparison, we down- sample both corpora to create balanced datasets with 10000 training examples as well as 1000 de- velopment and 1000 test examples of each class. To reflect generalization performance to new doc- uments, both resources were split chronologically, drawing the training set from the oldest texts, the test set from the newest, and the development set from texts published between the two. To assess classifier performance across a range of training dataset sizes, we further downsampled the train- ing sets to create versions with 100, 316, 1000, and 3162 examples of each class (102, 102.5, . . .). Finally, we truncated each document to a maxi- mum of 256 basic tokens to minimize any advan- tage the language-specific model might have due to its more compact representation of Finnish. Methods We implement the text classification methods following Devlin et al. (2018), minimiz- ing task-specific architecture and simply attaching a dense output layer to the initial ([CLS]) token of the top layer of BERT. We establish baseline text classification performance using fastText18 (Joulin et al., 2016). We evaluated a range of parameter combinations and different pretrained word vectors for the method using the develop- ment data, selecting character n-gram features of lengths 3–7, training for 25 epochs, and initializa- tion with subword-enriched embeddings induced from Wikipedia texts19 (Bojanowski et al., 2017) for the final experiments. 
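As a rough illustration of the fastText baseline configuration described above (character n-grams of lengths 3–7, 25 training epochs, initialization with pretrained subword-enriched Wikipedia embeddings), a minimal sketch using the fastText Python bindings could look as follows. The file names and the 300-dimensional wiki.fi.vec vector file are assumptions, and the training data must be in fastText's one-document-per-line __label__ format.

```python
import fasttext

# Training data: one document per line, prefixed with its topic label,
# e.g. "__label__urheilu <document text>" (file names are hypothetical).
model = fasttext.train_supervised(
    input="yle-train.txt",
    epoch=25,                         # 25 training epochs, chosen on development data
    minn=3, maxn=7,                   # character n-gram features of lengths 3-7
    dim=300,                          # must match the pretrained vector dimension
    pretrainedVectors="wiki.fi.vec",  # subword-enriched Finnish Wikipedia vectors
)

# For single-label data, precision@1 on the test file equals classification accuracy.
n_examples, precision_at_1, recall_at_1 = model.test("yle-test.txt")
print(f"test accuracy: {precision_at_1:.4f}")
```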
15 https://github.com/spyysalo/yle-corpus
16 http://urn.fi/urn:nbn:fi:lb-2015031802
17 https://github.com/spyysalo/ylilauta-corpus
18 https://fasttext.cc/
19 https://fasttext.cc/docs/en/pretrained-vectors.html

Model             1K              ∼3K             10K             ∼32K            100K
FinBERT cased     87.99 (0.35)    89.49 (0.11)    90.57 (0.15)    91.42 (0.14)    91.74 (0.13)
FinBERT uncased   87.86 (0.37)    89.52 (0.13)    90.58 (0.09)    91.23 (0.08)    91.76 (0.10)
M-BERT cased      83.22 (0.72)    86.56 (0.18)    88.44 (0.14)    89.34 (0.22)    90.28 (0.18)
M-BERT uncased    84.92 (0.37)    87.14 (0.26)    88.69 (0.15)    89.63 (0.11)    90.49 (0.19)
FastText          78.50 (0.00)    81.71 (0.03)    85.90 (0.00)    88.36 (0.05)    89.40 (0.00)

Table 11: Yle news 10-class text classification accuracy for varying training set sizes (percentages, standard deviation in parentheses)

Model             1K              ∼3K             10K             ∼32K            100K
FinBERT cased     75.00 (0.34)    77.48 (0.17)    79.18 (0.20)    80.89 (0.16)    82.51 (0.12)
FinBERT uncased   75.71 (0.24)    77.88 (0.24)    79.79 (0.20)    81.25 (0.12)    82.80 (0.14)
M-BERT cased      45.28 (12.65)   59.09 (2.72)    67.92 (0.43)    72.84 (0.15)    76.51 (0.16)
M-BERT uncased    51.20 (3.76)    63.13 (0.42)    69.01 (0.35)    73.89 (0.29)    77.38 (0.19)
FastText          47.74 (0.05)    56.66 (0.05)    64.27 (0.05)    70.86 (0.05)    74.71 (0.03)

Table 12: Ylilauta online discussion 10-class text classification accuracy for varying training set sizes (percentages, standard deviation in parentheses)

Figure 1: Text classification accuracy with different training data sizes for Yle news (left) and Ylilauta online discussion (right). (Note log x scales and different y ranges.)

Results The text classification results for various training set sizes are shown in Table 11 for Yle news and in Table 12 for Ylilauta online discussion and illustrated in Figure 1. We first note that performance is notably higher for the news corpus, with error rates for a given method and data set size more than doubling when moving from news to the discussion corpus. As both datasets represent 10-class classification tasks with balanced classes, this suggests that the latter task is inherently more difficult, perhaps in part due to the incidence of spam and off-topic messages on online discussion boards.

The cased and uncased variants of FinBERT perform very similarly for both datasets and all training set sizes, while for M-BERT the uncased model consistently outperforms the cased – as was also found for parsing – with a marked advantage for small dataset sizes.

Comparing M-BERT and FinBERT, we find that the language-specific models outperform the multilingual models across the full range of training data sizes for both datasets. For news, the four BERT variants have broadly similar learning curves, with the absolute advantage for FinBERT models ranging from 3% points for 1K examples to just over 1% point for 100K examples, and relative reductions in error from 20% to 13%. For online discussion, the differences are much more pronounced, with M-BERT models performing closer to the FastText baseline than to FinBERT. Here the language-specific BERT outperforms the multilingual by over 20% points for the smallest training data and maintains a 5% point absolute advantage even with 100,000 training examples, halving the error rate of the multilingual model for the smallest training set and maintaining an over 20% relative reduction for the largest.
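As a small worked example of how the relative error reductions quoted above follow from the accuracies in Tables 11 and 12 (shown here for the Ylilauta 1K and 100K columns):

```python
def relative_error_reduction(acc_ref: float, acc_new: float) -> float:
    """Relative reduction in error rate when moving from acc_ref to acc_new (accuracies in %)."""
    err_ref, err_new = 100.0 - acc_ref, 100.0 - acc_new
    return (err_ref - err_new) / err_ref

# Ylilauta, 1K examples: M-BERT uncased 51.20 vs. FinBERT uncased 75.71
print(relative_error_reduction(51.20, 75.71))  # ~0.50: the error rate is roughly halved

# Ylilauta, 100K examples: M-BERT uncased 77.38 vs. FinBERT uncased 82.80
print(relative_error_reduction(77.38, 82.80))  # ~0.24: an over 20% relative reduction
```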
These contrasting results for the news and discussion corpora may be explained in part by domain mismatch: while the news texts are written in formal Finnish resembling the Wikipedia texts included as pretraining data for all BERT models as well as the FastText word vectors, only FinBERT pretraining material included informal Finnish from online discussions.20 This suggests that in pretraining BERT models care should be taken to assure that not only the targeted language but also the targeted text domains are sufficiently represented in the data.

20The online discussions included in FinBERT pretraining data were drawn from the Suomi24 corpus and thus did not include any of the Ylilauta messages used in this evaluation.

# 4.5 Probing Tasks

Finally, we explored the ability of the models to capture linguistic properties using the probing tasks proposed by Conneau et al. (2018). We use the implementation and Finnish data introduced for these tasks by Ravishankar et al. (2019),21 which omit the TopConst task defined in the original paper. We also left out the Semantic odd-man-out (SOMO) task, as we found the data to have errors making the task impossible to perform correctly. All of the tasks involve freezing the BERT layers and training a dense layer on top of them to function as a diagnostic classifier. The only information passed from BERT to the classifier is the state represented by the [CLS] token.

21https://github.com/ltgoslo/xprobe

In brief, the tasks can be roughly categorized into 3 different groups: surface, syntactic and semantic information.

Surface tasks In the sentence length (SentLen) task, sentences are classified into 6 classes depending on their length. The word content (WC) task measures the model's ability to determine which of 1000 mid-frequency words occurs in a sentence, where only one of the words is present in any one sentence.

Syntactic tasks The tree depth (TreeDepth) task is used to test how well the model can identify the depth of the syntax tree of a sentence. We used dependency trees to maintain comparability with the work of Ravishankar et al. (2019), whereas the original task used constituency trees. Bigram shift (BiShift) tests the model's ability to recognize when two adjacent words have had their positions swapped.

Semantic tasks In the subject number (SubjNum) task, the number of the subject, i.e. singular or plural, connected to the main verb of a sentence is predicted. Object number (ObjNum) is similar to the previous task but for objects of the main verb. The Coordination inversion (CoordInv) task has the order of two clauses joined by a coordinating conjunction reversed in half the examples. The model then has to predict whether or not a given example was inverted. In the Tense task the classifier has to predict whether a main verb of a sentence is in the present or past tense.

Table 13: Probing results for FinBERT cased/uncased and M-BERT cased/uncased (standard deviation in parentheses); recoverable scores: BiShift 72.10, CoordInv 78.29, ObjNum 78.34, Tense 96.26, SentLen 40.41, SubjNum 83.81, TreeDepth 38.57, WC 11.05.

Results Table 13 presents results comparing the FinBERT models to replicated M-BERT results from Ravishankar et al. (2019). We find that the best performance is achieved by either the cased or uncased language-specific model for all tasks except TreeDepth, where M-BERT reaches the highest performance.
The differences between the results for the language-specific and multilingual models are modest for most tasks with the excep- tion of the BiShift task, where the FinBERT mod- els are shown to be markedly better at identifying sentences with inverted words. While this result supports the conclusion of our other experiments that FinBERT is the superior language model, re- sults for the other tasks offer only weak support at best. We leave for future work the question whether these tasks measure aspects where the language-specific model does not have a clear ad- vantage over the multilingual or if the results re- flect limitations in the implementation or data of the probing tasks. # 5 Discussion We have demonstrated that it is possible to cre- ate a language-specific BERT model for a lower- resourced language, Finnish, that clearly outper- forms the multilingual BERT at a range of tasks and advances the state of the art in many NLP tasks. These findings raise the question whether it would be possible to realize similar advantages for other languages that currently lack dedicated mod- els of this type. It is likely that the feasibility of training high quality deep transfer learning mod- els hinges on the availability of pretraining data. As of this writing, Finnish ranks 24th among the different language editions of Wikipedia by ar- ticle count,22 and 25th in Common Crawl by page count.23 There are thus dozens of languages for which unannotated corpora of broadly comparable size or larger than that used to pretrain FinBERT could be readily assembled from online resources. Given that language-specific BERT models have been shown to outperform multilingual ones also for high-resource languages such as French (Mar- tin et al., 2019) – ranked 3rd by Wikipedia ar- ticle count – it is further likely that the benefits of a language-specific model observed here ex- tend at least to languages with more resources than Finnish. (We are not aware of efforts to establish the minimum amount of unannotated text required to train high-quality models of this type.) The methods we applied to collect and filter texts for training FinBERT have only few lan- guage dependencies, such as the use of UD pars- ing results for filtering. As UD resources are al- ready available for over 70 languages, the specific approach and tools introduced in this work could be readily applied to a large number of languages. To facilitate such efforts, we also make all of the supporting tools developed in this work available under open licenses. 22https://en.wikipedia.org/wiki/List_ of_Wikipedias 23https://commoncrawl.github.io/ cc-crawl-statistics/plots/languages # 6 Conclusions In this work, we compiled and carefully fil- tered a large unannotated corpus of Finnish, trained language-specific FinBERT models, and presented evaluations comparing these to multi- lingual BERT models at a broad range of natu- ral language processing tasks. The results indi- cate that the multilingual models fail to deliver on the promises of deep transfer learning for lower- resourced languages, falling behind the perfor- mance of previously proposed methods for most tasks. By contrast, the newly introduced FinBERT model was shown not only to outperform multilin- gual BERT for all downstream tasks, but also to establish new state-of-the art results for three dif- ferent Finnish corpora for part-of-speech tagging and dependency parsing as well as for named en- tity recognition. 
The FinBERT models and all of the tools and re- sources introduced in this paper are available un- der open licenses from https://turkunlp. org/finbert. # Acknowledgments We gratefully acknowledge the support of CSC IT Center for Science through its Grand Challenge program, the Academy of Finland, the Google Digital News Innovation Fund and collaboration of the Finnish News Agency STT, as well as the NVIDIA Corporation GPU Grant Program. # References Iz Beltagy, Kyle Lo, and Arman Cohan. 2019. Scib- ert: Pretrained language model for scientific text. In EMNLP. Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. Transactions of the Associa- tion for Computational Linguistics, 5:135–146. Wanxiang Che, Yijia Liu, Yuxuan Wang, Bo Zheng, and Ting Liu. 2018. Towards better UD parsing: Deep contextualized word embeddings, ensemble, and treebank concatenation. In Proceedings of the CoNLL 2018 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies. Alexis Conneau, German Kruszewski, Guillaume Lample, Loc Barrault, and Marco Baroni. 2018. What you can cram into a single $&!#* vector: Probing sentence embeddings for linguistic proper- ties. Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Marie-Catherine De Marneffe and Christopher D Man- ning. 2008. The stanford typed dependencies repre- sentation. In Coling 2008: proceedings of the work- shop on cross-framework and cross-domain parser evaluation, pages 1–8. Association for Computa- tional Linguistics. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-training of deep bidirectional transformers for language under- standing. arXiv preprint arXiv:1810.04805. Timothy Dozat, Peng Qi, and Christopher D Manning. 2017. Stanford’s graph-based neural dependency parser at the CoNLL 2017 Shared Task. In Proceed- ings of the CoNLL 2017 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies, pages 20–30. Filip Ginter, Jan Hajiˇc, Juhani Luotolahti, Milan Straka, and Daniel Zeman. 2017. CoNLL 2017 shared task - automatically annotated raw texts and word embeddings. LINDAT/CLARIN digital li- brary at the Institute of Formal and Applied Linguis- tics ( ´UFAL), Faculty of Mathematics and Physics, Charles University. Onur G¨ung¨or, Suzan ¨Usk¨udarlı, and Tunga G¨ung¨or. 2018. Improving named entity recognition by jointly learning to disambiguate morphological tags. In COLING 2018. Jenna Nyblom, Timo Viljanen, Veronika Laippala, Samuel Kohonen, Anna Missil¨a, Stina Ojala, Tapio Salakoski, and Filip Ginter. 2014. Building the essential resources for Finnish: the Turku Dependency Treebank. Language Resources and Evaluation, 48:493–531. Open access. Jeremy Howard and Sebastian Ruder. 2018. Universal language model fine-tuning for text classification. arXiv preprint arXiv:1801.06146. Armand Joulin, Edouard Grave, Piotr Bojanowski, and Tomas Mikolov. 2016. Bag of tricks for efficient text classification. arXiv preprint arXiv:1607.01759. Jenna Kanerva, Filip Ginter, Niko Miekka, Akseli Leino, and Tapio Salakoski. 2018. Turku Neu- ral Parser Pipeline: An end-to-end system for the In Proceedings of the CoNLL 2018 shared task. CoNLL 2018 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies. Associ- ation for Computational Linguistics. Kimmo Kettunen and Laura L¨ofberg. 2017. 
Tagging named entities in 19th century and modern Finnish newspaper material with a Finnish semantic tagger. In Proceedings of the 21st Nordic Conference on Computational Linguistics, NoDaLiDa, 131, pages 29–36. Eliyahu Kiperwasser and Yoav Goldberg. 2016. Sim- ple and accurate dependency parsing using bidirec- tional LSTM feature representations. Transactions of the Association for Computational Linguistics, 4:313–327. 75 lan- guages, 1 model: Parsing universal dependencies In Proceedings of the 2019 Confer- universally. ence on Empirical Methods in Natural Language Processing and the 9th International Joint Confer- ence on Natural Language Processing (EMNLP- IJCNLP), pages 2779–2795, Hong Kong, China. As- sociation for Computational Linguistics. Taku Kudo and John Richardson. 2018. Sentencepiece: A simple and language independent subword tok- enizer and detokenizer for neural text processing. arXiv preprint arXiv:1808.06226. Johannes Gontrum, Elena Fano, and Joakim Nivre. 2019. Deep contextualized word embeddings in transition- based and graph-based dependency parsing–a tale of In Proceedings of the 2019 two parsers revisited. Conference on Empirical Methods in Natural Lan- guage Processing (EMNLP’19). Veronika Laippala, Roosa Kyll¨onen, Jesse Egbert, Douglas Biber, and Sampo Pyysalo. 2019. To- ward multilingual identification of online registers. In Proceedings of the 22nd Nordic Conference on Computational Linguistics, pages 292–297. Guillaume Lample and Alexis Conneau. 2019. Cross- lingual language model pretraining. arXiv preprint arXiv:1901.07291. Sungdong Kim, Donghyeon Kim, Sunkyu Kim, Chan Ho So, and Jaewoo Kang. 2019. BioBERT: a pre-trained biomedical for biomedical text mining. Bioinformatics. KyungTae Lim, Cheoneum Park, Changki Lee, and Thierry Poibeau. 2018. SEx BiST: A multi-source trainable parser with deep contextualized lexical representations. In Proceedings of the CoNLL 2018 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Man- dar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining ap- proach. Juhani Luotolahti, Jenna Kanerva, Veronika Laippala, Sampo Pyysalo, and Filip Ginter. 2015. Towards In Proceedings of the universal web parsebanks. Third International Conference on Dependency Lin- guistics (Depling 2015), pages 211–220. Louis Martin, Benjamin Muller, Pedro Javier Ortiz Surez, Yoann Dupont, Laurent Romary, ric Ville- monte de la Clergerie, Djam Seddah, and Benot Sagot. 2019. Camembert: a tasty french language model. Tomas Mikolov, Kai Chen, Greg Corrado, and Jef- Efficient estimation of word arXiv preprint frey Dean. 2013. representations in vector space. arXiv:1301.3781. Joakim Nivre, Marie-Catherine De Marneffe, Filip Ginter, Yoav Goldberg, Jan Hajic, Christopher D Manning, Ryan McDonald, Slav Petrov, Sampo Pyysalo, Natalia Silveira, et al. 2016. Universal De- pendencies v1: A multilingual treebank collection. In Proceedings of the Tenth International Confer- ence on Language Resources and Evaluation (LREC 2016), pages 1659–1666. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 confer- ence on empirical methods in natural language pro- cessing (EMNLP), pages 1532–1543. Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018a. 
Deep contextualized word rep- In Proceedings of the 2018 Confer- resentations. ence of the North American Chapter of the Associ- ation for Computational Linguistics: Human Lan- guage Technologies, Volume 1 (Long Papers), pages 2227–2237, New Orleans, Louisiana. Association for Computational Linguistics. Matthew E Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018b. Deep contextualized word rep- resentations. arXiv preprint arXiv:1802.05365. Telmo Pires, Eva Schlinger, and Dan Garrette. 2019. arXiv How multilingual is multilingual bert? preprint arXiv:1906.01502. Jenna Kanerva, Anna Missil¨a, Veronika Laippala, and Filip Ginter. 2015. Univer- sal Dependencies for Finnish. In Proceedings of the 20th Nordic Conference of Computational Linguis- tics (Nodalida 2015), pages 163–172. Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2019. Exploring the limits of transfer learning with a unified text-to-text trans- former. arXiv e-prints. Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. Squad: 100,000+ questions for machine comprehension of text. arXiv preprint arXiv:1606.05250. Vinit Ravishankar, Memduh G¨okırmak, Lilja Øvrelid, and Erik Velldal. 2019. Multilingual probing of In Proceed- deep pre-trained contextual encoders. ings of the First NLPL Workshop on Deep Learn- ing for Natural Language Processing, pages 37– 47, Turku, Finland. Link¨oping University Electronic Press. Samuel R¨onnqvist, Jenna Kanerva, Tapio Salakoski, and Filip Ginter. 2019. Is multilingual BERT fluent in language generation? In Proceedings of the First NLPL Workshop on Deep Learning for Natural Lan- guage Processing. Link¨oping University Electronic Press. Teemu Ruokolainen, Pekka Kauppinen, Miikka Sil- fverberg, and Krister Lind´en. 2019. A Finnish news corpus for named entity recognition. Language Re- sources and Evaluation, pages 1–26. Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. 2019. Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Lin- guistics, pages 1715–1725. Association for Compu- tational Linguistics. Aaron Smith, Bernd Bohnet, Miryam de Lhoneux, Joakim Nivre, Yan Shao, and Sara Stymne. 2018a. 82 treebanks, 34 models: Universal dependency parsing with multi-treebank models. In Proceedings of the CoNLL 2018 Shared Task: Multilingual Pars- ing from Raw Text to Universal Dependencies, pages 113–123. Aaron Smith, Bernd Bohnet, Miryam de Lhoneux, Joakim Nivre, Yan Shao, and Sara Stymne. 2018b. 82 treebanks, 34 models: Universal Dependency parsing with multi-treebank models. In Proceedings of the CoNLL 2018 Shared Task: Multilingual Pars- ing from Raw Text to Universal Dependencies, pages 113–123, Brussels, Belgium. Association for Com- putational Linguistics. Milan Straka and Jana Strakov´a. 2017. Tokenizing, pos tagging, lemmatizing and parsing ud 2.0 with udpipe. In Proceedings of the CoNLL 2017 Shared Task: Multilingual Parsing from Raw Text to Univer- sal Dependencies, pages 88–99, Vancouver, Canada. Association for Computational Linguistics. V´ıt Suchomel, Jan Pomik´alek, et al. 2012. Efficient web crawling for large text corpora. In Proceedings of the seventh Web as Corpus Workshop (WAC7), pages 39–43. 
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in neural information pro- cessing systems, pages 5998–6008. Atro Voutilainen, Kristiina Muhonen, Tanja Katariina Purtonen, Krister Lind´en, et al. 2012. Specifying treebanks, outsourcing parsebanks: Finntreebank 3. In Proceedings of LREC 2012 8th ELRA Conference on Language Resources and Evaluation. European Language Resources Association (ELRA). Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R Bowman. 2018. Glue: A multi-task benchmark and analysis platform for natural language understanding. arXiv preprint arXiv:1804.07461. Yang You, Jing Li, Sashank Reddi, Jonathan Hseu, Sanjiv Kumar, Srinadh Bhojanapalli, Xiaodan Song, James Demmel, Kurt Keutzer, and Cho-Jui Hsieh. 2019. Large batch optimization for deep learning: Training BERT in 76 minutes. Rowan Zellers, Yonatan Bisk, Roy Schwartz, and Yejin Choi. 2018. Swag: A large-scale adversarial dataset for grounded commonsense inference. arXiv preprint arXiv:1808.05326. Daniel Zeman, Jan Hajiˇc, Martin Popel, Martin Pot- thast, Milan Straka, Filip Ginter, Joakim Nivre, and Slav Petrov. 2018a. CoNLL 2018 shared task: mul- tilingual parsing from raw text to universal depen- dencies. In Proceedings of the CoNLL 2018 Shared Task: Multilingual Parsing from Raw Text to Univer- sal Dependencies, pages 1–21. Jan Hajiˇc, Joakim Nivre, Filip Ginter, Juhani Luotolahti, Sampo Pyysalo, Slav Petrov, Martin Potthast, et al. 2017. CoNLL 2017 shared task. In Proceedings of the CoNLL 2017 Shared Task Multilingual Parsing from Raw Text to Universal Dependencies. Associa- tion for Computational Linguistics. Daniel Zeman, Martin Potthast, Elie Duthoo, Olivier Mesnard, Piotr Rybak, Alina Wr´oblewska, Wanxi- ang Che, Yijia Liu, Yuxuan Wang, Bo Zheng, et al. 2018b. CoNLL 2018 shared task system outputs. LINDAT/CLARIN digital library at the Institute of Formal and Applied Linguistics ( ´UFAL), Faculty of Mathematics and Physics, Charles University.
{ "id": "1808.06226" }
1912.06872
Towards Robust Toxic Content Classification
Toxic content detection aims to identify content that can offend or harm its recipients. Automated classifiers of toxic content need to be robust against adversaries who deliberately try to bypass filters. We propose a method of generating realistic model-agnostic attacks using a lexicon of toxic tokens, which attempts to mislead toxicity classifiers by diluting the toxicity signal either by obfuscating toxic tokens through character-level perturbations, or by injecting non-toxic distractor tokens. We show that these realistic attacks reduce the detection recall of state-of-the-art neural toxicity detectors, including those using ELMo and BERT, by more than 50% in some cases. We explore two approaches for defending against such attacks. First, we examine the effect of training on synthetically noised data. Second, we propose the Contextual Denoising Autoencoder (CDAE): a method for learning robust representations that uses character-level and contextual information to denoise perturbed tokens. We show that the two approaches are complementary, improving robustness to both character-level perturbations and distractors, recovering a considerable portion of the lost accuracy. Finally, we analyze the robustness characteristics of the most competitive methods and outline practical considerations for improving toxicity detectors.
http://arxiv.org/pdf/1912.06872
Keita Kurita, Anna Belova, Antonios Anastasopoulos
cs.CL
to appear at EDSMLS 2020
null
cs.CL
20191214
20191214
# Towards Robust Toxic Content Classification

# Keita Kurita
Carnegie Mellon University, Pittsburgh, PA 15213, USA
[email protected]

# Anna Belova
Carnegie Mellon University, Pittsburgh, PA 15213, USA
[email protected]

# Antonios Anastasopoulos
Carnegie Mellon University, Pittsburgh, PA 15213, USA
[email protected]

# Abstract

Toxic content detection aims to identify content that can offend or harm its recipients. Automated classifiers of toxic content need to be robust against adversaries who deliberately try to bypass filters. We propose a method of generating realistic model-agnostic attacks using a lexicon of toxic tokens, which attempts to mislead toxicity classifiers by diluting the toxicity signal either by obfuscating toxic tokens through character-level perturbations, or by injecting non-toxic distractor tokens. We show that these realistic attacks reduce the detection recall of state-of-the-art neural toxicity detectors, including those using ELMo and BERT, by more than 50% in some cases. We explore two approaches for defending against such attacks. First, we examine the effect of training on synthetically noised data. Second, we propose the Contextual Denoising Autoencoder (CDAE): a method for learning robust representations that uses character-level and contextual information to denoise perturbed tokens.1 We show that the two approaches are complementary, improving robustness to both character-level perturbations and distractors, recovering a considerable portion of the lost accuracy. Finally, we analyze the robustness characteristics of the most competitive methods and outline practical considerations for improving toxicity detectors.

Figure 1: Our proposed CDAE Architecture enhances the Transformer model with character level information, allowing for better handling of adversarial text.

# Introduction

Toxic content on the internet prevents the constructive exchange of ideas, excludes sensitive individuals from online dialogue, and inflicts mental and physical health impacts on the recipients. Notable examples of toxic content include hate speech and profanity. Given the sheer scale of internet communications, manual filtering of such content is difficult, requiring methods of automated filtering.

Previous work in toxic content classification has so far focused on constructing classifiers that can flag toxic content with a high degree of accuracy on datasets curated from sources such as Twitter and Wikipedia. However, these datasets do not acknowledge the possibility for malicious users to attempt to deliberately bypass these classifiers. In the presence of toxic content filters, these users could formulate adversarial attacks that aim to prevent the classifier from detecting their harmful content while retaining readability for the receiving user. For example, a change of a single character to an asterisk, which requires minimal effort, may allow a hurtful content to bypass the toxic content filter (e.g., "shut up" to "s*ut up").

If such simple attacks are effective at fooling automated toxic content classifiers, the utility of these classifiers would diminish greatly: determined users could still easily produce toxic content at a large scale. Therefore, useful toxic content classifiers need to be robust to adversarial attacks by making the transmission of toxic content sufficiently difficult and discouraging users from posting this type of content.
In this paper, we investigate the robustness of state-of-the-art toxic content classifiers to realistic adversarial attacks as well as various defenses against them. We find that these classifiers are vulnerable to extremely simple, model-agnostic attacks, with the toxic comment recall rate dropping by nearly 50% in some cases.

To address these vulnerabilities, we explore two types of defenses. The first is adversarial training, which we find to be effective against adversarial text, yet degrades performance on clean data. We also propose the Contextual Denoising Autoencoder (CDAE), a novel method for learning robust representations. The CDAE uses character-level and contextual information to "denoise" obfuscated tokens. We find that our approach outperforms several strong baselines with respect to character-level obfuscations, but is still vulnerable to distractors (i.e., injected sequences of non-toxic tokens). We experimentally find that the two best-performing models (our proposed CDAE and BERT) have different robustness characteristics, but a model ensemble allows us to leverage both their advantages.

1Our code is publicly available at github.com/keitakurita/robust toxicity detection.

Dataset           Source              Usage             Examples (train)   Examples (test)   Toxic (%)
Jigsaw 2018       Wikipedia comments  Model train/eval  159K               64K               9.5
Jigsaw 2019       Wikipedia comments  Background        1.78M              n/a               5.9
OffensEval 2019   Twitter             Model train/eval  13K                860               33

Table 1: Dataset Statistics

# Task and Datasets

Toxic content detection attempts to identify content that can offend or harm its recipients, including hate speech (Wang 2018), racism (Waseem and Hovy 2016), and offensive language (Wu, Kambhatla, and Sarkar 2018). Given the subjectivity of these categorizations, we do not limit the scope of our work to any specific type and address toxic content in general. We work with three datasets summarized in Table 1.

The Jigsaw 2018 dataset focuses on the general toxic content detection task and is comprised of approximately 215,000 annotated comments from Wikipedia talk pages labeled by crowd workers. It provides both a general toxicity label and more fine-grained annotations such as severe toxicity, obscenity, threat, insult, and identity hate. The Jigsaw Unintended Bias in Toxicity Classification dataset (Jigsaw 2019)2 extends the Jigsaw 2018 dataset with 1.8 million comments, each annotated by up to 10 annotators for multiple labels. Jigsaw 2019 contains a field for toxicity which provides the fraction of annotators who labeled the comment as "toxic" (7.99% ≥ 0.5). We use the Jigsaw 2019 corpus as our background corpus for generating adversarial attacks. The OffensEval 20193 dataset consists of 13,240 tweets annotated by crowdworkers. The data contains labels for whether the content is offensive and whether it is targeted, with 33% of the tweets being labeled as offensive.

# Generating Realistic Adversarial Attacks

Previous work generating adversarial examples in text often assumes access to either the weights (Ebrahimi et al. 2018; Liang et al. 2017) or the raw prediction scores of the classifier (Li et al. 2018; Liang et al. 2018; Alzantot et al. 2018; Gao et al. 2018; Samanta and Mehta 2017). However, it is unlikely that users would have access to this information. Instead, the
Instead, the 2Publicly available at https://www.kaggle.com/c/jigsaw- unintended-bias-in-toxicity-classification/overview. 3https://competitions.codalab.org/competitions/20011 users most likely would only have weak signals from what gets flagged as well as access to public datasets with toxicity labels. To mimic this setup, we use a large background corpus (Jigsaw 2019) with labels indicating toxicity.4 Our adversarial attack consists of two steps: (1) constructing a lexicon of toxic tokens and (2) using it to applying noise to the test set. To identify “toxic” tokens, we train a logistic regression classifier on bag-of-words utterance representations from our background corpus. We use the coefficients of the logistic regression classifier as a signed measure of the association between the token and toxicity and select the 50,000 tokens with the strongest positive association with toxicity to be our toxic lexicon. We provide a list of top 100 toxic lexicon tokens in the Supplemental Material. We treated any token that did not appear in our lexicon as non-toxic. Using this toxic lexicon, we generate noised versions of the corpora using two settings: token obfuscation and distractor injection. Figure 2 provides an illustration of all our proposed attacks. # Token Obfuscation We apply character-level perturbations to the tokens of the utterance that belong to our toxic lexicon. For each toxic to- ken we randomly select one of the following three perturbing operations: character scrambling, homoglyph substitution, and dictionary-based near-neighbor replacement. Details of the perturbing operations are given below. Character scrambling consists in randomly permuting the token’s characters without deletions and substitutions, as ap- plied in other work (Heigold et al. 2018; Belinkov and Bisk 2018; Michel et al. 2019). Prior research shows that humans can read sufficiently long scrambled words, albeit not without an effort, especially if starting and ending letters remain in place (Rayner, White, and Liversedge 2006). Thus, for this operation, we ignore tokens with fewer than three charac- ters and keep the first and the last character unchanged. The remaining characters are split into groups of three consec- utive characters and each group is permuted randomly and independently. Homoglyph substitution consists in replacing one or more Latin letters with similar-looking international characters from a handcrafted confusion map (see Supplemental Mate- rial). If homoglyph substitution operation is selected, each character of the toxic token is replaced with 20% probability. This type of obfuscation is common in social media (Rojas- Galeano 2017) and cybercrime (Ginsberg and Yu 2018; Elsayed and Shosha 2018). Dictionary-based near-neighbor replacement uses a base vocabulary to find the closest (but distinct) token in terms of Levenshtein distance. If relative Levenshtein distance (i.e., Levenshtein distance divided by maximum word length) is greater than 0.75, we use this nearest neighbor as a replace- ment. We leave the original toxic token unchanged otherwise. This form of noise produces common misspellings. As such, it introduces deletions, insertions, and substitutions that are not overly artificial. This procedure is distinct from that used 4We follow Jigsaw 2019 guidelines for conversion of their con- tinuous toxicity scores into binary labels. ‘Dictionary-based substitution within edit distance two No, heis an arrogantly , self serving , Get it right. 
right it right immature idoit f Homoglyph ~ Permutation | Distractors Homoglyph ~ Permutation | Distractors Figure 2: An Example of a Noised Sentence by Belinkov and Bisk (2018), who generate naturally occur- ring errors using corpora of error corrections. Distractor Injection In this setting, we inject distractor tokens by repeating ran- domly selected sequences of non-toxic tokens. We split the utterance into two parts at a random position and find the maximum-length sequence of non-toxic words that starts in each of the parts. Search localization introduces variety in the identified distractor sequences, which helps to avoid the appearance of easily detectable vandalism. Once a suitable sequence is found, it is appended to the end of the utterance. Both, token obfuscation and distractor injection are model- agnostic, simple, and subject to easy automation. Hence, toxic content classifiers that are vulnerable to these attacks can be easily and systematically exploited. We emphasize that the noise we present here is differ- ent from “naturally” occurring noise (e.g., misspellings and slang) that does not deliberately attempt to hide toxic to- kens. The datasets we use have not been constructed in the presence of a toxicity filter, implying that the users had no incentive to obfuscate toxicity of their comments. Hence, the synthetic noise we present here is not the noise that we observe frequently in these datasets. Effect of Adversarial Noise We have implemented an experiment to assess whether our perturbations retained the toxicity of the toxic comments to human readers. In that, we have randomly sampled 200 com- ments from the Jigsaw 2018 dataset, half of which labeled as toxic by original Jigsaw 2018 crowd-workers. For each comment, we have a native English speaker rate either the perturbed or unperturbed version of the comment, taking care not to show both versions to the same individual. Overall, our experiment involved 10 participants, with each individual providing a toxicity rating for 80 comments. As such, we have obtained a total of 800 ratings, with each version of the comment receiving two independent ratings. We have tested whether the toxicity rating of the unperturbed com- ment tended to be higher than that of the perturbed comment using the Wilcoxon signed rank test (Wilcoxon 1992) applied to pairs of unperturbed/perturbed toxicity scores averaged at the comment-level. The original comment was perceived as more toxic (based on the average rating of two distinct users) 14% of the time and we found no statistically significant difference at the 1% significance level. Thus, we conclude that it is unlikely that our perturbations remove the toxicity signals for human readers. We evaluate the effect of our adversarial noise on toxic content classifiers on the Jigsaw 2018 and OffensEval 2019 datasets 5 The general toxic content classifier architecture is straightforward. The tokens x1, . . . , xT of an utterance X are first embedded into a continuous vector space and then passed through an LSTM encoder which produces a sequence of intermediate representations H = h1, . . . , hT . These representations are then used to produce a single vector representation hc using mean- and max-pooling as well as attention: hc = [maxpool(H); meanpool(H); attention(H)] which is, in turn, put through an MLP and used to make a prediction ˆy of the toxicity of the utterance through a sigmoid function: ˆy = σ(MLP(hc)). 
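A minimal PyTorch sketch of this pooled classifier head is given below; the layer sizes and the simple additive attention are illustrative assumptions rather than the exact configuration used in the experiments, and the token embeddings (fastText, ELMo or BERT features) are assumed to be computed externally.

```python
import torch
import torch.nn as nn

class PooledBiLSTMClassifier(nn.Module):
    """Encodes pre-embedded tokens with a BiLSTM and combines max-pooling,
    mean-pooling and attention over the hidden states before an MLP + sigmoid."""
    def __init__(self, embed_dim=300, hidden_dim=256, mlp_dim=128):
        super().__init__()
        self.encoder = nn.LSTM(embed_dim, hidden_dim, batch_first=True, bidirectional=True)
        self.att_scorer = nn.Linear(2 * hidden_dim, 1)              # per-token attention scores
        self.mlp = nn.Sequential(
            nn.Linear(6 * hidden_dim, mlp_dim), nn.ReLU(), nn.Linear(mlp_dim, 1)
        )

    def forward(self, embedded):                                    # (batch, seq, embed_dim)
        h, _ = self.encoder(embedded)                               # (batch, seq, 2*hidden_dim)
        max_pool, _ = h.max(dim=1)
        mean_pool = h.mean(dim=1)
        att = torch.softmax(self.att_scorer(h), dim=1)              # (batch, seq, 1)
        att_pool = (att * h).sum(dim=1)
        hc = torch.cat([max_pool, mean_pool, att_pool], dim=-1)     # (batch, 6*hidden_dim)
        return torch.sigmoid(self.mlp(hc)).squeeze(-1)              # predicted toxicity probability
```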
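Before looking at the empirical effect of these attacks, the lexicon-based obfuscation and distractor injection described in the previous section can be sketched roughly as follows. The tiny lexicon and homoglyph map are illustrative stand-ins for the 50,000-token lexicon derived from the background corpus and the hand-crafted confusion map, the dictionary-based near-neighbor replacement is omitted, and the distractor step is simplified to appending the longest non-toxic run rather than sampling from two random splits.

```python
import random

TOXIC_LEXICON = {"idiot", "stupid"}                     # stand-in for the learned lexicon
HOMOGLYPHS = {"a": "а", "e": "е", "o": "о", "i": "і"}   # Latin -> look-alike Cyrillic characters

def scramble(token: str) -> str:
    """Permute interior characters in groups of three, keeping the first and last fixed."""
    if len(token) < 4:
        return token
    inner = list(token[1:-1])
    for i in range(0, len(inner), 3):
        chunk = inner[i:i + 3]
        random.shuffle(chunk)
        inner[i:i + 3] = chunk
    return token[0] + "".join(inner) + token[-1]

def homoglyph(token: str, p: float = 0.2) -> str:
    """Replace each character with a look-alike international character with probability p."""
    return "".join(HOMOGLYPHS.get(c, c) if random.random() < p else c for c in token)

def obfuscate(tokens):
    """Apply a randomly chosen perturbation to every token found in the toxic lexicon."""
    return [random.choice([scramble, homoglyph])(t) if t.lower() in TOXIC_LEXICON else t
            for t in tokens]

def add_distractor(tokens):
    """Append a sequence of non-toxic tokens (here: the longest non-toxic run)."""
    runs, current = [], []
    for t in tokens + [None]:                            # None acts as an end-of-sequence sentinel
        if t is not None and t.lower() not in TOXIC_LEXICON:
            current.append(t)
        else:
            runs.append(current)
            current = []
    return tokens + max(runs, key=len, default=[])

print(" ".join(add_distractor(obfuscate("you are an idiot".split()))))
```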
To demonstrate the effect of our adversarial attacks, we experiment with fastText (Bojanowski et al. 2017) and ELMo (Peters et al. 2018) embeddings, both of which are capable of handling out-of-vocabulary words. For ELMo, we follow the recommendations of Peters et al. (2018) and apply a 0.5 dropout to the representations and a weight decay of 1e-4 to the scalar weights of all layers. We only fine-tune the scalar weights and keep the language model weights fixed. We also experiment with BERT, applying a single affine layer to the embedding of the [CLS] token for classification and fine- tune all weights. In addition, we report the performance of a simple logistic regression baseline. All hyperparameters are tuned on the Jigsaw 2018 dataset and are listed in the Supplemental Material. Preprocessing steps include tokenization, lower-casing, removal of hyper- links and removal of characters that are repeated more than three times in a row (e.g., “stupiiiiddddd” is converted to “stupid”, but “iidiot” remains unchanged). All punctuation is retained. For consistency across datasets, we evaluate models on the “toxic”/“offensive” labels that include all types of toxicity (obscenity, hate speech, targeted/untargeted offense, and others). To convert probabilistic outputs of the models to binary classes, we threshold the predictions to maximize the F1 score on the training set. We focus on the ability of various models to classify toxic content correctly since this is where adversarial attacks are most likely to take place (users that post non-toxic content are not motivated to have the system misclassify their content as toxic). The effects of our combined adversarial attacks are sum- marized in Table 2. The logistic regression classifier is ef- fectively incapable of handling out-of-vocabulary words and performs the worst when noise is applied, with more than 50% recall lost. Despite this limitation, however, its performance does not drop to zero. This means that our obfuscation does not completely remove all words that the logistic regression classifier uses to detect toxicity. Indeed, we found that some tokens that are quite obviously toxic (e.g., “motherf*cker”) 5Note that the Jigsaw 2018 and Jigsaw 2019 datasets are distinct and we remove all examples in the Jigsaw 2019 dataset from the Jigsaw 2018 dataset to prevent leakage. We use the Jigsaw 2019 dataset as a background corpus and not to train the model for the Jigsaw 2018 dataset. Noise Jigsaw 2018 Recall % Change OffensEval 2019 Recall % Change Logistic Regression None C+D 0.822 0.344 -58.2 0.621 0.246 -60.4 fastText-based model None C+D 0.902 0.485 -46.2 0.633 0.350 -44.7 ELMo-based model None C+D 0.887 0.569 -35.9 0.596 0.350 -41.3 BERT-based model None C+D 0.914 0.597 -34.7 0.721 0.342 -52.6 Table 2: The combination of character-level perturbations (C) and distractors (D) leads to substantial loss of recall across all models on both test sets. were not included in our toxic lexicon. Therefore, it is likely that improving the lexicon by finding a larger dataset or manually curating more toxic words could further enhance the effect of adversarial noise. Although neural models fare slightly better, recall on the adversarial test sets still drops significantly, with losses of over 30% in all cases. We present randomly sampled examples of toxic sentences that were misclassified by the fastText model due to the adversarial noise in Table 3. 
Although not all of them retain grammatical correctness, it is our view that their toxicity is preserved and they should be properly handled by any toxic content classifier # Defenses Against Adversarial Attacks Next we consider potential defenses against the aforemen- tioned attacks: adversarial training and contextual denoising autoencoder. We note that our objective with these # Adversarial Training One possible defense is adversarial training (Szegedy et al. 2014; Goodfellow, Shlens, and Szegedy 2015), applying sim- ilar noise to the training dataset. Adversarial training has been applied successfully in tasks including machine trans- lation (Belinkov and Bisk 2018) and morphological tagging (Heigold et al. 2018). One limitation to this approach is that one would need to know the details of the incoming attack, including the lexicon the adversary might use to generate noise. This is a major limitation, since adversaries can easily change their lexicon. Another limitation is that there is no guarantee that the adversarial noise will produce a reliable pattern that the model can generalize to. For example, for fastText embeddings, the same operation of swapping two characters would produce completely different changes in the subwords for different source words, resulting in different changes in embedding space. The model could also overfit to the adversarial noise, resulting in worsened performance on clean data. Contextual Denoising Autoencoder With token obfuscation, the underlying problem is that small character perturbations can cause large and unpredictable changes in embedding space. To resolve this problem, the underlying text representations themselves need to be robust against character-level perturbations. To learn such robust rep- resentations, we train a denoising autoencoder that receives noised tokens as input and predicts the denoised version of the token. When denoising tokens, the surrounding context can provide strong hints as to what the original token was. Some words like “duck” can be used both as obfuscations of profanity and as standard language, meaning context is crucial in effective denoising. Thus, we use a model that takes the context a sequence of potentially noised tokens as input and predicts the denoised tokens using contextual information. We call this model the Contextual Denoising Autoencoder (CDAE). Due to its impressive performance across a wide range of tasks, we use a Transformer (Vaswani et al. 2017) as the underlying architecture. For word representations, we employ the character convolutional neural network (CNN) encoder used in the ELMo model. We feed the outputs of the CNN en- coder to the Transformer with learned positional embeddings, 6 layers and 4 attention heads in each layer where the outputs of each layer are of size 128. We show the overall scheme of the CDAE in Figure 1. Not using wordpieces leads to massive vocabulary size, especially with corpora obtained from the web. We therefore use the CNN-softmax method combined with importance sampling loss (J´ozefowicz et al. 2016) to ac- celerate training. We apply noise to 70% of tokens according to the scheme in Section and mask all tokens uniformly with a probability of 10%. We train our denoising autoencoder on a random subset of the UMBC webbase corpus (Han et al. 2013) (a large-scale corpus constructed from web crawls) and the Jigsaw 2019 dataset, taking care to remove any examples from the Jigsaw 2018 dataset. 
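A compact sketch of such a contextual denoising autoencoder, assuming PyTorch, is shown below; the character-CNN width, the plain softmax output layer (standing in for the CNN-softmax with importance sampling used for training efficiency), and the vocabulary sizes are simplifying assumptions.

```python
import torch
import torch.nn as nn

class ContextualDenoisingAutoencoder(nn.Module):
    """Encodes each (possibly noised) token from its characters with a CNN,
    contextualizes the token sequence with a Transformer encoder, and predicts
    the clean token at every position."""
    def __init__(self, n_chars=256, char_dim=16, d_model=128, n_heads=4,
                 n_layers=6, word_vocab=50000, max_seq_len=512):
        super().__init__()
        self.char_embed = nn.Embedding(n_chars, char_dim, padding_idx=0)
        self.char_cnn = nn.Conv1d(char_dim, d_model, kernel_size=3, padding=1)
        self.pos_embed = nn.Embedding(max_seq_len, d_model)          # learned positional embeddings
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=n_heads,
                                           dim_feedforward=4 * d_model)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.out = nn.Linear(d_model, word_vocab)                    # plain softmax head (simplification)

    def forward(self, char_ids):                                     # (batch, seq, max_word_len)
        b, s, w = char_ids.shape
        x = self.char_embed(char_ids).view(b * s, w, -1).transpose(1, 2)
        x = torch.relu(self.char_cnn(x)).max(dim=-1).values          # (batch*seq, d_model) word vectors
        x = x.view(b, s, -1) + self.pos_embed(torch.arange(s, device=char_ids.device))
        h = self.encoder(x.transpose(0, 1)).transpose(0, 1)          # Transformer expects (seq, batch, dim)
        return self.out(h)                                           # (batch, seq, word_vocab) logits
```

Training would minimize a cross-entropy loss between these logits and the identities of the original, unnoised tokens, after which the contextualized hidden states can be reused as robust word representations in the downstream toxicity classifier.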
We note that this approach does require knowledge of what character-level perturbations will be applied. However, the space of possible character-level perturbations that retain readability of the original token is limited. Crucially, unlike adversarial training, the CDAE does not require knowledge of the adversary’s lexicon, making this approach more suitable for a wider range of attacks. Effect of Defenses In order to evaluate our proposed defenses, we measure AUC, F1 score, and recall over the toxic class for all models. The model architecture for CDAE is similar to the one we used for fastText and ELMo. For CDAE, we use the mean of the final 4 layers of the model and concatenate them with fastText embeddings, because we found that this leads to superior performance.6 The detailed results of applying adversarial 6We hypothesize that this is because the fastText embeddings were trained on much more data so captured some semantic aspects that the CDAE did not. Perturbed Table 3: Examples of Toxic Sentences that were Misclassified by the fastText Model due to Adversarial Noise. Model Train Noise Test Noise AUC Jigsaw 2018 F1 Recall OffensEval 2019 F1 AUC Recall Logistic Regression None None C+D None C+D C+D 0.959 0.877 0.906 0.652 0.459 0.565 0.822 0.344 0.632 0.813 0.639 0.682 0.619 0.340 0.432 0.621 0.246 0.408 FastText None None C+D None C+D C+D 0.973 0.905 0.932 0.674 0.546 0.591 0.902 0.485 0.643 0.850 0.755 0.763 0.670 0.450 0.532 0.633 0.350 0.540 ELMo None None C+D None C+D C+D 0.970 0.880 0.917 0.654 0.538 0.568 0.887 0.569 0.696 0.843 0.725 0.759 0.644 0.429 0.549 0.596 0.350 0.579 BERT None None C+D None C+D C+D 0.974 0.901 0.940 0.685 0.596 0.614 0.914 0.604 0.765 0.889 0.734 0.769 0.730 0.462 0.554 0.721 0.342 0.446 CDAE (ours) None None C+D None C+D C+D 0.973 0.918 0.932 0.677 0.597 0.604 0.894 0.597 0.733 0.861 0.747 0.769 0.665 0.479 0.547 0.742 0.388 0.596 Table 4: Detailed results of models on two datasets. The best results for each setting are highlighted. C+D refers to character- level perturbations and distractors combined. BERT generally performs strongest in clean settings. The CDAE is better at handling noise without adversarial training, while BERT and CDAE perform comparably when adversarial training is introduced. training as well as CDAE’s performance on the Jigsaw 2018 and OffensEval 2019 datasets are shown in Table 4. 7 Overall, we find that BERT performs well in the absence of noise on both datasets (None–None setting). As expected, the addition of noise hurts its performance. CDAE, on the other hand, performs well in the noised test set without adver- sarial training (None–C+D setting), indicating that it indeed manages to at least partly denoise the adversarial utterances. When additional adversarial training is introduced (C+D– C+D setting), BERT and CDAE perform comparably, out- performing all other methods. For OffenEval, we found that BERT was more biased towards the non-toxic class compared to the CDAE, causing it to have much higher precision but slightly lower recall. Adversarial training improves performance across the board, although performance does not recover to the clean- data standards. Interestingly, classifiers that were more vul- 7The OffensEval challenge evaluates models with a macro- averaged F1 score over both classes, so our numbers are signifi- cantly lower than the numbers reported there. We achieve a 0.84 macro-averaged F1 score, beating the state-of-the-art. 
To better understand the robustness characteristics of our two best models (BERT and the CDAE), we perform ablations under various noise settings (only character perturbations, only distractors, and adversarial training with a clean test set). Results are shown in Table 5 and we summarize our findings below.

| Model | Train / Test noise | Jigsaw 2018 AUC | Jigsaw 2018 F1 | Jigsaw 2018 Recall | OffensEval 2019 AUC | OffensEval 2019 F1 | OffensEval 2019 Recall |
|---|---|---|---|---|---|---|---|
| BERT | None / C | 0.911 (-6.44%) | 0.600 (-12.3%) | 0.608 (-33.4%) | 0.757 (-14.8%) | 0.446 (-38.8%) | 0.313 (-56.6%) |
| BERT | C / C | 0.922 (-5.37%) | 0.588 (-14.0%) | 0.697 (-23.8%) | 0.783 (-11.9%) | 0.618 (-15.3%) | 0.529 (-26.5%) |
| BERT | None / D | 0.970 (-0.47%) | 0.682 (-0.41%) | 0.882 (-3.56%) | 0.881 (-0.97%) | 0.705 (-3.49%) | 0.650 (-9.83%) |
| BERT | D / D | 0.971 (-0.33%) | 0.683 (-0.20%) | 0.904 (-1.10%) | 0.885 (-0.50%) | 0.714 (-2.21%) | 0.650 (-9.83%) |
| BERT | C+D / None | 0.968 (-0.59%) | 0.653 (-4.60%) | 0.904 (-1.08%) | 0.875 (-1.64%) | 0.685 (-6.16%) | 0.739 (-11.4%) |
| CDAE | None / C | 0.925 (-4.90%) | 0.610 (-9.87%) | 0.642 (-28.1%) | 0.758 (-11.9%) | 0.465 (-30.1%) | 0.354 (-52.2%) |
| CDAE | C / C | 0.932 (-4.18%) | 0.596 (-11.9%) | 0.706 (-21.0%) | 0.786 (-8.72%) | 0.629 (-5.47%) | 0.533 (-28.0%) |
| CDAE | None / D | 0.965 (-0.75%) | 0.669 (-1.20%) | 0.810 (-9.40%) | 0.864 (+0.36%) | 0.672 (+0.98%) | 0.708 (-4.50%) |
| CDAE | D / D | 0.970 (-0.25%) | 0.672 (-0.74%) | 0.882 (-1.39%) | 0.862 (+0.15%) | 0.691 (+3.78%) | 0.683 (-7.87%) |
| CDAE | C+D / None | 0.968 (-0.54%) | 0.651 (-3.84%) | 0.912 (+2.01%) | 0.858 (-0.35%) | 0.655 (-1.50%) | 0.721 (-2.81%) |

Table 5: Detailed results for the CDAE and BERT (percentages in parentheses are changes relative to the unnoised baseline). C refers to character-level perturbations and D refers to distractors. Numbers in bold are best results for each setting/metric.

Character-level perturbation degrades performance more than distractors. For both datasets and models, character-level perturbations lead to significantly larger drops in performance across all metrics. This is reasonable, given that obfuscation directly removes the toxicity signal. The distractors, instead, simply dilute it.

Adversarial training helps most against character-level perturbations. The CDAE is stronger against character-level perturbations, whereas BERT performs well in the presence of distractors.

Adversarial training reduces performance on clean data. Although adversarial training consistently improves robustness to noise, it also slightly reduces performance on clean data. This undesirable byproduct can probably be attributed to models overfitting to the training noise.

The CDAE is more resilient against character perturbations compared to BERT. We find that the performance of the CDAE drops less with character-level perturbations both before and after adversarial training. For example, the recall drops by 33% and 24% for BERT before and after adversarial training, whereas for the CDAE the recall drops are 28% and 21% respectively. This reveals the advantage of the CDAE: it is explicitly trained to address character-level perturbations. BERT’s vulnerability to such noise cannot be easily remedied due to its reliance on a wordpiece tokenizer.

BERT performs better in the presence of distractors compared to the CDAE.
In contrast to the CDAE, BERT is weak against character-level perturbations but strong against distractors. For both datasets, BERT performs more strongly in terms of final performance, aside from recall on OffensEval, where BERT was more inclined to predict the non-toxic class compared to the CDAE for all settings. For the Jigsaw dataset, BERT's performance drops less in relative terms, although the opposite holds for OffensEval. For OffensEval, the distractors tended to be shorter compared to the Jigsaw dataset since the original text was also generally shorter. This difference in response to distractors may suggest that BERT and the CDAE have different robustness characteristics regarding distractors. A possible explanation might lie in the architecture: BERT is entirely self-attention-based while the CDAE features are fed into a recurrent LSTM. The effect of the different architectures on the robustness characteristics towards distractors remains an open question.

Ensembling. Based on our findings, we also examine the performance of an ensemble of BERT and the CDAE, in the hope that it will combine their advantages. The final prediction is made by taking the arithmetic mean of the two models’ predicted probabilities. Results are shown in Table 6. Indeed, the ensemble outperforms both the single CDAE and BERT models when tested on combined noise, exactly because it combines their different robustness characteristics. This suggests that although it may be difficult to train a single model to be robust to all possible attacks, specialized models can be trained to handle different attacks, and their ensemble may be a simple, cheap approach that will boost robustness of the entire system.

| Model | Train / Test noise | Jigsaw 2018 AUC | Jigsaw 2018 F1 | Jigsaw 2018 Recall | OffensEval 2019 AUC | OffensEval 2019 F1 | OffensEval 2019 Recall |
|---|---|---|---|---|---|---|---|
| BERT | None / C+D | 0.901 | 0.596 | 0.604 | 0.734 | 0.462 | 0.342 |
| BERT | C+D / C+D | 0.940 | 0.614 | 0.765 | 0.769 | 0.554 | 0.446 |
| CDAE (ours) | None / C+D | 0.918 | 0.597 | 0.597 | 0.747 | 0.479 | 0.388 |
| CDAE (ours) | C+D / C+D | 0.932 | 0.604 | 0.733 | 0.769 | 0.547 | 0.596 |
| Ensemble | None / C+D | 0.921 | 0.590 | 0.725 | 0.774 | 0.505 | 0.404 |
| Ensemble | C+D / C+D | 0.942 | 0.628 | 0.799 | 0.827 | 0.612 | 0.604 |

Table 6: Results of the Ensemble of CDAE and BERT. The ensemble performs the strongest against combined noise among all of our methods.

Related Work

Toxic Content Classification. Since toxic content classification is a text classification task, traditional techniques ranging from bag-of-words models (Georgakopoulos et al. 2018) to CNNs (Georgakopoulos et al. 2018) and RNNs (van Aken et al. 2018; Gunasekara and Nejadgholi 2018) have all been applied. Both van Aken et al. (2018) and Gunasekara and Nejadgholi (2018) have shown that among the various approaches, bidirectional RNNs with attention using pre-trained fastText embeddings (Joulin et al. 2016) have strong performance, with Gunasekara and Nejadgholi (2018) achieving the best single-model performance on the Jigsaw 2018 dataset using a bidirectional LSTM with attention. Mishra, Yannakoudakis, and Shutova (2018) developed an approach that uses a character-level model to mimic GloVe (Pennington, Socher, and Manning 2014) embeddings, thus inferring the embeddings for unseen words. Crucially, this method can only train the model on in-vocabulary words, meaning it is incapable of handling targeted character obfuscations that do not appear naturally in the GloVe vocabulary.

Noise and Adversarial Attacks in Text. Belinkov and Bisk (2018) demonstrated the brittleness of neural machine translation (NMT) systems to both natural and synthetic noise.
They showed that training on synthetically noised data im- proves robustness towards similar synthetic noise but not to naturally occurring noise (e.g. omissions). In contrast to their work, we focus on targeted adversarial attacks that deliber- ately attempt to fool a classifier. Multiple works have pro- posed white-box attacks (attacks assuming access to model gradients) in NLP for tasks such as NMT (Ebrahimi, Lowd, and Dou 2018) and text classification (Ebrahimi et al. 2018). Samanta and Mehta (2017) constructed a lexicon of words to construct adversarial examples, which is similar but cru- cially different from our approach in that they assume access to model gradients. Other work has explored black-box at- tacks (Liang et al. 2018). In particular, Hosseini et al. (2017) generated adversarial attacks against the Google Perspective API, a public API for detecting toxic content, and showed the brittleness of this system. However, these methods rely on multiple queries to the underlying prediction scores of the model which are not always exposed to a user and can be seen as a form of internal knowledge. Reddy and Knight (2016) showed that the gender of posters on social media can be obfuscated by using a background corpus to identify words indicative of each gender and replacing those words with semantically similar words. Our method differs in that we re- place words with similar-looking instead of similar-meaning character sequences, since our aim is to fool the system while maintaining readability. Defenses. One straightforward, yet non-scalable approach to solving the problem of adversarial noise is to manually curate a lexicon of the most frequent obfuscations (Wang et al. 2014). On the other hand, Rojas-Galeano (2017) pro- posed a method to automatically match obfuscated words to their original forms using a custom edit distance function. Although their approach is more scalable, it still requires the manual construction of inflexible rules for measuring the distance caused by different transformation and thus can easily be circumvented by adversaries. Serr`a et al. (2017) proposed to use the errors from a class-conditioned character- level language model to classify out-of-vocabulary words as toxic or non-toxic. Sakaguchi et al. (2017) proposed the semi-character level recurrent neural network (scRNN) as a method of generating robust word representations. Although their method showed strong performance in spell checking, it is unable to handle anagrams (e.g. “there” and “three”) and homoglyph substitutions, and ignores contextual infor- mation. One limitation of these approaches is that they do not consider the possibility of toxic words being mapped to in-vocabulary words. For instance, “suck my duck” is likely an obfuscation, but the word “duck” itself is common. These problems require the usage of context: for instance, “duck” in “the duck is swimming” is not toxic, but this can only be inferred based on the context. Moreover, none of these ap- proaches consider distractor injection as a potential method of attack. # Conclusion and Future Work In this paper, we show that we can easily degrade the perfor- mance of state-of-the-art toxic content classifiers in a model- agnostic manner by using a background corpus of toxicity and introducing character-level perturbations as well as distrac- tors. We also explore defenses against these attacks, and we find that adversarial training improves robustness in general, but decreases performance on clean data. 
We also propose the Contextual Denoising Auto-Encoder (CDAE), a method of learning robust representations, and show that these represen- tations are more robust against character-level perturbations, whereas a BERT-based model performs strongly in the pres- ence of distractors. An ensemble of BERT and the CDAE is the most robust approach towards combined noise. References [2018] Alzantot, M.; Sharma, Y.; Elgohary, A.; Ho, B.; Sri- vastava, M. B.; and Chang, K. 2018. Generating natural language adversarial examples. In In Proc. EMNLP 2018. [2018] Belinkov, Y., and Bisk, Y. 2018. Synthetic and natural noise both break neural machine translation. In Proc. ICLR. [2017] Bojanowski, P.; Grave, E.; Joulin, A.; and Mikolov, T. 2017. Enriching word vectors with subword information. Transactions of the Association for Computational Linguis- tics. [2018] Ebrahimi, J.; Rao, A.; Lowd, D.; and Dou, D. 2018. Hotflip: White-box adversarial examples for text classifica- tion. In Proc. ACL. [2018] Ebrahimi, J.; Lowd, D.; and Dou, D. 2018. On adver- sarial examples for character-level neural machine translation. In Proc. COLING. [2018] Elsayed, Y., and Shosha, A. 2018. Large scale de- tection of idn domain name masquerading. In Proc. APWG Symposium on Electronic Crime Research (eCrime). [2018] Gao, J.; Lanchantin, J.; Soffa, M. L.; and Qi, Y. 2018. Black-box generation of adversarial text sequences to evade deep learning classifiers. In Proc. DLSW. [2018] Georgakopoulos, S. V.; Tasoulis, S. K.; Vrahatis, A. G.; and Plagianakos, V. P. 2018. Convolutional neural networks for toxic comment classification. In Proc. Hellenic Confer- ence on AI. [2018] Ginsberg, A., and Yu, C. 2018. Rapid homoglyph prediction and detection. In Proc ICDIS. [2015] Goodfellow, I.; Shlens, J.; and Szegedy, C. 2015. Ex- plaining and harnessing adversarial examples. In Proc. ICLR. [2018] Gunasekara, I., and Nejadgholi, I. 2018. A review of standard text classification practices for multi-label toxicity identification of online content. In Proc. ALW2. [2013] Han, L.; Kashyap, A. L.; Finin, T.; Mayfield, J.; and Weese, J. 2013. UMBC EBIQUITY-CORE: Semantic Tex- tual Similarity Systems. In Proc. CLCS2. [2018] Heigold, G.; Varanasi, S.; Neumann, G.; and van Gen- abith, J. 2018. How robust are character-based word embed- dings in tagging and MT against wrod scramlbing or randdm nouse? In Proc. AMTA. [2017] Hosseini, H.; Kannan, S.; Zhang, B.; and Poovendran, R. 2017. Deceiving google’s perspective API built for detect- ing toxic comments. arXiv:1702.08138. [2016] Joulin, A.; Grave, E.; Bojanowski, P.; and Mikolov, T. 2016. Bag of tricks for efficient text classification. arXiv:1607.01759. [2016] J´ozefowicz, R.; Vinyals, O.; Schuster, M.; Shazeer, N.; and Wu, Y. 2016. Exploring the limits of language modeling. arXiv:1602.02410. [2018] Li, J.; Ji, S.; Du, T.; Li, B.; and Wang, T. 2018. Textbugger: Generating adversarial text against real-world applications. arXiv:1812.05271. [2017] Liang, B.; Li, H.; Su, M.; Bian, P.; Li, X.; and Shi, W. 2017. Deep text classification can be fooled. In Proc. IJCAI. [2018] Liang, B.; Li, H.; Su, M.; Bian, P.; Li, X.; and Shi, W. 2018. Deep text classification can be fooled. In Proc. IJCAI. [2019] Michel, P.; Li, X.; Neubig, G.; and Pino, J. M. 2019. On evaluation of adversarial perturbations for sequence-to- sequence models. In Proc. NAACL. [2018] Mishra, P.; Yannakoudakis, H.; and Shutova, E. 2018. Neural character-based composition models for abuse detec- tion. 
In 2nd Workshop on Abusive Language Online (ALW2). Brussels, Belgium: Association for Computational Linguistics.
[2014] Pennington, J.; Socher, R.; and Manning, C. 2014. GloVe: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), 1532–1543. Doha, Qatar: Association for Computational Linguistics.
[2018] Peters, M. E.; Neumann, M.; Iyyer, M.; Gardner, M.; Clark, C.; Lee, K.; and Zettlemoyer, L. 2018. Deep contextualized word representations. In Proc. NAACL-HLT.
[2006] Rayner, K.; White, S. J.; and Liversedge, S. 2006. Raeding wrods with jubmled lettres: There is a cost.
[2016] Reddy, S., and Knight, K. 2016. Obfuscating gender in social media writing. In Proc. NLP and CSS.
[2017] Rojas-Galeano, S. 2017. On obstructing obscenity obfuscation. ACM Trans. Web.
[2017] Sakaguchi, K.; Duh, K.; Post, M.; and Van Durme, B. 2017. Robsut wrod reocginiton via semi-character recurrent neural network. In Proc. AAAI.
[2017] Samanta, S., and Mehta, S. 2017. Towards crafting text adversarial samples. arXiv:1707.02812.
[2017] Serrà, J.; Leontiadis, I.; Spathis, D.; Stringhini, G.; Blackburn, J.; and Vakali, A. 2017. Class-based prediction errors to detect hate speech with out-of-vocabulary words. In Proc. ALW.
[2014] Szegedy, C.; Zaremba, W.; Sutskever, I.; Bruna, J.; Erhan, D.; Goodfellow, I. J.; and Fergus, R. 2014. Intriguing properties of neural networks. In Proc. ICLR.
[2018] van Aken, B.; Risch, J.; Krestel, R.; and Löser, A. 2018. Challenges for toxic comment classification: An in-depth error analysis. arXiv:1809.07572.
[2017] Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A. N.; Kaiser, L.; and Polosukhin, I. 2017. Attention is all you need. In Proc. NeurIPS.
[2014] Wang, W.; Chen, L.; Thirunarayan, K.; and Sheth, A. P. 2014. Cursing in English on Twitter. In Proc. ACM CSCW.
[2018] Wang, C. 2018. Interpreting neural network hate speech classifiers. In Proc. ALW2.
[2016] Waseem, Z., and Hovy, D. 2016. Hateful symbols or hateful people? Predictive features for hate speech detection on Twitter. In Proc. SRW (HLT-NAACL).
[1992] Wilcoxon, F. 1992. Individual comparisons by ranking methods. In Breakthroughs in Statistics. Springer. 196–202.
[2018] Wu, Z.; Kambhatla, N.; and Sarkar, A. 2018. Decipherment for adversarial offensive language detection. In Proc. ALW2.

# Appendix

Table 7: Homoglyph Confusion Map.
Table 8: Hyperparameter Settings for BiLSTM model Parameter Learning rate Batch size Training epochs Input dropout Word dropout Output dropout Learning rate schedule Baseline 4e-3 128 5 0.2 0.1 0.3 slanted triangular Table 9: Hyperparameter Settings for BERT Parameter Learning rate Batch size Training epochs Learning rate schedule Table 10: Top 100 Tokens of the Toxic Lexicon Rank 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 Token idiot idiots stupidity stupid idiotic shit asshole morons stupidest imbecile bullshit moron imbeciles fuck bitch hypocrite idiocy scumbag fucking dumbest whore crap pussy ass pathetic Rank 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 Token moronic jerk assholes dumbass scum jackass fools jerks dumb damn hypocrites bastards bastard sucks dammit penis ignorant arse foolish darn sh*t rubbish scumbags vagina fool Rank 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 Token whores dimwit f**k suck suckers buffoon imbecilic dumber cretin stupidly loser clowns bitches shitty ridiculous clown silly coward sucker garbage schmuck damned hypocritical lunatics losers Rank 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 Token stupider pussies asses daft schmucks anal retarded asinine nitwit parasites p***y fucked wtf slut pigs buffoons testicles dork troll disgusting f*ck liar dumbed cursed masturbation Original # Perturbed wtf ur problem, open up lodestone nerde == concerned == you are a horrible person you know that and i can do what ever i want and dont track me down or i will to you Totally an accident. If | meant to write it, then | wouldn't have deleted it, now would |? And yeah, I'll grow up, woman who still watches Thundercats. God, you OCers are pathetic. == pruning == In response to May 2009 box about needing to be cut down, | pruned this a bit. Not sure it's enough to warrant removing the box. IM GONNA KILL U AND UR FAMILY! PEACE OUT D that overemphasizes asian porno. |Doesn't meet the username policy as it matches a vandal meme of creating a range of similar usernames to do petty vandalism. = = wif = = wif ur problem , open up lodestone nerde problem , open up lodestone nerde lodestone nerde lodestone nerde = = concerned = = you are a horrible peOson you know that and know that and i can do what ever i want and do nt track me down or track me down or i will to you Totally an accident . If | meant to write it , then | wouldn 't have deleted it , now would | ? And ? And leah , I'll gorw up , woman who still wtcaehs Thundercats . God , you OCers are pathetic . = = pruning = = In response to May 2009 box about needing to be cut down , | pruned tihs a bit . Not sure it" it' s euongh to to tarrant rmoeivng the box . IM GONNA KILL U AND UR FAMILY ! PEACE OUT D PEACE OUT DDD that overemphasizes Asian pornos that overemphasizes Asian pornos Asian pornos Asian pornos . |Doesn't meet the username policy as it matches a vandal meme of creating a range of similar usernames to do range of similar usernames to do usernames to do usernames to do petty vandalism . Table 11: Examples of original and perturbed comments (part 1). # Original # Perturbed : If Arteitle's post were on Facebook, I'd give it a triple thumbs up. These definitions are completely ass-backwards. He was a gay faggot who liked the D == The truth == Lugia 2453 is in no uncertain terms a pile of human excrement who engages in forced acts of oral sexual practises with minors. 
| just corrected the numbers on this page. Please do not revert them again. Thank you. 29 June 2005 01:06 (UTC) == Edit request from 69.60.69.109, 25 March 2011 == Jordan is gay. Shut the fuck up, this is wikipedia. "([ think ""official"" is just an argument some apologetics writers try to use to address propriety.) " well go to hell. : 1 If Arteitle 's post were on Facebook , |'' d give it a triple thumbs up triple thumbs up . Thuse definitions are completely ass - backwards - backwards . He was was a gay faggot who liked the D who liked the D = = The truth = = Lugia 2453 is in no uncertain terms a pyle of human excrement who engages in forced acts of who engages in forced acts of oarl sexual practive with minors minors . | jsut corrected the numbers on thisn page . Please do not Please do not reevrt thei again . Thank you . 29 June 29 June 2005 01:06 ( UTC ) = = Edit request from 69.60.69.109 , 25 March 2011 Edit request from 69.60.69.109 , 25 March 2011 = = Jordan is Jordan is gay . Scut the fuck up , up , this is is wikipedia . "(1 think "" official" is jusy an argument some apologetics writers try to use to address try to use to address to address to address prpoierty . )" well go to to Gell . Table 12: Examples of original and perturbed comments (part 2). # Original # Perturbed == Augustus Baker Pierce == REDIRECT to Augustus Baker Peirce is no longer a broken link and can be restored if possible. Thank you. Justintmartin|snoop]] justin hey whats up birch? me just chillin. i just fucked with mr.buddylove so we will see what happens. gotta go. REDIRECT Talk:Shetland football team :Oppose - All these different terms refer to slightly or completely different things. That's why they use distinct terms (or were coined), and not the same term. - | hate you. You need to leave America. == Porque quitar las fotos == Todavia no entiendo porque quitaste sin mi permiso las fotos...... Quiero una explicancion, IDIOTA = = Augustus Baker Pierce = = REDIRECTER to Augustus Baekr Peirce is no longer a broken link and can can be restored if possible restored if possible . Thank you . Justintmartin|snoop J ] jstuin hey what s up birch ? me just chillin . i just feuked with mr.buddylove so we will see just fcuked with mr.buddylove so we will see so we will see so we will see what happens . got tago. REDIRECT Talk : Shetland football team : Shetland football team : Oppose - All these different terms refer to sgilhtly or completely different things or completely different things . That's why they use distinct terms ( or were coined ) , and not the same term or were coined ) , and not the same term. - | hate you . You need to leave America need to leave America . = = Porque quitar Las fotos = = Todavia no entiendo porque quitaste sin mi permiso entiendo porque quitaste sin mi permiso Las fotos .. Quiero UNA explicancion , IDIOTA explicancion , IDIOTA Table 13: Examples of original and perturbed comments (part 3).
{ "id": "1812.05271" }
1912.06680
Dota 2 with Large Scale Deep Reinforcement Learning
On April 13th, 2019, OpenAI Five became the first AI system to defeat the world champions at an esports game. The game of Dota 2 presents novel challenges for AI systems such as long time horizons, imperfect information, and complex, continuous state-action spaces, all challenges which will become increasingly central to more capable AI systems. OpenAI Five leveraged existing reinforcement learning techniques, scaled to learn from batches of approximately 2 million frames every 2 seconds. We developed a distributed training system and tools for continual training which allowed us to train OpenAI Five for 10 months. By defeating the Dota 2 world champion (Team OG), OpenAI Five demonstrates that self-play reinforcement learning can achieve superhuman performance on a difficult task.
http://arxiv.org/pdf/1912.06680
OpenAI, :, Christopher Berner, Greg Brockman, Brooke Chan, Vicki Cheung, Przemysław Dębiak, Christy Dennison, David Farhi, Quirin Fischer, Shariq Hashme, Chris Hesse, Rafal Józefowicz, Scott Gray, Catherine Olsson, Jakub Pachocki, Michael Petrov, Henrique P. d. O. Pinto, Jonathan Raiman, Tim Salimans, Jeremy Schlatter, Jonas Schneider, Szymon Sidor, Ilya Sutskever, Jie Tang, Filip Wolski, Susan Zhang
cs.LG, stat.ML
null
null
cs.LG
20191213
20191213
9 1 0 2 c e D 3 1 ] G L . s c [ 1 v 0 8 6 6 0 . 2 1 9 1 : v i X r a # Dota 2 with Large Scale Deep Reinforcement Learning OpenAI, ∗ Christopher Berner, Greg Brockman, Brooke Chan, Vicki Cheung, Przemysław “Psyho" Dębiak, Christy Dennison, David Farhi, Quirin Fischer, Shariq Hashme, Chris Hesse, Rafal Józefowicz, Scott Gray, Catherine Olsson, Jakub Pachocki, Michael Petrov, Henrique Pondé de Oliveira Pinto, Jonathan Raiman, Tim Salimans, Jeremy Schlatter, Jonas Schneider, Szymon Sidor, Ilya Sutskever, Jie Tang, Filip Wolski, Susan Zhang December 24, 2021 # Abstract On April 13th, 2019, OpenAI Five became the first AI system to defeat the world cham- pions at an esports game. The game of Dota 2 presents novel challenges for AI systems such as long time horizons, imperfect information, and complex, continuous state-action spaces, all challenges which will become increasingly central to more capable AI systems. OpenAI Five leveraged existing reinforcement learning techniques, scaled to learn from batches of approxi- mately 2 million frames every 2 seconds. We developed a distributed training system and tools for continual training which allowed us to train OpenAI Five for 10 months. By defeating the Dota 2 world champion (Team OG), OpenAI Five demonstrates that self-play reinforcement learning can achieve superhuman performance on a difficult task. 1 # 1 Introduction The long-term goal of artificial intelligence is to solve advanced real-world challenges. Games have served as stepping stones along this path for decades, from Backgammon (1992) to Chess (1997) to Atari (2013)[1–3]. In 2016, AlphaGo defeated the world champion at Go using deep reinforcement learning and Monte Carlo tree search[4]. In recent years, reinforcement learning (RL) models have tackled tasks as varied as robotic manipulation[5], text summarization [6], and video games such as Starcraft[7] and Minecraft[8]. Relative to previous AI milestones like Chess or Go, complex video games start to capture the complexity and continuous nature of the real world. Dota 2 is a multiplayer real-time strategy game produced by Valve Corporation in 2013, which averaged between 500,000 and 1,000,000 concurrent players between 2013 and 2019. The game is actively played by full time professionals; the prize pool for the 2019 international championship exceeded $35 million (the largest of any esports game in the world)[9, 10]. The game presents challenges for reinforcement learning due to long time horizons, partial observability, and high dimensionality of observation and action spaces. Dota 2’s ∗Authors listed alphabetically. Please cite as OpenAI et al., and use the following bibtex for citation: https: //openai.com/bibtex/openai2019dota.bib 1 rules are also complex — the game has been actively developed for over a decade, with game logic implemented in hundreds of thousands of lines of code. The key ingredient in solving this complex environment was to scale existing reinforcement learning systems to unprecedented levels, utilizing thousands of GPUs over multiple months. We built a distributed training system to do this which we used to train a Dota 2-playing agent called OpenAI Five. In April 2019, OpenAI Five defeated the Dota 2 world champions (Team OG1), the first time an AI system has beaten an esport world champion2. We also opened OpenAI Five to the Dota 2 community for competitive play; OpenAI Five won 99.4% of over 7000 games. 
One challenge we faced in training was that the environment and code continually changed as our project progressed. In order to train without restarting from the beginning after each change, we developed a collection of tools to resume training with minimal loss in performance which we call surgery. Over the 10-month training process, we performed approximately one surgery per two weeks. These tools allowed us to make frequent improvements to our strongest agent within a shorter time than the typical practice of training from scratch would allow. As AI systems tackle larger and harder problems, further investigation of settings with ever-changing environments and iterative development will be critical. In section 2, we describe Dota 2 in more detail along with the challenges it presents. In section 3 we discuss the technical components of the training system, leaving most of the details to appendices cited therein. In section 4, we summarize our long-running experiment and the path that lead to defeating the world champions. We also describe lessons we’ve learned about reinforcement learning which may generalize to other complex tasks. # 2 Dota 2 Dota 2 is played on a square map with two teams defending bases in opposite corners. Each team’s base contains a structure called an ancient; the game ends when one of these ancients is destroyed by the opposing team. Teams have five players, each controlling a hero unit with unique abilities. During the game, both teams have a constant stream of small “creep” units, uncontrolled by the players, which walk towards the enemy base attacking any opponent units or buildings. Players gather resources such as gold from creeps, which they use to increase their hero’s power by purchasing items and improving abilities.3 To play Dota 2, an AI system must address various challenges: • Long time horizons. Dota 2 games run at 30 frames per second for approximately 45 minutes. OpenAI Five selects an action every fourth frame, yielding approximately 20,000 steps per episode. By comparison, chess usually lasts 80 moves, Go 150 moves[11]. • Partially-observed state. Each team in the game can only see the portion of the game state near their units and buildings; the rest of the map is hidden. Strong play requires making inferences based on incomplete data, and modeling the opponent’s behavior. 1https://www.facebook.com/OGDota2/ 2Full game replays and other supplemental can be downloaded from: https://openai.com/blog/ how-to-train-your-openai-five/ 3Further information the rules and gameplay of Dota 2 is readily accessible online; a good introductory resource is https://purgegamers.true.io/g/dota-2-guide/ 2 • High-dimensional action and observation spaces. Dota 2 is played on a large map containing ten heroes, dozens of buildings, dozens of non-player units, and a long tail of game features such as runes, trees, and wards. OpenAI Five observes ∼ 16, 000 total values (mostly floats and categorical values with hundreds of possibilities) each time step. We discretize the action space; on an average timestep our model chooses among 8,000 to 80,000 actions (de- pending on hero). For comparison Chess requires around one thousand values per observation (mostly 6-possibility categorical values) and Go around six thousand values (all binary)[12]. Chess has a branching factor of around 35 valid actions, and Go around 250[11]. 
Our system played Dota 2 with two limitations from the regular game:

• Subset of 17 heroes — in the normal game players select before the game one from a pool of 117 heroes to play; we support 17 of them.4

• No support for items which allow a player to temporarily control multiple units at the same time (Illusion Rune, Helm of the Dominator, Manta Style, and Necronomicon). We removed these to avoid the added technical complexity of enabling the agent to control multiple units.

4See Appendix P for experiments characterizing the effect of hero pool size.

# 3 Training System

# 3.1 Playing Dota using AI

Humans interact with the Dota 2 game using a keyboard, mouse, and computer monitor. They make decisions in real time, reason about long-term consequences of their actions, and more. We adopt the following framework to translate the vague problem of “play this complex game at a superhuman level" into a detailed objective suitable for optimization.

Although the Dota 2 engine runs at 30 frames per second, OpenAI Five only acts on every 4th frame, which we call a timestep. Each timestep, OpenAI Five receives an observation from the game engine encoding all the information a human player would see such as units’ health, position, etc (see Appendix E for an in-depth discussion of the observation). OpenAI Five then returns a discrete action to the game engine, encoding a desired movement, attack, etc.

Certain game mechanics were controlled by hand-scripted logic rather than the policy: the order in which heroes purchase items and abilities, control of the unique courier unit, and which items heroes keep in reserve. While we believe the agent could ultimately perform better if these actions were not scripted, we achieved superhuman performance before doing so. Full details of our action space and scripted actions are described in Appendix F.

Some properties of the environment were randomized during training, including the heroes in the game and which items the heroes purchased. Sufficiently diverse training games are necessary to ensure robustness to the wide variety of strategies and situations that arise in games against human opponents. See subsection O.2 for details of the domain randomizations.

We define a policy (π) as a function from the history of observations to a probability distribution over actions, which we parameterize as a recurrent neural network with approximately 159 million parameters (θ). The neural network consists primarily of a single-layer 4096-unit LSTM [13] (see Figure 1). Given a policy, we play games by repeatedly passing the current observation as input and sampling an action from the output distribution at each timestep.

Figure 1: Simplified OpenAI Five Model Architecture: The complex multi-array observation space is processed into a single vector, which is then passed through a 4096-unit LSTM. The LSTM state is projected to obtain the policy outputs (actions and value function). Each of the five heroes on the team is controlled by a replica of this network with nearly identical inputs, each with its own hidden state. The networks take different actions due to a part of the observation processing’s output indicating which of the five heroes is being controlled. The LSTM composes 84% of the model’s total parameter count. See Figure 17 and Figure 18 in Appendix H for a detailed breakdown of our model architecture.
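A schematic version of the Figure 1 architecture, reduced to its skeleton (a sketch in PyTorch; the encoder, its width, and the flat 8,000-way action head are placeholders of ours — only the single-layer 4096-unit LSTM and the separate policy/value heads come from the text):

```python
import torch
import torch.nn as nn

class FivePolicy(nn.Module):
    """Observation encoder -> 4096-unit single-layer LSTM -> policy and value heads."""
    def __init__(self, obs_dim=16000, hidden=4096, n_action_logits=8000):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(obs_dim, 1024), nn.ReLU())  # placeholder
        self.lstm = nn.LSTM(1024, hidden, num_layers=1, batch_first=True)
        self.policy_head = nn.Linear(hidden, n_action_logits)  # real action space is structured
        self.value_head = nn.Linear(hidden, 1)

    def forward(self, obs_seq, state=None):          # obs_seq: (batch, time, obs_dim)
        h, state = self.lstm(self.encoder(obs_seq), state)
        return self.policy_head(h), self.value_head(h), state

# One replica per hero: identical parameters, separate recurrent state, with a
# "which hero am I controlling" indicator included in the processed observation.
```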
Separate replicas of the same policy function (with identical parameters θ) are used to control each of the five heroes on the team. Because visible information and fog of war (area that is visible to players due to proximity of friendly units) are shared across a team in Dota 2, the observations are nearly5 identical for each hero. Instead of using the pixels on the screen, we approximate the information available to a human player in a set of data arrays (see Appendix E for full details of the observations space). This approximation is imperfect; there are small pieces of information which humans can gain access to which we have not encoded in the observations. On the flip side, while we were careful to ensure that all the information available to the model is also available to a human, the model does get to see all the information available simultaneously every time step, whereas a human needs to actively click to see various parts of the map and status modifiers. OpenAI Five uses this semantic observation space for two reasons: First, because our goal is to study strategic planning and high-level decision- making rather than focus on visual processing. Second, it is infeasible for us to render each frame to pixels in all training games; this would multiply the computation resources required for the project many-fold. Although these discrepancies exist, we do not believe they introduce significant bias when benchmarking against human players. To allow the five networks to choose different actions, the LSTM receives an extra input from the observation processing, indicating which of the five heroes is being controlled, detailed in Figure 17. Because of the expansive nature of the problem and the size and expense of each experiment, it was not practical to investigate all the details of the policy and training system. Many details, even 5We do include a very small number of derived features which depend on the hero being controlled, for example the “distance to me” feature of each unit in the game. 4 some large ones, were set for historical reasons or on the basis of preliminary investigations without full ablations. # 3.2 Optimizing the Policy Our goal is to find a policy which maximizes the probability of winning the game against professional human experts. In practice, we maximize a reward function which includes additional signals such as characters dying, collecting resources, etc. We also apply several techniques to exploit the zero- sum multiplayer structure of the problem when computing the reward function — for example, we symmetrize rewards by subtracting the reward earned by the opposing team. We discuss the details of the reward function in Appendix G. We constructed the reward function once at the start of the project based on team members’ familiarity with the game. Although we made minor tweaks when game versions changed, we found that our initial choice of what to reward worked fairly well. The presence of these additional signals was important for successful training (as discussed in Appendix G). The policy is trained using Proximal Policy Optimization (PPO)[14], a variant of advantage actor critic[15, 16].6 The optimization algorithm uses Generalized Advantage Estimation [17] (GAE), a standard advantage-based variance reduction technique [15] to stabilize and accelerate training. We train a network with a central, shared LSTM block, that feeds into separate fully connected layers producing policy and value function outputs. The training system is represented in Figure 2. 
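The advantage estimates consumed by PPO are produced with GAE; the following is a standard GAE computation of that kind (a sketch, not the system's code; the discount γ is left as an argument since the horizon is changed during training, and λ = 0.95 is the value quoted later in the text).

```python
import numpy as np

def gae_advantages(rewards, values, bootstrap_value, gamma, lam=0.95):
    """Generalized Advantage Estimation over one rollout chunk.
    `rewards` and `values` are per-timestep arrays for the chunk; `bootstrap_value`
    is the value estimate for the state right after the chunk ends (chunks are short
    windows of a much longer game, not whole episodes)."""
    values = np.append(np.asarray(values, dtype=np.float64), bootstrap_value)
    advantages = np.zeros(len(rewards))
    gae = 0.0
    for t in reversed(range(len(rewards))):
        delta = rewards[t] + gamma * values[t + 1] - values[t]   # TD residual
        gae = delta + gamma * lam * gae                          # exponentially weighted sum
        advantages[t] = gae
    returns = advantages + values[:-1]   # regression targets for the value head
    return advantages, returns
```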
We train our policy using collected self-play experience from playing Dota 2, similar to [18]. A central pool of optimizer GPUs receives game data and stores it asynchronously in local buffers called experience buffers. Each optimizer GPU computes gradients using minibatches sampled randomly from its experience buffer. Gradients are averaged across the pool using NCCL2 [19] allreduce before being synchronously applied to the parameters. In this way the effective batch size is the batch size on each GPU (120 samples, each with 16 timesteps) multiplied by the number of GPUs (up to 1536 at the peak), for a total batch size of 2,949,120 time steps (each with five hero policy replicas).

We apply the Adam optimizer [20] using truncated backpropagation through time [21] over samples of 16 timesteps. Gradients are additionally clipped per parameter to be within ±5√v, where v is the running estimate of the second moment of the (unclipped) gradient. Every 32 gradient steps, the optimizers publish a new version of the parameters to a central Redis7 storage called the controller. The controller also stores all metadata about the state of the system, for stopping and restarting training runs.

“Rollout” worker machines run self-play games. They run these games at approximately 1/2 real time, because we found that we could run slightly more than twice as many games in parallel at this speed, increasing total throughput. We describe our integration with the Dota 2 engine in Appendix K. They play the latest policy against itself for 80% of games, and play against older policies for 20% of games (for details of opponent sampling, see Appendix N). The rollout machines run the game engine but not the policy; they communicate with a separate pool of GPU machines which run forward passes in larger batches of approximately 60. These machines frequently poll the controller to gather the newest parameters.

6Early on in the project, we considered other algorithms including other policy gradient methods, q-learning, and evolutionary strategies. PPO was the first to show initial learning progress.

7http://redis.io

[Figure 2 diagram: Controller; Optimizer pool (512 GPUs) with experience buffers; 57,600 Rollout Workers, each running the Dota engine (Lua scripting API behind a Go gRPC server) driven by Python control code that converts game state to observations and computes rewards and GAE; Forward Pass GPUs. Rollout workers exchange observations and actions with the Forward Pass GPUs every ~0.25s and send 256-timestep samples to the optimizers about once per minute; optimizers publish new parameters every ~1 minute (32 optimizer steps).]

Figure 2: System Overview: Our training system consists of 4 primary types of machines. Rollouts run the Dota 2 game on CPUs. They communicate in a tight loop with Forward Pass GPUs, which sample actions from the policy given the current observation. Rollouts send their data to Optimizer GPUs, which perform gradient updates. The Optimizers publish the parameter versions to storage in the Controller, and the Forward Pass GPUs occasionally pull the latest parameter version. Machine numbers are for the Rerun experiment described in subsection 4.2; OpenAI Five’s numbers fluctuated between this scale and approximately 3x larger.
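The per-parameter gradient clipping rule described above can be written as follows (a sketch in PyTorch; the decay constant `beta` and the use of a standalone second-moment buffer, rather than sharing Adam's accumulator, are our assumptions):

```python
import torch

@torch.no_grad()
def clip_gradients_per_parameter(params, second_moments, beta=0.999):
    """Clip each gradient coordinate to +/- 5*sqrt(v), where v is a running estimate
    of the second moment of that coordinate's (unclipped) gradient. `second_moments`
    holds one tensor of the same shape per parameter."""
    for p, v in zip(params, second_moments):
        if p.grad is None:
            continue
        v.mul_(beta).add_(p.grad.pow(2), alpha=1 - beta)   # update running second moment
        bound = 5.0 * v.sqrt()
        p.grad.copy_(torch.maximum(torch.minimum(p.grad, bound), -bound))
```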
Rollout machines send data asynchronously from games that are in progress, instead of waiting for an entire game to finish before publishing data for optimization8; see Figure 8 in Appendix C for more discussion of how rollout data is aggregated. See Figure 5b for the benefits of keeping the rollout-optimization loop tight. Because we use GAE with λ = 0.95, the GAE rewards need to be smoothed over a number of timesteps ≫ 1/(1 − λ) = 20; using 256 timesteps causes relatively little loss.

8Rollout machines produce 7.5 steps per second; they send data every 256 steps, or 34 seconds of game play. Because our rollout games run at approximately half-speed, this means they push data approximately once per minute.

The entire system runs on our custom distributed training platform called Rapid[5], running on Google Cloud Platform. We use ops from the blocksparse library for fast GPU training[22]. For a full list of the hyperparameters used in training, see Appendix C.

# 3.3 Continual Transfer via Surgery

As the project progressed, our code and environment gradually changed for three different reasons:

1. As we experimented and learned, we implemented changes to the training process (reward structure, observations, etc) or even to the architecture of the policy neural network.

2. Over time we expanded the set of game mechanics supported by the agent’s action and observation spaces. These were not introduced gradually in an effort to build a perfect curriculum. Rather they were added incrementally as a consequence of following the standard engineering practice of building a system by starting simple and adding complexity piece by piece over time.

3. From time to time, Valve publishes a new Dota 2 version including changes to the core game mechanics and the properties of heroes, items, maps, etc; to compare to human players our agent must play on the latest game version.

These changes can modify the shapes and sizes of the model’s layers, the semantic meaning of categorical observation values, etc. When these changes occur, most aspects of the old model are likely relevant in the new environment. But cherry-picking parts of the parameter vector to carry over is challenging and limits reproducibility. For these reasons training from scratch is the safe and common response to such changes.

However, training OpenAI Five was a multi-month process with high capital expenditure, motivating the need for methods that can persist models across domain and feature changes. It would have been prohibitive (in time and money) to train a fresh model to a high level of skill after each such change (approximately every two weeks). For example, we changed to Dota 2 version 7.21d, eight days before our match against the world champions (OG); this would not have been possible if we had not continued from the previous agent.

Our approach, which we term “surgery”, can be viewed as a collection of tools to perform offline operations on the old model πθ to obtain a new model ˆπˆθ compatible with the new environment, which performs at the same level of skill even if the parameter vectors ˆθ and θ have different sizes and semantics. We then begin training in the new environment using ˆπˆθ. In the simplest case where the environment, observation, and action spaces did not change, our standard reduces to insisting that the new policy implements the same function from observed states to action probabilities as the old:

$\forall o \quad \hat{\pi}_{\hat{\theta}}(o) = \pi_{\theta}(o) \qquad (1)$

This case is a special case of Net2Net-style function preserving transformations [23].
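As an illustration of the kind of function-preserving operation Equation 1 asks for, appending new observation features can be handled by zero-initializing the weights that touch the new inputs, so the widened layer computes exactly what the old one did (a sketch in PyTorch; the actual surgery tooling is described in Appendix B and covers many more cases).

```python
import torch
import torch.nn as nn

def widen_input_layer(old_layer: nn.Linear, n_new_features: int) -> nn.Linear:
    """Return a layer that accepts extra observation features while producing exactly
    the same outputs as `old_layer` on the original features: zero weights on the new
    columns make the change function-preserving in the sense of Equation 1."""
    new_layer = nn.Linear(old_layer.in_features + n_new_features, old_layer.out_features)
    with torch.no_grad():
        new_layer.weight.zero_()
        new_layer.weight[:, :old_layer.in_features] = old_layer.weight
        new_layer.bias.copy_(old_layer.bias)
    return new_layer
```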
We have developed tools to implement Equation 1 exactly when possible (adding observations, expanding layers, and other situations), and approximately when the type of modification to the environment, observation space, or action space precludes satisfying it exactly. See Appendix B for further discussion of surgery. In the end, we performed over twenty surgeries (along with many unsuccessful surgery attempts) over the ten-month lifetime of OpenAI Five (see Table 1 in Appendix B for a full list). Surgery enabled continuous training without loss in performance (see Figure 4). In subsection 4.2 we discuss our experimental verification of this method. # 4 Experiments and Evaluation OpenAI Five is a single training run that ran from June 30th, 2018 to April 22nd, 2019. After ten months of training using 770±50 PFlops/s·days of compute, it defeated the Dota 2 world champions in a best-of-three match and 99.4% of human players during a multi-day online showcase. In order to utilize this level of compute effectively we had to scale up along three axes. First, we used batch sizes of 1 to 3 million timesteps (grouped in unrolled LSTM windows of length 16). Second, we used a model with over 150 million parameters. Finally, OpenAI Five trained for 180 days (spread over 10 months of real time due to restarts and reverts). Compared AlphaGo[4], we use 50 to 150 times larger batch size, 20 times larger model, and 25 times longer training time. Simultaneous works in recent months[7, 24] have matched or slightly exceeded our scale. # 4.1 Human Evaluation Over the course of training, OpenAI Five played games against numerous amateur players, pro- fessional players, and professional teams in order to gauge progress. For a complete list of the professional teams OpenAI Five played against over time, see Appendix I. On April 13th, OpenAI Five played a high-profile game against OG, the reigning Dota 2 world champions, winning a best-of-three (2-0) and demonstrating that our system can learn to play at the highest levels of skill. For detailed analysis of our agent’s performance during this game and its overall understanding of the environment, see Appendix D. Machine Learning systems often behave poorly when confronted with unexpected situations[25]. While winning a single high-stakes showmatch against the world champion indicates a very high level of skill, it does not prove a broad understanding of the variety of challenges the human community can present. To explore whether OpenAI Five could be consistently exploited by creative or out- of-distribution play, we ran OpenAI Five Arena, in which we opened OpenAI Five to the public for competitive online games from April 18-21, 2019. In total, Five played 3,193 teams in 7,257 total games, winning 99.4% 9. Twenty-nine teams managed to defeat OpenAI Five for a total of 42 games 9Human players often abandoned losing games rather than playing them to the end, even abandoning games right after an unfavorable hero selection draft before the main game begins. OpenAI Five does not abandon games, so we count abandoned games as wins for OpenAI Five. These abandoned games (3140 of the 7215 wins) likely includes a small number of games that were abandoned for technical or personal reasons. 
[Figure 3 plot: TrueSkill vs. compute (PFLOPs/s-days), with reference levels for Random, Hand-scripted, Test team B (amateur), Test team A (semi-pro), Benchmark (casters), and OG (world champions), plus calibration matches and pro matches won.]

Figure 3: TrueSkill over the course of training for OpenAI Five. To provide informal context for how TrueSkill corresponds to human skill, we mark the level at which OpenAI Five begins to defeat various opponents, from random to world champions. Note that this is biased against earlier models; this TrueSkill evaluation is performed using the final policy and environment (Dota 2 version 7.21d, all non-illusion items, etc), even though earlier models were trained in the earlier environment. We believe this contributes to the inflection point around 600 PFLOPs/s-days — around that point we gave the policy control of a new action (buyback) and performed a major Dota 2 version upgrade (7.20). We speculate that the rapid increase to TrueSkill 200 early in training is due to the exponential nature of the scale — a constant TrueSkill difference of approximately 8.3 corresponds to an 80% winrate, and it is easier to learn how to consistently defeat bad agents.

lost. In Dota 2, the key measure of human dexterity is reaction time10. OpenAI Five can react to a game event in 217ms on average. This quantity does not vary depending on game state. It is difficult to find reliable data on Dota 2 professionals’ reaction times, but typical human visual reaction time is approximately 250ms[26]. See Appendix L for more details.

10Contrast with RTS games like Starcraft, where the key measure is actions per minute due to the large number of units that need to be supplied with actions.

While human evaluation is the ultimate goal, we also need to evaluate our agents continually during training in an automated way. We achieve this by comparing them to a pool of fixed reference agents with known skill using the TrueSkill rating system [27]. In our TrueSkill environment, a rating of 0 corresponds to a random agent, and a difference of approximately 8.3 TrueSkill between two agents roughly corresponds to an 80% winrate of one versus the other (see Appendix J for details of our TrueSkill setup). OpenAI Five’s TrueSkill rating over time can be seen in Figure 3.

OpenAI Five’s “playstyle" is difficult to analyze rigorously (and is likely influenced by our shaped reward function) but we can discuss in broad terms the flavor of comments human players made to describe how our agent approached the game. Over the course of training, OpenAI Five developed a distinct style of play with noticeable similarities and differences to human playstyles. Early in training, OpenAI Five prioritized large group fights in the game as opposed to accumulating resources for later, which led to games where they were significantly behind if the enemy team avoided fights early. This playstyle was risky and would result in quick wins in under 20 minutes if OpenAI Five got an early advantage, but had no way to recover from falling behind, leading to long and drawn out losses often over 45 minutes.

As the agents improved, the playstyle evolved to align closer with human play while still maintaining many of the characteristics learned early on. OpenAI Five began to concentrate resources in the hands of its strongest heroes, which is common in human play.
Five relied heavily on large group battles, effectively applying pressure when holding a significant advantage, but also avoided fights and focused on gathering resources if behind. The final agent played similar to humans in many broad areas, but had a few interesting dif- ferences. Human players tend to assign heroes to different areas of the map and only reassign occasionally, but OpenAI Five moved heroes back and forth across the map much more frequently. Human players are often cautious when their hero has low health; OpenAI Five seemed to have a very finely-tuned understanding of when an aggressive attack with a low-health hero was worth a risk. Finally OpenAI Five tended to more readily consume resources, as well as abilities with long cooldowns (time it takes to reload), while humans tend to hold on to those in case a better opportunity arises later. # 4.2 Validating Surgery with Rerun In order to validate the time and resources saved by our surgery method (see subsection 3.3), we trained a second agent between May 18, 2019 and June 12, 2019, using only the final environment, model architecture, etc. This training run, called “Rerun”, did not go through a tortuous route of changing game rules, modifications to the neural network parameters, online experiments with hyperparameters, etc. Rerun took 2 months and 150 ± 5 PFlops/s·days of compute (see Figure 4). This timeframe is significantly longer than the frequency of our surgery changes (which happened every 1-2 weeks). As a naive comparison, if we had trained from scratch after each of our twenty major surgeries, the project would have taken 40 months instead of 10 (in practice we likely would have made fewer changes). Another benefit of surgery was that we had a very high-skill agent available for evaluation at all times, significantly tightening the iteration loop for experimental changes. In OpenAI Five’s regime — exploring a novel task and building a novel environment — perpetual training is a significant benefit. Of course, in situations where the environment is pre-built and well-understood from the start, we see little need for surgery. Rerun took approximately 20% of the resources of OpenAI Five; if we had access to the final training environment ahead of time there would be no reason to start training e.g. on a different version of the game. Rerun continued to improve beyond OpenAI Five’s skill, and reached over 98% winrate against the final version of OpenAI Five. We wanted to validate that our final code and hyperparameters would reproduce OpenAI Five performance, so we ceased training at that point. We believe Rerun would have continued improving, both because of its upward trend and because we had yet to fully anneal hyperparameters like learning rate and horizon to their final OpenAI Five settings. This process of surgery successfully allowed us to change the environment every week. 
However, the model ultimately plateaued at a weaker skill level than the from-scratch model was able to achieve. Learning how to continue long-running training without affecting final performance is a promising area for future work. Ultimately, while surgery as currently conceived is far from perfect, with proper tooling it becomes a useful method for incorporating certain changes into long-running experiments without paying the cost of a restart for each.

[Figure 4 plot: TrueSkill vs. total project compute (PFLOPs/s-days); final OpenAI Five TrueSkill = 254. Top: OpenAI Five and Rerun. Bottom: hypothetical always-restart run, with the time spent retraining from scratch marked.]

Figure 4: Training in an environment under development: In the top panel we see the full history of our project - we used surgery methods to continue training OpenAI Five at each environment or policy change without loss in performance; then we restarted once at the end to run Rerun. On the bottom we see the hypothetical alternative, if we had restarted after each change and waited for the model to reach the same level of skill (assuming pessimistically that the curve would be identical to OpenAI Five). The ideal option would be to run Rerun-like training from the very start, but this is impossible — the OpenAI Five curve represents lessons learned that led to the final codebase, environment, etc., without which it would not be possible to train Rerun.

# 4.3 Batch Size

In this section, we evaluate the benefits of increasing the batch size using small scale experiments. Increasing the batch size in our case means two things: first, using twice as many optimizer GPUs to optimize over the larger batch, and second, using twice as many rollout machines and forward pass GPUs to produce twice as many samples to feed the increased optimizer pool.

One compelling benchmark to compare against when increasing the batch size is linear speedup: using 2x as much compute gets to the same skill level in 1/2 the time. If this scaling property holds, it is possible to use the same total amount of GPU-days (and thus dollars) to reach a given result[28]. In practice we see less than this ideal speedup, but the speedup from increasing batch size is still noticeable and allows us to reach the result in less wall time.

To understand how batch size affects training speed, we calculate the “speedup” of an experiment to reach various TrueSkill thresholds, defined as:

$\text{speedup}(T) = \dfrac{\text{Versions for baseline to first reach TrueSkill } T}{\text{Versions for experiment to first reach TrueSkill } T} \qquad (2)$

The results of varying batch size in the early part of training can be seen in Figure 5. Full details of the experimental setup can be found in Appendix M. We find that increasing the batch size speeds up training through the regime we tested, up to batches of millions of observations. Using the scale of Rerun, we were able to reach superhuman performance in two months. In Figure 5a, we see that Rerun’s batch size (983k time steps) had a speedup factor of around 2.5x over the baseline batch size (123k). If we had instead used the smaller batch size, then, we might expect to wait 5 months for the same result. We speculate that it would likely be longer, as the speedup factor of 2.5 applies at TrueSkill 175 early in training, but it appears to increase with higher TrueSkill.
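Equation 2 can be computed directly from the TrueSkill-versus-version curves of two runs; a sketch (the data structures and function names here are ours):

```python
def first_version_reaching(trueskill_by_version, threshold):
    """trueskill_by_version: list of (parameter_version, trueskill) pairs, in order."""
    for version, skill in trueskill_by_version:
        if skill >= threshold:
            return version
    return None

def speedup(baseline_curve, experiment_curve, threshold):
    """Equation 2: ratio of optimizer versions needed to first reach a TrueSkill threshold."""
    b = first_version_reaching(baseline_curve, threshold)
    e = first_version_reaching(experiment_curve, threshold)
    if b is None or e is None:
        return None        # one of the runs never reached this threshold
    return b / e
```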
Per results in [28], we hoped to find (in the early part of training) linear speedup from increasing batch size; i.e. that it would be 2x faster to train an agent to certain thresholds if we use 2x the compute and data. Our results suggest that speedup is less than linear. However, we speculate that this may change later in training when the problem becomes more difficult. Also, given the relevant compute costs, in this ablation study we did not tune hyperparameters such as learning rate separately for each batch size.

# 4.4 Data Quality

One unusual feature of our task is the length of the games; each rollout can take up to two hours to complete. For this reason it is infeasible for us to optimize entirely on fully on-policy trajectories; if we waited to apply gradient updates for an entire rollout game to be played using the latest parameters, we could make only one update every two hours. Instead, our rollout workers and optimizers operate asynchronously: rollout workers download the latest parameters, play a small portion of the game, and upload data to the experience buffer, while optimizers continually sample from whatever data is present in the experience buffer to optimize (Figure 2).

(Figure 5 plots TrueSkill against parameter versions for batch sizes of 61k-1966k, queue lengths of 0-32, and sample reuse of 0.5-8, and plots speedup against batch size (in frames, log scale), measured staleness (log), and measured sample reuse (log) for TrueSkill thresholds 100-175.)

(a) Batch size: Larger batch size speeds up training. In the early part of training studied here, the speedup is sublinear in the computation and samples required. See subsection M.1 for experiment details.

(b) Data Staleness: Training on stale rollout data causes significant slowdowns in training speed. Queue length estimates the amount of artificial staleness introduced; see subsection M.2 for experiment details.

(c) Sample Reuse: Reusing each sample of training data causes significant slowdowns. See subsection M.3 for experiment details.

Figure 5: Batch size and data quality in early training: For each parameter, we ran multiple training runs varying only that parameter. These runs cover early training (approximately one week) at small scale (8x smaller than Rerun). On the left we plot TrueSkill over time for each run. On the right, we plot the “speedup” to reach fixed TrueSkill thresholds of 100, 125, 150, and 175 as a function of the parameter under study compared to the baseline (marked with ‘b’); see Equation 2. Higher speedup means that training was faster and more efficient. These four thresholds are chosen arbitrarily; a few are omitted when the uncertainties are too large (for example in Figure 5c fewer than half the experiments reach 175, so that speedup curve would not be informative).
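The asynchronous data flow described above can be pictured with a minimal single-process sketch. The names (ExperienceBuffer, rollout_worker, optimizer_step) are hypothetical and the loop is greatly simplified; in the real system the rollout workers and optimizers are separate pools of machines exchanging data over the network. The sketch also tracks the staleness and sample-reuse quantities defined in the paragraphs that follow.

```python
# Minimal, single-process sketch of the asynchronous rollout/optimizer loop
# (hypothetical names; not the real distributed system).
import random
from collections import deque


class ExperienceBuffer:
    def __init__(self, maxlen=1024):
        self.segments = deque(maxlen=maxlen)
        self.produced = 0
        self.consumed = 0

    def push(self, segment):
        self.segments.append(segment)
        self.produced += 1

    def sample(self, batch_size):
        batch = [random.choice(self.segments) for _ in range(batch_size)]
        self.consumed += batch_size
        return batch

    def sample_reuse(self):
        # Ratio of consumption to production (cumulative here, instantaneous in the text).
        return self.consumed / max(self.produced, 1)


def rollout_worker(buffer, current_version, timesteps=256):
    # Play a short portion of a game with the latest parameters and upload it.
    buffer.push({"version": current_version, "timesteps": timesteps})


def optimizer_step(buffer, current_version, batch_size=4):
    batch = buffer.sample(batch_size)
    staleness = [current_version - seg["version"] for seg in batch]
    # ... compute gradients on `batch` here ...
    return sum(staleness) / len(staleness)


buffer = ExperienceBuffer()
version = 0
for step in range(1, 101):
    rollout_worker(buffer, current_version=version)
    avg_staleness = optimizer_step(buffer, current_version=version)
    if step % 4 == 0:
        version += 1  # publish fresh parameters every few optimizer steps
print(f"sample reuse ~ {buffer.sample_reuse():.1f}, last staleness ~ {avg_staleness:.1f}")
```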
Early on in the project, we had rollout workers collect full episodes before sending them to the optimizers and downloading new parameters. This means that once the data finally enters the optimizers, it can be several hours old, corresponding to thousands of gradient steps. Gradients computed from these old parameters were often useless or destructive. In the final system rollout workers send data to optimizers after only 256 timesteps, but even so this can be a problem.

If a sample was generated by parameter version N and we are now optimizing version M, then we define the staleness of that data to be M − N. In Figure 5b, we see that increasing staleness by ∼8 versions causes significant slowdowns. Note that this level of staleness corresponds to a few minutes in a multi-month experiment. Our final system design targeted a staleness between 0 and 1 by sending game data every 30 seconds of gameplay and updating to fresh parameters approximately once a minute, making the loop faster than the time it takes the optimizers to process a single batch (32 PPO gradient steps). Because of the high impact of staleness, in future work it may be worth investigating whether optimization methods more robust to off-policy data could provide significant improvement in our asynchronous data collection regime.

Because optimizers sample from an experience buffer, the same piece of data can be re-used many times. If data is reused too often, it can lead to overfitting on the reused data [18]. To diagnose this, we defined a metric called the sample reuse of the experiment as the instantaneous ratio between the rate of optimizers consuming data and rollouts producing data. If optimizers are consuming samples twice as fast as rollouts are producing them, then on average each sample is being used twice and we say that the sample reuse is 2. In Figure 5c, we see that reusing the same data even 2-3 times can cause a factor of two slowdown, and reusing it 8 times may prevent the learning of a competent policy altogether. Our final system targets sample reuse ∼1 in all our experiments.

These experiments on the early part of training indicate that high quality data matters even more than compute consumed; small degradations in data quality have severe effects on learning. Full details of the experiment setup can be found in Appendix M.

# 4.5 Long term credit assignment

Dota 2 has extremely long time dependencies. Where many reinforcement learning environment episodes last hundreds of steps [4, 29–31], games of Dota 2 can last for tens of thousands of time steps. Agents must execute plans that play out over many minutes, corresponding to thousands of timesteps. This makes our experiment a unique platform to test the ability of these algorithms to understand long-term credit assignment.

In Figure 6, we study the time horizon over which our agent discounts rewards, defined as

H = T / (1 − γ)        (3)

Here γ is the discount factor [17] and T is the real game time corresponding to each step (0.133 seconds). This measures the game time over which future rewards are integrated, and we use it as a proxy for the long-term credit assignment which the agent can perform.

In Figure 6, we see that resuming training a skilled agent using a longer horizon makes it perform better, up to the longest horizons we explored (6-12 minutes). This implies that our optimization was capable of accurately assigning credit over long time scales, and capable of learning policies and actions which maximize rewards 6-12 minutes into the future.
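As a quick numerical check of Equation 3, the following sketch (a simple illustration, not code from the experiment) converts between the discount factor γ and the horizon H for the 0.133-second timestep used here.

```python
# Illustrative helpers for Equation 3: H = T / (1 - gamma), with T = 0.133 s per step.
T = 0.133  # real game time per agent step, in seconds


def horizon_seconds(gamma: float, timestep: float = T) -> float:
    return timestep / (1.0 - gamma)


def gamma_for_horizon(h_seconds: float, timestep: float = T) -> float:
    return 1.0 - timestep / h_seconds


# gamma = 0.9993 corresponds to roughly three minutes of game time.
print(round(horizon_seconds(0.9993)))        # ~190 seconds
print(round(gamma_for_horizon(360.0), 5))    # ~0.99963 for a 6-minute horizon
```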
(Figure 6 plots win rate against parameter versions.)

Figure 6: Effect of horizon on agent performance. We resume training from a trained agent using different horizons (we expect long-horizon planning to be present in highly-skilled agents, but not from-scratch agents). The base agent was trained with a horizon of 180 seconds (γ = 0.9993), and we include as a baseline continued training at horizon 180s. Increasing horizon increases win rate over the trained agent at the point training was resumed, with diminishing returns at high horizons.

As the environments we attempt to solve grow in complexity, long-term planning and thinking will become more and more important for intelligent behavior.

# 5 Related Work

The OpenAI Five system builds upon several bodies of work combining deep reinforcement learning, large-scale optimization of deep learning models, and using self-play to explore environments and strategies.

Competitive games have long served as a testbed for learning. Early systems mastered Backgammon [1], Checkers [32], and Chess [2]. Self-play was shown to be a powerful algorithm for learning skills within high-dimensional continuous environments [33] and a method for automatically generating curricula [34]. Our use of self-play is similar in spirit to fictitious play [35], which has been successfully applied to poker [36]; in this work we learn a distribution over opponents and use the latest policy rather than an average policy. Using a combination of imitation learning from human games and self-play, Silver et al. demonstrated a master-level Go player [4]. Building upon this work, AlphaGoZero, AlphaZero, and ExIt discard imitation learning in favor of using Monte-Carlo Tree Search during training to obtain higher quality trajectories [12, 37, 38] and apply this to Go, Chess, Shogi, and Hex. Most recently, human-level play has been demonstrated in 3D first-person multi-player environments [30], professional-level play in the real-time strategy game StarCraft 2 using AlphaStar [7], and superhuman performance in Poker [39].

AlphaStar is particularly relevant to this paper. In that effort, which ran concurrently to our own, researchers trained agents to play StarCraft 2, another complex game with real-time performance requirements, imperfect information, and long time horizons. The model for AlphaStar used a similar hand-designed architecture to embed observations and an autoregressive action decoder, with an LSTM core to handle partial observability. Both systems used actor critic reinforcement learning methods as part of the overall objective. OpenAI Five has certain sub-systems hard-coded (such as item buying), whereas AlphaStar handled similar decisions (e.g. building order) by conditioning (during training) on statistics derived from human replays. OpenAI Five trained using self-play, while AlphaStar used a league consisting of multiple agents, where agents were trained to beat certain subsets of other agents. Finally, AlphaStar’s value network observed full information about the game state (including observations hidden from the policy); this method improved their training, and exploring its application to Dota 2 is a promising direction for future work.

Deep reinforcement learning has been successfully applied to learning control policies from high dimensional input. In 2013, Mnih et al. [3] showed that it is possible to combine a deep convolutional neural network with a Q-learning algorithm [40] and a novel experience replay approach to learn policies that can reach superhuman performance on the Atari ALE games.
Following this work, a variety of efforts have pushed performance on the remaining Atari games [16], reduced the sample complexity, and introduced new challenges by focusing on intrinsic rewards [41–43].

As more computational resources have become available, a body of work has developed addressing the use of distributed systems in training. Larger batch sizes were found to accelerate training of image models [44–46]. Proximal Policy Optimization [14] and A3C [47] improve the ability to asynchronously collect rollout data. Recent work has demonstrated the benefit of distributed learning on a wide array of problems including single-player video games [48] and robotics [5].

The motivation for our surgery method is similar to prior work on Net2Net style function preserving transformations [23], which attempt to add model capacity without compromising performance, whereas our surgery technique was used in cases where the inputs, outputs, and recurrent layer size changed. Past methods have grown neural networks by incrementally training and freezing parts of the network [49–51]. Li & Hoiem [52] and Rusu et al. [53] use similar methods to leverage a trained model to quickly learn novel tasks. Distillation [54] and imitation learning [55, 56] offer an alternate approach to surgery for making model changes in response to a shifting environment. In concurrent work, OpenAI et al. [24] has reported success using behavioral cloning for similar purposes.

# 6 Conclusion

When successfully scaled up, modern reinforcement learning techniques can achieve superhuman performance in competitive esports games. The key ingredients are to expand the scale of compute used, by increasing the batch size and total training time. In order to extend the training time of a single run to ten months, we developed surgery techniques for continuing training across changes to the model and environment. While we focused on Dota 2, we hypothesize these results will apply more generally, and these methods can solve any zero-sum two-team continuous environment which can be simulated in parallel across hundreds of thousands of instances. In the future, environments and tasks will continue to grow in complexity. Scaling will become even more important (for current methods) as the tasks become more challenging.

# Acknowledgements

Myriad individuals contributed to this work from within OpenAI, within the Dota 2 community, and elsewhere. We extend our utmost gratitude to everyone who helped us along the way!
We would like to especially recognize the following contributions: • Technical discussions with numerous people within OpenAI including Bowen Baker, Paul Christiano, Danny Hernandez, Sam McCandlish, Alec Radford • Review of early drafts by Bowen Baker, Danny Hernandez, Jacob Hilton, Quoc Le, Luke Metz, Matthias Plappert, Alec Radford, Oriol Vinyals • Event Support from Larissa Schiavo, Diane Yoon, Loren Kwan • Communication, writing, and outreach support from Ben Barry, Justin Wang, Shan Carter, Ashley Pilipiszyn, Jack Clark • OpenAI infrastructure support from Eric Sigler • Google Cloud Support (Solomon Boulos, JS Riehl, Florent de Goriainoff, Somnath Roy, Win- ston Lee, Andrew Sallaway, Danny Hammo, Jignesh Naik) • Microsoft Azure Support (Jack Kabat, Jason Vallery, Niel Mackenzie, David Kalmin, Dina Frandsen) • Dota 2 Support from Valve (special thanks to Chris Carollo) • Dota 2 guides and builds from Tortedelini (Michael Cohen) and buyback saving strategy from Adam Michalik • Dota 2 expertise and community advice from Blitz (William Lee) • Dota 2 Casters: Blitz (William Lee), Capitalist (Austin Walsh), Purge (Kevin Godec), ODPixel (Owen Davies), Sheever (Jorien van der Heijden), Kyle Freedman • Dota 2 World Champions (OG): ana (Anathan Pham), Topson (Topias Taavitsainen), Ceb (Sébastien Debs), JerAx (Jesse Vainikka), N0tail (Johan Sundstein) • Dota 2 Professional Teams: Team Secret, Team Lithium, Alliance, SG E-sports • Benchmark Players: Moonmeander (David Tan), Merlini (Ben Wu), Capitalist (Austin Walsh), Fogged (Ioannis Loucas), Blitz (William Lee) • Playtesting: Alec Radford, Bowen Baker, Alex Botev, Pedja Marinkovic, Devin McGarry, Ryan Perron, Garrett Fisher, Jordan Beeli, Aaron Wasnich, David Park, Connor Mason, James Timothy Herron, Austin Hamilton, Kieran Wasylyshyn, Jakob Roedel, William Rice, Joel Olazagasti, Samuel Anderson • We thank the entire Dota 2 community for their support and enthusiasm. We especially profusely thank all 39,356 Dota 2 players from 225 countries who participated in OpenAI Five Arena and all the players who played against the 1v1 agent during the LAN event at The International 2017! 17 # Author Contributions This manuscript is the result of the work of the entire OpenAI Dota team. For each major area, we list the primary contributors in alphabetical order. Scott Gray, Jakub Pachocki, Michael Petrov, Henrique Pondé de Oliveira Pinto, Jonathan Raiman, Szymon Sidor, Jie Tang, Filip Wolski, and Susan Zhang developed and trained OpenAI Five, including developing surgery, expanding tools for large-scale distributed RL, expanding the ca- pabilities to the 5v5 game, and running benchmarks against humans including the OG match and OpenAI Arena. • Christopher Berner, Greg Brockman, Vicki Cheung, Przemysław “Psyho" Dębiak, Quirin Fis- cher, Shariq Hashme, Chris Hesse, Rafal Józefowicz, Catherine Olsson, Jakub Pachocki, Tim Salimans, Jeremy Schlatter, Jonas Schneider, Szymon Sidor, Ilya Sutskever, and Jie Tang developed the 1v1 training system, including the Dota 2 gym interface, building the first Dota agent, and initial exploration of batch size scaling. • Brooke Chan, David Farhi, Michael Petrov, Henrique Pondé de Oliveira Pinto, Jonathan Raiman, Jie Tang, and Filip Wolski wrote this manuscript, including running Rerun and all of the ab- lation studies. • Jakub Pachocki and Szymon Sidor set research direction throughout the project, including developing the first version of Rapid to demonstrate initial benefits of large scale computing in RL. 
• Greg Brockman and Rafal Józefowicz kickstarted the team. 18 # References 1. Tesauro, G. TD-Gammon, a self-teaching backgammon program, achieves master-level play. Neural computation 6, 215–219 (1994). 2. Campbell, M., Hoane Jr., A. J. & Hsu, F.-h. Deep Blue. Artif. Intell. 134, 57–83. issn: 0004- 3702 (Jan. 2002). 3. Mnih, V., Kavukcuoglu, K., Silver, D., Graves, A., Antonoglou, I., Wierstra, D. & Riedmiller, M. Playing atari with deep reinforcement learning. arXiv preprint arXiv:1312.5602 (2013). 4. Silver, D., Huang, A., Maddison, C. J., Guez, A., Sifre, L., Van Den Driessche, G., Schrittwieser, J., Antonoglou, I., Panneershelvam, V., Lanctot, M., et al. Mastering the game of Go with deep neural networks and tree search. nature 529, 484 (2016). 5. OpenAI. Learning Dexterity https : / / openai . com / blog / learning - dexterity/. [Online; accessed 28-May-2019]. 2018. 6. Paulus, R., Xiong, C. & Socher, R. A Deep Reinforced Model for Abstractive Summarization 2017. arXiv: 1705.04304 [cs.CL]. 7. Vinyals, O., Babuschkin, I., Czarnecki, W. M., Mathieu, M., Dudzik, A., Chung, J., Choi, D. H., Powell, R., Ewalds, T., Georgiev, P., et al. Grandmaster level in StarCraft II using multi-agent reinforcement learning. Nature, 1–5 (2019). 8. Guss, W. H., Codel, C., Hofmann, K., Houghton, B., Kuno, N., Milani, S., Mohanty, S. P., Liebana, D. P., Salakhutdinov, R., Topin, N., Veloso, M. & Wang, P. The MineRL Competition on Sample Efficient Reinforcement Learning using Human Priors. CoRR abs/1904.10079. arXiv: 1904.10079. <http://arxiv.org/abs/1904.10079> (2019). 9. Wikipedia contributors. Dota 2 — Wikipedia, The Free Encyclopedia https://en.wikipedia. org/w/index.php?title=Dota_2&oldid=913733447. [Online; accessed 9-September- 2019]. 2019. 10. Wikipedia contributors. The International 2018 — Wikipedia, The Free Encyclopedia https: //en.wikipedia.org/w/index.php?title=The_International_2018&oldid= 912865272. [Online; accessed 9-September-2019]. 2019. 11. Allis, L. V. Searching for solutions in games and artificial intelligence in (1994). 12. Silver, D., Hubert, T., Schrittwieser, J., Antonoglou, I., Lai, M., Guez, A., Lanctot, M., Sifre, L., Kumaran, D., Graepel, T., et al. A general reinforcement learning algorithm that masters chess, shogi, and Go through self-play. Science 362, 1140–1144 (2018). 13. Gers, F. A., Schmidhuber, J. & Cummins, F. Learning to forget: Continual prediction with LSTM (1999). 14. Schulman, J., Wolski, F., Dhariwal, P., Radford, A. & Klimov, O. Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347 (2017). 15. Konda, V. R. & Tsitsiklis, J. N. Actor-critic algorithms in Advances in neural information processing systems (2000), 1008–1014. 16. Mnih, V., Badia, A. P., Mirza, M., Graves, A., Lillicrap, T., Harley, T., Silver, D. & Kavukcuoglu, K. Asynchronous methods for deep reinforcement learning in International conference on ma- chine learning (2016), 1928–1937. 19 17. Schulman, J., Moritz, P., Levine, S., Jordan, M. I. & Abbeel, P. High-Dimensional Continuous Control Using Generalized Advantage Estimation. CoRR abs/1506.02438 (2016). 18. Horgan, D., Quan, J., Budden, D., Barth-Maron, G., Hessel, M., van Hasselt, H. & Silver, D. Distributed Prioritized Experience Replay. CoRR abs/1803.00933. arXiv: 1803.00933. <http://arxiv.org/abs/1803.00933> (2018). 19. NVIDIA. NVIDIA Collective Communications Library (NCCL) https : / / developer . nvidia.com/nccl. [Online; accessed 9-September-2019]. 2019. 20. Kingma, D. P. & Ba, J. 
Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014). 21. Williams, R. J. & Peng, J. An Efficient Gradient-Based Algorithm for On-Line Training of Recurrent Network Trajectories. Neural Computation 2, 490–501 (1990). 22. Gray, S., Radford, A. & Kingma, D. P. GPU Kernels for Block-Sparse Weights 2017. 23. Chen, T., Goodfellow, I. J. & Shlens, J. Net2Net: Accelerating Learning via Knowledge Transfer in 4th International Conference on Learning Representations, ICLR 2016, San Juan, Puerto Rico, May 2-4, 2016, Conference Track Proceedings (2016). <http://arxiv.org/abs/ 1511.05641>. 24. OpenAI, Akkaya, I., Andrychowicz, M., Chociej, M., Litwin, M., McGrew, B., Petron, A., Paino, A., Plappert, M., Powell, G., Ribas, R., Schneider, J., Tezak, N., Tworek, J., Welinder, P., Weng, L., Yuan, Q., Zaremba, W. & Zhang, L. Solving Rubik’s Cube with a Robot Hand 2019. arXiv: 1910.07113 [cs.LG]. 25. Dalvi, N., Domingos, P., Mausam, Sanghai, S. & Verma, D. Adversarial Classification in Pro- ceedings of the Tenth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (ACM, Seattle, WA, USA, 2004), 99–108. isbn: 1-58113-888-1. doi:10.1145/ 1014052.1014066. <http://doi.acm.org/10.1145/1014052.1014066>. 26. Jain, A., Bansal, R., Kumar, A. & Singh, K. A comparative study of visual and auditory reaction times on the basis of gender and physical activity levels of medical first year students. International journal of applied and basic medical research 5, 125–127 (2015). 27. Herbrich, R., Minka, T. & Graepel, T. TrueSkill: a Bayesian skill rating system in Advances in neural information processing systems (2007), 569–576. 28. McCandlish, S., Kaplan, J., Amodei, D. & Team, O. D. An empirical model of large-batch training. arXiv preprint arXiv:1812.06162 (2018). 29. Cobbe, K., Klimov, O., Hesse, C., Kim, T. & Schulman, J. Quantifying Generalization in Reinforcement Learning. CoRR abs/1812.02341. arXiv: 1812.02341. <http://arxiv. org/abs/1812.02341> (2018). 30. Jaderberg, M., Czarnecki, W. M., Dunning, I., Marris, L., Lever, G., Castaneda, A. G., Beattie, C., Rabinowitz, N. C., Morcos, A. S., Ruderman, A., et al. Human-level performance in first- person multiplayer games with population-based deep reinforcement learning. arXiv preprint arXiv:1807.01281 (2018). 31. Moravčík, M., Schmid, M., Burch, N., Lisý, V., Morrill, D., Bard, N., Davis, T., Waugh, K., Johanson, M. & Bowling, M. Deepstack: Expert-level artificial intelligence in heads-up no-limit poker. Science 356, 508–513 (2017). 20 32. Schaeffer, J., Culberson, J., Treloar, N., Knight, B., Lu, P. & Szafron, D. A world championship caliber checkers program. Artificial Intelligence 53, 273–289. issn: 0004-3702 (1992). 33. Bansal, T., Pachocki, J., Sidor, S., Sutskever, I. & Mordatch, I. Emergent complexity via multi-agent competition. arXiv preprint arXiv:1710.03748 (2017). 34. Sukhbaatar, S., Lin, Z., Kostrikov, I., Synnaeve, G., Szlam, A. & Fergus, R. Intrinsic motivation and automatic curricula via asymmetric self-play. arXiv preprint arXiv:1703.05407 (2017). 35. Brown, G. W. in Activity Analysis of Production and Allocation (ed Koopmans, T. C.) (Wiley, New York, 1951). 36. Heinrich, J. & Silver, D. Deep Reinforcement Learning from Self-Play in Imperfect-Information Games. CoRR abs/1603.01121. arXiv: 1603.01121 (2016). 37. Silver, D., Schrittwieser, J., Simonyan, K., Antonoglou, I., Huang, A., Guez, A., Hubert, T., Baker, L., Lai, M., Bolton, A., et al. Mastering the game of go without human knowledge. 
Nature 550, 354 (2017). 38. Anthony, T., Tian, Z. & Barber, D. Thinking fast and slow with deep learning and tree search in Advances in Neural Information Processing Systems (2017), 5360–5370. 39. Brown, N. & Sandholm, T. Superhuman AI for multiplayer poker. Science, eaay2400 (2019). 40. Watkins, C. J. & Dayan, P. Q-learning. Machine learning 8, 279–292 (1992). 41. Kulkarni, T. D., Narasimhan, K., Saeedi, A. & Tenenbaum, J. Hierarchical deep reinforce- ment learning: Integrating temporal abstraction and intrinsic motivation in Advances in neural information processing systems (2016), 3675–3683. 42. Burda, Y., Edwards, H., Storkey, A. & Klimov, O. Exploration by random network distillation. arXiv preprint arXiv:1810.12894 (2018). 43. Ecoffet, A., Huizinga, J., Lehman, J., Stanley, K. O. & Clune, J. Montezuma’s revenge solved by go-explore, a new algorithm for hard-exploration problems (sets records on pitfall too). Uber Engineering Blog, Nov (2018). 44. Goyal, P., Dollár, P., Girshick, R., Noordhuis, P., Wesolowski, L., Kyrola, A., Tulloch, A., Jia, Y. & He, K. Accurate, Large Minibatch SGD: Training ImageNet in 1 Hour (June 2017). 45. You, Y., Gitman, I. & Ginsburg, B. Scaling SGD Batch Size to 32K for ImageNet Training (Aug. 2017). 46. You, Y., Zhang, Z., Hsieh, C.-J., Demmel, J. & Keutzer, K. ImageNet Training in Minutes in Proceedings of the 47th International Conference on Parallel Processing (ACM, Eugene, OR, USA, 2018), 1:1–1:10. isbn: 978-1-4503-6510-9. doi:10.1145/3225058.3225069. <http: //doi.acm.org/10.1145/3225058.3225069>. 47. Mnih, V., Badia, A. P., Mirza, M., Graves, A., Lillicrap, T., Harley, T., Silver, D. & Kavukcuoglu, K. Asynchronous Methods for Deep Reinforcement Learning in Proceedings of The 33rd Inter- national Conference on Machine Learning (eds Balcan, M. F. & Weinberger, K. Q.) 48 (PMLR, New York, New York, USA, June 2016), 1928–1937. <http://proceedings.mlr.press/ v48/mniha16.html>. 48. Espeholt, L., Soyer, H., Munos, R., Simonyan, K., Mnih, V., Ward, T., Doron, Y., Firoiu, V., Harley, T., Dunning, I., et al. Impala: Scalable distributed deep-rl with importance weighted actor-learner architectures. arXiv preprint arXiv:1802.01561 (2018). 21 49. Fahlman, S. E. & Lebiere, C. in (ed Touretzky, D. S.) 524–532 (Morgan Kaufmann Publish- ers Inc., San Francisco, CA, USA, 1990). isbn: 1-55860-100-7. <http : // dl . acm. org / citation.cfm?id=109230.107380>. 50. Wang, Y., Ramanan, D. & Hebert, M. Growing a Brain: Fine-Tuning by Increasing Model Capacity in 2017 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017, Honolulu, HI, USA, July 21-26, 2017 (2017), 3029–3038. doi:10.1109/CVPR.2017.323. <https://doi.org/10.1109/CVPR.2017.323>. 51. Czarnecki, W. M., Jayakumar, S. M., Jaderberg, M., Hasenclever, L., Teh, Y. W., Osindero, S., Heess, N. & Pascanu, R. Mix&Match - Agent Curricula for Reinforcement Learning. CoRR abs/1806.01780. arXiv: 1806 . 01780. <http : / / arxiv . org / abs / 1806 . 01780> (2018). 52. Li, Z. & Hoiem, D. Learning without Forgetting. IEEE Transactions on Pattern Analysis and Machine Intelligence 40, 2935–2947. issn: 0162-8828 (Dec. 2018). 53. Rusu, A. A., Rabinowitz, N. C., Desjardins, G., Soyer, H., Kirkpatrick, J., Kavukcuoglu, K., Pascanu, R. & Hadsell, R. Progressive Neural Networks. arXiv preprint arXiv:1606.04671 (2016). 54. Hinton, G., Vinyals, O. & Dean, J. Distilling the Knowledge in a Neural Network in NIPS Deep Learning and Representation Learning Workshop (2015). <http://arxiv.org/abs/ 1503.02531>. 55. Ross, S., Gordon, G. 
& Bagnell, D. A Reduction of Imitation Learning and Structured Prediction to No-Regret Online Learning in Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics (eds Gordon, G., Dunson, D. & Dudík, M.) 15 (PMLR, Fort Lauderdale, FL, USA, Apr. 2011), 627–635. <http://proceedings.mlr.press/v15/ross11a.html>.

56. Levine, S. & Koltun, V. Guided Policy Search in Proceedings of the 30th International Conference on Machine Learning - Volume 28 (JMLR.org, Atlanta, GA, USA, 2013), III-1–III-9. <http://dl.acm.org/citation.cfm?id=3042817.3042937>.

57. OpenAI. AI and Compute https://openai.com/blog/ai-and-compute/. [Online; accessed 9-Sept-2019]. 2018.

58. Ng, A. Y., Harada, D. & Russell, S. Policy invariance under reward transformations: Theory and application to reward shaping in Proceedings of the Sixteenth International Conference on Machine Learning (Morgan Kaufmann, 1999), 278–287.

59. Brockman, G., Cheung, V., Pettersson, L., Schneider, J., Schulman, J., Tang, J. & Zaremba, W. OpenAI Gym. CoRR abs/1606.01540. arXiv: 1606.01540. <http://arxiv.org/abs/1606.01540> (2016).

60. Balduzzi, D., Garnelo, M., Bachrach, Y., Czarnecki, W. M., Pérolat, J., Jaderberg, M. & Graepel, T. Open-ended Learning in Symmetric Zero-sum Games. CoRR abs/1901.08106. arXiv: 1901.08106. <http://arxiv.org/abs/1901.08106> (2019).

61. Williams, R. J. & Peng, J. Function optimization using connectionist reinforcement learning algorithms. Connection Science 3, 241–268 (1991).

# Appendix

# Table of Contents

A Compute Usage
B Surgery
C Hyperparameters
D Evaluating agents’ understanding
    D.1 Understanding OpenAI Five Finals
    D.2 Hero selection
E Observation Space
F Action Space
    F.1 Scripted Actions
G Reward Weights
H Neural Network Architecture
I Human Games
J TrueSkill: Evaluating a Dota 2 Agent Automatically
K Dota 2 Gym Environment
    K.1 Data flow between the training environment and Dota 2
L Reaction time
M Scale and Data Quality Ablation Details
    M.1 Batch Size
    M.2 Sample Quality — Staleness
    M.3 Sample Quality — Sampling and Sample Reuse
N Self-play
O Exploration
    O.1 Loss function
    O.2 Environment Randomization
P Hero Pool Size
Q Bloopers
    Q.1 Manually Tuned Hyperparameters
    Q.2 Zero Team Spirit Embedding
    Q.3 Learning path dependency

# A Compute Usage

We estimate the optimization compute usage as follows: We break the experiment in segments between each major surgery or batch size change. For each of those, we calculate the number of gradient descent steps taken (number of iterations × 32).
We estimate the compute per step per GPU using TensorFlow’s tf.profiler.total_float_ops, then multiply together:

total compute = Σ_segments 32 × (iteration_end − iteration_start)        (4)
                           × (# GPUs) × (compute per step per GPU)        (5)

Our uncertainty on this estimate comes primarily from ambiguities about what computation “counts.” For example the TensorFlow metrics include all ops in the graph including metric logging, nan-checking, etc. It also includes the prediction of auxiliary heads such as win probability, which are not necessary for gameplay or training. It does not count non-GPU compute on the optimizer machines such as exporting parameter versions to the rollouts. We estimate these and other ambiguities to be around 5%. In addition, for OpenAI Five (although not for Rerun) we use a simplified history of the experiment, rather than keeping track of every change and every time something crashed and needed to be restarted; we estimate this does not add more than 5% error. We combine these rough error estimates into a (very crude) net ambiguity estimate of 5-10%.

This computation concludes that OpenAI Five used 770±50 PFlops/s·days of total optimization compute on GPUs at the time of playing the world champions (April 13, 2019), and 820±50 PFlops/s·days of total optimization compute when it was finally turned off on April 22nd, 2019. Rerun, on the other hand, used 150 ± 5 PFlops/s·days between May 18th and July 12th, 2019.

We adopted the methodology from [57] to facilitate comparisons. This has several important caveats. First, the above computation only considers compute used for optimization. In fact this is a relatively small portion of the total compute budget for the training run. In addition to the GPU machines doing optimization (roughly 30% of the cost by dollars spent) there are approximately the same number of GPUs running forward passes for the rollout workers (30%), as well as the actual rollout CPUs running the self-play games (30%) and the overhead of controllers, TrueSkill evaluators, CPUs on the GPU machines, etc (10%).

Second, with any research project one needs to run many small studies, ablations, false starts, etc. One also inevitably wastes some computing resources due to imperfect utilization. Traditionally the AI community has not counted these towards the compute used by the project, as it is much easier to count only the resources used by the final training run. However, with the advent of surgery, the line becomes much fuzzier. After 5 months of training on an older environment, we could have chosen to start from scratch in the new environment, or performed surgery to keep the old model. Either way, the same total amount of compute gets used; but the above calculation ignores all the compute used up until the last time we chose to restart. For these reasons the compute number for OpenAI Five should be taken with a large grain of salt, but this caveat does not apply to Rerun, which was trained without surgery.
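The bookkeeping of Equations 4-5 can be sketched as follows. The segment boundaries, GPU counts, and per-step FLOP figure below are hypothetical examples (in practice the per-step figure comes from a profiler such as tf.profiler), not the real experiment history.

```python
# Illustrative sketch of Equations 4-5: total optimization compute summed over
# experiment segments (hypothetical numbers, not the real experiment history).
GRADIENT_STEPS_PER_ITERATION = 32
SECONDS_PER_DAY = 86_400


def total_compute_pflops_days(segments, flops_per_step_per_gpu):
    """Each segment is (iteration_start, iteration_end, num_gpus)."""
    total_flops = 0.0
    for iteration_start, iteration_end, num_gpus in segments:
        steps = GRADIENT_STEPS_PER_ITERATION * (iteration_end - iteration_start)
        total_flops += steps * num_gpus * flops_per_step_per_gpu
    # Convert raw FLOPs to PFLOPs/s-days: 1 PFLOP/s-day = 1e15 * 86400 FLOPs.
    return total_flops / (1e15 * SECONDS_PER_DAY)


# Hypothetical history: two segments run at different optimizer pool sizes.
segments = [(0, 60_000, 512), (60_000, 75_000, 1_024)]
flops_per_step_per_gpu = 4e12  # assumed profiler output, FLOPs per gradient step
print(f"{total_compute_pflops_days(segments, flops_per_step_per_gpu):.0f} PFLOPs/s-days")
```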
# B Surgery

As discussed in subsection 3.3, we designed “surgery” tools for continuing to train a single set of parameters across changes to the environment, model architecture, observation space, and action space. The goal in each case is to resume training after the change without the agent losing any skill from the change. Table 1 lists the major surgeries we performed in the lifetime of the OpenAI Five experiment.

Date       | Iterations      | Parameters  | Major change
...        | 81,821 / 84,432 | ...         | Remove “cheating” observations
8/26/2018  | 91,471          | 156,737,674 | Double LSTM size
9/27/2018  | 123,821         | 156,809,485 | Support for more heroes
10/3/2018  | 130,921         | 156,809,501 | Obs: Roshan spawn timing
10/12/2018 | 140,402         | 156,811,805 | Item: Bottle
10/19/2018 | 144,121         | 156,286,925 | Obs: Stock counts; Obs: Remove some obsolete obs
10/24/2018 | 150,111         | 156,286,867 | Obs: Neutral creep & rune spawn timers
11/7/2018  | 161,482         | 156,221,309 | Obs: Item swap cooldown; Obs: Remove some obsolete obs
11/28/2018 | 185,749         | 156,221,669 | Item: Divine rapier; Obs: Improve observation of stale enemy heroes
12/10/2018 | 193,701         | 157,378,165 | Obs: Modifiers on nonhero units
12/14/2018 | 196,800         | 157,650,795 | Action: Consumables on allies; Obs: Line of sight information; Obs: next item this hero will purchase; Action: buyback
12/20/2018 | 203,241         | 157,679,655 | Dota 2 version 7.20 adds new items, new item slot, changes map, etc; Obs: number of empty inventory slots
1/23/2019  | 211,191         | 158,495,991 | Obs: Improve observations of area of effects; Obs: improve observation of modifiers’ duration; Obs: Improve observations about item Power Treads
4/5/2019   | 220,076         | 158,502,815 | Dota 2 version 7.21 adds new items, abilities, etc.

Table 1: All successful surgeries and major environment changes performed during the training of OpenAI Five. This table does not include surgeries which were ultimately reverted due to training failures, nor minor environment changes (such as improvements to partial reward weights or scripted logic). “Obs” indicates that a new observation was added as an input to the model or an existing one was changed. “Action” indicates that a new game action was made available, along with appropriate observations about the state of that action. “Item” indicates that a new item was introduced, including observation of the item and the action to use the item. The Dota 2 version updates (7.19, 7.20 and 7.21) include many new items, actions, and observations.

For changes which add parameters, one of the key questions to ask is how to initialize the new parameters. If we initialize the parameters randomly and continue optimization, then noise will flow into other parts of the model, causing the model to play badly and causing large gradients which destroy the learned behaviors.

In the rest of this appendix we provide details of the tools we used to continue training across each type of change. In general we had a high-skill model πθ trained to act in one environment, and due to a change to the problem design we need to begin training a newly-shaped model ˆπˆθ in a new environment. Ultimately the goal is for the TrueSkill of agent ˆπˆθ to match that of πθ.

Changing the architecture. In the most straightforward situation, the observation space, action space, and environment do not change. In this case, per Equation 1, we can insist that the new policy ˆπˆθ implement exactly the same mathematical function from observations to actions as the old policy. A simple example here would be adding more units to an internal fully-connected layer of the model. Suppose that before the change, some part of the interior of the model contained an input vector x (dimension dx), which is transformed to an activation vector y = W1x + B1 (dimension dy), which is then consumed by another fully-connected layer z = W2y + B2 (dimension dz).
We desire to increase the dimension of y from dy to ˆdy. This causes the shapes of three parameter arrays to change: W1 (from [dx, dy] to [dx, ˆdy]), B1 (from [dy] to [ˆdy]), and W2 (from [dy, dz] to [ˆdy, dz]). We initialize the new arrays as:

ˆW1 = [ W1  ]        ˆB1 = [ B1  ]        ˆW2 = [ W2   0 ]        (6)
      [ R() ]              [ R() ]

Where R() indicates a random initialization. The initializations of ˆW1 and ˆB1 ensure that the first dy dimensions of the activations ˆy will be the same data as the old activations y, and the remainder will be randomized. The randomization ensures that symmetry is broken among the new dimensions. The initialization of ˆW2, on the other hand, ensures that the next layer will ignore the new random activations, and the next layer’s activations will be the same as in the old model; ˆz = z. The weights which are initialized to zero will move away from zero due to the gradients, if the corresponding new dimensions in y are useful to the downstream function.

Initializing neural network weights to zero is a dangerous business, because it can introduce undesired symmetries between the indices of the output vector. However we found that in most cases of interest, this was easy to avoid by only zero-ing the minimal set of weights. In the example above, the symmetry is broken by the randomization of ˆW1 and ˆB1.

A more advanced version of this surgery was required when we wanted to increase the model capacity dramatically, by increasing the hidden dimension of our LSTM from 2048 units to 4096 units. Because the LSTM state is recurrent, there was no way to achieve the separation present in Equation 6; if we randomize the new weights they will impact performance, but if we set them to zero then the new hidden dimensions will be symmetric and gradient updates will never differentiate them. In practice we set the new weights to random small values — rather than randomize new weight values on the same order of magnitude as the existing weights, we randomized new weights significantly smaller. The scale of randomization was set empirically by choosing the highest scale which did not noticeably decrease the agent’s TrueSkill.
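A toy numpy sketch of the Equation 6 surgery is shown below. It follows the [input, output] array-shape convention used for W1, B1, and W2 above, so the new hidden units appear as extra columns of W1 and extra zero rows of W2; it is an illustration of the idea, not the surgery tooling used for OpenAI Five.

```python
# Toy sketch of the Equation 6 surgery: widen a hidden layer from d_y to d_y_new
# while keeping the network's input-output function exactly unchanged.
import numpy as np

rng = np.random.default_rng(0)
d_x, d_y, d_z, d_y_new = 8, 16, 4, 24

# Old parameters (array shapes use the [input, output] convention: y = x @ W1 + B1).
W1, B1 = rng.normal(size=(d_x, d_y)), rng.normal(size=d_y)
W2, B2 = rng.normal(size=(d_y, d_z)), rng.normal(size=d_z)

# New parameters: small random init for the new hidden units (breaks symmetry),
# zeros in W2 so the next layer ignores them until gradients say otherwise.
scale = 0.01  # small scale, set empirically as described in the text
W1_new = np.concatenate([W1, scale * rng.normal(size=(d_x, d_y_new - d_y))], axis=1)
B1_new = np.concatenate([B1, scale * rng.normal(size=d_y_new - d_y)])
W2_new = np.concatenate([W2, np.zeros((d_y_new - d_y, d_z))], axis=0)

# Check that the surgery preserved the function z(x).
x = rng.normal(size=(5, d_x))
z_old = np.maximum(x @ W1 + B1, 0) @ W2 + B2
z_new = np.maximum(x @ W1_new + B1_new, 0) @ W2_new + B2
assert np.allclose(z_old, z_new)
print("policy function preserved after widening")
```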
Changing the Observation Space. Most of our surgeries changed the observation space, for example when we added 3 new float observations encoding the time until neutral creeps, bounties, and runes would spawn. In these cases it is impossible to insist that the new policy implement the same function from observation space to action space, as the input domain has changed. However, in some sense the input domain has not changed; the game state is still the same. In reality our system is not only a function π : o → a; before the policy sees the observation arrays, an “encoder” function E has turned a game state s into an input array o:

(Game State Protobuf s)  --E-->  (Observation Arrays o)  --π-->  (Action a)        (7)

By adding new observations we are enhancing the encoder function E, making it take the same game state and simply output richer arrays for the model to consume. Thus in this case while we cannot ensure that ˆπˆθ = πθ, we can ensure the functions are identical if we go one step back:

∀s:  ˆπˆθ(ˆE(s)) = πθ(E(s))        (8)

When the change is simply additive, this can then be enforced as in the previous section. Suppose the new observations extend a vector x from dimension dx to dimension ˆdx, and the input vector x is consumed by a weight matrix W via y = Wx (and y is then processed by the rest of the model downstream). Then we initialize the new weights ˆW as:

ˆW = [ W   0 ]        (9)

As before, this ensures that the rest of the model is unchanged, as the output is unchanged (ˆy = y). The weights which are initialized to zero will move away from zero due to the gradients, if the corresponding observations are found to be useful.

Changing the Environment or Action Space. The second broad class of changes are those which change the environment itself, either by making new actions available to the policy (e.g. when we replaced scripted logic for the Buyback action with model-controlled logic) or by simply changing the Dota 2 rules (for example when we moved to Dota 2 version 7.21, or when we added new items). For some of these changes, such as upgrading the Dota 2 version, we found simply making the change on the rollout workers to be relatively stable; the old policy played well enough in the new environment that it was able to smoothly adapt. Even so, whenever possible, we attempted to “anneal” in these new features, starting with 0% of rollout games played with the new environment or actions, and slowly ramping up to 100%.

This prevents a common problem where a change in one part of the agent’s behavior could force unnecessary relearning of large portions of the strategy. For example, when we attempted to give the model control of the Buyback action without annealing, the model-based control of the action was (at first) worse than the scripted version had been, causing the agent to adapt its overall strategies to games where allies and enemies alike often misuse this action. This would cause the agent to significantly drop in overall skill; while it would likely eventually recover, it may require “repeating” the investment of a large amount of compute. By annealing the new action in gradually, we ensure that the model never loses overall skill due to a sudden change of one part of the environment; when we observe the model losing TrueSkill during the annealing process, we revert and attempt the anneal at a slower rate. This annealing process makes sense even if the environment is becoming fundamentally “harder” because our agent’s skill is measured through winrates against other models; the opponent also has to play in the new environment.

Removing Model Parts. Requiring exact policy equivalence after the surgery outlaws many types of surgery. For example, most surgeries which remove parameters are not possible in this framework. For this reason our model continued to observe some “deprecated” observations, which were simply always set to constants. Further work such as [24] has already begun to explore alternate methods of surgery which avoid this constraint.

Smooth Training Restart. The gradient moments stored by the Adam optimizer present a nuisance when restarting training with a new parameter shape. To ensure that the moments have enough time to properly adjust, we use a learning rate of 0 for the first several hours of training after surgery. This also ensures that the distribution of rollout games has entered steady state by the time we begin training in earnest.

One additional nuisance when changing the shape of the model is the entire history of parameters which are stored (in the past opponent manager, see Appendix N) and used as opponents in rollouts. Because the rollout GPUs will be running the newest code, all of these past versions must be updated in the same way as the current version to ensure compatibility.
If the surgery operation fails to exactly preserve the policy function, these frozen past agents will forever play worse, reducing the quality of the opponent pool. Therefore it is crucial to ensure agent behavior is unchanged after surgery.

Benefits of Surgery. These surgeries primarily permitted us to have a tighter iteration loop for these features. When we added a new game feature which we expect to only matter at high skill, it would simply be impossible to test and iterate on it by training from scratch. Using surgery from the current OpenAI Five, we could have a more feasible process, which allowed us to safely include many minor features and improvements that otherwise would have been impossible to verify, such as adding long-tail items (Bottle, Rapier), minor improvements to the observation space (stock counts, modifiers on nonheroes), and others.

# C Hyperparameters

The optimization algorithm has several important hyperparameters that have different settings throughout the training process. Over the course of training of OpenAI Five, these hyperparameters were modified by looking for improvement plateaus. Because of compute limitations which prevented us from testing hyperparameter changes in separate experiments, OpenAI Five’s long-running training process included numerous experimental hyperparameter changes. Some of these worked well and were kept, others were reverted as our understanding developed over the course of the 10-month period. As it is impossible for us to scan over any of these hyperparameters when our experiment is so large, we make no claim that the hyperparameters used are optimal.

When we ran Rerun we simplified the hyperparameter schedule based on the lessons we had learned. In the end we made changes to only four key hyperparameters:

• Learning Rate
• Entropy penalty coefficient (see Appendix O)
• Team Spirit (see Appendix G)
• GAE time horizon (see Equation 3)

(Figure 7’s table lists, for Rerun, iterations 0, 15k, 23k, 43k, and 54k, reached at days 0, 13, 20, 33, and 42 with TrueSkill 0, 210, 232, 245, and 258, alongside the hyperparameter changes: team spirit 0.3 to 0.8, GAE horizon 180 secs to 360 secs, entropy coefficient 0.01 to 0.001, and learning rate 5e-5 to 5e-6.)

Figure 7: Hyperparameter changes during Rerun. Changes are displayed in table-form on the left, and called out in the TrueSkill vs iterations graph of the training run on the right. Each hyperparameter change was applied gradually over the course of 1-2 days, corresponding to several thousand iterations (the reported time in the table is the start of the change). Our pre-planned schedule included further changes to bring the experiment into line with OpenAI Five’s final hyperparameters (Horizon to 840 sec, team spirit to 1.0, and learning rate to 1e-6), but Rerun reached OpenAI Five’s skill level before we reached those hyperparameters.

This schedule is far from optimized as it was used in only our second iteration of this large experiment. In future work it could likely be significantly improved. There are many other hyperparameters that were not changed during the final Rerun experiment. Their values are listed in Table 2. Some of these were changed in the original OpenAI Five out of necessity (e.g. batch size changed many times as more or less compute resources became available, or sample reuse changed as the relative speeds of different machine pools fluctuated), and others were changed experimentally in the original OpenAI Five run but were ultimately not important as evidenced by Rerun working without those changes (e.g. increasing the time horizon from 360 seconds to 840 seconds).
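A gradual hyperparameter change of this kind can be sketched as a simple linear ramp over iterations. The schedule below is purely hypothetical; in particular, the pairing of each change to a starting iteration is illustrative, not the actual Figure 7 schedule.

```python
# Hypothetical sketch of a gradually-applied hyperparameter schedule; the
# iteration at which each change starts is illustrative only.
SCHEDULE = {
    # name: (start_iteration, ramp_iterations, old_value, new_value)
    "team_spirit":   (15_000, 3_000, 0.3, 0.8),
    "gae_horizon_s": (23_000, 3_000, 180.0, 360.0),
    "entropy_coef":  (43_000, 3_000, 0.01, 0.001),
    "learning_rate": (54_000, 3_000, 5e-5, 5e-6),
}


def hyperparameter(name: str, iteration: int) -> float:
    start, ramp, old, new = SCHEDULE[name]
    if iteration <= start:
        return old
    if iteration >= start + ramp:
        return new
    frac = (iteration - start) / ramp  # linear interpolation during the ramp
    return old + frac * (new - old)


print(hyperparameter("team_spirit", 16_500))  # halfway through the ramp -> 0.55
```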
# D Evaluating agents’ understanding

It is often difficult to infer the intentions of an RL agent. Some actions are obviously useful — hitting an enemy that is low on health, or freezing them as they’re trying to escape — but many other decisions can be less obvious. This is tightly coupled with questions of intentionality: does our agent plan on attacking the tower, or does it opportunistically deal the most damage possible in the next few seconds? To assess this, we attempt to predict the future state of various features of the game from the agent’s LSTM state:

• Win probability: Binary label of either 0 or 1 at the end of the game.

• Net worth rank: Which rank among the team (1-5) in terms of total resources collected will this hero be at the end of the game? This prediction is used by scripted item-buying logic to decide which agents buy items shared by the team such as wards. In human play (which the scripted logic is based on) this task is traditionally performed by heroes who will have the lowest net worth at the end of the game.

• Team objectives / enemy buildings: whether this hero will help the team destroy a given enemy building in the near future.

Param                              | Rerun               | OpenAI Five            | Baseline
Frameskip^e                        | 4                   | 4                      | 4
LSTM Unroll length^e               | 16                  | 16                     | 16
Samples Per Segment^e              | 16                  | 16                     | 16
Number of optimizer GPUs           | 512                 | 480 ↔ 1,536            | 64
Batch Size/optimizer GPU (samples) | 120                 | 120 ↔ 128              | 120
Total Batch Size (samples)^a       | 61,440              | 61,440 ↔ 196,608       | 7,680
Total Batch Size (timesteps)^a     | 983,040             | 983,040 ↔ 3,145,728    | 122,880
Number of rollout GPUs             | 512                 | 500 ↔ 1,440            | 64
Number of rollout CPUs             | 51,200              | 80,000 ↔ 172,800       | 6,400
Steps per Iteration                | 32                  | 32                     | 32
LSTM Size                          | 4096                | 2048 → 4096            | 4096
Sample Reuse                       | 1.0 ↔ 1.1           | 0.8 ↔ 2.7              | 1.0 ↔ 1.1
Team Spirit                        | 0.3 → 0.8           | 0.3 → 1.0              | 0.3
GAE Horizon                        | 180 secs → 360 secs | 60 secs → 840 secs     | 180 secs
GAE λ                              | 0.95                | 0.95                   | 0.95
PPO clipping                       | 0.2                 | 0.2                    | 0.2
Value loss weight^c                | 1.0                 | 0.25 ↔ 1.0             | 1.0
Entropy coefficient                | 0.01 → 0.001        | 0.01 → 0.001           | 0.01
Learning rate                      | 5e-5 → 5e-6         | 5e-5 ↔ 1e-6            | 5e-5
Adam β1                            | 0.9                 | 0.9                    | 0.9
Adam β2                            | 0.999               | 0.999                  | 0.999
Past opponents^b                   | 20%                 | 20%                    | 20%
Past Opponents Learning Rate^d     | 0.01                | 0.01                   | 0.01

a Batch size can be measured in samples (each an unrolled LSTM of 16 frames) or in individual timesteps.
b Fraction of games played against past opponents (as opposed to self-play).
c We normalize rewards using a running estimate of the standard deviation, and the value loss weight is applied post-normalization.
d See Appendix N.
e See Figure 8 for definitions of the various timescale subdivisions of a rollout episode.

Table 2: Hyperparameters: The OpenAI Five and Rerun columns indicate what was done for those individual experiments. For those which were modified during training, x → y indicates a smooth monotonic transition (usually a linear change over one to three days), and x ↔ y indicates a less controlled variation due to either ongoing experimentation or distributed systems fluctuations. The “Baseline” column indicates the default values for all the experiments in Appendix M (each individual experiment used these hyperparameters other than any it explicitly studied; for example in subsection M.1 the batch size was changed from the baseline in each training run, but all the other hyperparameters were from this table).
(Figure 8 diagram: the Dota game engine runs at 30 frames per second; 4 game engine frames (0.133 seconds) make one policy time step; 16 time steps (2.1 seconds) make one sample (the LSTM unroll length); 16 samples (34 seconds) make one segment, labelled together (GAE) and sent to the optimizer; one episode (30+ minutes) is broken down into many segments.)

Figure 8: Timescales and Staleness: The breakdown of a rollout game. Rather than collect an entire game before sending it to the optimizers, rollout machines send data in shorter segments. The segment is further subdivided into samples of 16 policy actions which are optimized together using truncated BPTT. Each policy action bundles together four game engine frames.

We added small networks of fully-connected layers that transform LSTM output into predictions of these values. For historical reasons, win probability passes gradients to the main LSTM and rest of the agent with a very small weight; the other auxiliary predictions use TensorFlow’s stop_gradient method to train on their own.

One difficulty in training these predictors is that we train our agent on 30-second segments of the game (see Figure 8), and any given 30-second snippet may not contain the ground truth (e.g. for win probability and net worth position, we only have ground truth on the very last segment of the game). We address this by training these heads in a similar fashion to how we train value functions. If a segment contains the ground truth label, we use the ground truth label for all time steps in that segment; if not, we use the model’s prediction at the end of the segment as the label. For win probability, for example, more precisely the label y for a segment from time t1 to t2 is given by:

y = 1          if it is the last segment of the game and we win
y = 0          if it is the last segment of the game and we lose
y = ˆy(t2)     otherwise        (10)

Where ˆy(t2) is the model’s predicted win probability at the end of the segment. Although this requires information to travel backward through the game, we find it trains these heads to a degree of calibration and accuracy.

For the team objectives, we are additionally interested in whether the event will happen soon. For these we apply an additional discount factor with a horizon of 2 minutes. This means that the enemy building predictions are not calibrated probabilities, but rather probabilities discounted by the expected time to the event.
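A small sketch of this label-bootstrapping rule for the win-probability head is shown below; the function name and arguments are illustrative, not the training code.

```python
# Illustrative sketch of Equation 10: build per-segment training labels for the
# win-probability head, bootstrapping from the model's own prediction when the
# segment does not contain the end of the game.
def win_probability_label(is_last_segment: bool, won_game: bool,
                          predicted_win_prob_at_segment_end: float) -> float:
    if is_last_segment:
        return 1.0 if won_game else 0.0
    # Not the last segment: use the model's prediction at the end of the
    # segment, so ground truth propagates backward through the game over time.
    return predicted_win_prob_at_segment_end


# Every timestep in the segment is trained toward the same label.
label = win_probability_label(is_last_segment=False, won_game=False,
                              predicted_win_prob_at_segment_end=0.62)
print(label)  # 0.62
```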
# D.1 Understanding OpenAI Five Finals

We used these supervised predictions to look closer at game 1 from OpenAI Five Finals. In Figure 9 we explore the progression of win probability predictions over the course of training Rerun, illustrating the evolution of understanding. Version 5,000 of the agent (early in the training process and low performance) already has a sense of what situations in the game may lead to an eventual win. The prediction continues to get better and better as training proceeds. This matches human performance at this task, where even spectators with relatively little gameplay experience can estimate who is ahead based on simple heuristics, but with more gameplay practice human experts can estimate the winner more and more accurately.

On the winrate graph two dramatic game events are marked, at roughly the 5 and 18 minute points. One of them illustrates OpenAI Five’s win probability drop, due to an unexpected loss of 3 heroes in close succession. The other shows how the game turns from good to great as a key enemy hero is killed.

(Figure 9 plots predicted win probability against game time (0-40 minutes) for Rerun versions 0k through 56k and for the agent that actually played the game, with the 5-minute Dire kills and the 18-minute team fight marked.)

Figure 9: Win Probability prediction of game 1 of OpenAI Five Finals. In red we show the (OpenAI Five) agent’s win probability prediction over the game (which can be viewed by downloading the replay from https://openai.com/blog/how-to-train-your-openai-five/). Marked are two significant events that significantly affected the win probability prediction. At roughly 5 minutes in, the human team killed several of OpenAI Five’s heroes, making it doubt its lead. At roughly 18 minutes in, the OpenAI Five team killed three human heroes in a row, regrouped all their heroes at the mid lane, and marched on declaring 95% probability of victory. Versions 0-56k are progressive versions of the Rerun agent predicting win probabilities by replaying the same game; as we can see, the prediction converges to that of the bot that actually played the game (original OpenAI Five), despite training over self-play games from separate training runs.

(Figure 10 shows, for each enemy building (bottom, mid, and top towers 1 through 3), each hero’s predicted participation over game time, with curves labelled Gyrocopter, Crystal Maiden, Death Prophet, and Sniper, among others.)

Figure 10: Continuous prediction of destroying enemy buildings by OpenAI Five in Finals game 1. Predictions by different heroes differ as they specifically predict whether they will participate in bringing a given building down. Predictions should not be read as calibrated probabilities, because they are trained with a discount factor. See Figure 11a and Figure 11b for descriptions of the events corresponding to two of these buildings.

We also looked at the heroes’ participation in destroying objectives. In Figure 10 we can see different heroes’ predictions for each of the objectives in game 1 of OpenAI Five Finals. In several cases all heroes predict they will participate in the attack (and they do). In a few cases one or two heroes are left out, and indeed by watching the game replay we see that those heroes are busy in a different part of the map during that time. In Figure 11 we illustrate these predictions in more detail for two of the events.

# D.2 Hero selection

In the normal game of Dota 2, the two teams select their heroes at the beginning of the game. This is a very important step for future strategy, as heroes have different skill sets and special abilities.
OpenAI Five, however, is trained purely on learning to play the best game of Dota 2 possible given randomly selected heroes. Although we could likely train a separate drafting agent to play the draft phase, we do not need to; instead we can use the win probability predictor. Because the main varying observation that agents see at the start of the game is which heroes are on each team, the win probability at the start of the game estimates the strength of a given matchup. Because there are only 4,900,896 combinations of two 5-hero teams from the pool of 17 heroes, we can precompute agent’s predicted 35 (a) OpenAlls (604) Sco ffimative (b) Figure 11: Screenshots right before two of the dire tower falls in the OpenAI Five Finals game 1. In 11a, Gyrocopter and Crystal Maiden attack the bottom tower 1 (upper left in Figure 10) and plan perhaps to kill it (their predictions go up). But they are chased away by the incoming dire (human) heroes, and their plan changes (the prediction that they will participate in the tower kill falls back to zero). Radiant creeps kill the tower half a minute later. In 11b, all radiant heroes attack mid tower 2 (center in Figure 10). However just before it falls, few dire heroes show up trying to save it, and most radiant heroes end up chasing them a fair distance away from the building. The prediction for those heroes to participate in the tower kill drops accordingly. 36 OpenAI Five F (s) Radiant 651% 651% 65.1% 676% 67.6% 67.6% 67.6 % OpenAI Five Win : \ ss Probability Estimate Figure 12: When drafting heroes, our drafting program would pick the one that maximizes worst- case scenario of opponent hero selection (minimax algorithm). In this example (from OpenAI Finals game 1), OpenAI Five deems the humans’ first pick suboptimal, immediately updating its expected win probability of 52.8% to 65.1%. The drafter then makes two choices (which it believes to be optimal of course). The humans’ second and third choices further decreases their chances of victory (according to the agent), indicated by the green win probability. However, for the human team’s last two choices, OpenAI Five agrees they were optimal, as can be seen by the win probability remaining constant (even though choice 4, Riki, is a character very differently played by humans and by OpenAI Five). win probability from the first few frames of every lineup. Given these precomputed win probabilities, we apply a dynamic programming algorithm to draft the best hero available on each turn. Results of this approach in a web-based drafting program that we have built can be seen on Figure 12. In addition to building a hero selection tool, we also learned about our agent’s preferences from this exercise. In many ways OpenAI Five’s preferences match human player’s preferences such as placing a high value (within this pool) on the hero Sniper. In other ways it does not agree with typical human knowledge, for example it places low value on Earthshaker. Our agent had trouble dealing with geometry of this hero’s “Fissure” skill, making this hero worse than others in training rollouts. Another interesting tidbit is that at the very start of the draft, before any heroes are picked, OpenAI Five believes that the Radiant team has a 54% win chance (if picking first in the draft) or 53% (if picking second). Our agent’s higher estimate for the Radiant side over the Dire agrees with conventional wisdom within the Dota 2 community. Of course, this likely depends on the set of heroes available. 
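A minimal sketch of this drafting logic is given below. It assumes a precomputed lookup table of predicted win probabilities for complete lineups (the 4,900,896 combinations mentioned above) and searches the remaining picks with minimax, memoizing on the unordered sets of picked heroes. The strict pick alternation, the placeholder table function, and all names are simplifications for illustration; the real draft order and tooling differ.

```python
from functools import lru_cache

HERO_POOL = frozenset(range(17))  # the 17-hero pool, indexed 0..16

def predicted_win_prob(our_team, their_team):
    """Placeholder for the precomputed table: the agent's win-probability
    estimate from the first few frames of a game with these two lineups."""
    raise NotImplementedError

@lru_cache(maxsize=None)
def draft_value(ours, theirs, our_turn):
    """Minimax value of a partial draft; `ours`/`theirs` are sorted tuples.
    Assumes strict alternation, so neither side exceeds five picks."""
    if len(ours) == 5 and len(theirs) == 5:
        return predicted_win_prob(ours, theirs)
    available = HERO_POOL - set(ours) - set(theirs)
    if our_turn:
        # We pick the hero that maximizes our worst-case outcome.
        return max(draft_value(tuple(sorted(ours + (h,))), theirs, False)
                   for h in available)
    else:
        # The opponent is assumed to pick the hero that is worst for us.
        return min(draft_value(ours, tuple(sorted(theirs + (h,))), True)
                   for h in available)

def best_pick(ours, theirs):
    """Hero to pick next, assuming it is currently our turn."""
    available = HERO_POOL - set(ours) - set(theirs)
    return max(available,
               key=lambda h: draft_value(tuple(sorted(ours + (h,))), theirs, False))
```

Memoizing on sorted tuples keeps the search tractable, since the number of distinct complete lineups is bounded by the 4,900,896 combinations for which win probabilities are precomputed.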
# E Observation Space At each time step one of our heroes observes ∼ 16, 000 inputs about the game state (mostly real numbers with some integer categorical data as well). See Figure 14 for a schematic outline of our observation space and Table 4 for a full listing of the observations. Instead of using the pixels on the screen, we approximate the information available to a human player in a set of data arrays. This approximation is imperfect; there are small pieces of information which humans can gain access to which we have not encoded in the observations. On the flip side, 37 Global data time since game started is it day or night? time to next day/night change time to next spawn: creep, neutral, bounty, runes time since seen enemy courier is that > 40 seconds?a min&max time to Rosh spawn Roshan’s current max hp is Roshan definitely alive? is Roshan definitely dead? Next Roshan drops cheese? Next Roshan drops refresher? Roshan health randomizationb Glyph cooldown (both teams) Stock countsc Per-unit (189 units) position (x, y, z) facing angle (cos, sin) currently attacking?e time since last attackd max health last 16 timesteps’ hit points attack damage, attack speed physical resistance invulnerable due to glyph? glyph timer movement speed on my team? neutral? animation cycle time eta of incoming ranged & tower creep projectile (if any) # melee creeps atking this unitd [Shrine only] shrine cooldown vector to me (dx, dy, length)e am I attacking this unit?e is this unit attacking me?d,e eta projectile from unit to mee unit type current animation 22 1 2 4 2 2 1 1 1 1 1 1 2 4 43 3 2 2 17 2 1 2 1 2 1 3 1 3 3 1 1 ter- nearby 14x14 grid of pass- 25 1 1 2 4 1 1 1 3 1 1 3 1 1 4 211 7 3 2 3 196 2 6 2 2 2 2 310 3x2x9 Per-modifier (10 heroes x 10 modifiers & 179 non- heroes x 2 modifiers) remaining duration stack count modifier name Per-item (10 heroes x 16 items) location tory/backpack/stash) charges is on cooldown? cooldown time is disabled by recent swap? item swap cooldown toggled state special Power Treads one-hot (str/agi/int/none) item name one-hot (inven- Per-ability (10 heroes x 6 abilities) cooldown time in use? castable Level 1/2/3/4 unlocked?d ability name Per-pickup (6 pickups) status present/unknown) location (x, y) distance from all 10 heroes pickup name one-hot (present/not Minimap (10 tiles x 10 tiles) fraction of tile visible # allied & enemy creeps # allied & enemy wards # enemy heroes cell (x, y, id) 2 1 1 1 13 3 1 2 2 1 4 1 7 1 1 1 4 1 15 3 2 10 1 9 1 2 2 1 3 a These observations are leftover from an early version of Five which played a restricted 1v1 version of the game. They are likely obsolete and not needed, but this was not tested. b These observations are about our per-game randomizations. See Appendix O. c For items: gem, smoke of deciept, observer ward, infused raindrop. d Observations are not visible per-se, but can be estimated. We use scripted logic to estimate them from visible observations. e These observations (only) are different for the five different heroes on the team. f This observation appears twice, and serves as an example of the difficulties of surgery. Although this is a categorical input, we began by treating it as a float input to save on engineering work (this observation is unlikely to be very important). Later the time came to upgrade it to a properly embedded categorical input, but our surgery tools do not support removing existing observations. Hence we added the new observation, but were forced to leave the deprecated observation as well. 
Table 4: Full Observation Space: All observations OpenAI Five receives at each time step. Blue rows are categorical data. Entries with a question mark are boolean observations (only take values 0 or 1 but treated as floats otherwise). The bulk of the observations are per-unit observations, observed for each of 189 units: heroes (5), creeps (30), buildings (21), wards (30), and courier (1) for each team, plus 15 neutrals. If the number of visible units in a category is less than the allotted number, the rest are padded with zeroes. If more, we observe only the units closest to allied heroes. Units in fog of war are not observed. When enemy heroes are in fog of war, we reuse the observation from the last time step when the unit was visible. 38 a. Player Hero b. Allied Hero cc. Allied Team —d. Enemy Team e. Enemy Creep f. Enemy Heroes ( a g. Allied Creeps s k. Fog of War I. Allied Tower h. Modifiers [<4 __-—— i. Items j. Abilities Figure 13: Dota 2’s human “Observation Space” while we were careful to ensure that all the information available to the model is also available to a human, the model does get to see all the information available simultaneously every time step, whereas a human needs to click into various menus and options to get that data. Although these discrepancies are a limitation, we do not believe they meaningfully detract from our ability to benchmark against human players. Humans observe the game via a rendered screen, depicted in Figure 13. OpenAI Five uses a more semantic observation space than this for two reasons: First, because our goal is to study strategic planning and gameplay rather than focus on visual processing. Second, it is infeasible for us to render each frame to pixels in all training games; this would multiply the computation resources required for the project manyfold. All float observations (including booleans which are treated as floats that happen to take values 0 or 1) are normalized before feeding into the neural network. For each observation, we keep a running mean and standard deviation of all data ever observed; at each timestep we subtract the mean and divide by the st dev, clipping the final result to be within (-5, 5). # F Action Space Dota 2 is usually controlled using a mouse and keyboard. The majority of the actions involve a high-level command (attack, use a certain spell, or activate a certain item), along with a target (which might be an enemy unit for an attack, or a spot on the map for a movement). 
For that reason we represent the action our agent can choose at each timestep as a single primary action 39 Unit Observations Heroes Non-Heroes Allied (5) Enemy (5) Allied (82) Enemy (82) Neutral (15) Per-unit obs 10cat 10 cat 164cat 164 cat : 30 cat (2 categorical + 43 continuous) 215 cont: 215 cont i 3,526 cont 3,526 cont i 645 cont Modifiers | socat 50 cat / 164cat | 164cat’ =O cat | (1 categorical + 2 continuous) |__100 cont _ 100cont__|_—-328cont_|_—328cont__—60.cont Per-hero extra obs i 10 mods per hero (25 continuous) | 125cont = 125 cont 2 per non-hero Abilities 30cat 30 cat l 6 abilities per hero Individual (1 categorical + 7 continuous) |__210 cont__ 210 cont Observations Items) 80cat 80 cat | 16 items per hero Previous Action (1 categorical + 13 continuous) |_1:042cont____1,040 cont 310 cont Per-allied-hero extra obs a Non-Unit Observations (2 categorical + 211 continuous) | 4,055 cont Global Pickups (6) re ag 22 cont (1 cat,15 cont each) (2 cat, 6 cont Minimap (10x10) Adan channels) 9 channels 128 cat 900 cont 384 cont Figure 14: Observation Space Overview: The arrays that OpenAI Five observes at each timestep. Most of OpenAI Five’s observations are unit-centered; for 189 different units on the map, we observe a set of basic properties. These units are grouped along the top of the figure. We observe some data about all units, some extra data about the primary units (the heroes), and even more data about the heroes on our team. A few observations are not tied to any unit. Finally, two observations having to do with hero control (terrain near me, and my previous action) are only observed about the individual hero that this LSTM replica operates. In this diagram blue bands represent categorical data and yellow bands represent continuous or boolean data; most entities (units, modifiers, abilities, items, and pickups), have some of each. Each piece of the figure sum- marizes the total dimensionality of that portion of the input. All together, an OpenAI Five hero observes 1,200 categorical values and 14,534 continuous/boolean values. 40 (a) Delay: An integer from 0 to 3 indicating which frame during the next frameskip to take the action on (see Appendix L). If 0, the action will be taken im- mediately when the game engine processes this time step; if 3, the action will be taken on the last game frame before the next pol- icy observation. This parameter is never ignored. (b) Unit Selection: One of the 189 visible units in the observa- tion. For actions and abilities which target units, either enemy units or friendly units. For many actions, some of the possible unit targets will be invalid; attempt- ing an action with an invalid tar- get results in a noop. (c) Offset: A 2D (X, Y ) coor- dinate indicating a spatial off- set, used for abilities which tar- get a location on the map. The offset is interpreted relative to the caster or the unit selected by the Unit Selection parameter, depending on the ability. Both X and Y are discrete integer out- puts ranging from -4 to +4 inclu- sive, producing a grid of 81 pos- sible coordinate pairs. # Figure 15: Action Parameters along with a number of parameter actions. The number of primary actions available varies from time step to time step, averaging 8.1 in the games against OG. 
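The sketch below illustrates sampling from this factorized action space: one primary action plus the Delay, Unit Selection, and Offset parameter heads of Figure 15. The logits would come from the output heads of the policy network; the function names, shapes, and masks here are illustrative, and which parameters the game actually consumes depends on the chosen primary action.

```python
import numpy as np

rng = np.random.default_rng(0)

def masked_softmax_sample(logits, mask):
    """Sample an index from softmax(logits) restricted to entries where
    mask is True; unavailable choices get probability zero."""
    z = np.where(mask, logits, -np.inf)
    z = z - z.max()
    p = np.exp(z)
    p /= p.sum()
    return int(rng.choice(len(logits), p=p))

def sample_action(primary_logits, available_mask,
                  delay_logits, unit_logits, visible_mask,
                  offset_x_logits, offset_y_logits):
    """Illustrative sampling of one time step's action: a primary action
    restricted to the currently available actions, plus Delay (4 values),
    Unit Selection (189 visible unit slots) and Offset (9 x 9 grid)."""
    return {
        "primary": masked_softmax_sample(primary_logits, available_mask),
        "delay":   masked_softmax_sample(delay_logits, np.ones(4, bool)),
        "unit":    masked_softmax_sample(unit_logits, visible_mask),
        # Offsets are discrete integers in [-4, 4] on each axis.
        "offset": (masked_softmax_sample(offset_x_logits, np.ones(9, bool)) - 4,
                   masked_softmax_sample(offset_y_logits, np.ones(9, bool)) - 4),
    }
```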
The primary actions available at a given time include universal actions like noop, move, attack, and others; use or activate one of the hero’s spells; use or activate one of the hero’s items; situational actions such as Buyback (if dead), Shrine (if near a shrine), or Purchase (if near a shop); and more. For many of the actions we wrote simple action filters, which determine whether the action is available; these check if there is a valid target nearby, if the ability/item is on cooldown, etc. At each timestep we restrict the set of available actions using these filters and present the final choices to the model. In addition to a primary action, the model chooses action parameters. At each timestep the model outputs a value for each of them; depending on the primary action, some of them are read and others ignored (when optimizing, we mask out the ignored ones since their gradients would be pure noise). There are 3 parameter outputs, Delay (4 dim), unit selection (189 dim), and offset (81 dim), described in Figure 15. All together this produces a combined factorized action space size of up to 30 × 4 × 189 × 81 = 1, 837, 080 dimensions (30 being the maximum number of primary actions we support). This number ignores the fact that the number of primary actions is usually much lower; some parameters are masked depending on the primary action; and some parameter combinations are invalid and those actions are treated as no-ops. To get a better picture, we looked at actual data from the two games played against Team OG, and simply counted number of available actions at each step. The average number of available 41 Action Target Type Example No Target Point Target Unit Target Unit Offset Target Teleport Target Ward Target Parameters Delay Delay, Offset (Caster) Delay, Unit Selection (Regular) Delay, Unit Selection (Regular), Offset (Regular) Delay, Unit Selection (Teleport), Offset (Regular) Power Treads Move Attack Sniper’s Shrapnel Town Portal Scroll Place Observer Ward Delay, Offset (Ward) # Table 5: Action Target Types actions varies significantly across heroes, as different heroes have different numbers spells and items with larger parameter counts. Across the two games the average number of actions for a hero varied from 8,000 to 80,000. Unit Selection and Offset are actually implemented within the model as several different, mutu- ally exclusive parameters depending on the primary action. For Unit Selection, we found that using a single output head caused that head to learn very well to target tactical spells and abilities. One ability called “teleport,” however, is significantly different from all the others — rather than being used in a tactical fight, it is used to strategically reposition units across the map. Because the action is much more rare, the learning signal for targeting this ability would be drowned out if we used a single model output head for both. For this reason the model outputs a normal Unit Selection parameter and a separate Teleport Selection parameter, and one or the other is used depending on the primary action. Similarly, the Offset parameter is split into “Regular Offset,” “Caster Offset” (for actions which only make sense offset from the caster), and “Ward Placement Offset” (for the rare action of placing observer wards). We categorize all primary actions into 6 “Action target types” which determines which parameters the action uses, listed in Table 5. # F.1 Scripted Actions Not all actions that a human takes in a game of Dota 2 are controlled by our RL agent. 
Some of the actions are scripted, meaning that we have written a rudimentary rules-based system to handle these decisions. Most of these are for historical reasons — at the start of the project we gave the model control over a small set of the actions, and we gradually expanded it over time. Each additional action that we remove from the scripted logic and hand to the model’s control gives the RL system a higher potential skill cap, but comes with an cost measured in engineering effort to set it up and risks associated with learning and exploration. Indeed even when adding these new actions gradually and systematically, we occasionally encountered instabilities; for example the agent might quickly learn never to take a new action (and thus fail to explore the small fraction of circumstances where that action helps), and thus moreover fail to learn (or unlearn) the dependent parts of the gameplay which require competent use of the new action. In the end there were still several systems that we had not yet removed from the scripted logic by the time the agent reached superhuman performance. While we believe the agent could ultimately perform better if these actions were not scripted, we saw no reason to do remove the scripting because superhuman performance had already been achieved. The full set of remaining scripted actions is: 42 1. Ability Builds: Each hero has four spell abilities. Over the course of the game, a player can choose which of these to “level up,” making that particular skill more powerful. For these, in evaluation games we follow a fixed schedule (improve ability X at level 1, then Y at level 2, then Z at level 3, etc). In training, we randomize around this fixed script somewhat to ensure the model is robust to the opponent choosing a different schedule. 2. Item Purchasing: As a hero gains gold, they can purchase items. We divide items into consumables — items which are consumed for a one-time benefit such as healing — and everything else. For consumables, we use a simple logic which ensures that the agent always has a certain set of consumables; when the agent uses one up, we then purchase a new one. After a certain time in the game, we stop purchasing consumables. For the non-consumables we use a system similar to the ability builds - we follow a fixed schedule (first build X, then Y, then Z, etc). Again at training time we randomly perturb these builds to ensure robustness to opponents using different items.11 3. Item Swap: Each player can choose 6 of the items they hold to keep in their “inventory” where they are actively usable, leaving up to 3 inactive items in their “backpack.” Instead of letting the model control this, we use a heuristic which approximately keeps the most valuable items in the inventory. 4. Courier Control: Each side has a single “Courier” unit which cannot fight but can carry items from the shop to the player which purchased them. We use a state-machine based logic to control this character. # G Reward Weights Our agent’s ultimate goal is to win the game. In order to simplify the credit assignment problem (the task of figuring out which of the many actions the agent took during the game led to the final positive or negative reward), we use a more detailed reward function. Our shaped reward is modeled loosely after potential-based shaping functions [58], though the guarantees therein do not apply here. 
We give the agent reward (or penalty) for a set of actions which humans playing the game generally agree to be good (gaining resources, killing enemies, etc). All the results that we reward can be found in Table 6, with the amount of the reward. Some are given to every hero on the team (“Team”) and some just to the hero who took the action “Solo.” Note that this means that when team spirit is 1.0, the total amount of reward is five times higher for “Team” rewards than “Solo” rewards. In addition to the set of actions rewarded and their weights, our reward function contains 3 other pieces: • Zero sum: The game is zero sum (only one team can win), everything that benefits one team necessarily hurts the other team. We ensure that all our rewards are zero-sum, by subtracting from each hero’s reward the average of the enemies’ rewards. 11This randomization is done randomly deleting items from the build order and randomly inserting new items sampled from the distribution of which items that hero usually buys in human games. This is the only place in our system which relies on data from human games. 43 Name Win Hero Death Courier Death XP Gained Gold Gained Gold Spent Reward Heroes Description 5 -1 -2 0.002 0.006 0.0006 Team Solo Team Solo Solo Solo For each unit of gold gained. Reward is not lost when the gold is spent or lost. Per unit of gold spent on items without using courier. Health Changed Mana Changed Killed Hero Last Hit 2 0.75 -0.6 -0.16 Solo Measured as a fraction of hero’s max health.‡ Solo Measured as a fraction of hero’s max mana. Solo For killing an enemy hero. The gold and expe- rience reward is very high, so this reduces the total reward for killing enemies. The gold and experience reward is very high, so this reduces the total reward for last hit to ∼ 0.4. Solo . A : a : For buildings, two-thirds of the reward i one-third is earned as a lump sum when i ¥ See item O.2. s earned linearly as the building loses health, and dies. ¥ Hero’s health is quartically interpolated between 0 (dead) and 1 (full health); health at fraction x of full health is worth (a +1- set once and then untouched for the dura (1—x)‘) /2. This function was not tuned; it was ion of the project. Table 6: Shaped Reward Weights 44 • Game time weighting: Each player’s “power” increases dramatically over the course of a game of Dota 2. A character who struggled to kill a single weak creep early in the game can often kill many at once with a single stroke by the end of the game. This means that the end of the game simply produces more rewards in total (positive or negative). If we do not account for this, the learning procedure focuses entirely on the later stages of the game and ignores the earlier stages because they have less total reward magnitude. We use a simple renormalization to deal with this, multiplying all rewards other than the win/loss reward by a factor which decays exponentially over the course of the game. Each reward ρi earned a time T since the game began is scaled: ρi ← ρi × 0.6(T /10 mins) (11) • Team Spirit: Because we have multiple agents on one team, we have an additional dimen- sion to the credit assignment problem, where the agents need learn which of the five agent’s behavior cause some positive outcome. The partial rewards defined in Table 6 are an attempt to make the credit assignment easier, but they may backfire and in fact add more variance if an agent receives reward when a different agent takes a good action. To attempt dealing with this, we have introduced team spirit. 
It measures how much agents on the team share in the spoils of their teammates. If each hero earns raw individual reward ρi, then we compute the hero’s final reward ri as follows: ri = (1 − τ )ρi + τ ρ (12) with scalar ρ being equal to mean of ρ. If team spirit is 0, then it’s every hero for themselves; each hero only receives reward for their own actions ri = ρi. If team spirit is 1, then every reward is split equally among all five heroes; ri = ρ. For a team spirit τ in between, team spirit-adjusted rewards are linearly interpolated between the two. Ultimately we care about optimizing for team spirit τ = 1; we want the actions to be chosen to optimize the success of the entire team. However we find that lower team spirit reduces gradient variance in early training, ensuring that agents receive clearer reward for advancing their mechanical and tactical ability to participate in fights individually. See Appendix O for an ablation of this method. We ran a small-scale ablation with partial reward weights disabled (see Figure 16). Surprisingly, the model learned to play well enough to beat a hand-coded scripted agent consistently, though with a large penalty to sample efficiency relative to the shaped reward baseline. From watching these games, it appears that this policy does not play as effectively at the beginning of the game, but has learned to coordinate fights nearer to the end of the game. Investigating the tradeoffs and benefits of sparse rewards is an interesting direction for future work. # H Neural Network Architecture A simplified diagram of the joint policy and value network is shown in the main text in Figure 1. The combined policy + value network uses 158,502,815 parameters (in the final version). The policy network is designed to receive observations from our bot-API observation space, and interact with the game using a rich factorized action space. These structured observation and action spaces heavily inform the neural network architecture used. We use five replica neural networks, 45 — Baseline 200 | —— Sparse Reward (long horizon) 175 4 150 4 Trueskill 25 4 0 T T T T T T T 0 5000 10000 15000 20000 25000 30000 35000 Iterations Figure 16: Sparse rewards in Dota 2: TrueSkill over the course of training for experiments run with 0-1 loss only. For the sparse reward run horizon was set to 1 hour (γ = 0.99996) (versus 180 seconds for the baseline run). The baseline otherwise uses identical settings and hyperparameters including our shaped reward. The sparse reward succeeds at reaching TrueSkill 155; for reference, a hand-coded scripted agent reaches TrueSkill 100. 46 Global Minimap Nearby 6 Pick 5 Enemy 5 Allied 82 Allied 82 Enemy 15 Neutral Obs (10x10) Map (8x8) Let Heroes Heroes Nonheroes Nonheroes Nonheroes 1 OOo0OD | (Tomes ] [~easines } [ 1610s | Data type dictates processing: LJ CIC) ( J { } Continuous Data: normalization only; no learned processing Process Set Summarizes an unordered set of N elements into a vector of size S wa 2xFC |g —+{_max-poo! EERE _Enbating oat Supe be ! output of shape Nx S - | gives embedding of each element. Categorical Data: Embed Spatial Data: 2 layer conv net Unordered Set: “Process Set” E : (a) Flattening the observation space: First we process the complicated observation space into a single vector. The observation space has a tree structure; the full game state has various attributes such as global continuous data and a set of allied heroes. Each allied hero in turn has a set of abilities, a set of modifiers, etc. 
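Recapping the reward post-processing described in Appendix G, the sketch below combines the zero-sum adjustment, the game-time decay of Equation (11), and the team-spirit mixing of Equation (12) for a single time step. The function signature and the order in which the three transformations are composed are illustrative assumptions rather than the exact implementation, and the win/loss reward (which is not decayed) is handled outside this sketch.

```python
import numpy as np

def postprocess_rewards(our_raw, enemy_raw, game_time_minutes, team_spirit,
                        time_decay_base=0.6, decay_scale_minutes=10.0):
    """Illustrative post-processing of per-hero partial rewards.

    our_raw, enemy_raw: length-5 arrays of raw per-hero partial rewards
                        (everything except the win/loss reward).
    """
    our = np.asarray(our_raw, dtype=np.float64)
    enemy = np.asarray(enemy_raw, dtype=np.float64)

    # Zero sum: subtract the average of the enemies' rewards.
    our = our - enemy.mean()

    # Game time weighting, Equation (11): rho <- rho * 0.6^(T / 10 min).
    our = our * time_decay_base ** (game_time_minutes / decay_scale_minutes)

    # Team spirit, Equation (12): r_i = (1 - tau) * rho_i + tau * mean(rho).
    tau = team_spirit
    return (1.0 - tau) * our + tau * our.mean()
```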
We process each node in the tree according to its data type. For example for spatial data, we concatenate the data within each cell and then apply a 2 layer conv net. For unordered sets, a common feature of our observations, we use a “Process Set” module. Weights in the Process Set module for processing abilities/items/modifiers are shared across allied and enemy heroes; weights for processing modifiers are shared across allied/enemy/neutral nonheroes. In addition to the main Game State observation, we extract the the Unit Embeddings from the “embedding output” of the units’ process sets, for use in the output (see Figure 18). | Game State - FC |—4 Cross-Hero My Hero Unit Embedding (b) Preparing for LSTM: In order to tell each LSTM which of the team’s heroes it controls, we append the controlled hero’s Unit Embedding from the Unit Embeddings output of Figure 17a to the Game State vector. Almost all of the inputs are the same for each of the five replica LSTMs (the only differences are the nearby map, previous action, and a very small fraction of the observations for each unit). In order to allow each replica to respond to the non-identical inputs of other replicas if needed, we add a “cross-hero pool” operation, in which we maxpool the first 25% of the vector across the five replica networks. # Figure 17: Observation processing in OpenAI Five 47 Available Action Ids dot product sample/argmax 1. The primary action is chosen via a linear projection over the available actions. Chosen Action ID 2. The target unit is chosen via an attention mechanism over the available units. The unit keys are masked by a learned per-action mask based on the sampled action. Unit Embeddings A Target Unit sample/argmax ; Offset X Offset Y sample/argmax }+ Samp TaTaMRX Figure 18: The hidden state of the LSTM and unit embeddings are used to parameterize the actions. each responsible for the observations and actions of one of the heroes in the team. At a high level, this network consists of three parts: first the observations are processed and pooled into a single vector summarizing the state (see Figure 17), then that is processed by a single-layer large LSTM, then the outputs of that LSTM are projected to produce outputs using linear projections (see Figure 18). To provide the full details, we should clarify that Figure 1 is a slight over-simplification in three ways: 1. In practice the Observation Processing portion of the model is also cloned 5 times for the five different heroes. The weights are identical and the observations are nearly identical — but there are a handful of derived features which are different for each replica (such as “distance to me” for each unit; see Table 4 for the list of observations that vary). Thus the five replicas produce nearly identical, but perhaps not entirely identical, LSTM inputs. These non-identical features form a small portion of the observation space, and were not ablated; it is possible that they are not needed at all. 2. The “Flattened Observation” and “Hero Embedding” are processed before being sent into the LSTM (see Figure 17b) by a fully-connected layer and a “cross-hero pool” operation, to ensure that the non-identical observations can be used by other members of the team if needed. 3. The “Unit Embeddings” from the observation processing are carried along beside the LSTM, and used by the action heads to choose a unit to target (see Figure 18). In addition to the action logits, the value function is computed as another linear projection of the LSTM state. 
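To make the "Process Set" and "cross-hero pool" operations of Figure 17 concrete, a minimal sketch is given below. The paper's system was written in TensorFlow; PyTorch is used here only for brevity, and the layer sizes and placement of nonlinearities are illustrative.

```python
import torch
import torch.nn as nn

class ProcessSet(nn.Module):
    """Sketch of the "Process Set" module (Figure 17a): two fully connected
    layers applied to each element of an unordered set, followed by a
    max-pool over the set dimension. Returns both the pooled summary and the
    per-element embeddings later used by the action heads (Figure 18)."""

    def __init__(self, in_dim, hidden_dim, out_dim):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, out_dim), nn.ReLU(),
        )

    def forward(self, x):
        # x: (batch, set_size, in_dim), e.g. one row per visible unit.
        embeddings = self.mlp(x)              # (batch, set_size, out_dim)
        summary, _ = embeddings.max(dim=1)    # (batch, out_dim)
        return summary, embeddings

def cross_hero_pool(states, frac=0.25):
    """Sketch of the "cross-hero pool" operation (Figure 17b): max-pool the
    first `frac` of the state vector across the five hero replicas so that
    each replica can see the others' (nearly identical) processed inputs."""
    # states: (5, batch, dim), one processed state per hero replica.
    k = int(states.shape[-1] * frac)
    pooled = states[..., :k].max(dim=0, keepdim=True).values
    pooled = pooled.expand_as(states[..., :k])
    return torch.cat([pooled, states[..., k:]], dim=-1)
```

The max-pool in both operations makes the output invariant to the ordering and padding of the set, which is why it is a natural fit for the unordered unit lists in the observation space.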
Thus our value function and action policy share a network and share gradients. 48 # I Human Games See Table 7 for a listing of the games OpenAI Five played against high-profile teams. # J TrueSkill: Evaluating a Dota 2 Agent Automatically We use the TrueSkill [27] rating system to evaluate our agents. We first establish a pool of many reference agents of known skill. We evaluate the reference agents’ TrueSkill by playing many games between the the various reference agents, and using the outcome of the games to compute a TrueSkill for each agent. Our TrueSkill environment use the parameters σ = 25/3, β = σ/2, τ = 0.0, draw_probability=0.02. Reference agents’ µ are aligned so that an agent playing randomly has µ = 0. A hand-crafted scripted agent which we wrote, which can defeat beginners but not amateur players, has TrueSkill around 105. During our experiments we continually added new reference agents as our agent “outgrew” the existing ones. For all results in this work, however, use a single reference agent pool containing mostly agents from OpenAI Five’s training history along with some other smaller experiments at the lower end. The 83 reference agents range in TrueSkill from 0 (random play) to 254 (the version that beat the world champions). To evaluate a training run during training, a dedicated set of computers continually download the latest agent parameters and plays games between the latest trained agent and the reference agents. We attempt to only play games against reference agents that are nearby in skill in order to gain maximally useful information; we avoid playing agents more than 10 TrueSkill points away (corresponding to a winrate less than 15% or more than 85%). When a game finishes, we use the TrueSkill algorithm to update the test agent’s TrueSkill, but treat the reference agent’s TrueSkill as a constant. After 750 games have been reported, we log that version’s TrueSkill and move on to the new current version. New agents are initialized with µ equal to the final µ of the previous agent. This system gives us updates approximately once every two hours during running experiments. One difficulty in using TrueSkill across a long training experiment was maintaining consistent metrics with a changing environment. Two agents that were trained on different game versions must ultimately play on a single version of the game, which will result in an inherent advantage for the agent that trained on it. Older agents had their code upgraded in order to always be compatible with the newest version, but this still leads to metric inflation for newer agents who got to train on the same code they are evaluated on. This included any updates to the hero pool (adding new heroes that old agents didn’t train with), game client updates or balancing changes, and adding any new actions (using a particular consumable or item differently). # K Dota 2 Gym Environment # K.1 Data flow between the training environment and Dota 2 Dota 2 includes a scripting API designed for building bots. The provided API is exposed through Lua and has methods for querying the visible state of the game as well as submitting actions for bots to take. Parts of the map that are out of sight are considered to be in the fog of war and cannot be queried through the scripting API, which prevents us from accidentally “cheating” by observing anything a human player would not be able to see (although see Appendix Q). 
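Returning to the evaluation procedure of Appendix J, the sketch below shows how a test agent's rating might be updated against fixed reference ratings using the open-source trueskill Python package, assuming its standard TrueSkill/Rating/rate_1vs1 interface. The game-playing function is a stub, and the alignment of mu = 0 to random play is omitted.

```python
import random
import trueskill

SIGMA = 25 / 3
env = trueskill.TrueSkill(sigma=SIGMA, beta=SIGMA / 2, tau=0.0,
                          draw_probability=0.02)

def evaluate(test_rating, reference_agents, play_game, num_games=750):
    """reference_agents: list of (agent, trueskill.Rating) pairs whose
    ratings are held constant. play_game(agent) returns True if the test
    agent wins the game against that reference agent."""
    for _ in range(num_games):
        # Prefer references within 10 TrueSkill points of the test agent
        # (roughly the 15%-85% winrate band) for maximally useful games.
        nearby = [(a, r) for a, r in reference_agents
                  if abs(r.mu - test_rating.mu) <= 10]
        agent, ref_rating = random.choice(nearby or reference_agents)
        if play_game(agent):
            test_rating, _ = env.rate_1vs1(test_rating, ref_rating)
        else:
            _, test_rating = env.rate_1vs1(ref_rating, test_rating)
        # The reference's updated rating is discarded, keeping it constant.
    return test_rating
```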
49 Opponent Result Duration Version Restrictions June 6, 2018 - Internal Event Internal team Internal team Audience team Audience team August 5, 2018 - Benchmark win win win win 15:15 (surr) 20:51 31:33 23:33 (surr) 7.13 7.13 7.13 7.13 Mirror match, multiple couriers, no invis Mirror match, multiple couriers, no invis Mirror match, multiple couriers, no invis Mirror match, multiple couriers, no invis Caster team Caster team Caster team win win lose 21:38 (surr) 24:56 (surr) 35:47 7.16 7.16 7.16 Drafted, multiple couriers Drafted, multiple couriers Audience draft, multiple couriers August 9, 2018 - Private eval Team Secret Team Secret Team Secret win lose lose 17:00 (surr) 48:46 38:55 7.16 7.16 7.16 Drafted, multiple couriers Drafted, multiple couriers Drafted, multiple couriers August 22-23, 2018 - The International Pain Gaming Chinese Legends October 5, 2018 - Private eval Team Lithium Team Lithium Team Lithium January 16, 2019 - Private eval lose lose win win win 52:29 45:44 48:57 48:16 31:33 7.19 7.19 7.19 7.19 7.19 Pre-set lineup Pre-set lineup TI pre-set lineup TI pre-set lineup Drafted SG Esports SG Esports SG Esports SG Esports win win win win 24:29 (surr) 25:08 (surr) 27:36 (surr) 25:30 (surr) 7.19 7.19 7.20 7.20 TI pre-set lineup Drafted Mirror match Mirror match February 1, 2019 - Private eval 17:11 31:33 28:16 April 13, 2019 - OpenAI Five Finals 38:18 20:51 Alliance Alliance Alliance win win win OG OG win win 7.20d 7.20d 7.20d 7.21d 7.21d Drafted Drafted Reverse drafted Drafted Drafted Table 7: Major matches of OpenAI Five against high-skill human players. 50 Soonest possible action frame oo, 4 game frames combined F F into an observation Submit Action Figure 19: Reaction Time: OpenAI Five observes four frames bundled together, so any surprising new information will become available at a random frame in the red region. The model then processes the observation in parallel while the game engine runs forward four more frames. The soonest it can submit an action based on the red observations is marked in yellow. This is between 5 and 8 frames (167-267ms) after the surprising event. We designed our Dota 2 environment to behave like a standard OpenAI Gym environment[59]. This standard respects an API contract where a step method takes action parameters and returns an observation from the next state of the environment. To send actions to Dota 2, we implemented a helper process in Go that we load into Dota 2 through an attached debugger that exposes a gRPC server. This gRPC server implements methods to configure a game and perform an environment step. By running the game with an embedded server, we are able to communicate with it over the network from any remote process. When the step method is called in the gRPC server, it gets dispatched to the Lua code and then the method blocks until an observation arrives back from Lua to be returned to the caller. In parallel, the Dota 2 engine runs our Lua code on every step, sending the current game state observation12 to the gRPC server and waiting for it to return the current action. The game blocks until an action is available. These two parallel processes end up meeting in the middle, exchanging actions from gRPC in return for observations from Lua. Go was chosen to make this architecture easy to implement through its channels feature. Putting the game environment behind a gRPC server allowed us to package the game into a Docker image and easily run many isolated game instances per machine. 
It also allowed us to easily setup, reset, and use the environment from anywhere where Docker is running. This design choice significantly improved researcher productivity when iterating on and debugging this system. # L Reaction time The Dota 2 game engine runs at 30 steps per second so in theory a bot could submit an action every 33ms. Both to speed up our game execution and in order to bring reactions of our model closer 12Originally the Lua scripting API was used to iterate and gather the visible game state, however this was somewhat slow and our final system used an all-in-one game state collection method that was added through cooperation with Valve 51 to the human scale we downsample to every 4th frame, which we call frameskip. This yields an effective observation and action rate of 7.5 frames per second. To allow the model to take precisely timed actions, the action space includes a “delay” which indicates which frame during the frameskip the model wants this action to evaluate on. Thus the model can still take actions at a particular frame if so desired, although in practice we found that the model did not learn to do this and simply taking the action at the start of the frameskip was better. Moreover, we reduce our computational requirements by allowing the game and the machine learning model to run concurrently by asynchronously issuing actions with an action offset. When the model receives an observation at time T , rather than making the game engine wait for the model to produce an action at time T , we let the game engine carry on running until it produces an observation at time T + 1. The game engine then sends the observation at time T + 1 to the model, and by this time the model has produced its action choice based on the observation at time T . In this way the action which the model takes at time T + 1 is based upon the observation at time T . In exchange for this penalty in available “reaction time,” we are able to utilize our compute resources much more efficiently by preventing the two major computations from blocking one another (see Figure 19). Taken together, these effects mean that the agent can react to new information with a reaction time randomly distributed between 5 and 8 frames (167ms to 267ms), depending on when during the frameskip the new information happens to occur. For comparison, human reaction time has been measured at 250ms in controlled experimental settings[26]. This is likely an underestimate of reaction time during a Dota game. # M Scale and Data Quality Ablation Details As shown in Figure 5 of the main text, we studied several key ingredients of RL at this scale, and learned important lessons which we conjecture should generalize beyond this environment. In this section we explain the details of these experiments. Training runs the size of OpenAI Five are expensive; running a scan of 4 different variants would be prohibitively expensive. For this reason we use the normal Dota 2 environment, simply using a batch size 8x smaller than Rerun (which itself was 2-3 times smaller than OpenAI Five). See Figure 20 for an estimate of the variation in these training runs. Throughout the following sections we scan over various parameters of the experimental setup and monitor the results in terms of TrueSkill (see Appendix J) and speedup (see Equation 2). Our the uncertainty on speedup comes from uncertainty in both the numerator and the de- nominator. 
Although we have some understanding in the variance in the number of iterations for a baseline to reach each TrueSkill (see Figure 20), we do not have the luxury of multiple runs of every experiment. Instead, we use as proxy for the uncertainty on the number of iterations to reach TrueSkill T , the number of iterations to reach to reach T ±∆T where ∆T is the variance in TrueSkill across the variations in Figure 20, approximately 2 TrueSkill points. We combine the numerator and denominator uncertainty in quadrature to attain an overall uncertainty for the speedup. In each experiment the baseline uses hyperparameters given in Appendix C, except as noted. 52 16 200 14 175 12 150 10 = 125 = S g 3 g 8 E 100 & 6 75 4 50 — Runi 2 25 —— Run2 — Run3 — Runa 0 0 7 1 7 i r 0 1000 2000 3000 4000 5000 6000 7000 ie} 1000 2000 3000 4000 5000 6000 7000 Iterations Iterations Figure 20: Variation in 5v5 baseline training: On the left, the TrueSkill over the course of training for different “baseline” experiments, using identical settings and hyperparameters. On the right, the standard deviation in TrueSkill across four runs. See Appendix C for the hyperparameters used. Although we only have 4 runs, we can estimate that different runs tend to vary by about 2 TrueSkill. # M.1 Batch Size Training using small mini-batches is a generally accepted trade-off between convergence time and number of optimization steps. However, recent literature on large-scale supervised learning of image classifiers [44–46] explored much larger batch sizes and showed that strong scaling was possible by carefully tuning learning rate and initialization of the neural network. This renewed interest in reducing convergence-time and treating batch-size as a key design parameter also motivated the work of [28], where an analytical tool is derived to estimate a training-time optimal batch size on per task basis by studying the “noise scale” of the gradients. While existing literature on large-scale training of neural networks had focused on supervised learning, as far as we know using large batch sizes for reinforcement learning was novel when we began the Dota 2 project. These observations were later shown to be consistent with the analytical tools derived in [28]. In this section we demonstrate how large batch-sizes affect optimization time. Because we average gradients across the pool of optimizer machines, the effective total batch size is given by the product of the number of GPU optimizers with the batch size on each optimizer. We always use the maximum batch size on each optimizer which will fit within the GPU’s memory constraints (120 for our setup). Thus in order to change the overall batch size we increase the number of optimizer GPUs. We increase the size of the other machine pools in the experiment (rollout CPU workers, forward pass GPUs, etc), such that the larger batch size experiment is truly optimizing over more data, not simply reusing the same data more. This means that doubling the batch size causes the experiment to use twice as much computing power in almost all respects. Because we do not have the resources to separately optimize these hyperparameters at each individual batch size, 53 200 3.0 4 175 2.5 4 150 E125 $ 2.0 4 wn 3 100 34. 4 n 75 4 — Batch size 1966k . — Batch size 983k . ee 50 4 — Batch size 492k nae *@- 7S175 —— Batch size 246k . 
wo -@: 78125 25 4 —— Batch size 123k (b) -@- Ts100 ——— Batchsize61k | fos Linear speedup ) T T T T 1 0.0 t T T T T r Ok 2k 5k 7k 10k 12k 15k 61k 123k 246k 492k 983k 1966k Parameter versions Batch size (in frames, log scale) # Fe Figure 21: Effect of batch size on training speed: (Replicated from main text Figure 5a) TrueSkill over the course of training (see Appendix J) and speedup measured by the rate to attain different TrueSkill thresholds (computed using Equation 2) granted by increasing the batch size. The dotted line indicates perfect linear scaling (using 2x more data gives 2x speedup). Larger batch size significantly speeds up training, but the speedup is sublinear in the resources consumed. Later training (TrueSkill 175) benefits more from increased scale than earlier training (TrueSkill 100). Note that TrueSkill 175 is still quite early in the overall training of OpenAI Five which ultimately reaches above 250 (see Figure 3), so these results are inconclusive about whether large batch size causes linear speedup for the bulk of the training time. we keep all other hyperparameters fixed to those listed under “baseline” in Table 2. Results can be seen in Figure 5a, with discussion in the main text. # M.2 Sample Quality — Staleness In an ideal world, each piece of data in the optimizer would be perfectly on-policy (to obtain unbiased gradients), would be used exactly once and then thrown out (to avoid overfitting), would be from a completely different episode than every other piece of data (to eliminate correlations), and more. Because of our enormous batch size and small learning rate, we hypothesized that loosening the above constraints would not be a large price to pay in exchange for the benefits of asynchronous processing. However, we actually learned that issues like this surrounding data quality can be quite significant. In this and next section we will focus on two of these issues, which we call staleness and sample reuse. Early on in the development of our agent we would play the whole game of Dota 2 using single set of parameters, then send this huge package of sample data to optimizers for training. One of the negative effects of this approach was that this would render data stale; the policy parameters which played the start of the game would be an hour old or more, making the gradients estimated from them incorrect. Therefore we have switched to accumulating small amount of training data; 54 1.2 4 +@» 15150 200 + -@- 75125 | -@- T5100 175 1.0 150 0.8 4 = 125 2 a 3 06 4 $s © 100 & Queue length 0 (b) 75 4 — Queue length 1 04 4 — Queue length 2 50 4 —— Queue length 4 02 4 —— Queue length 8 . 25 4 —— Queue length 16 —— Queue length 32 0.0 4 i} T T T T T T T T T T Ok 2k 4k 6k 8k 10k 2 4 8 16 32 Parameter versions Measured staleness (log) Figure 22: Effect of Staleness on training speed. (Replicated from main text Figure 5b) TrueSkill over the course of training (see Appendix J) and speedup measured by the rate to at- tain different TrueSkill thresholds (computed using Equation 2) granted by increasing Staleness. Increasing staleness of data causes significant losses in training speed. sending it over to optimizers and updating agent parameters; then continuing with the same game. In order to generate rollouts with a certain version of the parameters, a long round-trip has to happen (see Figure 2). 
This new set of parameters is published to the controller, then independently pulled by forward pass machines, which only then will start using this version of parameters to perform forward-passes of our agent. Then some amount of gameplay must be rolled forward and after that the data is finally sent to the optimizers. In the meanwhile, the optimizers have been running on previously-collected data and advanced by some number of new gradient descent steps. In our setup where rollouts send about 30 seconds of gameplay in each chunk, this loop takes 1-2 minutes. Because our learning rate is small, and this is only a few minutes on the scale of a multi- week learning endeavor, one might expect this to be a minor concern — but to the contrary, we observe that this it can be a crucial detail. In this study we artificially introducing additional delay to see the effect. This is implemented on the rollout workers; instead of sending their data immediately back to the optimizers, they now put it in a queue, and pop data off the end of it to send to the optimizers. Thus the length of the queue determines the amount of artificial staleness introduced. See Figure 23; we observe the desired increase in measured staleness with the length of the queue. The results can be found in the main text in Figure 5b, and are reproduced in Figure 22. Staleness negatively affects speed of training, and the drop can be quite severe when the staleness is larger than a few versions. For this reason we attempt to keep staleness as low as possible in our experiments. 55 40 4 304 204 Measured staleness 104 i) 10 20 30 40 Queue length Figure 23: Adding a queue that buffers rollout data on the way to optimizers increases measured staleness in a predictable manner. Error bars indicate the standard deviation of measured staleness as it varied over the course of training due to distributed systems fluctuations. # M.3 Sample Quality — Sampling and Sample Reuse Our asynchronous training system reuses samples in multiple optimization steps. Each optimizer’s experience buffer is constantly asynchronously collecting data from rollout machines. At each op- timization step, a batch of data is sampled from this buffer. The buffer is configured to hold 4096 samples. Our optimizers compute the average sample reuse as the ratio between data arrival and consumption rates: Sample Reuse ≡ (samples per batch) × (batches per second) (experience buffer intake samples per second) (13) Sample reuse is a function of the round trip time between rollout machines and optimizers, the ratio of rollout machines to optimizers, and other factors, and thus we only approximately hit target values but do not set them exactly. We measure the effect of sample reuse by varying the rate of In practice, the rate of data production from each rollout incoming samples to the optimizers. worker stays relatively stable, so we vary this rate by changing the number of rollout CPU workers and forward pass GPUs while keeping the number of optimizers and everything else fixed. Our baseline experiment is tuned to have a sample reuse of approximately 1. To measure the effect of sample reuse we reduced the number of rollouts by 2, 4, and 8x to induce higher sample reuse. Additionally we also doubled the number of rollouts for one experiment to investigate the regime where sample reuse is lower than 1. These adjustments yielded sample reuse measurements between 0.57 and 6.3 (see Figure 25). 
It is important to highlight that adjusting the number of rollouts directly affects the number of simultaneous games being played, which affects the diversity of games that are used for training. The results can be found in the main text in Figure 5c, and are reproduced in Figure 24. We found that increasing sample reuse causes a significant decrease in performance. As long as the optimizers are reusing data, adding additional rollout workers appears to be a relatively cheap way to accelerate training. CPUs are often easier and cheaper to scale up than GPUs and this can be a significant performance boost in some setups. 56 -@: 78125 200 4 -@+ T5100 175 150 5 125 wn o 2 100 fa 5 4 _ —— Sample Reuse 0.5 50 4 — Sample Reuse 1 (b) —— Sample Reuse 2 25 4 —— Sample Reuse 4 —— Sample Reuse 8 0 T T T Ok 2k 4k 6k 0.5 1.0 2.0 4.0 8.0 Parameter versions Measured sample reuse (log) Figure 24: Effect of Sample Reuse on training speed. (Replicated from main text Figure 5c) TrueSkill over the course of training (see Appendix J) and speedup measured by the rate to attain different TrueSkill thresholds (computed using Equation 2) granted by increasing Sample Reuse. Increasing sample reuse causes significant slowdowns. In fact, the run with 1/8th as many rollout workers (sample reuse around 6.3), seems to have converged to less than 75 TrueSkill. Measured sample reuse N Target sample reuse Figure 25: As our target sample reuse increases measured sample reuse increases predictably. Error bars indicate the standard deviation of measured sample reuse as it varied over the course of training. 57 200 200 + 175 175 4 150 150 4 125 B 125 4 wn wn o 100 > 100 4 FE 75 75 4 50 50 + 25 4 — Baseline 25 4 — Baseline ——— Synchronous ——— Synchronous 0 T T T T 0 T T T T 0 2 4 6 8 10 0 1000 2000 3000 4000 Wall time (days) Iterations # BE # o 2 FE Figure 26: Asynchronous training: Plots of TrueSkill over the course of training for a “base- line” experiment together with a “synchronous” run using only on-policy data (staleness = 0) and restricting each sample to be used at most once (max sample reuse = 1). On the left, the x-axis is wall time. On the right, the x-axis is iterations. Asynchronous training is nearly 3x faster at achieving TrueSkill 150 when measuring by wall time, even though the two runs perform similarly as a function of the number of iterations. The fact that our algorithms benefit from extremely low sample reuse underlines how sample inefficient they are. Ideally, our training methods could take a small amount of experience and use that to learn a great deal, but currently we cannot even usefully optimize over that experience for more than a couple of gradient steps. Learning to use rollout data more efficiently is one of the major areas for future work in RL research. This investigation suggests that sample reuse below one can be beneficial. This experiment out performed all others after around iteration 5,000, including the experiment with sample reuse 1. The improvement over sample reuse 1 is minor compared to the gaps between more severe sample reuses, but it is significant. Intuitively one might expect that using each sample exactly once would be the most optimal, as no data would get wasted and no data would get used twice; collecting more data and then not optimizing over it would not help. However, the sample reuse is measured as an average rate of data production to consumption (Equation 13). 
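The sketch below makes the bookkeeping behind Equation (13) concrete: an optimizer-side buffer that ingests rollout samples asynchronously, draws random batches, and reports measured sample reuse as consumed over ingested samples, with counts standing in for the two rates. The class is illustrative; details such as the eviction policy and thread safety are simplified or omitted.

```python
import random
from collections import deque

class ExperienceBuffer:
    """Illustrative optimizer-side experience buffer (capacity 4096)."""

    def __init__(self, capacity=4096):
        self.buffer = deque(maxlen=capacity)   # oldest samples are evicted
        self.ingested = 0
        self.consumed = 0

    def add(self, samples):
        """Called asynchronously as data arrives from rollout workers."""
        self.buffer.extend(samples)
        self.ingested += len(samples)

    def sample_batch(self, batch_size):
        """Draw a random batch (assumes the buffer already holds at least
        batch_size samples). Because batches are drawn at random, some
        samples end up used several times and others not at all, even when
        the average reuse is 1."""
        batch = random.sample(list(self.buffer), batch_size)
        self.consumed += batch_size
        return batch

    def sample_reuse(self):
        """Equation (13), expressed with counts instead of rates."""
        return self.consumed / max(self.ingested, 1)
```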
Because the optimizers sample each batch randomly from the buffer, sample reuse 1 just means that on average each sample is used once, but in fact many samples are used twice, and some not used at all. For this reason producing twice as much data as we can consume still reduces the number of samples which get selected multiple times. Of course the magnitude of improvement is relatively small and the cost (doubling the number of rollout workers and forward pass GPUs) is significant. Doubling the number of rollout workers may also decrease correlation across samples; using two adjacent samples from the same game (when very little has changed between them) may have similar drawbacks to using the same sample twice. 58 # N Self-play OpenAI Five is trained without any human gameplay data through a self-improvement process named self-play. This technique was successfully used in prior work to obtain super human per- formance in a variety of multiplayer games including Backgammon, Go, Chess, Hex, StarCraft 2, Poker [1, 4, 7, 37–39]. In self-play training, we continually pit the current best version of an agent against itself or older versions, and optimize for new strategies that can defeat these past and present opponents. In training OpenAI Five 80% of the games are played against the latest set of parameters, and 20% play against past versions. We play occasionally against past parameter versions in order to obtain more robust strategies and avoid strategy collapse in which the agent forgets how to play against a wide variety of opponents because it only requires a narrow set of strategies to defeat its immediate past version (see Balduzzi et al. [60] for a discussion of cyclic strategies in games with simultaneous-turns and/or imperfect information). OpenAI Five uses a dynamic sampling system in which each past opponent i = 1..N is given a quality score qi. Opponent agents are sampled according to a softmax distribution; agent i is chosen with probability pi proportional to eqi. Every 10 iterations we add the current agent to past opponent pool and initialize its quality score to the maximum of the existing qualities. After each rollout game is completed, if the past opponent defeats the current agent, no update is applied. If the current agent defeats a past opponent, an update is applied proportional to a learning rate constant η (which we fix at 0.01): qi ← qi − η N pi (14) In Figure 27 we see the opponent distribution at several points in early training. The spread of the distribution gives a good picture of how quickly the agent is improving: when the agent is improving rapidly, then older opponents are worthless to play against and have very low scores; when progress is slower the agent plays against a wide variety of past opponents. # O Exploration Exploration is a well-known and well-researched problem in the context of reinforcement learning. We encourage exploration in two different ways: by shaping the loss (entropy and team spirit) and by randomizing the training environment. # O.1 Loss function Per [14], we use entropy bonus to encourage exploration. This bonus is added to the PPO loss function in the form of cS[πθ](st), where c is a hyperparameter referred to as entropy coefficient. In initial stages of training a long-running experiment like OpenAI Five or Rerun we set it to an initial value and lower it during training. Similarly to [14], [16], or [61], we find that using entropy bonus prevents premature convergence to suboptimal policy. 
In Figure 28, we see that an entropy bonus of 0.01 (our default) performs best. We also find that setting it to 0 in early training, while not optimal, does not completely prevent learning.

As discussed in Appendix G, we introduced a hyperparameter team spirit to control whether agents optimize for their individual reward or the shared reward of the team. Early training and speedup curves for team spirit can be seen in Figure 29.

Figure 27: Opponent Manager Distribution over past versions. As the performance of the agent improves, the distribution over past versions changes to find stronger contenders. The slope in the distribution reflects how fast the current agent is outpacing previous versions: a slow falloff indicates that the agent is still challenged by far older versions, while a steep falloff is evidence that counter-strategies have been found that eliminate past agents. In later versions the opponent distribution includes many more past versions, suggesting that after a warmup period, skill progression slows.

Figure 28: Entropy in early training: TrueSkill and speedup with varied entropy coefficients. Lower entropy performs worse because the model has a harder time exploring; higher entropy performs much worse because the actions are too random.

Figure 29: Team Spirit in early training: Very early in training (TrueSkill <125) the run with team spirit 0 does best; this can be seen by the speedup for lower TrueSkill being highest at team spirit 0. The maximum speedup quickly moves to 0.5 in the medium TrueSkill regime (150 and 175).

Figure 30: Lane Assignments: "Lane assignments" randomization on vs. off. We see that this randomization actually provided little benefit.

We see evidence that, early in training, lower team spirits do better in this ablation. At the very start team spirit 0 is best, quickly overtaken by team spirit 0.3 and 0.5. We hypothesize that later in training team spirit 1.0 will be best, as it is optimizing the actual reward signal of interest.

# O.2 Environment Randomization

We further encouraged exploration through randomization of the environment, with three simultaneous goals:
1. If a long and very specific series of actions must be taken by the agent in order to randomly stumble on a reward, and any deviation from that sequence will result in negative advantage, then the longer this series, the less likely the agent is to explore this skill thoroughly and learn to use it when necessary.

2. If an environment is highly repetitive, then the agent is more likely to find and stay in a local minimum.

3. In order to be robust to various strategies humans employ, our agents must have encountered a wide variety of situations in training. This parallels the success of domain randomization in transferring policies from simulation to real-world robotics [5].

We randomize many parts of the environment:

• In our rollout games, heroes start with random perturbations around the default starting level, experience, and gold, armor, movement speed, health regeneration, mana regeneration, magic resistance, strength, intellect, and agility.

• Lane Assignments: From a strategic perspective it makes sense for heroes to act in certain areas of the map more than others. Most inter-team skirmishes happen on lanes (3 distinct paths that connect opposing bases). At a certain stage of our work, we noticed that our agents developed a preference to stick together as a group of 5 on a single lane and fight any opponent coming their way. This represents a large local minimum, with higher short-term reward but lower long-term reward as the resources from the other lanes are lost. After that, we introduced lane assignments, which randomly assigned each hero to a subset of lanes, and penalized them with negative reward for leaving those lanes. However, the ablation study in Figure 30 indicates that this may not have been necessary in the end.

• Roshan Health: Roshan is a powerful neutral creature that sits in a specific location on the map and awaits challengers. Early in training our agents were no match for it; later on, they would already have internalized the lesson never to approach this creature. In order to make this task easier to learn, we randomize Roshan's health between zero and the full value, making it easier (sometimes much easier) to kill.

• In each training game, we randomly sample teams from the hero pool. While the hero randomization is necessary for robustness in evaluations against human players (which may use any hero teams), we hypothesize that it may serve as additional exploration encouragement, varying the game and preventing premature convergence. In Appendix P, we see that training with additional heroes causes only a modest slowdown to training despite the extra heroes having new abilities and strategies which interact in complex ways.

• Item Selection: Our item selection is scripted: in an evaluation game, our agents always buy the same set of items for each specific hero. In training we randomize around that, swapping, adding, or removing some items from the build. This way we expose our agents to enemies playing with and using those alternative items, which makes our agents more robust to games against human players. There are shortcomings to this method, e.g. a team with randomly picked items is likely to perform worse, as our standard build is carefully crafted. In the end our agent was able to perform well against humans who choose a wide variety of items.

# P Hero Pool Size

One of the primary limitations of our agent is its inability to play all the heroes in the game.
We compared the progress in early training from training with various numbers of heroes. In all cases, each training game is played using an independent random sampling of five heroes from the pool for each team. To ensure a fair comparison across the runs, evaluation games are played using only the smallest set of heroes. Because the test environment uses only five heroes, the runs which train with fewer heroes are training closer to the test distribution, and thus can be expected to perform better; the question is how much better?

In Figure 31, we see that training with more heroes causes only a modest slowdown. Training with 80 heroes has a speedup factor of approximately 0.8, meaning early training runs 20% slower than with the base 17 heroes. From this we hypothesize that an agent trained on the larger set of heroes using the full compute resources of Rerun would attain a similarly high level of skill with approximately 20% more training time. Of course this experiment only compares the very early stages of training; it could be that the speedup factor becomes worse later in training.

Figure 31: Effect of hero pool size on training speed: TrueSkill over the course of training (see Appendix J) and speedup measured by the rate to attain different TrueSkill thresholds (computed using Equation 2) granted by varying the size of the hero pool. Additional heroes slow down early training only slightly. The severe underperformance of the 5-hero run for the first 4k versions was not investigated in detail. It is likely not due to the hero count but rather to some instability in that particular training run.

Figure 32: Learning Rate during The International: This is what happens when humans under time pressure choose hyperparameters. We believe that in the future automated systems should optimize these hyperparameters instead. After this event, our team began to internally refer to the act of frantically searching over hyperparameters as "designing skyscrapers."

# Q Bloopers

# Q.1 Manually Tuned Hyperparameters

Leading into The International competition in August 2018, we already had a very good agent, but we felt it was likely not yet as good as the very best humans (as indeed turned out to be the case; we lost both games at that event). In the final few days before the games, we sought to explore high-variance options which had a chance of offering a surprising improvement. In the end, however, we believe that human intuition, especially under time pressure, is not the best way to set hyperparameters. See Figure 32 for the history of our learning rate parameter during those few days.

# Q.2 Zero Team Spirit Embedding

One of our team members stumbled upon a very strange phenomenon while debugging a failed surgery. It turned out that replacing a certain set of 128 learned parameters in the model with zero increased the model's performance significantly (about 55% winrate after versus before). We believe that the optimizers were unable to find this direction for improvement because although the win rate was higher, the shaped reward (see Table 6) was approximately the same. A random perturbation to the parameters should have an overwhelming probability of making things worse rather than better.
We do not know why zero would be a special value for these parameters. These parameters were certainly an unusual piece of the model. In the early stages of applying team spirit (see Appendix G), we attempted to randomize the team spirit parameter in each game. We had a fixed list of four possible team spirit values; in each rollout game one was chosen at random. The agent was allowed to observe the current team spirit via an embedding table with four entries. We hoped this might encourage exploring games of different styles, some very selfless games and some very selfish games. After training in this way for only a short period, we decided this randomization was not helping, and turned it off. Because our surgery methods do not allow for removing parameters easily, we simply set the team spirit observation to always use a fixed entry in the embedding table. In this way we arrived at a situation where the vector of "global" observations g consisted of the real observations gr, concatenated with 128 dimensions from this fixed embedding E; these extra dimensions were learned parameters which did not depend on the observation state:

g = [gr, E]   (15)

Because this vector is consumed by a fully connected layer W g + B, these extra parameters do not affect the space of functions representable by the neural network. They are exactly equivalent to not including E in the global observations and instead using a modified bias vector:

B′ = W [0, E] + B   (16)

For this reason we were comfortable leaving this vestigial part of the network in place. Because it was an embedding, there should be nothing special about 0 in the 128-dimensional space of possible values of E. However, we see clear evidence that zero is special, because a generic perturbation to the parameters should have a negative effect. Indeed, we tried this explicitly — perturbing these parameters in other random directions — and the effect was always negative except for the particular direction of moving towards zero.

# Q.3 Learning path dependency

The initial training of OpenAI Five was done as a single consecutive experiment over multiple months. During that time new items, observations, heroes, and neural network components were added. The order of introduction of these changes was a priori not critical to learning; however, when reproducing this result in Rerun with all the final observations, heroes, and items included, we found that one item — Divine Rapier — could cause the agents to enter a negative feedback loop that reduced their skill versus reference opponents. As Rapier began to be used, we noted a decrease in episodic reward and TrueSkill. When repeating this experiment with Rapiers banned from the items available for purchase, TrueSkill continued to improve. We hypothesize that this effect was not observed during our initial long-lasting training because Rapier was only added after the team spirit hyperparameter was raised to 1 (see Appendix G). Rapier is a unique item which does not stay in your inventory when you die, but instead falls to the ground and can be picked up by enemies or allies. Because of this ability to transfer a high-value item, it is possible that the reward collected in a game increases in variance, thereby preventing OpenAI Five from learning a reliable value function.
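Returning briefly to the equivalence claimed in Equations 15 and 16 above, it is easy to verify numerically. The snippet below is our own illustration with made-up layer sizes (it is not code from the actual model); it checks that concatenating a fixed, observation-independent embedding E to the global observations gives the same layer output as dropping E and folding W [0, E] into the bias.

```python
# Numerical check (illustration only) of the bias equivalence in Equations 15-16.
import numpy as np

rng = np.random.default_rng(0)
obs_dim, emb_dim, out_dim = 32, 128, 64   # made-up sizes for the example

gr = rng.normal(size=obs_dim)             # real observations
E = rng.normal(size=emb_dim)              # fixed embedding entry, independent of gr
W = rng.normal(size=(out_dim, obs_dim + emb_dim))
B = rng.normal(size=out_dim)

# Original formulation: g = [gr, E], layer output = W g + B
g = np.concatenate([gr, E])
out_original = W @ g + B

# Equivalent formulation: drop E from the input and absorb it into the bias,
# B' = W [0, E] + B, so the output becomes W [gr, 0] + B'
B_prime = W @ np.concatenate([np.zeros(obs_dim), E]) + B
out_equivalent = W @ np.concatenate([gr, np.zeros(emb_dim)]) + B_prime

print(np.allclose(out_original, out_equivalent))  # True
```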
{ "id": "1807.01281" }
1912.06166
ABOUT ML: Annotation and Benchmarking on Understanding and Transparency of Machine Learning Lifecycles
We present the "Annotation and Benchmarking on Understanding and Transparency of Machine Learning Lifecycles" (ABOUT ML) project as an initiative to operationalize ML transparency and work towards a standard ML documentation practice. We make the case for the project's relevance and effectiveness in consolidating disparate efforts across a variety of stakeholders, as well as bringing in the perspectives of currently missing voices that will be valuable in shaping future conversations. We describe the details of the initiative and the gaps we hope this project will help address.
http://arxiv.org/pdf/1912.06166
Inioluwa Deborah Raji, Jingying Yang
cs.CY, stat.ML
Presented at Human-Centric Machine Learning workshop at Neural Information Processing Systems conference 2019; equal contribution from authors, Jingying Yang is the current program lead for the ABOUT ML project at Partnership on AI, more details can be found about the project at https://www.partnershiponai.org/about-ml/
null
cs.CY
20191212
20200108
# ABOUT ML: Annotation and Benchmarking on Understanding and Transparency of Machine Learning Lifecycles

# Deborah I. Raji ∗ Partnership on AI San Francisco, CA [email protected]

# Jingying Yang ∗ Partnership on AI San Francisco, CA [email protected]

∗ Equal contribution.

# Abstract

We present the "Annotation and Benchmarking on Understanding and Transparency of Machine Learning Lifecycles" (ABOUT ML) project as an initiative to operationalize ML transparency and work towards a standard ML documentation practice. We make the case for the project's relevance and effectiveness in consolidating disparate efforts across a variety of stakeholders, as well as bringing in the perspectives of currently missing voices that will be valuable in shaping future conversations. We describe the details of the initiative and the gaps we hope this project will help address.

33rd Conference on Neural Information Processing Systems (NeurIPS 2019), Vancouver, Canada.

# Introduction

When AI is deployed within human systems, complications can arise, often leading to unanticipated and undesirable consequences [14]. At times, the low compatibility of an AI system to the human context arises from a simple lack of communication. If certain details are not made explicit between system developers and impacted stakeholders, then the system can be unknowingly misused and its results become misinterpreted, untrustworthy and difficult to hold accountable [15, 16]. It is for this reason that transparency is emerging as a major priority for organizations around the world [9]. Laws like the Fair Credit Reporting Act (FCRA), the Equal Credit Opportunity Act (ECOA), and the General Data Protection Regulation (GDPR), government procurement processes like Canada's Algorithmic Impact Assessment, and engineering practice at IBM [6], Microsoft [4] and Google [11] in particular highlight the shift towards documentation-based approaches to transparency as a practical mechanism to achieving more trustworthy machine learning deployments. Information-based approaches involve a recorded exchange between the system developer and users in order to reach a shared understanding of the known details of the system and its intended function [16]. As a far more accessible, inexpensive and simple solution to ethical AI deployment challenges than the often more complex and abstract fairness interventions available, transparency through documentation is a promising practical intervention that can integrate into existing workflows to provide clarity in decision making for users, external auditors, procurement departments, and other stakeholders alike.

In this paper, we go over the details of a practical transparency approach to making ML systems more human-compatible. We discuss the evidence of the acknowledged importance of transparency as a value to industry and government stakeholders and then summarize the details of the approach of transparency through documentation. We then present the "Annotation and Benchmarking on Understanding and Transparency of Machine Learning Lifecycles" (ABOUT ML) project as a future resource and standard to operationalize this principle and consolidate efforts across stakeholders. Finally, we acknowledge the ongoing challenges as we move forward with the project.
# 2 Demand for Transparency in ML Systems In machine learning, models can encode complex and unintuitive relationships between inputs and outputs, making it challenging for human operators to naturally infer the details of what guided the process leading to a particular outcome. As a result, many organizations include transparency as a core value in their AI principle statements. Of 50 AI principle statements documented through the Linking AI Principles (LAIP) project, 94% (47) explicitly mention transparency[19]. Similarly, 87% and 88% of principle statements surveyed in two other concurrent studies reference “transparency” [9]. In fact, transparency is often highlighted as the most frequently occurring principle in these survey studies, and has been named "the most prevalent principle in the current literature” [9]. That being said, the intricacy and difficulty of translating the high-level ethical ideal of transparency into concrete engineering processes and requirements has been repeatedly referenced as a major challenge. Although study after study confirms that meaningful progress cannot be made until ethical ideals are operationalized [17, 12, 5], the inconsistency with which high level principles such as transparency are interpreted across different contexts, organizations and even teams makes it difficult to design consistent practical interventions. This lack of practical theory serves as a roadblock to facilitating outside auditing from interested parties looking to hold AI system developers accountable, and can impede or slow down the responsible deployment of these models [5, 14]. # 3 Transparency Through Documentation One simple and accessible approach to increasing transparency in ML lifecycles is through an improvement in both internal and external documentation norms and processes. This is in fact among the most common of reported fairness interventions implemented in industry and government [8]. For an increasingly concerned public or auditing organization, thorough, externally distributed documentation on ML systems is essential to earning and maintaining trust, and minimizing the misuse of these systems. External documentation and reporting standards can also assist practitioners in making the case within their organizations to allocate the necessary resources to more thoroughly incorporate ethics into their AI projects. Internal documentation is also vital, serving to improve communication between collaborating teams. Internal documents build employee trust by outlining the nature of an individual or team’s contribution to an overall system, giving opportunity for ethical objections and a more meaningful understanding of the impact of their personal participation in the creation of an end product. Beyond the artifact, however, the process of documentation itself is inherently valuable, as it prompts critical thinking about the ethical implications at every step in the ML lifecycle and encourages adherence to the set of steps required to understand and report a complete picture of system capabilities, limitations, and risks. Documentation for transparency is thus both an artifact (in this case, a document with details about the ML system, similar to a nutrition label on food) and a process (in this case, a series of steps people follow in order to create the document). 
Both of these interpretations are at the core of the initial effort of the ABOUT ML initiative, which focuses on developing documentation to clarify the details of specific ML systems, for the sake of improving the transparency of that system. # 4 Research Themes on Documentation for Transparency In order to define the characteristics and intended uses of the system, there are well-researched sets of documentation questions already available, through various disparate research efforts. These past research efforts differ greatly from one another and are often specific to a particular domain such as Natural Language Processing [2], or geared towards a specific element of the system, such as the dataset [4]. As these documentation templates are often modeled on those used in other industries, such as safety data sheets from the electronics industry [4] or nutrition labels from the food industry [7], the suggested templates vary widely in length and appearance, ranging from a single concise page of succinct statements [11] or a set of symbols and visualizations [7, 10] to upwards of 10 pages of detailed prose and graphs [4]. Whether the documentation is meant for internal or external 2 consumption also impacts length and contents, as internal documentation can be more detailed and thus longer. There is thus currently great variability in the past research attempts to inform the documentation questions and format in machine learning development, and a need for consolidated guidance for the community with respect to documentation best practices. Despite these differences, there certain themes from past work to pay attention to. A common focus across templates, even outside of data-specific work, is on clarifying the details of data provenance, for both the training and testing data used in ML development [4, 2, 11]. Documentation questions across papers consistently address the risks that arise at various stages of data creation and distribution, with the goal of encouraging practitioners to reflect on ethical concerns at every stage including data use and release, and some templates placing additional focus on specific risks like privacy [10]. Another recurring theme in the related work is on the importance of clarifying the model’s intended use and objectives. A major stated goal for these templates is to allow the team to articulate initial objectives, so they can refer back to these goals to ensure ongoing consistency with their declared intentions. Model- and system-level documentation efforts that emerged from earlier work on data documentation, in particular introduce questions more specific to the definition of overall operational objectives and design decisions within a broader system [11, 6]. Some work has gone further to suggest a legal advantage to declaring intended use cases and ethical concerns, as it can provide grounds for legally restricting third party misuse [3]. Also, even with a broad range of contributors to the current body of work, certain voices are consistently missing. For instance, the perspective of civil rights organizations and government repre- sentatives is often not included in this work, despite their very specific expectations for transparent systems and the important role of documentation in system auditing. There is also no record of what those impacted by the deployed ML systems would like to see reported in documentation. 
Getting that feedback requires formalizing the inclusion of the perspectives of those most affected by the ML system, especially people from traditionally marginalized and underrepresented communities. As most corporate perspectives have been produced by large multi-national technology companies, it is also important to diversify our understanding of what industry-appropriate documentation practice looks like, and how the process should accommodate less resourced engineering workflows. # 5 ABOUT ML The ABOUT ML project2 is an iterative, multistakeholder process to collaboratively create best practices via input from diverse perspectives, with the goal of translating those best practices into industry norms. Although just completing its primary stages, the ABOUT ML project is a promising model for standardizing the operationalization of other common AI ethics principles, and prioritizing inclusion throughout the process. By catalyzing cross-organizational collaboration, the goal is to translate ethical ideals such as transparency into research-backed tools for a variety of stakeholders. Organizations have already begun to implement documentation recommendations from research publications, and such work is beginning to influence documentation requirements in regulation and engineering practice [8]. However, there is no consensus on which practices work best and still a lack of understanding of which basic information needs to be disclosed in an ML system. In fact, the definition of transparency itself is highly contextual. As a result, there is currently no standardized process for the documentation of machine learning systems. Each team that wants to apply the research summarized above to improve transparency in their ML systems via documentation must address the entire suite of questions about what transparency means for their team, product, and organization given their specific goals and constraints, with little formal guidance. The goal of ABOUT ML is thus to consolidate past efforts and condense that work into meaningful guidelines and templates to support documentation practice in machine learning. The process is modeled after iterative ongoing processes to design internet standards (such as W3C, IETF, and WHATWG) and includes a public forum for discussion and a place to submit any proposed changes. The resulting template recommendations can serve as a head start to those looking to implement these strategies. Rather than a rigid list of requirements, ABOUT ML will offer a summary of recommendations and practices that is mindful of the variance in transparency expectations, in order to guide teams to identify and address their context-specific challenges. # 2More information on the project available at: https://www.partnershiponai.org/about-ml/ 3 This project also serves as a method to accelerate academic progress on the topic by pooling insights more quickly, sharing resources, and reducing the redundancy of efforts. The success and quality of the eventual ABOUT ML output depends on engagement and buy-in from a wide range of relevant stakeholders. The hosting organization is committed to investing in the resources necessary to seek out other groups undertaking transparency initiatives and incorporate their lessons into ABOUT ML recommendations. 
With ABOUT ML, we aim to create a platform for teams and individuals to discuss and share experiences alongside researchers, civil society organizations, advocacy groups, users, and other people impacted by AI technology by creating and maintaining an online forum and a concurrent public comment process. In order to make this process as inclusive and robust as possible, we have designed a standardization process with two key design elements: a Steering Committee and the Diverse Voices methodology. Keeping up with the latest developments in research and practice, we recruited 30 experts, researchers and practitioners from a diverse set of partner organizations to serve on the ABOUT ML Steering Committee. This committee will guide the updating of ABOUT ML drafts based on submitted public comments, new research developments in the field and advances in reported practices. They will approve new releases by “rough consensus,” which is commonly used by other multi-stakeholder working groups [13]. The steering committee is representative of organizations from a broad set of perspectives, including civil society organizations, non-profits, large and small corporations as well as academic institutions. To ensure that diverse perspectives — especially those from communities historically excluded from technology decision-making — contribute to any ABOUT ML recommendations, we are engaging with the Tech Policy Lab at the University of Washington to conduct Diverse Voices panels for the ABOUT ML project [18]. This methodology was designed to gather feedback from stakeholders who are impacted by a technology policy but whose perspectives might not otherwise be consulted in its formation. Thus, for each iteration of the ABOUT ML template, this Diverse Voices panel feedback will inform the final edits incorporated before release, providing an alternate perspective to the Steering Committee input. # 6 Current Ongoing Challenges & Gaps When attempting to implement the recommended documentation guidelines, a number of common challenges arise. For instance, a deeper study of current institutional structures and missing enablers of transparency interventions would serve as an excellent foundation for future pilot and implemen- tation phases of ABOUT ML. Additional gaps for further work include curating a consolidated set of documentation questions, defining a shared understanding of what is necessary to consider an ML system transparent, and agreeing on an equitable process to empower more stakeholders to have input on what goes into documentation. There are also inherent known limitations to transparency as a mechanism for trust [1], as well as information security, intellectual property and customization concerns to accommodate when designing for safe system disclosures. Despite these challenges, the ABOUT ML project is an important first step in bringing together disparate and at times even eco- nomically competitive groups together to move the industry towards more transparent and compatible ML systems. # 7 Conclusion For ML systems to preserve privacy, ensure fairness, and reduce bias they must first be developed and deployed within a framework that provides accountability. The industry attention on the AI principle of transparency provides an opportunity to finally take action towards creating more human- compatible ML systems. The ABOUT ML project hopes to channel this enthusiasm into practical results by lowering the barrier to integrating documentation processes into any team and workflow. 
Although decentralized experimentation has begun on documentation as a transparency intervention, there is a lack of adequate guidance to implement or make sense of the available recommendations, particularly across a diversity of contexts and interests. ABOUT ML can hopefully evolve into a framework to offer proper support for the attempted real-world implementations of this approach, and thus support the increased transparency of deployed ML systems overall.

# 8 Appendix

Below is an overview of the ABOUT ML process to standardize ML documentation for transparency. The project is currently on schedule. Release version 0 is in the Steering Committee and Diverse Voices process, after undergoing a public comment period. Release version 1 is expected at the beginning of 2020, and will include a more formal set of recommendations. More details can be found at the following web address: https://www.partnershiponai.org/about-ml/ We encourage anyone interested in participating in the project to get in touch for further updates.

[Figure 1 (diagram): the ABOUT ML process. Within each phase, the steps of collecting public comments, Steering Committee proposals of revisions and approval of updated drafts (milestone releases only), and a Diverse Voices process are repeated iteratively; later phases involve running pilot tests with PAI Partner organizations and elevating best practices based on evidence.]

Figure 1: Overview of ABOUT ML project's lifecycle.

[Figure 2 (timeline): ABOUT ML Version 0 draft released for public comment and Steering Committee announced (July 31, 2019); first Steering Committee meeting (September 2019); successive drafts with input from Diverse Voices panels, public comments and the Steering Committee (future); public comment periods from August 2019 onward and into 2020, with a Diverse Voices process from October to December 2019.]

Figure 2: Current timeline for ABOUT ML.

# References

[1] Mike Ananny and Kate Crawford. Seeing without knowing: Limitations of the transparency ideal and its application to algorithmic accountability. New Media & Society, 20(3):973–989, 2018.

[2] Emily M Bender and Batya Friedman. Data statements for natural language processing: Toward mitigating system bias and enabling better science. Transactions of the Association for Computational Linguistics, 6:587–604, 2018.

[3] Misha Benjamin, Paul Gagnon, Negar Rostamzadeh, Chris Pal, Yoshua Bengio, and Alex Shee. Towards standardization of data licenses: The Montreal data license. arXiv preprint arXiv:1903.12262, 2019.

[4] Timnit Gebru, Jamie Morgenstern, Briana Vecchione, Jennifer Wortman Vaughan, Hanna Wallach, Hal Daumé III, and Kate Crawford. Datasheets for datasets. arXiv preprint arXiv:1803.09010, 2018.

[5] Daniel Greene, Anna Lauren Hoffmann, and Luke Stark. Better, nicer, clearer, fairer: A critical assessment of the movement for ethical artificial intelligence and machine learning. In Proceedings of the 52nd Hawaii International Conference on System Sciences, 2019.

[6] Michael Hind, Sameep Mehta, Aleksandra Mojsilovic, Ravi Nair, Karthikeyan Natesan Ramamurthy, Alexandra Olteanu, and Kush R Varshney. Increasing trust in AI services through supplier's declarations of conformity. arXiv preprint arXiv:1808.07261, 2018.

[7] Sarah Holland, Ahmed Hosny, Sarah Newman, Joshua Joseph, and Kasia Chmielinski.
The dataset nutrition label: A framework to drive higher data quality standards. arXiv preprint arXiv:1805.03677, 2018. [8] Kenneth Holstein, Jennifer Wortman Vaughan, Hal Daumé III, Miro Dudik, and Hanna Wallach. Improving fairness in machine learning systems: What do industry practitioners need? In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, page 600. ACM, 2019. [9] Anna Jobin, Marcello Ienca, and Effy Vayena. Artificial intelligence: the global landscape of ethics guidelines. arXiv preprint arXiv:1906.11668, 2019. [10] Patrick Gage Kelley, Joanna Bresee, Lorrie Faith Cranor, and Robert W Reeder. A nutrition label for privacy. In Proceedings of the 5th Symposium on Usable Privacy and Security, page 4. ACM, 2009. [11] Margaret Mitchell, Simone Wu, Andrew Zaldivar, Parker Barnes, Lucy Vasserman, Ben Hutchin- son, Elena Spitzer, Inioluwa Deborah Raji, and Timnit Gebru. Model cards for model reporting. In Proceedings of the Conference on Fairness, Accountability, and Transparency, pages 220–229. ACM, 2019. [12] Brent Mittelstadt. Ai ethics–too principled to fail? Available at SSRN 3391293, 2019. [13] Pete Resnick. On consensus and humming in the ietf, 2014. [14] Andrew D Selbst, Danah Boyd, Sorelle A Friedler, Suresh Venkatasubramanian, and Janet Vertesi. Fairness and abstraction in sociotechnical systems. In Proceedings of the Conference on Fairness, Accountability, and Transparency, pages 59–68. ACM, 2019. [15] Andrea L Thomaz and Cynthia Breazeal. Transparency and socially guided machine learning. In 5th Intl. Conf. on Development and Learning (ICDL), 2006. [16] Michael Veale. Logics and practices of transparency and opacity in real-world applications of public sector machine learning. arXiv preprint arXiv:1706.09249, 2017. [17] Jess Whittlestone, Rune Nyrup, Anna Alexandrova, and Stephen Cave. The role and limits of principles in ai ethics: towards a focus on tensions. In Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society, pages 195–200. ACM, 2019. [18] Meg Young, Lassana Magassa, and Batya Friedman. Toward inclusive tech policy design: a method for underrepresented voices to strengthen tech policy documents. Ethics and Information Technology, 21(2):89–103, 2019. [19] Yi Zeng, Enmeng Lu, and Cunqing Huangfu. Linking artificial intelligence principles. arXiv preprint arXiv:1812.04814, 2018. 6
{ "id": "1805.03677" }
1912.05877
Extending Machine Language Models toward Human-Level Language Understanding
Language is crucial for human intelligence, but what exactly is its role? We take language to be a part of a system for understanding and communicating about situations. The human ability to understand and communicate about situations emerges gradually from experience and depends on domain-general principles of biological neural networks: connection-based learning, distributed representation, and context-sensitive, mutual constraint satisfaction-based processing. Current artificial language processing systems rely on the same domain general principles, embodied in artificial neural networks. Indeed, recent progress in this field depends on \emph{query-based attention}, which extends the ability of these systems to exploit context and has contributed to remarkable breakthroughs. Nevertheless, most current models focus exclusively on language-internal tasks, limiting their ability to perform tasks that depend on understanding situations. These systems also lack memory for the contents of prior situations outside of a fixed contextual span. We describe the organization of the brain's distributed understanding system, which includes a fast learning system that addresses the memory problem. We sketch a framework for future models of understanding drawing equally on cognitive neuroscience and artificial intelligence and exploiting query-based attention. We highlight relevant current directions and consider further developments needed to fully capture human-level language understanding in a computational system.
http://arxiv.org/pdf/1912.05877
James L. McClelland, Felix Hill, Maja Rudolph, Jason Baldridge, Hinrich Schütze
cs.CL, cs.AI
null
null
cs.CL
20191212
20200704
0 2 0 2 l u J 4 ] L C . s c [ 2 v 7 7 8 5 0 . 2 1 9 1 : v i X r a # Extending Machine Language Models toward Human-Level Language Understanding # James L. McClellanda,b,2, Felix Hillb,2, Maja Rudolphc,2, Jason Baldridged,1,2, and Hinrich Schützee,1,2 aStanford University, Stanford, CA 94305, USA; bDeepMind, London N1C 4AG, UK; cBosch Center for Artificial Intelligence, Renningen, 71272, Germany; dGoogle Research, Austin, TX 78701, USA; eLMU Munich, Munich, 80538, Germany This manuscript was compiled on January 7, 2022 Language is crucial for human intelligence, but what exactly is its role? We take language to be a part of a system for understand- ing and communicating about situations. The human ability to un- derstand and communicate about situations emerges gradually from experience and depends on domain-general principles of biological neural networks: connection-based learning, distributed represen- tation, and context-sensitive, mutual constraint satisfaction-based processing. Current artificial language processing systems rely on the same domain general principles, embodied in artificial neural net- works. Indeed, recent progress in this field depends on query-based attention, which extends the ability of these systems to exploit con- text and has contributed to remarkable breakthroughs. Nevertheless, most current models focus exclusively on language-internal tasks, limiting their ability to perform tasks that depend on understanding situations. These systems also lack memory for the contents of prior situations outside of a fixed contextual span. We describe the orga- nization of the brain’s distributed understanding system, which in- cludes a fast learning system that addresses the memory problem. We sketch a framework for future models of understanding draw- ing equally on cognitive neuroscience and artificial intelligence and exploiting query-based attention. We highlight relevant current di- rections and consider further developments needed to fully capture human-level language understanding in a computational system. for modeling cognition (2). This work introduced the idea that structure in cognition and language is emergent: it is captured in learned connection weights supporting the construction of context-sensitive representations whose characteristics reflect a gradual, input-statistics dependent, learning process (3). Classical linguistic theory and most computational linguistics employs discrete symbols and explicit rules to characterize language structure and relationships. In neural networks, these symbols are replaced by continuous, multivariate patterns called distributed representations or embeddings and the rules are replace by continuous, multi-valued arrays of connection weights that map patterns to other patterns. Since its introduction (3), debate has raged about this approach to language processing (4). Protagonists argue it supports nuanced, context- and similarity-sensitive processing that is reflected in the quasi-regular relationships between phrases and their sounds, spellings, and meanings (5, 6). These models also capture subtle aspects of human performance in language tasks (7). However, critics note that neural networks often fail to generalize beyond their training data, blaming these failures on the absence of explicit rules (8–10). Natural Language Understanding | Deep Learning | Situation Models | Cognitive Neuroscience | Artificial Intelligence Striking recent advances in machine intelligence have ap- peared in language tasks. 
Machines better transcribe speech and respond in ever more natural sounding voices. Widely available applications allow one to say something in one lan- guage and hear its translation in another. Humans perform better than machines in most language tasks, but these systems work well enough to be used by billions of people everyday. Another key principle is mutual constraint satisfaction (11). For example, interpreting a sentence requires resolving both syntactic and semantic ambiguity. If we hear A boy hit a man with a bat, we tend to assume with a bat attaches to the verb (syntax) and thereby the instrument of hitting (semantics). However, if beard replaces bat, then with a beard is attached to man (syntax) and describes the person affected (semantics) (12). Even segmenting language into elementary units depends on meaning and context (Fig. 1). Rumelhart (11) envisioned a model in which estimates of the probability of all aspects of an input constrain estimates of the probabilities of all others, motivating a model of context effects in perception (13) that launched the PDP approach. What underlies these successes? What limitations do they face? We argue that progress has come from exploiting prin- ciples of neural computation employed by the human brain, while a key limitation is that these systems treat language as if it can stand alone. We propose that language works in con- cert with other inputs to understand and communicate about situations. We describe key aspects of human understanding and key components of the brain’s understanding system. We then propose initial steps toward a model informed both by cognitive neuroscience and artificial intelligence and point to extensions addressing more abstract cases. # Neural Language Modeling Initial steps. Elman (14) introduced a simple recurrent neural network (RNN) (Fig. 2a) that captured key characteristics of language structure through learning, a feat once considered impossible (15). It was trained to predict the next word in a sequence (w(t + 1)) based on the current word (w(t)) and its own hidden (that is, learned internal) representation from the previous time step (h(t−1)). Each of these inputs is multiplied by a matrix of connection weights (arrows labeled Whi and # Principles of Neural Computation JM, FH, MR, JB, and HS wrote the paper. The principles of neural computation are domain general, inspired by the human brain and human abilities. They were first articulated in the 1950s (1) and further developed in the 1980s in the Parallel Distributed Processing (PDP) framework The authors declare no conflict of interest. 1J.B. and H.S. contributed equally. 2 To whom correspondence should be addressed. E-mail: jlmccstanford.edu, [email protected], [email protected], [email protected] or [email protected] 1 Fig. 1. Context influences the identification of letters in written text: the visual input we read as went in the first sentence and event in the second is the same bit of Rumelhart’s handwriting, cut and pasted into each context. Reprinted from (11). Whh in Fig. 2a) and the results are added to produce the input to the hidden units. The elements of this vector pass through a function limiting the range of their values, producing the hidden representation. This in turn is multiplied with weights to the output layer from the hidden layer (Woh) to generate a vector used to predict the probability of each of the possible successor words. 
Learning is based on the discrepancy between the network’s output and the actual next word; the values of the connection weights are adjusted by a small amount to reduce the discrepancy. The network is recurrent because the same connection weights (denoted by arrows in the figure) are used to process each successive word. (a) Elman’s Simple Recurrent (b) Learned Representations Network of Words = —_ Te aon VERBS pte oom h(t) ais LCs" samates ann — NOUNS, Lica FOOD INANIMATES L__-#=oreacamues Fig. 2. (a) Elman’s (1990) simple recurrent network and (b) his hierarchical clustering of the representations it learned, reprinted from (14). relationships of all the words in the corpus, improving gener- alization: task-focused neural models trained on small data sets better generalize to infrequent words (e.g., settee) based on frequent words (e.g. couch) with similar embeddings. A second challenge is the indefinite length of the context that might be relevant for prediction. Consider this passage: Elman showed two things. First, after training his network to predict the next word in sentences like man eats bread, dog chases cat, and girl sleeps, the network’s representations cap- tured the syntactic distinction between nouns and verbs (14). They also captured interpretable subcategories, as shown by a hierarchical clustering of the hidden representations of the dif- ferent words (Fig. 2b). This illustrates a key feature of learned representations: they capture specific as well as general or abstract information. By using a different learned representa- tion for each word, its specific predictive consequences can be exploited. Because representations for words that make similar predictions are similar, and because neural networks exploit similarity, the network can share knowledge about predictions among related words. Second, Elman (16) used both simple sentences like boy chases dogs and more complex ones like boy who sees girls chases dogs. In the latter, the verb chases must agree with the first noun (boy), not the closest noun (girls), since the sentence contains a main clause (boy chases dogs) interrupted by a reduced relative clause (boy [who] sees girls). The model learned to predict the verb form correctly despite the interven- ing clause, showing that it acquired sensitivity to the syntactic structure of language, not just local co-occurrence statistics. Scaling up to natural text. Elman’s task of predicting words based on context has been central to neural language modeling. However, Elman trained his networks with tiny, toy languages. For many years, it seemed they would not scale up, and language modeling was dominated by simple n-gram models and systems designed to assign explicit structural descriptions to sentences, aided by advances in probabilistic computations (17). Over the past 10 years, breakthroughs have allowed networks to predict and fill in words in huge natural language corpora. One challenge is the large size of a natural language’s vocabulary. A key step was the introduction of methods for learning word representations (now called embeddings) from co-occurrence relationships in large text corpora (18, 19). These embeddings exploit both general and specific predictive John put some beer in a cooler and went out with his friends to play volleyball. Soon after he left, someone took the beer out of the cooler. John and his friends were thirsty after the game, and went back to his place for some beers. When John opened the cooler, he discovered that the beer was ___. 
Here a reader expects the missing word to be gone. Yet if we replace took the beer with took the ice, the expected word is warm. Any amount of additional text between beer and gone does not change the predictive relationship, challenging RNNs like Elman’s. An innovation called Long-short-term memory (LSTM) (20) partially addressed this problem by augmenting the recurrent network architecture with learned connection weights that gate information into and out of a network’s internal state. However, LSTMs did not fully alleviate the context bottleneck problem (21): a network’s internal state was still a fixed-length vector, limiting its ability to capture contextual information. Query-based attention. Recent breakthroughs depend on an innovation we call query-based attention (QBA) (21). It was used in the Google Neural Machine Translation system (22), a system that attained a sudden leap in performance and attracted widespread public interest (23). We illustrate QBA in Fig. 3 with the sentence John hit the ball with the bat. Context is required to determine whether bat refers to an animal or a baseball bat. QBA addresses this by issuing queries for relevant information. A query might ask ‘is there an action and relation in the context that would indicate which kind of bat fits best?’ The embeddings of words that match the query then receive high weightings in the weighted attention vector. In our example, the query matches the embedding of hit closely and of with to some extent; the returned attention vector captures the content needed to determine that a baseball bat fits in this context. There are many variants of QBA; the figure presents a simple version. One important QBA model called BERT (25) In BERT, the uses both preceding and following context. McClelland et al.: Extending Machine Language Models toward Human-Level Language Understanding 2 RepVs sims Ws Scaled RepVs Context John words: hit a i: the ball | with the Focal Query Weighted Attention Vector word: Pat Fig. 3. Query-based attention (QBA). To constrain the interpretation of the word bat in the context John hit the ball with the __, a query generated from bat is used to construct a weighted attention vector which shapes the word’s interpretation. The query is compared to each of the learned representation vectors (RepVs) of the context words; this creates a set of similarity scores (Sims) which in turn produce a set of weightings (Ws, a set of positive numbers summing to 1). The Ws are used to scale the RepVs of the context words, creating Scaled RepVs. The weighted attention vector is the element-wise sum of the Scaled RepVs. The Query, RepVs, Sims, Scaled RepVs and weighted attention vector use red color intensity for positive magnitudes and blue for negative magnitudes. Ws are shown as green color intensity. White = 0 throughout. The Query and RepVs were made up for illustration, inspired by (24). Mathematical details: For query q and representation vector vj for context word j, the similarity score sj is cos (q, vj ). The sj are converted into weightings wj by the softmax function, wj =e(gsj )/(Σj0e(gsj0 )), where the sum in the denominator runs over all words in the context span, and g is a scale factor. sentations for subsequent fine tuning to perform other tasks. A recent model called GPT-3 achieves impressive gains on sev- eral benchmarks without requiring fine-tuning (28). 
However, this model still falls short of human performance on tasks that depend on what the authors call "common sense physics" and on carefully crafted tests of their ability to determine if a sentence follows from a preceding text (31). Further, the text corpora these models rely on are far larger than a hu- man learner could process in a lifetime. Gains from further increases may be diminishing, and human learners appear to be far more data-efficient. The authors of GPT-3 note these limitations and express the view that further improvements may require more fundamental changes. # Language in an Integrated Understanding System Where should we look for further progress addressing the limitations of current language models? In concert with others (32), we argue that part of the solution will come from treating language as part of a larger system for understanding and communicating. network is trained to correct missing or randomly replaced words in blocks of text, typically spanning two sentences. BERT relies on multiple attention heads, each employing QBA (26), concatenating the returned weighted vectors to form a composite vector. The process is iterated across several stages, so that contextually constrained representations of each word computed at intermediate stages in turn constrain the representations of every other word in later stages. In this way, BERT employs mutual constraint satisfaction, as Rumelhart (11) envisioned. The contextualized embeddings that result from QBA can capture gradations within the set of meanings a language maps to a given word, aiding translation and other down-stream tasks. For example, in English, we use ball for many types of balls, whereas in French, some are balles and others ballons. Subtly different embeddings for ball in different English sentences aids selecting the correct French translation. Situations. We adopt the perspective that the targets of under- standing are situations. Situations are collections of entities, their properties and relations, and patterns of change in them. A situation can be static (e.g., a cat on a mat). Situations in- clude events (e.g., a boy hitting a ball). Situations can embed within each other; the cat may be on a mat inside a house on a particular street in a particular town, and the same applies to events like baseball games. A situation can be conceptual, social or legal, such as one where a court invalidates a law. A situation may even be imaginary. The entities participating in a situation or event may be real or fictitious physical objects or locations; animals, persons, groups or organizations; beliefs or other states of mind; sets of objects (e.g., all dogs); symbolic objects such as symbols, tokens or words; or even contracts, laws, or theories. Situations can even involve changes in beliefs about relationships among classes of objects (e.g. biologists’ beliefs about the genus a species of trees belongs in). In QBA architectures, the vectors all depend on learned connection weights, and analysis of BERT’s representations shows that they capture syntactic structure well (24). Different attention heads capture different linguistic relationships, and the similarity relationships among the contextually-shaded word representations can be used to reconstruct a sentence’s syntactic structural description. These representations capture this structure without building it in, supporting the emergence principle. 
That said, careful analysis (27) indicates the deep network’s sensitivity to grammatical structure is still imperfect, and only partially understood. Some attention-based models (28) proceed sequentially, predicting each word using QBA over prior context, while BERT uses parallel processing or mutual QBA simultaneously on all the words in a pair of sentences. Humans appear to exploit past context and a limited window of subsequent context (29), suggesting a hybrid strategy. Some machine models adopt this approach (30), and below we adopt a hybrid approach as well. Attention-based models have produced remarkable improve- ments on a wide range of language tasks. The models can be pre-trained on massive text corpora, providing useful repre- What it means for an agent to understand a situation is to construct a representation of it that captures aspects of the participating objects, their properties, relationships and interactions, and resulting outcomes. We emphasize that the understanding should be thought of as a construal or interpre- tation that may be incomplete or inaccurate. The construal will depend on the culture and context of the agent and the agent’s purpose. When other agents are the source of the in- put, the target agent’s construal of the knowledge and purpose of these other agents also play important roles. As such, the construal process must be considered to be completely open ended and to potentially involve interaction between the con- struer and the situation, including exploration of the world and discourse between the agent and participating interlocutors. Within this construal of understanding, we emphasize that language should be seen as a component of an understanding system. This idea is not new, but historically it was not univer- sally accepted. Chomsky, Fodor and others (8, 33, 34) argued that grammatical knowledge sits in a distinct, encapsulated subsystem. Our proposal to focus on language as part of a system representing situations builds on a long tradition in linguistics (35), human cognitive psychology (36), psycholin- guistics (37), philosophy (38) and artificial intelligence (39). McClelland et al.: Extending Machine Language Models toward Human-Level Language Understanding 3 The approach was adopted in an early neural network model (40) and aligns with other current perspectives in cognitive neuroscience (41) and artificial intelligence (32). People construct situation representations. A person processing language constructs a representation of the described situation in real time, using both the stream of words and other available information. Words and their sequencing serve as clues to meaning (42) that jointly constrain the understanding of the situation (40). Consider this passage: John spread jam on some bread. The knife had been dipped in poison. We make many inferences: the jam was spread with the poi- soned knife and poison has been transferred to the bread. If John eats it he may die! Note the entities are objects, not words, and the situation could be conveyed by a silent movie. Evidence that humans construct situation representations from language comes from classic work by Bransford and colleagues (36, 43). 
This work demonstrates that (1) we un- derstand and remember texts better when we can relate the text to a familiar situation; (2) relevant information can come from a picture accompanying the text; (3) what we remember from a text depends on the framing context; (4) we represent objects in memory that were not explicitly mentioned; and (5) after hearing a sentence describing spatial or conceptual rela- tionships, we remember these relationships, not the language itself. For example, given Two frogs rested beside a floating log and a fish swam under it, the situation changes if it is replaced by them. After hearing the original sentence, people reject the variant with it in it as the sentence they heard before, but if the initial sentence said the frogs rested on the log, the situation is unchanged by replacing it with them, and people accept this variant. Cort Integrated System State _ > MTL Neo- cortex “.. the bat hit ...” “.,. the numbat eats termites...” Fig. 4. Sketch of the brain’s understanding system. Ovals in the blue box stand for neocortical brain areas representing different kinds of information. Arrows in the neocortex stand for connections allowing representations to constrain each other. The medial temporal lobe (red box) stores an integrated representation of the neocortical system state arising from an experience. The red arrow represents fast-learning connections that store this pattern for later reactivation and use. Green arrows stand for gradually learned connections supporting bidirectional influence between the MTL and neocortex. (A) and (B) are two example inputs discussed in the main text. while leaving them with the flexibility that has led to their successes to date. Evidence from eye movements shows that people use lin- guistic and non-linguistic input jointly and immediately (44). Just after hearing The man will drink ... participants look at a full wine glass rather than an empty beer glass (45). After hearing The man drank, they look at the empty beer glass. Understanding thus involves constructing, in real time, a representation conveyed jointly by vision and language. The compositionality of situations. An important debate in cognitive science and AI centers on compositionality. Fodor and Pylyshyn (8) argued that our cognitive systems must be compositional by design to allow language to express arbitrary relationships and noted that early neural network models failed tests of compositionality. Such failures are still reported (46) leading some to propose building compositionality in (10); yet, as we have seen, the most successful language models avoid doing so. We suggest that a focus on situations may enhance compositionality because situations are themselves compositional. Suppose a person picks an apple and gives it to someone. A small number of objects and persons are focally involved, and the effects on other persons and objects are likely to be local. A sentence like John picked an apple and gave it to Mary could describe this situation, capturing the most relevant participants and their relationships. We emphasize that compositionality is predominant and approximate, not universal or absolute, so it is best to allow for these matters of degree. Letting situation representations emerge through experience will help our models to achieve greater systematicity, Language informs us about situations. Situations ground the representations we construct from language; equally impor- tantly, language informs us about situations. 
Language tells us about situations we have not witnessed and describes aspects that we cannot observe. Language also communicates folk or scientific construals that shape listener’s construals, such as the idea that an all-knowing being took six days to create the world or the idea that natural processes gave rise to our world and ultimately ourselves over billions of years. Language can be used to communicate information about properties that only arise in a social setting, such as ownership, or that have been identified by a culture as important, such as exact num- ber. Language thus enriches and extends the information we have about situations and provides the primary medium conveying properties of many kinds of objects and many kinds of relationships. # Toward a Brain and AI Inspired Model of Understanding Capturing the full range of situations is clearly a long-term challenge. We return to this later, focusing first on concrete situations involving animate beings and physical objects. We seek to integrate insights from cognitive neuroscience and artificial intelligence toward the goal of building an integrated understanding model. We start with our construal of the understanding system in the human brain and then sketch aspects of what an artificial implementation might look like. McClelland et al.: Extending Machine Language Models toward Human-Level Language Understanding 4 The understanding system in the brain. Our construal of the human integrated understanding system builds on the princi- ples of mutual constraint satisfaction and emergence and with the idea that understanding centers on the construction of situation representations. It is consistent with a wide range of evidence, some of which we review, and is broadly consistent with recent characterizations in cognitive neuroscience (41, 47). However, researchers hold diverse views about the details of these systems and how they work together. We focus first on the part of the system located primarily in the neocortex of the brain, as schematized in the large blue box of Fig. 4. Together with input and output systems, this allows a person to combine linguistic and visual input to understand the situation referred to upon hearing a sen- tence, such as one containing the word bat, while observing a corresponding situation in the world. It is important to note that the neocortex is very richly structured, with on the order of 100 well-defined anatomical subdivisions. However, it is common and useful to group these divisions into subsystems. The ones we focus on here are each indicated by a blue oval in the figure. One subsystem subserves the formation of a visual representation of the given situation, and another subserves the formation of an auditory representation capturing the spatiotemporal structure of the co-occurring spoken language. The three ovals above these provide representations of more integrative/abstract types of information (see below). fine-grained static situations and short time scale events (here- after micro-situations) that can be conveyed by a sentence like the boy hit the ball with a bat. These context representations arise in a set of interconnected areas primarily within the pari- etal lobes (41, 47). In recent work, brain imaging data is used to analyze the time-varying patterns of neural activity while processing a temporally extended narrative. 
The brain activity patterns that represent scenes extending over tens of seconds (e.g., a detective searching a suspect’s apartment for evidence) are largely the same, whether the information comes from watching a movie, hearing or reading a narrative description, or recalling the movie after having seen it (52, 53). Activa- tions in different brain areas track information at different time scales. Activity in modality-specific areas associated with speech and visual processing follows the moment-by-moment time course of spoken and/or visual information while activ- ity in the network associated with situation representations fluctuates on a much longer time scale. During processing of narrative information, activations in these regions tends to be relatively stable within a scene and punctuated with larger changes at boundaries between these scenes, and these patterns lose their coherence when the narrative structure is scrambled (41, 53). Larger-scale spatial transitions (e.g., transitions between rooms) also create large changes in neural activity (47). Within each subsystem, and between each connected pair of subsystems, the neurons are reciprocally interconnected via learning-dependent pathways allowing mutual constraint satis- faction among all of the elements of each of the representation types, as indicated by the looping blue arrows from each oval to itself and by the bi-directional blue arrows between these ovals. Brain regions for representing visual and auditory inputs are well-established, and the evidence for their involvement in a mutual constraint satisfaction process with more integrative brain areas has been reviewed elsewhere (48, 49). Here we consider the three more integrative subsystems. Object representations. A brain area near the front of the tempo- ral lobe houses neurons whose activity provides an embedding capturing the properties of objects (50). Damage to this area impairs the ability to name objects, to grasp them correctly for their intended use, to match them with their names or the sounds they make, and to pair objects that go together, either from their names or from pictures. This brain area is itself an inter-modal area, receiving visual, language and other information about objects such as the sounds they make and how they feel to the touch. Models capturing these findings (51) treat this area as the hidden layer of an interactive, re- current network with bidirectional connections to other layers representing different types of object properties, including the object’s name. In these models, an input to any of these other layers activates the corresponding pattern in the hidden layer, which in turn activates the corresponding patterns in the other layers. This supports, for example, the ability to produce the name of an object from visual input. Damage (simulated by removing neurons in the hidden layer) degrades the model’s representations, capturing the patterns of errors made by patients with the condition. Representation of context. There is a network of areas in the brain that capture the large-scale spatiotemporal context of The role of language. Where in the brain should we look for representations of the relations among the objects participating in a micro-situation? The effects of brain damage suggest that representations of relations may be integrated, at least in part, with the representation of language itself. 
Injuries affecting the lateral surface of the frontal and parietal lobes produce profound deficits in the production of fluent language, but can leave the ability to read and understand concrete nouns largely intact. Such lesions produce intermediate degrees of impairment to abstract nouns, verbs, and modifiers, and profound impairment to words like if and by that capture grammatical and spatial relations (54). This striking pattern is consistent with the view that language itself is intimately tied to the representation of relations and changes in relations (information conveyed by verbs, prepositions, and grammatical markers). Indeed, the frontal and parietal lobes are associated with representation of space and action (which causes change in relations), and patients with lesions to the frontal and parietal language-related areas have profound deficits in relational reasoning tasks (55). We therefore tentatively suggest that the understanding of micro-situations depends jointly on the object and language systems, and that the language is intimately link to representation of spatial relationships and actions. Complementary learning systems. The brain systems described above support understanding of situations that draw on gen- eral knowledge as well as oft-repeated personal knowledge, but they do not support the formation of new memories that can be accessed and used at an arbitrary later time. This ability depends on structures that include the hippocampus in the me- dial temporal lobes (MTL; Fig. 4, red box). While these areas are critical for new learning, damage to them does not affect general knowledge, acquired skills, or the ability to process language and other forms of input to understand a situation— except when this depends on remote information experienced briefly outside the immediate current context (56). These McClelland et al.: Extending Machine Language Models toward Human-Level Language Understanding 5 findings are captured in the neural-network based complemen- tary learning systems (CLS) theory (57–59), which holds that connections within the neocortex acquire the knowledge that allows a human to understand objects and their properties, to link words and objects, and to understand and communicate about generic situations as these are conveyed through lan- guage and other forms of experience. According to this theory, the learning process is necessarily gradual, allowing it to cap- ture the nuanced statistical structure of experience. The MTL provides a complementary fast-learning system supporting the formation of new arbitrary associations, linking the elements of an experience together, including the objects and language encountered in a situation and the co-occurring spatiotemporal context, as might arise in the situation depicted in Fig. 4B, where a person encounters the word numbat from both visual and language input. tion – addressing it will benefit from a greater convergence of cognitive neuroscience and AI. Toward this goal, we sketch a proposal for a brain and AI-inspired model. We rely on the principles of mutual constraint satisfaction and emergence, the query-based attention architecture from AI and deep learn- ing, and the components and their interconnections in the understanding system in the brain, as illustrated in Fig. 4. We treat the system as one that receives sequences of picture-description (PD) pairs grouped into episodes that in turn form story-length narratives, with the language conveyed by text rather than speech. 
Our sketch leaves important issues open and will require substantial development. Steps toward addressing some of these issues are already being taken: mutual QBA is already being used, e.g., in (65), to exploit audio and visual information from movies. It is generally accepted that knowledge that depends ini- tially on the MTL can be integrated into the neocortex through a consolidation process (56). In CLS (58), the neocortex learns arbitrary new information gradually through interleaved pre- sentations of new and familiar items, weaving it into the fabric of knowledge and avoiding interference with existing knowl- edge (60). The details are subjects of current debate and ongoing investigation (61). In our proposed model, each PD pair is processed by inter- acting object and language subsystems, receiving visual and text input at the same time. Each subsystem must learn to restore missing or distorted elements (words in the language subsystem, objects in the object subsystem) by using mutual QBA as in BERT, allowing each element in each subsystem to be constrained by all of the elements in both sub-systems. Additionally these systems will query the context and memory subsystems, as described below. As in our example of the beer John left in the cooler, under- standing often depends on remote information. People exploit such information during language processing (62), and patients with MTL damage have difficulty understanding or producing extended narratives (63); they are also profoundly impaired in learning new words for later use (64). Neural language models, including those using QBA, also lack these capabilities. In BERT and related models, bi-directional attention operates within a span of a couple of sentences at a time. Other models (28) employ QBA over longer spans of prior context, but there is still a finite limit. These models learn gradually like the human neocortical system, allowing them to capture structure in experience and acquire knowledge of word meaning. GPT-3 (28) is impressive in its ability to use a word encountered for the first time within its contextual span appropriately in a subsequent sentence, but this information is lost forever when the context is re-initialized, as it would be in a patient without the medial temporal lobe. Including an MTL-like system in future understanding models would address this limitation. The brain’s complementary learning systems may provide a means to address the challenge of learning to use a word encountered in a single context appropriately across a wide range of contexts. Deep neural networks that learn gradually through many repetitions of an item that occurs in a single context, interleaved with presentations of other items occurring in a diversity of contexts, do not show this ability (46). We attribute this failure to the fact that the distribution of training experiences they receive conveys the information that the target item is in fact restricted to its single context. Further research should explore whether augmenting a model like GPT- 3 with an MTL-like memory would enable more human-like extension of a novel word encountered just once to a wider range of contexts. Next steps toward a brain and AI-inspired model. Given the construal we have described of the human understanding sys- tem, we now ask, what might an implementation of a model consistent with it be like? 
This is a long-term research ques- The context subsystem encodes a sequence of compressed representations of the previous PD pairs within the current episode. Processing in this subsystem would be sequential over pairs, allowing the network constructing the current com- pressed representation to query the representations of past pairs within the episode. Within the processing of a pair, the context system would engage in mutual QBA with the object and language subsystems, allowing the language and object subsystems to indirectly exploit information from past pairs within the episode. Our system also includes an MTL-like memory to allow it to use remote information beyond the current episode. A neural network with learned connection weights constructs a reduced description of the state of the object, language, and context modules along with their inputs from vision and text after processing each PD pair. Building on existing artificial systems with external memory (66, 67) this compressed vector is stored as a vector in a slot in an external memory. These states are then accessible to the cortical subsystems via QBA. The system would employ a flexible querying scheme, such that any subset of the object, language, or context representations of an input currently being processed would contribute to accessing relevant MTL representations. There contents would then be to all of the cortical subsystems using QBA. Thus, the appearance of a numbat in a visual scene would retrieve the corresponding language and context information containing its name and the fact that it eats termites, based on prior storage of the the compressed representation formed previously from the inputs in Fig. 4B. Ultimately, our model may benefit from a subsystem that guides processing in all of the subsystems we have described. The brain has such a system in its frontal lobes; damage to this system leads to impairments in guiding behavior and cog- nition according to current task demands, and others advocate including such a system in neural AI systems (68). We leave it to future work to consider how to integrate such a subsystem into the model we have described here. McClelland et al.: Extending Machine Language Models toward Human-Level Language Understanding 6 Enhancing understanding by incorporating interaction with the physical and social world. A complete model of the hu- man understanding system will require integration of many additional information sources. These include sounds, touch and force-sensing, and information about one’s own actions. Every source provides opportunities to predict information of each type, relying on every other type. Information salient in one source can bootstrap learning and inference in the other, and all are likely to contribute to enhancing composi- tionality and addressing the data-inefficiency of learning from language alone. This affords the human learner an important opportunity to experience the compositional structure of the environment through its own actions. Ultimately, an ability to link one’s actions to their consequences as one behaves in the world should contribute to the emergence of, and appreciation for, the compositional structure of events, and provide a basis for acquiring notions of cause and effect, of agency, and of object permanence (69). These considerations motivate recent work on agent- based language learning in simulated interactive 3D environ- ments (70–73). 
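A minimal sketch of the slot-based, MTL-like memory described above might look as follows. The dimensionality, similarity measure, and retrieval scheme are assumptions chosen for illustration, not a specification of the proposed system.

```python
import numpy as np

class SlotMemory:
    """Toy MTL-like episodic store: write compressed state vectors to slots,
    then read them back via query-based attention over the stored slots."""
    def __init__(self, dim):
        self.dim = dim
        self.slots = []                     # one compressed vector per experience

    def write(self, compressed_state):
        self.slots.append(np.asarray(compressed_state, dtype=float))

    def read(self, query, g=8.0):
        if not self.slots:
            return np.zeros(self.dim)
        M = np.stack(self.slots)            # (n_slots, dim)
        sims = M @ query / (np.linalg.norm(M, axis=1) * np.linalg.norm(query) + 1e-9)
        w = np.exp(g * sims)
        w /= w.sum()                        # attention over stored experiences
        return w @ M                        # blended retrieved content

# Store the compressed state from the 'numbat' picture-description pair,
# then retrieve it later from a partial, noisy cue.
rng = np.random.default_rng(0)
numbat_state = rng.normal(size=32)
memory = SlotMemory(dim=32)
memory.write(numbat_state)
memory.write(rng.normal(size=32))           # an unrelated experience
cue = numbat_state + 0.3 * rng.normal(size=32)
retrieved = memory.read(cue)
print(np.corrcoef(retrieved, numbat_state)[0, 1])  # should be close to 1
```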
In (74), an agent was trained to identify, lift, carry and place objects relative to other objects in a virtual room, as specified by simplified language instructions. At each time step, the agent received a first-person visual observation and processed the pixels to obtain a representation of the scene. This was concatenated to the final state of an LSTM that processed the instruction, and then passed to an integrative LSTM whose output was used to select a motor action. The agent gradually learned to follow instructions of the form find lift up a basketball and put the teddy bear on the bed, a pencil, encompassing 50 objects and requiring up to 70 action steps. Such instructions require constructing representations based on language stimuli that support identification of objects and relations across space and time and the integration of this information to inform motor behaviors. Importantly, without building in explicit object representa- tions, the learned system was able to interpret novel instruc- tions. For instance, an agent trained to lift each of 20 objects, but only trained to put 10 of those in a specific location could place the remaining objects in the same location on command with over 90% accuracy, demonstrating a degree of compo- sitionality in its behavior. Notably, the agent’s ego-centric, multimodal and temporally-extended experience contributed to this outcome; both an alternative agent with a fixed perspec- tive on a 2D grid world and a static neural network classifier that received only individual still images exhibited significantly worse generalization. This underscores how affording neural networks access to rich, multi-modal interactive environments can stimulate the development of capacities that are essen- tial for language learning, and contribute toward emergent compositionality. concrete ones like path and its extended metaphorical use as the means to achieve goals (75). Embodied, simulation-based approaches to meaning (76, 77) build on this observation to bridge from concrete to abstract situations via metaphor. They posit that understanding words like grasp is linked to neural representations of the action of grabbing and that this cir- cuitry is recruited for understanding contexts such as grasping an idea. We consider situated agents as a critical catalyst for learning about how to represent and compose concepts pertaining to spatial, physical and other perceptually imme- diate phenomena—thereby providing a grounded edifice that can connect both to brain circuitry for motor action and to representations derived primarily from language. # Conclusion Language does not stand alone. The understanding system in the brain connects language to representations of objects and situations and enhances language understanding by exploiting the full range of our multi-sensory experience of the world, our representations of our motor actions, and our memory of previous situations. We believe next generation language understanding systems should emulate this system and we have sketched an approach that incorporates recent machine learning breakthroughs to build a jointly brain and AI in- spired understanding system. We emphasize understanding of concrete situations and argue that understanding abstract language should build upon this foundation, pointing toward the possibility of one day building artificial systems that under- stand abstract situations far beyond concrete, here-and-now situations. 
In sum, combining insights from neuroscience and AI will take us closer to human-level language understanding. ACKNOWLEDGMENTS. This article grew out of a workshop organized by HS at Meaning in Context 3, Stanford University, September 2017. We thank Janice Chen, Chris Potts, and Mark Seidenberg for discussion. HS was supported by ERC Advanced Grant #740516. 1. Rosenblatt F (1961) Principles of neurodynamics. Perceptrons and the theory of brain mech- anisms. (Spartan Books, Cornell University, Ithaca, New York). 2. Rumelhart DE, McClelland JL, the PDP research group (1986) Parallel Distributed Process- (MIT Press, ing. Explorations in the Microstructure of Cognition. Volume 1: Foundations. Cambridge MA). 3. Rumelhart DE, McClelland JL (1986) On learning the past tenses of English verbs in Parallel Distributed Processing. Explorations in the Microstructure of Cognition. Volume 2: Psycho- logical and Biological Models, eds. McClelland JL, Rumelhart DE, the PDP Research Group. (MIT Press, Cambridge MA), pp. 216–271. 4. Pinker S, Mehler J (1988) Connections and symbols. (MIT Press, Cambridge, MA). 5. MacWhinney B, Leinbach J (1991) Implementations are not conceptualizations: Revising the verb learning model. Cognition 40(1-2):121–157. 6. Bybee J, McClelland JL (2005) Alternatives to the combinatorial paradigm of linguistic theory based on domain general principles of human cognition. The linguistic review 22(2-4):381– 410. 7. Seidenberg MS, Plaut DC (2014) Quasiregularity and its discontents: The legacy of the past tense debate. Cognitive science 38(6):1190–1228. 8. Fodor JA, Pylyshyn ZW, , et al. (1988) Connectionism and cognitive architecture: A critical analysis. Cognition 28(1-2):3–71. 9. Marcus G (2001) The algebraic mind. (Cambridge, MA: MIT Press). Beyond concrete situations. How might our approach be ex- tended beyond concrete situations to those involving relation- ships among objects like laws, belief systems, and scientific theories? Basic word embeddings themselves capture some abstract relations via vector similarity, e.g., encoding that justice is closer to law than peanut. Words are uttered in real world contexts and there is a continuum between grounding and language-based linking for different words and different uses of words. For example, career is not only linked to other abstract words like work and specialization but also to more 10. Lake BM, Ullman TD, Tenenbaum JB, Gershman SJ (2017) Building machines that learn and think like people. Behavioral and Brain Sciences 40:e253. 11. Rumelhart DE (1977) Toward an interactive model of reading in Attention & Performance VI, ed. Dornic S. (LEA, Hillsdale, NJ), pp. 573–603. 12. Taraban R, McClelland JL (1988) Constituent attachment and thematic role assignment in Influences of content-based expectations. Journal of memory and sentence processing: language 27(6):597–632. 13. McClelland JL, Rumelhart DE (1981) An interactive activation model of context effects in letter perception: I. An account of basic findings. Psychological review 88(5):375. 14. Elman JL (1990) Finding structure in time. Cognitive Science 14:179–211. 15. Gold EM (1967) Language identification in the limit. Information and control 10(5):447–474. 16. Elman JL (1991) Distributed representations, simple recurrent networks, and grammatical structure. Mach. Learn. 7(2/3):195–225. 17. Manning CD, Schütze H (1999) Foundations of statistical language processing. (Cambridge, MA: MIT Press). 
McClelland et al.: Extending Machine Language Models toward Human-Level Language Understanding 7 18. Collobert R, et al. (2011) Natural language processing (almost) from scratch. Journal of Machine Learning Research 12(Aug):2493–2537. 19. Mikolov T, Sutskever I, Chen K, Corrado GS, Dean J (2013) Distributed representations of words and phrases and their compositionality in Proceedings of Neural Information Process- ing Systems. pp. 3111–3119. language? A voxel-based lesion symptom mapping study. Brain and language 113(2):59–64. 56. Milner B, Corkin S, Teuber HL (1968) Further analysis of the hippocampal amnesic syndrome: 14-year follow-up study of h.m. Neuropsychologia 6(3):215 – 234. 57. Marr D (1971) Simple memory: A theory for archicortex. Philosophical Transactions of the Royal Society of London B: Biological Sciences 262(841):23–81. 20. Hochreiter S, Schmidhuber J (1997) Long short-term memory. Neural computation 9(8):1735–1780. 21. Bahdanau D, Cho K, Bengio Y (2015) Neural machine translation by jointly learning to align and translate in 3rd International Conference on Learning Representations, ICLR 2015. 22. Wu Y, et al. (2016) Google’s neural machine translation system: Bridging the gap between human and machine translation. CoRR abs/1609.08144. 58. McClelland JL, McNaughton BL, O’Reilly RC (1995) Why there are complementary learning Insights from the successes and failures of systems in the hippocampus and neocortex: connectionist models of learning and memory. Psychological review 102(3):419–457. 59. Kumaran D, Hassabis D, McClelland JL (2016) What learning systems do intelligent agents need? Complementary learning systems theory updated. Trends in cognitive sciences 20(7):512–534. 23. Lewis-Kraus G (2016) The great AI awakening. The New York Times Magazine. Published on-line December 14, 2016; Accessed May 23, 2020. 24. Manning CD, Clark K, Hewitt J, Khandelwal U, Levy O (2020) Emergent linguistic structure in artificial neural networks trained by self-supervision. Proceedings of the National Academy of Sciences. 25. Devlin J, Chang MW, Lee K, Toutanova K (2019) BERT: Pre-training of deep bidirectional transformers for language understanding in Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, Volume 1 (Long and Short Papers). pp. 4171–4186. 60. McClelland JL, McNaughton BL, Lampinen AK (2020) Integration of new information in mem- ory: new insights from a complementary learning systems perspective. Philosophical Trans- actions of the Royal Society B 375(1799):20190637. 61. Yonelinas A, Ranganath C, Ekstrom A, Wiltgen B (2019) A contextual binding theory of episodic memory: systems consolidation reconsidered. Nature reviews neuroscience 20:364–375. 62. Menenti L, Petersson KM, Scheeringa R, Hagoort P (2009) When elephants fly: Differential sensitivity of right and left inferior frontal gyri to discourse and world knowledge. Journal of cognitive neuroscience 21(12):2358–2368. 26. Vaswani A, et al. (2017) Attention is all you need in Advances in Neural Information Process- ing Systems 30. (Curran Associates, Inc.), pp. 5998–6008. 63. Zuo X, et al. (2020) Temporal integration of narrative information in a hippocampal amnesic patient. NeuroImage 213:116658. 27. Linzen T, Baroni M (2020) Syntactic structure from deep learning. Annual Reviews of Linguis- tics 2020. 64. 
Gabrieli JD, Cohen NJ, Corkin S (1988) The impaired learning of semantic knowledge follow- ing bilateral medial temporal-lobe resection. Brain and cognition 7(2):157–177. 28. Brown TB, et al. arXiv:2005.14165. (2020) Language models are few-shot learners. arXiv preprint 65. Sun C, Baradel F, Murphy K, Schmid C (2019) Contrastive bidirectional transformer for tem- poral representation learning. CoRR arXiv:1906.05743. 29. Warren RM (1970) Perceptual restoration of missing speech sounds. Science 167(3917):392–393. 66. Weston J, Chopra S, Bordes A (2015) Memory networks in International Conference on Learning Representations. 30. Yang Z, et al. (2019) Xlnet: Generalized autoregressive pretraining for language understand- ing. CoRR abs/1906.08237. 67. Graves A, et al. (2016) Hybrid computing using a neural network with dynamic external mem- ory. Nature 538(7626):471–476. 31. Nie Y, et al. (2019) Adversarial NLI: A new benchmark for natural language understanding. arXiv preprint arXiv:1910.14599. 68. Russin J, O’Reilly RC, Bengio Y (2020) Deep learning needs a prefrontal cortex. ICLR work- shop on Bridging AI and Cognitive Science. 32. Bisk Y, et al. (2020) Experience grounds language. arXiv preprint arXiv:2004.10151. 33. Chomsky N (1971) Deep structure, surface structure, and semantic interpretation in Seman- tics: An interdisciplinary reader in philosophy, linguistics, and psychology, eds. Steinberg D, Jakobovits LA. (Cambridge University Press Cambridge), pp. 183–216. 34. Fodor JA (1983) The Modularity of Mind. (MIT press). 35. Lakoff G (1987) Women, Fire, and Dangerous Things. (The University of Chicago Press). 36. Bransford JD, Johnson MK (1972) Contextual prerequisites for understanding: Some in- vestigations of comprehension and recall. Journal of verbal learning and verbal behavior 11(6):717–726. 37. Crain S, Steedman M (1985) Context and the psychological syntax processor in Natural lan- guage parsing, eds. Dowty, Karttunen, Zwicky. (Cambridge University Press). 38. Montague R (1973) The proper treatment of quantification in ordinary English in Approaches to natural language: Proceedings of the 1970 Stanford workshop on grammar and semantics, eds. Hintikka J, Moravcsik J, Suppes P. (Riedel, Dordrecht), pp. 221–242. 39. Schank RC (1983) Dynamic memory: A theory of reminding and learning in computers and people. (Cambridge University Press). 40. John MFS, McClelland JL (1990) Learning and applying contextual constraints in sentence comprehension. Artificial Intelligence 46(1):217 – 257. 41. Hasson U, Egidi G, Marelli M, Willems RM (2018) Grounding the neurobiology of language in first principles: The necessity of non-language-centric explanations for language compre- hension. Cognition 180:135–157. 69. Piaget J (1952) The origins of intelligence in children (m. cook, trans.). new york, ny, us. 70. Hermann KM, et al. (2017) Grounded language learning in a simulated 3D world. arXiv preprint arXiv:1706.06551. 71. Das R, Zaheer M, Reddy S, McCallum A (2017) Question answering on knowledge bases and text using universal schema and memory networks in Proceedings of the 55th Annual Meeting of the Association for Computational Linguistic. pp. 358–365. 72. 
Chaplot DS, Sathyendra KM, Pasumarthi RK, Rajagopal D, Salakhutdinov R (2018) Gated- attention architectures for task-oriented language grounding in Proceedings of the Thirty- Second AAAI Conference on Artificial Intelligence, (AAAI-18), the 30th innovative Applica- tions of Artificial Intelligence (IAAI-18), and the 8th AAAI Symposium on Educational Ad- vances in Artificial Intelligence (EAAI-18), New Orleans, Louisiana, USA, February 2-7, 2018, eds. McIlraith SA, Weinberger KQ. (AAAI Press), pp. 2819–2826. 73. Oh J, Singh S, Lee H, Kohli P (2017) Zero-shot task generalization with multi-task deep rein- forcement learning in Proceedings of the 34th International Conference on Machine Learning- Volume 70. (JMLR. org), pp. 2661–2670. 74. Hill F, et al. (2020) Environmental drivers of systematicity and generalization in a situated agent in International Conference on Learning Representations, ICLR. 75. Bryson J (2008) Embodiment vs. memetics. Mind and Society 7(1):77–94. 76. Lakoff G, Johnson M (1980) Metaphors we live by. (University of Chicago, Chicago, IL). 77. Feldman J, Narayanan S (2004) Embodied meaning in a neural theory of language. Brain and language 89:385–392. 42. Rumelhart DE (1979) Some problems with the notion that words have literal meanings in Metaphor and thought, ed. Ortony A. (Cambridge Univ. Press, Cambridge, UK), pp. 71–82. 43. Barclay J, Bransford JD, Franks JJ, McCarrell NS, Nitsch K (1974) Comprehension and se- mantic flexibility. Journal of Verbal Learning and Verbal Behavior 13(4):471 – 481. 44. Tanenhaus M, Spivey-Knowlton M, Eberhard K, Sedivy J (1995) Integration of visual and linguistic information in spoken language comprehension. Science 268(5217):1632–1634. 45. Altmann GT, Kamide Y (2007) The real-time mediation of visual attention by language and world knowledge: Linking anticipatory (and other) eye movements to linguistic processing. Journal of Memory and Language 57(4):502–518. 46. Lake B, Baroni M (2018) Generalization without systematicity: On the compositional skills of sequence-to-sequence recurrent networks in International Conference on Machine Learning. pp. 2873–2882. 47. Ranganath C, Ritchey M (2012) Two cortical systems for memory-guided behaviour. Nature reviews neuroscience 13:713. Review Article. 48. McClelland JL, Mirman D, Bolger DJ, Khaitan P (2014) Interactive activation and mutual con- straint satisfaction in perception and cognition. Cognitive science 38(6):1139–1189. 49. Heilbron M, Richter D, Ekman M, Hagoort P, de Lange FP (2020) Word contexts enhance the neural representation of individual letters in early visual cortex. Nature communications 11(1):1–11. 50. Patterson K, Nestor PJ, Rogers TT (2007) Where do you know what you know? The represen- tation of semantic knowledge in the human brain. Nature reviews neuroscience 8(12):976. 51. Rogers TT, et al. (2004) Structure and deterioration of semantic memory: A neuropsycholog- ical and computational investigation. Psychological review 111(1):205–235. 52. Zadbood A, Chen J, Leong Y, Norman K, Hasson U (2017) How we transmit memories to other brains: Constructing shared neural representations via communication. Cerebral cortex 27(10):4988–5000. 53. Baldassano C, et al. (2017) Discovering event structure in continuous narrative perception and memory. Neuron 95(3):709–721. 54. Morton J, Patterson K (1980) Little words – No! in Deep dyslexia. (Routledge and Kegan Paul London), pp. 270–285. 55. 
Baldo JV, Bunge SA, Wilson SM, Dronkers NF (2010) Is relational reasoning dependent on language? A voxel-based lesion symptom mapping study. Brain and language 113(2):59–64.
{ "id": "1910.14599" }
1912.10165
Zero-shot Text Classification With Generative Language Models
This work investigates the use of natural language to enable zero-shot model adaptation to new tasks. We use text and metadata from social commenting platforms as a source for a simple pretraining task. We then provide the language model with natural language descriptions of classification tasks as input and train it to generate the correct answer in natural language via a language modeling objective. This allows the model to generalize to new classification tasks without the need for multiple multitask classification heads. We show the zero-shot performance of these generative language models, trained with weak supervision, on six benchmark text classification datasets from the torchtext library. Despite no access to training data, we achieve up to a 45% absolute improvement in classification accuracy over random or majority class baselines. These results show that natural language can serve as simple and powerful descriptors for task adaptation. We believe this points the way to new metalearning strategies for text problems.
http://arxiv.org/pdf/1912.10165
Raul Puri, Bryan Catanzaro
cs.CL
null
null
cs.CL
20191210
20191210
# Zero-shot Text Classification With Generative Language Models

Raul Puri, NVIDIA, [email protected]
Bryan Catanzaro, NVIDIA, [email protected]

3rd Workshop on Meta-Learning at NeurIPS 2019, Vancouver, Canada.

# Abstract

This work investigates the use of natural language to enable zero-shot model adaptation to new tasks. We use text and metadata from social commenting platforms as a source for a simple pretraining task. We then provide the language model with natural language descriptions of classification tasks as input and train it to generate the correct answer in natural language via a language modeling objective. This allows the model to generalize to new classification tasks without the need for multiple multitask classification heads. We show the zero-shot performance of these generative language models, trained with weak supervision, on six benchmark text classification datasets from the torchtext library. Despite no access to training data, we achieve up to a 45% absolute improvement in classification accuracy over random or majority class baselines. These results show that natural language can serve as simple and powerful descriptors for task adaptation. We believe this points the way to new metalearning strategies for text problems.

# 1 Method

Our method reformulates text classification problems as multiple choice question answering. To enable our model to generalize to new classification tasks, we provide the model with a multiple choice question description containing each class in natural language, and train it to generate the correct answer, also in natural language, from the provided description. To better prepare our model to handle a wide variety of class descriptors, we utilize a pretrained GPT-2 (Radford et al., 2019) transformer model and finetune it on the task of multiple choice title prediction for the OpenWebText dataset (Peterson et al., 2019). This pretraining task trains the model to use common sense reasoning to select the most probable title or description of the text data from a provided list of rich natural language descriptions or classes, similar to the problem formulation of text classification. The wide variety of titles available in the pretraining dataset helps simulate numerous automatically generated N-way text classification tasks to enable meta-learning. In initial studies we found that the diverse language found in title prediction was necessary to adapt to new tasks, and other pretraining tasks such as WebText subreddit prediction did not transfer at all.

For a given document, we randomly sample a number of titles t ∈ [2, 15] with one title being the correct title. Half of the time we replace a single title with “none of the above”, and occasionally (p = 1/t) we choose to replace the correct title with “none of the above”. We prepend all selected titles to the document in the form of a multiple choice question, and train the model to generate the answer, similar to generative Question Answering (McCann et al., 2018). Example input representations for title prediction can be found in Table 1. The model is trained with a next token prediction language modeling loss, Σ_t L(w_t, P(ŵ_t | w_[1,t−1])), that optimizes over the entire concatenated input w = [question, reference_text, output_answer]; the questions are generated according to a grammar. The input representation utilizes type tokens to segment the question, reference text, and answer. To
Text (a) a , [fakin f {Taanass 1 Lane’ cee ee a (b) Figure 1: Comparison between existing multitask classifiers and our method. (a) Multitask classifiers have the model featurize text and send it to one of N task heads. (b) In our method, one of N task descriptors is prepended to the text and the model generates the answer in natural language. Dataset Title Prediction Pretraining AGNews Zero- shot Classifica- tion Question Which of these choices best de- scribes the following document? : “ A pool For All Bodies ” , “ Lawmakers say they’d take pay cut, but they can’t ” , “ Raiders’ Gareon Conley faces civil suit ” , “ Prolific cybercriminal sus- pected of spreading ransomware arrested by Polish Police [Eu- ropol] ” How is the text best described? : “ Science & Technology ” , “ Busi- ness ” , “ Sports ” , or “ World News ” Text Story highlights Members of Congress also preparing for po- tential sharp cuts in federal spending But lawmakers will not see any change to their an- nual salary of $174,000... An Entertaining Holiday Pick Hastings, a multimedia retailer, trims losses and raises full-year guidance. Answer Lawmakers they’d say take pay cut, but they can’t Business Table 1: Example inputs for pretraining and downstream tasks. The descriptor questions are concate- nated to the text samples and the language model generates the remaining output answer text. Class descriptors for the 5 other downstream classification tasks can be found in appendix A.4.1 encode positional information, the input uses learned positional embeddings that reset to position 0 at the start of the answer. This is described in more detail in the appendix section A.1. For our analysis of zero shot classification we examine the performance of our model at various sizes on several of the torchtext classification datasets. When transferring the model we provide all the given dataset’s classes (typically ranging from 2-15 classes) to the model in the multiple choice question format and prompt it to generate out the correct class. Furthermore, we ensure that downstream tasks do not contain "none of the above" options. We use greedy autoregressive decoding to generate our output text. Example inputs for each of our downstream tasks are shown in Table 1. # 1.1 Dataset We build upon prior work collecting large language modeling datasets from the internet. Namely, we extend the OpenWebText corpus (Peterson et al., 2019) by annotating the documents with subreddits and titles in natural language. The OpenWebText dataset is collected by scraping outbound weblinks from reddit that have more than 3 karma score. We annotate each outbound weblink with the title of the Reddit post, and the subreddit that the link was posted in. Weblinks can appear in multiple posts across different subreddits, so for a given link we aggregate a list of all it’s related subreddits and 2 titles. Detailed dataset statistics can be found in appendix section A.3. To create training data we sample a random document, multiple titles including one of the documents corresponding titles, and arrange the input as described in the previous section. We evaluate the trained model on the DBPedia, AGNews, Yahoo Answers, SST-2, Amazon-2, and Yelp-2 text classification datasets (Socher et al., 2013; Lehmann et al., 2015). The classes and class descriptors used for each of these tasks can be found in appendix section A.4.1. 
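As an illustration of the transfer recipe, the sketch below formats an AGNews-style example in the multiple-choice style of Table 1 and decodes an answer greedily. It uses an off-the-shelf GPT-2 from the Hugging Face transformers library purely as a stand-in: the paper's title-prediction pretrained weights, special prompt tokens, and exact question templates are not assumed here, and a vanilla GPT-2 will not reliably emit a valid class name.

```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer

def build_prompt(question_stub, classes, text):
    # Classes rendered as a quoted, comma-separated list, mirroring Table 1.
    options = " , ".join(f'" {c} "' for c in classes[:-1]) + f' , or " {classes[-1]} "'
    return f"{question_stub} : {options} {text} Answer:"

classes = ["Science & Technology", "Business", "Sports", "World News"]
prompt = build_prompt(
    "How is the text best described?",
    classes,
    "Hastings, a multimedia retailer, trims losses and raises full-year guidance.",
)

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
inputs = tokenizer(prompt, return_tensors="pt")

# Greedy autoregressive decoding of the answer tokens.
output = model.generate(
    **inputs, max_new_tokens=8, do_sample=False,
    pad_token_id=tokenizer.eos_token_id,
)
answer = tokenizer.decode(output[0][inputs["input_ids"].shape[1]:]).strip()
print(answer)  # with the paper's title-prediction pretrained model this should name one of the classes
```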
To experiment with different class descriptions and model architectures, we create a small validation set of 2000 random training set examples for each of the downstream tasks. We evaluate our design choices on these validation sets before reporting final accuracies on the entire test set. # 2 Related Work Zero and few shot learning have been the subject of many studies. Some works have looked at meta-learning for machine translation in low resource languages (Gu et al., 2018), iteratively guiding policies with language (Co-Reyes et al., 2018) for instruction following (Branavan et al., 2009; Chen and Mooney, 2011), and generating WikiSQL-style structured queries from natural language queries (Huang et al., 2018). Radford et al. (2018, 2019) show that large scale language models can be used in a multitask zero shot capacity by allowing the model to generate output text in an autoregressive manner given a prompt with the task description. They demonstrate that larger transformer language models perform better than smaller models in zero shot settings. However, their models are never explicitly trained for zero shot text classification. To perform classification, the authors propose appending a prompt token to the text and restricting the output vocabulary to the tokens of possible answers. This effectively turns the output vocabulary into a pretrained task-specific classification head. Unlike our approach their work requires manual intervention and does not take advantage of task descriptors to modulate output behavior. The Multitask Question Answering Network (McCann et al., 2018) study also investigates zero shot performance of multitask generative language models prompted with descriptor questions. However, they only analyze zero shot classification performance between tasks of identical domains (SST-2 and Amazon-2) that are trained with supervised learning and identical prompts. Using identical prompts and supervised learning prevents a true analysis of the model’s ability to adapt to unseen task descriptors. Recent work in meta-learning has centered around gradient based meta learning strategies such as Model Agnostic Meta-Learning or MAML (Finn et al., 2017). However, parallel work such as Memory Augmented Neural Networks (Santoro et al., 2016) and Simple Neural Attentive Learners (Mishra et al., 2017) demonstrate the effectiveness of architecture based meta-learning. This is similar to our work except that our models receive weak supervision in the form of class labels and a question in natural language instead of similar class examples. We show throughout this work that melding techniques from NLP and architecture based meta-learning allows our model to adapt to new language classification tasks. Lastly, similar to our work, concurrent research investigates models capable of handling tasks with different class counts and output mappings. Bansal et al. (2019) combine prototypical networks and MAML to adapt to NLP tasks with different numbers of labels. Raffel et al. (2019) propose a unified multitask language model that uses weakly-supervised task labels to generate task outputs with natural language. By doing so, the resulting model is capable of performing a diverse set of tasks including classification, natural language inference, question answering, and abstractive summarization. Furthermore, the authors demonstrate the viability of this approach by scaling the model to 11 billion parameters and achieving state of the art accuracy. 
However, neither of these works examine the ability of a unified model to adapt to new task descriptors in a zero-shot fashion.

# 3 Results

To test the ability of our pretrained models to adapt to new tasks and task descriptions, we transfer the models to 6 classification tasks. We provide three baselines, the first two of which are designed to expose dataset bias: random guessing, majority class (mode of the training dataset), and directly finetuning a 355 million parameter classification model on the downstream tasks. In our experiments we investigate the effect of two components of the pretraining process on downstream task performance: model scale and data scale. Table 2 shows that increasing model size leads to improved performance on downstream tasks. In some scenarios smaller models are barely able to perform better than random. For DBPedia the 355M GPT-2 model leads to a 45.2% absolute accuracy improvement over random. In tasks with several classes such as DBPedia, AGNews, and Yahoo Answers the model performs noticeably better than random; however, it struggles to break past 50% and no task comes close to achieving either finetuned or SOTA accuracies. Contextualizing these results with the results of the binary classification tasks like SST-2, Amazon-2, and Yelp-2, we hypothesize that the model can narrow down unlikely classes, but struggles to choose between the two most plausible options due to its lack of formal supervision. These results also show that restricting the size of the dataset and available document-title pairs leads to a reduction in overall task performance averaged across all tasks. This highlights the need for pretraining across a diverse set of tasks and language.

| Model | SST-2 | AGNews | DBPedia | Yahoo | Amazon-2 | Yelp-2 | Average |
|---|---|---|---|---|---|---|---|
| Random Guess~ | 50.6 | 27.4 | 7.27 | 10.2 | 52.9 | 50.4 | 33.1 |
| Majority Class~ | 49.9 | 25.3 | 7.6 | 9.9 | 49.3 | 49.2 | 31.9 |
| 117M All Data | 51.8 / 0 | 40.2 / .00 | 39.6 / .25 | 26.1 / .97 | 50.3 / .001 | 50.1 / 0 | 43.0 / .202 |
| 355M 1/4 Data | 61.7 / 0 | 68.3 / .51 | 52.5 / .03 | 52.2 / .64 | 64.5 / .001 | 58.5 / 0 | 59.6 / .197 |
| 355M All Data | 62.5 / 0 | 65.5 / .01 | 44.8 / .62 | 49.5 / .30 | 80.2 / 0 | 74.7 / 0 | 62.9 / .176 |
| 355M Finetuned~ | 93.23 | 94.87 | 99.0 | 72.79 | 97.115 | 94.479 | 91.91 |
| SOTA | 96.8* | 95.51* | 99.38* | 76.26** | 97.6* | 98.45* | 94 |

Table 2: Zero shot transfer results. Separated by a slash, each column contains test accuracies and (when applicable) the percentage of out of vocabulary test answers. Provided baseline models include random guessing~, majority class~, and finetuning~ baselines. State of the art results held by *XLNet (Yang et al., 2019) and **DRNN (Wang, 2018).

Table 2 demonstrates that the robustness of our generative model is also similarly dictated by model and pretraining dataset size. Although rare across all pretrained models, the out of distribution answers (generated answers that are not valid classes) diminish with larger pretrained models and data. The most common out of vocab answer is an empty string where the model decides to immediately predict the end of text token. Other out of vocab answers are typically rearrangements of valid answer tokens. These are rare with greedy decoding, but become more frequent when using other sampling methods such as top-k (Fan et al., 2018) or top-p nucleus sampling (Holtzman et al., 2019). In the case of Yahoo Answers the model can combine two categories such as "Education & Reference" with "Science & Mathematics" to output "Education & Mathematics".
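Because decoding is unconstrained, evaluation must track generated answers that are not valid class names. A simple bookkeeping helper along these lines (an assumed utility, not code from the paper) is sketched below.

```python
def score_predictions(generated_answers, labels, classes):
    """Exact-match accuracy plus the fraction of out-of-vocabulary answers,
    i.e. generated strings that are not one of the valid class names."""
    valid = {c.lower() for c in classes}
    correct = oov = 0
    for answer, label in zip(generated_answers, labels):
        answer = answer.strip().lower()
        if answer not in valid:
            oov += 1            # e.g. an empty string or a rearranged class name
        elif answer == label.lower():
            correct += 1
    n = len(labels)
    return correct / n, oov / n

acc, oov_rate = score_predictions(
    ["Business", "", "World Sports"],
    ["Business", "Sports", "Sports"],
    ["Science & Technology", "Business", "Sports", "World News"],
)
print(acc, oov_rate)   # 0.333..., 0.666...
```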
We perform further studies examining the relationship between question descriptions, tokenization, accuracy, and out of vocabulary answers in appendix section A.4.2. These studies showcase the model’s ability to adapt to different descriptions, but expose issues with controllability. Nevertheless, with this model the practitioner’s burden is shifted away from designing effective zero-shot multitask architectures, to data problem design. # 4 Conclusion and Future Work In this work, we present a novel pretraining method for zero shot language classification through a generative language model classifier. By generating classifications through natural language, the model eliminates the need for multiple task-specific classification heads, making the model far more general and flexible. Increasing model and data scale further demonstrates that the capabilities of recent transformer language models are sufficient to extract meaningful feature representations that allow us to better generalize and adapt to new tasks. These results highlight the potential of natural language as learning and adaptation signals in future applications. Currently this work is employed for zero-shot classification. Future extensions should investigate the ability of gradient based metalearning to adapt to task descriptors, either through K-shot support- based learning or by taking gradient steps on the task descriptors themselves as in Metz et al. (2018). Additionally, future work could extend the text classification task to other language problems such as question answer or instruction following. Applying this technique in other settings will require addressing its current limitations with respect to controllability, available data and task diversity. 4 # References Bansal, T., R. Jha, and A. McCallum 2019. Learning to few-shot learn across diverse natural language classification tasks. arXiv preprint arXiv:1911.03863. 2009. Reinforcement learning for mapping instructions to actions. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP: Volume 1-Volume 1, Pp. 82–90. Association for Computational Linguistics. Chen, D. L. and R. J. Mooney 2011. Learning to interpret natural language navigation instructions from observations. In Twenty- Fifth AAAI Conference on Artificial Intelligence. Co-Reyes, J. D., A. Gupta, S. Sanjeev, N. Altieri, J. DeNero, P. Abbeel, and S. Levine 2018. Guiding policies with language via meta-learning. CoRR, abs/1811.07882. Fan, A., M. Lewis, and Y. N. Dauphin 2018. Hierarchical neural story generation. CoRR, abs/1805.04833. Finn, C., P. Abbeel, and S. Levine 2017. Model-agnostic meta-learning for fast adaptation of deep networks. CoRR, abs/1703.03400. Gu, J., Y. Wang, Y. Chen, K. Cho, and V. O. K. Li 2018. Meta-learning for low-resource neural machine translation. CoRR, abs/1808.08437. Holtzman, A., J. Buys, M. Forbes, and Y. Choi 2019. The curious case of neural text degeneration. CoRR, abs/1904.09751. Huang, P., C. Wang, R. Singh, W. Yih, and X. He 2018. Natural language to structured query generation via meta-learning. CoRR, abs/1803.02400. Kingma, D. P. and J. Ba 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. Lehmann, J., R. Isele, M. Jakob, A. Jentzsch, D. Kontokostas, P. N. Mendes, S. Hellmann, M. Morsey, P. Van Kleef, S. Auer, et al. 2015. Dbpedia–a large-scale, multilingual knowledge base extracted from wikipedia. Semantic Web, 6(2):167–195. 
Loshchilov, I. and F. Hutter 2019. Decoupled weight decay regularization. In International Conference on Learning Representations.

McCann, B., N. S. Keskar, C. Xiong, and R. Socher 2018. The natural language decathlon: Multitask learning as question answering. CoRR, abs/1806.08730.

Metz, L., N. Maheswaranathan, B. Cheung, and J. Sohl-Dickstein 2018. Meta-learning update rules for unsupervised representation learning. arXiv preprint arXiv:1804.00222.

Micikevicius, P., S. Narang, J. Alben, G. F. Diamos, E. Elsen, D. Garcia, B. Ginsburg, M. Houston, O. Kuchaiev, G. Venkatesh, and H. Wu 2017. Mixed precision training. CoRR, abs/1710.03740.

Mishra, N., M. Rohaninejad, X. Chen, and P. Abbeel 2017. Meta-learning with temporal convolutions. CoRR, abs/1707.03141.

Peterson, J., S. Meylan, and D. Bourgin 2019. Open clone of OpenAI's unreleased WebText dataset scraper.

Radford, A., K. Narasimhan, T. Salimans, and I. Sutskever 2018. Improving language understanding by generative pre-training.

Radford, A., J. Wu, R. Child, D. Luan, D. Amodei, and I. Sutskever 2019. Better language models and their implications.

Raffel, C., N. Shazeer, A. Roberts, K. Lee, S. Narang, M. Matena, Y. Zhou, W. Li, and P. J. Liu 2019. Exploring the limits of transfer learning with a unified text-to-text transformer. arXiv preprint arXiv:1910.10683.

Santoro, A., S. Bartunov, M. Botvinick, D. Wierstra, and T. P. Lillicrap 2016. One-shot learning with memory-augmented neural networks. CoRR, abs/1605.06065.

Socher, R., A. Perelygin, J. Wu, J. Chuang, C. D. Manning, A. Ng, and C. Potts 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the 2013 conference on empirical methods in natural language processing, Pp. 1631–1642.

Srivastava, N., G. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov 2014. Dropout: A simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, 15:1929–1958.

Wang, B. 2018. Disconnected recurrent neural networks for text categorization. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Pp. 2311–2320.

Yang, Z., Z. Dai, Y. Yang, J. G. Carbonell, R. Salakhutdinov, and Q. V. Le 2019. XLNet: Generalized autoregressive pretraining for language understanding. CoRR, abs/1906.08237.

# A Appendix

# A.1 Input Representation and Training Details

# A.1.1 Input Tokens

[Figure: schematic of the transformer input, showing the question, text, and answer token segments with their prompt tokens, type embeddings, and the two position id ranges depicted as a colored gradient.]

To form the input representation, the question, text, and answer tokens are concatenated together. Each set of tokens has an <|endoftext|> token appended to the end, and has a special prompt token prepended to the set. The special tokens for the three fields are respectively <|question|>, <|text|>, and <|answer|>. In addition to prompt tokens, each segment of the input also has unique type token embeddings added. There are three different type tokens in total, one for each segment of the input. Lastly, to encode positional information in our input representation we utilize two sets of position embeddings: one range of position ids up to and including the <|answer|> prompt token, and another set of ids starting from 0 at the beginning of the answer tokens. These ranges are depicted by the colored gradient in the figure above. This helps the transformer distinguish between the context and the generated output.
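The construction above can be illustrated with a short sketch. This is a minimal, illustrative reconstruction rather than the code used for the paper: the `build_input` helper name, the generic `tokenizer.encode` interface, and the `special_ids` mapping are assumptions, and only the concatenation, type-id, and dual position-range logic described in A.1.1 is encoded.

```python
# Illustrative sketch of the A.1.1 input representation (assumed interfaces, not the authors' code).
def build_input(tokenizer, question, text, answer, special_ids):
    """Return token ids, type ids, and position ids for one example."""
    eot = special_ids["<|endoftext|>"]
    segments = [
        (special_ids["<|question|>"], tokenizer.encode(question)),
        (special_ids["<|text|>"], tokenizer.encode(text)),
        (special_ids["<|answer|>"], tokenizer.encode(answer)),
    ]

    tokens, type_ids = [], []
    answer_prompt_index = None
    for type_id, (prompt_id, body) in enumerate(segments):
        if prompt_id == special_ids["<|answer|>"]:
            answer_prompt_index = len(tokens)        # where the <|answer|> prompt token lands
        tokens += [prompt_id] + body + [eot]         # <|prompt|> body <|endoftext|>
        type_ids += [type_id] * (len(body) + 2)      # one type embedding per segment

    # Two position ranges: 0..k up to and including the <|answer|> prompt token,
    # then a fresh range starting at 0 for the answer tokens.
    context_len = answer_prompt_index + 1
    position_ids = list(range(context_len)) + list(range(len(tokens) - context_len))
    return tokens, type_ids, position_ids
```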
# A.1.2 Multiple Choice Format

We maintain a list of approximately 25 multiple choice question formats, shown below. At training and evaluation time we randomly sample a question format and fill the brackets with the desired classes. We format the classes as a comma-separated list wrapped in double quotation marks to help segment the answers from the rest of the question text. We ensure that spaces are placed between the answers and the quotation marks to avoid any unwanted byte pair merges: “ class1 ” , “ class2 ” , or “ class3 ”. Examples of this formatting can be seen in Table 1.

• To which category does the following document belong? : {}
• To which category does the following text belong? : {}
• To which category does the text belong? : {}
• To which category does the article belong? : {}
• How would you describe the following document? : as {}
• How would you describe the text? : as {}
• How would you describe the following text? : as {}
• Which best describes the text? : {}
• Which best describes the document? : {}
• Which best describes the following document? : {}
• Which best describes the following text? : {}
• The following document is _ ? : {}
• The following text is _ ? : {}
• The text is _ ? : {}
• The document is _ ? : {}
• How is the text best described? : {}
• How is the document best described? : {}
• How is the following text best described? : {}
• How is the following document best described? : {}
• Which of these choices best describes the text? : {}
• Which of these options best describes the text? : {}
• Which of these choices best describes the document? : {}
• Which of these options best describes the document? : {}
• Which of these categories best describes the following document? : {}
• Which of these choices best describes the following document? : {}
• Which of these options best describes the following text? : {}

# A.2 Training Hyperparameters

To train our model we follow a procedure largely based on the training procedures described in Radford et al. (2019) with a few differences. All training is performed with a maximum sequence length of 512 tokens. In the full dataset training setting we utilize a learning rate of 4 × 10^-5 and a batch size of 128. When training with a quarter of the dataset we instead use a learning rate of 3 × 10^-5 and a batch size of 32. Our learning rate has a warmup period over 1% of the total training iterations before decaying according to a single-cycle cosine decay schedule over 10 epochs. We utilize an Adam optimizer (Kingma and Ba, 2014) with decoupled weight decay (Loshchilov and Hutter, 2019) of λ = 0.01. All our models are trained efficiently on V100 GPUs by utilizing mixed precision training with dynamic loss scaling (Micikevicius et al., 2017). Additionally, we use global gradient norm clipping of 1.0 to improve the stability of training large models. Lastly, we utilize attention and hidden state dropout (Srivastava et al., 2014) values of 0.1.
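As a concrete reference point, the optimization setup in A.2 can be sketched in PyTorch as follows. This is an illustrative approximation, not the training code used for the paper: the `model` object, the total step count, and the scheduler wiring are placeholders, and the mixed precision and dynamic loss scaling machinery is omitted.

```python
# Sketch of the A.2 optimization setup (illustrative only).
import math
import torch

def make_optimizer_and_schedule(model, total_steps, lr=4e-5, warmup_frac=0.01):
    # AdamW = Adam with decoupled weight decay (Loshchilov and Hutter, 2019).
    optimizer = torch.optim.AdamW(model.parameters(), lr=lr, weight_decay=0.01)
    warmup_steps = max(1, int(warmup_frac * total_steps))

    def lr_lambda(step):
        if step < warmup_steps:                                  # warmup over 1% of steps
            return step / warmup_steps
        progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
        return 0.5 * (1.0 + math.cos(math.pi * progress))        # single-cycle cosine decay

    scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda)
    return optimizer, scheduler

def training_step(model, batch, optimizer, scheduler):
    loss = model(**batch).loss                                   # assumes an HF-style model output
    loss.backward()
    torch.nn.utils.clip_grad_norm_(model.parameters(), 1.0)      # global gradient norm clipping
    optimizer.step()
    scheduler.step()
    optimizer.zero_grad()
    return loss.item()
```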
# A.3 Training Data Statistics

We provide class frequency statistics below to highlight the diversity of the dataset used for pretraining.

Figure 2: Subreddit Class Distribution. The number of times a subreddit occurs (frequency) is presented on the x-axis. The y-axis corresponds to the number of subreddits that appear at a certain frequency. The data follows a power law distribution, clustered around <1000 samples per subreddit, with a long tail reaching up to 245000 samples for a given subreddit.

Zooming into the distribution (shown below) we find that there are approximately 9400 subreddits with 20 or more samples out of 50700 subreddits. Of the 9400 subreddits, two thirds have fewer than 100 samples. This level of diversity is ideal for a meta-learning or domain adaptation dataset.

Figure 3: Enlarged Subreddit Class Distribution.

Lastly, we show the most common subreddits along with their frequency in Table 3. We find that half of the top fifteen subreddits are politically related. This skew may lead to possible biases in the training process. A plausible explanation for this bias can be found in the way the dataset is collected. Since we heuristically filter for reputable outbound links, it is likely that we select subreddits where people post outside news.

| Subreddit | Frequency |
|---|---|
| r/politics | 245308 |
| r/worldnews | 122884 |
| r/The_Donald | 80042 |
| r/todayilearned | 59892 |
| r/news | 59166 |
| r/technology | 54860 |
| r/science | 46452 |
| r/Conservative | 30823 |
| r/POLITIC | 28310 |
| r/conspiracy | 28293 |
| r/india | 27892 |
| r/environment | 26816 |
| r/atheism | 25999 |
| r/programming | 24020 |
| r/Libertarian | 23711 |

Table 3: Subreddit Frequency.

# A.4 Downstream Task Setup

# A.4.1 Class Descriptors

Listed below are the class descriptions used for each classification task.

| Dataset | Classes |
|---|---|
| SST-2 | Positive Sentiment, Negative Sentiment |
| AGNews | Science & Technology, Business, Sports, World News |
| DBPedia | Company, Mean Of Transportation, Film, Office Holder, Written Work, Animal, Natural Place, Artist, Plant, Athlete, Album, Building, Village, Educational Institution |
| Yahoo Answers | Family & Relationships, Business & Finance, Health, Society & Culture, Education & Reference, Entertainment & Music, Science & Mathematics, Computers & Internet, Sports, Politics & Government |
| Yelp-2 | Positive polarity, Negative polarity |
| Amazon-2 | Positive polarity, Negative polarity |

# A.4.2 Descriptor Selection

The ability of our model to adapt to new tasks and its behavior for a given input is controlled by the input descriptor questions it receives. In this section we investigate the impact that question formulation has on downstream task performance. Specifically, we modify the provided class descriptions for several tasks and observe the effect this has on the 355 million parameter model's downstream task performance:

• For binary classification tasks like SST-2, Amazon-2, and Yelp-2 we move away from Positive Sentiment and Negative Sentiment, or Positive polarity and Negative polarity. Instead we simply use positive and negative as in McCann et al. (2018).
• For DBPedia we revert to the original class descriptions provided by the dataset and remove all whitespace (e.g. Mean Of Transportation becomes MeanOfTransportation).
• For AGNews we also revert to the original class descriptions and change World News to World and Science & Technology to Sci/Tech.

Table 4 shows that the choice of class description has a significant impact on performance. In the worst case, poor class descriptions can lead to an absolute 27% drop in accuracy and a 44% increase in out of vocabulary answers. In the case of the binary classification tasks and AGNews, we hypothesize performance is negatively impacted by incomplete task descriptions: positive and World do not explicitly convey positive sentiment or World News.
Empirical observations in Figure 4 show that the model either selects plausibly overlapping categories, as in the case of AGNews, or responds with a completely out of vocabulary answer, as in the case of sentiment analysis. For DBPedia and AGNews, concatenating words together drastically changes the resulting byte pair tokenization despite the descriptions still being human readable. This changes the semantic understanding that the model receives, and as a result the model completely avoids selecting those classes. In some cases the model may never have trained the subword embeddings corresponding to those tokens. This section highlights that our language modeling technique, while general, is subject to errors arising from problem formulation and requires careful control to craft questions that elicit the desired effects. Remedying these issues will be a goal of future work.

| Descriptor Set | SST-2 | AGNews | DBPedia | Amazon-2 | Yelp-2 |
|---|---|---|---|---|---|
| Good Descriptors | 63.22 / 0 | 69.04 / .478 | 53.85 / .056 | 81.22 / .056 | 74.35 / 0 |
| Bad Descriptors | 35.91 / 44.3 | 62.61 / 0 | 44.99 / .050 | 64.3 / 22.1 | 68.02 / 23.4 |

Table 4: Validation Set Accuracy/Out of Vocabulary Answer Percentages. We compare performance on the validation set with two different sets of descriptors: one deemed good and one deemed bad. We showcase the importance of selecting appropriate descriptors for a task.

[Figure 4 panels: (a) SST-2, (b) AGNews, (c) DBPedia; predicted class on the x-axis.]

Figure 4: Confusion matrices for several classification tasks. The left column corresponds to the first row in Table 4, and the right column corresponds to the second row. The color represents the prediction frequency, with green being the highest, red the lowest, and yellow in the middle.
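Confusion matrices like those in Figure 4 can be tallied directly from the generated answers. The short sketch below is illustrative rather than a description of the plotting code behind the figure; the normalization rule and the explicit out-of-vocabulary bucket are assumptions for the example.

```python
# Illustrative tallying of a confusion matrix over generated answers (not the paper's code).
from collections import defaultdict

def confusion_matrix(pairs, valid_classes, oov_label="<out-of-vocab>"):
    """pairs: iterable of (gold_class, generated_answer) strings."""
    lookup = {c.strip().lower(): c for c in valid_classes}
    counts = defaultdict(int)                    # (gold, predicted) -> frequency
    for gold, answer in pairs:
        predicted = lookup.get(answer.strip().lower(), oov_label)
        counts[(gold, predicted)] += 1
    return counts
```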
{ "id": "1804.00222" }
1912.04838
Scalability in Perception for Autonomous Driving: Waymo Open Dataset
The research community has increasing interest in autonomous driving research, despite the resource intensity of obtaining representative real world data. Existing self-driving datasets are limited in the scale and variation of the environments they capture, even though generalization within and between operating regions is crucial to the overall viability of the technology. In an effort to help align the research community's contributions with real-world self-driving problems, we introduce a new large scale, high quality, diverse dataset. Our new dataset consists of 1150 scenes that each span 20 seconds, consisting of well synchronized and calibrated high quality LiDAR and camera data captured across a range of urban and suburban geographies. It is 15x more diverse than the largest camera+LiDAR dataset available based on our proposed diversity metric. We exhaustively annotated this data with 2D (camera image) and 3D (LiDAR) bounding boxes, with consistent identifiers across frames. Finally, we provide strong baselines for 2D as well as 3D detection and tracking tasks. We further study the effects of dataset size and generalization across geographies on 3D detection methods. Find data, code and more up-to-date information at http://www.waymo.com/open.
http://arxiv.org/pdf/1912.04838
Pei Sun, Henrik Kretzschmar, Xerxes Dotiwalla, Aurelien Chouard, Vijaysai Patnaik, Paul Tsui, James Guo, Yin Zhou, Yuning Chai, Benjamin Caine, Vijay Vasudevan, Wei Han, Jiquan Ngiam, Hang Zhao, Aleksei Timofeev, Scott Ettinger, Maxim Krivokon, Amy Gao, Aditya Joshi, Sheng Zhao, Shuyang Cheng, Yu Zhang, Jonathon Shlens, Zhifeng Chen, Dragomir Anguelov
cs.CV, cs.LG, stat.ML
CVPR 2020
null
cs.CV
20191210
20200512
0 2 0 2 y a M 2 1 ] V C . s c [ 7 v 8 3 8 4 0 . 2 1 9 1 : v i X r a # Scalability in Perception for Autonomous Driving: Waymo Open Dataset Pei Sun1, Henrik Kretzschmar1, Xerxes Dotiwalla1, Aur´elien Chouard1, Vijaysai Patnaik1, Paul Tsui1, James Guo1, Yin Zhou1, Yuning Chai1, Benjamin Caine2, Vijay Vasudevan2, Wei Han2, Jiquan Ngiam2, Hang Zhao1, Aleksei Timofeev1, Scott Ettinger1, Maxim Krivokon1, Amy Gao1, Aditya Joshi1, Sheng Zhao1, Shuyang Cheng1, Yu Zhang∗1, Jonathon Shlens2, Zhifeng Chen2, and Dragomir Anguelov1 1Waymo LLC 2Google LLC # Abstract instance segmentation [7, 17, 23, 10]. The research community has increasing interest in au- tonomous driving research, despite the resource intensity of obtaining representative real world data. Existing self- driving datasets are limited in the scale and variation of the environments they capture, even though generalization within and between operating regions is crucial to the over- all viability of the technology. In an effort to help align the research community’s contributions with real-world self- driving problems, we introduce a new large-scale, high quality, diverse dataset. Our new dataset consists of 1150 scenes that each span 20 seconds, consisting of well syn- chronized and calibrated high quality LiDAR and camera data captured across a range of urban and suburban ge- ographies. It is 15x more diverse than the largest cam- era+LiDAR dataset available based on our proposed geo- graphical coverage metric. We exhaustively annotated this data with 2D (camera image) and 3D (LiDAR) bounding boxes, with consistent identifiers across frames. Finally, we provide strong baselines for 2D as well as 3D detection and tracking tasks. We further study the effects of dataset size and generalization across geographies on 3D detection methods. Find data, code and more up-to-date information at http://www.waymo.com/open. # 1. Introduction Autonomous driving technology is expected to enable a wide range of applications that have the potential to save many human lives, ranging from robotaxis to self-driving trucks. The availability of public large-scale datasets and benchmarks has greatly accelerated progress in machine perception tasks, including image classification, object de- tection, object tracking, semantic segmentation as well as ∗Work done while at Waymo LLC. To further accelerate the development of autonomous driving technology, we present the largest and most diverse multimodal autonomous driving dataset to date, comprising of images recorded by multiple high-resolution cameras and sensor readings from multiple high-quality LiDAR scanners mounted on a fleet of self-driving vehicles. The geographi- cal area captured by our dataset is substantially larger than the area covered by any other comparable autonomous driv- ing dataset, both in terms of absolute area coverage, and in distribution of that coverage across geographies. Data was recorded across a range of conditions in multiple cities, namely San Francisco, Phoenix, and Mountain View, with large geographic coverage within each city. We demonstrate that the differences in these geographies lead to a pronounced domain gap, enabling exciting research opportunities in the field of domain adaptation. Our proposed dataset contains a large number of high- quality, manually annotated 3D ground truth bounding boxes for the LiDAR data, and 2D tightly fitting bounding boxes for the camera images. All ground truth boxes contain track identifiers to support object tracking. 
In addition, researchers can extract 2D amodal camera boxes from the 3D LiDAR boxes using our provided rolling shutter aware projection library. The multimodal ground truth facilitates research in sensor fusion that leverages both the LiDAR and the camera annotations. Our dataset contains around 12 million LiDAR box annotations and around 12 million camera box annota- tions, giving rise to around 113k LiDAR object tracks and around 250k camera image tracks. All annotations were created and subsequently reviewed by trained labelers using production-level labeling tools. We recorded all the sensor data of our dataset using an industrial-strength sensor suite consisting of multiple high- resolution cameras and multiple high-quality LiDAR sensors. Furthermore, we offer synchronization between the camera and the LiDAR readings, which offers interesting opportu- 1 nities for cross-domain learning and transfer. We release our LiDAR sensor readings in the form of range images. In addition to sensor features such as elongation, we provide each range image pixel with an accurate vehicle pose. This is the first dataset with such low-level, synchronized infor- mation available, making it easier to conduct research on LiDAR input representations other than the popular 3D point set format. Our dataset currently consists of 1000 scenes for training and validation, and 150 scenes for testing, where each scene spans 20 s. Selecting the test set scenes from a geographical holdout area allows us to evaluate how well models that were trained on our dataset generalize to previously unseen areas. We present benchmark results of several state-of-the-art 2D-and 3D object detection and tracking methods on the dataset. # 2. Related Work large-scale datasets are crucial for au- tonomous driving research. There have been an increasing number of efforts in releasing datasets to the community in recent years. Most autonomous driving systems fuse sensor readings from multiple sensors, including cameras, LiDAR, radar, GPS, wheel odometry, and IMUs. Recently released au- tonomous driving datasets have included sensor readings obtained by multiple sensors. Geiger et al. introduced the multi-sensor KITTI Dataset [9, 8] in 2012, which provides synchronized stereo camera as well as LiDAR sensor data for 22 sequences, enabling tasks such as 3D object detection and tracking, visual odometry, and scene flow estimation. The SemanticKITTI Dataset [2] provides annotations that associate each LiDAR point with one of 28 semantic classes in all 22 sequences of the KITTI Dataset. The ApolloScape Dataset [12], released in 2017, pro- vides per-pixel semantic annotations for 140k camera images captured in various traffic conditions, ranging from simple scenes to more challenging scenes with many objects. The dataset further provides pose information with respect to static background point clouds. The KAIST Multi-Spectral Dataset [6] groups scenes recorded by multiple sensors, in- cluding a thermal imaging camera, by time slot, such as daytime, nighttime, dusk, and dawn. The Honda Research Institute 3D Dataset (H3D) [19] is a 3D object detection and tracking dataset that provides 3D LiDAR sensor readings recorded in 160 crowded urban scenes. Some recently published datasets also include map infor- mation about the environment. 
For instance, in addition to multiple sensors such as cameras, LiDAR, and radar, the nuScenes Dataset [4] provides rasterized top-down semantic maps of the relevant areas that encode information about driveable areas and sidewalks for 1k scenes. This dataset has limited LiDAR sensor quality with 34K points per frame, KITTI NuScenes Argo Ours Scenes Ann. Lidar Fr. Hours 22 15K 1.5 1000 40K 5.5 113 22K 1 1150 230K 6.4 3D Boxes 2D Boxes 80K 80K 1.4M – 993k – 12M 9.9M Lidars Cameras Avg Points/Frame LiDAR Features 1 4 120K 1 1 6 34K 1 2 9 107K 1 5 5 177K 2 Maps Visited Area (km2) No – Yes 5 Yes 1.6 No 76 Table 1. Comparison of some popular datasets. The Argo Dataset refers to their Tracking dataset only, not the Motion Forecasting dataset. 3D labels projected to 2D are not counted in the 2D Boxes. Avg Points/Frame is the number of points from all LiDAR returns computed on the released data. Visited area is measured by diluting trajectories by 75 meters in radius and union all the diluted areas. Key observations: 1. Our dataset has 15.2x effective geographical coverage defined by the diversity area metric in Section 3.5. 2. Our dataset is larger than other camera+LiDAR datasets by different metrics. (Section 2) VFOV Range (restricted) Returns/shot TOP [-17.6◦, +2.4◦] 75 meters 2 F,SL,SR,R [-90◦, 30◦] 20 meters 2 Table 2. LiDAR Data Specifications for Front (F), Right (R), Side- Left (SL), Side-Right (SR), and Top (TOP) sensors. The vertical field of view (VFOV) is specified based on inclination (Section 3.2). limited geographical diversity covering an effective area of 5km2 (Table 1). In addition to rasterized maps, the Argoverse Dataset [5] contributes detailed geometric and semantic maps of the environment comprising information about the ground height together with a vector representation of road lanes and their connectivity. They further study the influence of the provided map context on autonomous driving tasks, including 3D tracking and trajectory prediction. Argoverse has a very limited amount raw sensor data released. See Table 1 for a comparison of different datasets. # 3. Waymo Open Dataset # 3.1. Sensor Specifications The data collection was conducted using five LiDAR sen- sors and five high-resolution pinhole cameras. We restrict the range of the LiDAR data, and provide data for the first two returns of each laser pulse. Table 2 contains detailed specifications of our LiDAR data. The camera images are captured with rolling shutter scanning, where the exact scan- F FL,FR SL,SR Size HFOV 1920x1280 ±25.2◦ 1920x1280 ±25.2◦ 1920x1040 ±25.2◦ Table 3. Camera Specifications for Front (F), Front-Left (FL), Front- Right (FR), Side-Left (SL), Side-Right (SR) cameras. The image sizes reflect the results of both cropping and downsampling the original sensor data. The camera horizontal field of view (HFOV) is provided as an angle range in the x-axis in the x-y plane of camera sensor frame (Figure 1). /\@. a) af tsersive tert \ tee taser FRONT } { Laser: SIDE_RIGHT Laser: REAR Vehicle > y-axis Cameras © — Zaxisis positive upwards FRONT_RIGHT SIDE RIGHT _ Figure 1. Sensor layout and coordinate systems. ning mode can vary from scene to scene. All camera images are downsampled and cropped from the raw images; Table 3 provides specifications of the camera images. See Figure 1 for the layout of sensors relevant to the dataset. # 3.2. Coordinate Systems This section describes the coordinate systems used in the dataset. 
All of the coordinate systems follow the right hand rule, and the dataset contains all information needed to transform data between any two frames within a run segment.

The Global frame is set prior to vehicle motion. It is an East-North-Up coordinate system: Up (z) is aligned with the gravity vector, positive upwards; East (x) points directly east along the line of latitude; North (y) points towards the north pole.

The Vehicle frame moves with the vehicle. Its x-axis is positive forwards, its y-axis is positive to the left, and its z-axis is positive upwards. A vehicle pose is defined as a 4x4 transform matrix from the vehicle frame to the global frame. The global frame can be used as a proxy to transform between different vehicle frames. Transforms among temporally close frames are very accurate in this dataset.

A Sensor frame is defined for each sensor. It is specified by a 4x4 transformation matrix that maps data from the sensor frame to the vehicle frame, also known as the "extrinsics" matrix. The LiDAR sensor frame has z pointing upward; the x-y axes depend on the LiDAR. The camera sensor frame is placed at the center of the lens. The x axis points down the lens barrel out of the lens. The z axis points up. The y/z plane is parallel to the image plane.

The Image frame is a 2D coordinate system defined for each camera image, where +x is along the image width (i.e. column index starting from the left), and +y is along the image height (i.e. row index starting from the top). The origin is the top-left corner.

The LiDAR Spherical coordinate system is based on the Cartesian coordinate system in the LiDAR sensor frame. A point (x, y, z) in the LiDAR Cartesian coordinate system can be uniquely transformed to a (range, azimuth, inclination) tuple in the LiDAR Spherical coordinate system by the following equations:

range = √(x² + y² + z²),    (1)

azimuth = atan2(y, x),    (2)

inclination = atan2(z, √(x² + y²)).    (3)

Figure 2. LiDAR label example. Yellow = vehicle. Red = pedestrian. Blue = sign. Pink = cyclist.

# 3.3. Ground Truth Labels

We provide high-quality ground truth annotations, both for the LiDAR sensor readings as well as the camera images. Separate annotations in LiDAR and camera data open up exciting research avenues in sensor fusion. For any label, we define length, width, and height to be the sizes along the x-axis, y-axis and z-axis respectively.

We exhaustively annotated vehicles, pedestrians, signs and cyclists in the LiDAR sensor readings. We labeled each object as a 7-DOF 3D upright bounding box (cx, cy, cz, l, w, h, θ) with a unique tracking ID, where cx, cy, cz represent the center coordinates, l, w, h are the length, width, height, and θ denotes the heading angle in radians of the bounding box. Figure 2 illustrates an annotated scene as an example.

In addition to the LiDAR labels, we separately exhaustively annotated vehicles, pedestrians and cyclists in all camera images. We annotated each object with a tightly fitting 4-DOF image axis-aligned 2D bounding box which is complementary to the 3D boxes and their amodal 2D projections. The label is encoded as (cx, cy, l, w) with a unique tracking ID, where cx and cy represent the center pixel of the box, l represents the length of the box along the horizontal (x) axis in the image frame, and w represents the width of the box along the vertical (y) axis in the image frame. We use this convention for length and width to be consistent with the 3D boxes. One interesting possibility that can be explored using the dataset is the prediction of 3D boxes using camera only.
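To make the label conventions and the spherical mapping of Eqs. (1)-(3) concrete, here is a small illustrative sketch. It is not the official Waymo Open Dataset tooling, which ships its own protos and utilities; the class and function names below are invented for the example and only encode the relationships described above.

```python
# Illustrative sketch of the Sec. 3.3 label formats and the Eq. (1)-(3) conversion (not official code).
import math
from dataclasses import dataclass

@dataclass
class LidarBox3D:
    """7-DOF upright box: center, length/width/height along x/y/z, heading in radians."""
    cx: float
    cy: float
    cz: float
    length: float
    width: float
    height: float
    heading: float
    track_id: str

@dataclass
class CameraBox2D:
    """4-DOF axis-aligned image box: center pixel, length along x, width along y."""
    cx: float
    cy: float
    length: float
    width: float
    track_id: str

def cartesian_to_spherical(x, y, z):
    """Map a LiDAR-frame point to (range, azimuth, inclination) per Eqs. (1)-(3)."""
    rng = math.sqrt(x * x + y * y + z * z)
    azimuth = math.atan2(y, x)
    inclination = math.atan2(z, math.sqrt(x * x + y * y))
    return rng, azimuth, inclination
```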
We use two levels for difficulty ratings, similar to KITTI, where the metrics for LEVEL 2 are cumulative and thus include LEVEL 1. The criteria for an example to be in a specific difficulty level can depend on both the human labelers and the object statistics. We emphasize that all LiDAR and all camera groundtruth labels were manually created by highly experienced human annotators using industrial-strength labeling tools. We have performed multiple phases of label verification to ensure a high labeling quality. # 3.4. Sensor Data LiDAR data is encoded in this dataset as range images, one for each LiDAR return; data for the first two returns is provided. The range image format is similar to the rolling shutter camera image in that it is filled in column-by-column from left to right. Each range image pixel corresponds to a LiDAR return. The height and width are determined by the resolution of the inclination and azimuth in the LiDAR sensor frame. Each inclination for each range image row is provided. Row 0 (the top row of the image) corresponds to the maximum inclination. Column 0 (left most column of the image) corresponds to the negative x-axis (i.e., the backward direction). The center of the image corresponds to the positive x-axis (i.e., the forward direction). An azimuth correction is needed to make sure the center of the range image corresponds to the positive x-axis. Each pixel in the range image includes the following properties. Figure 4 demonstrates an example range image. • Range: The distance between the LiDAR point and the origin in LiDAR sensor frame. • Intensity: A measurement indicating the return strength of the laser pulse that generated the LiDAR point, partly based on the reflectivity of the object struck by the laser pulse. • Elongation: The elongation of the laser pulse beyond its nominal width. Elongation in conjunction with in- tensity is useful for classifying spurious objects, such as dust, fog, rain. Our experiments suggest that a highly elongated low-intensity return is a strong indicator for a spurious object, while low intensity alone is not a sufficient signal. • No label zone: This field indicates whether the LiDAR point falls into a no label zone, i.e., an area that is ignored for labeling. • Vehicle pose: The pose at the time the LiDAR point is captured. (SIDE LEFT FRONTLEFT ©) FRONT §§ FRONTRIGHT gj SIDE_RIGHT 12.5% 10.0% 75% 5.0% 2.5% 0.0% 4 2 o 2 4 6 8 Figure 3. Camera LiDAR synchronization accuracy in milliseconds. The number in x-axis is in milli-seconds. The y-axis denotes the percentage of data frames. Figure 4. A range image example. It is cropped to only show the front 90◦. The first three rows are range, intensity, and elongation from the first LiDAR return. The last three are range, intensity, and elongation from the second LiDAR return. • Camera projection: We provide accurate LiDAR point to camera image projections with rolling shutter effect compensated. Figure 5 demonstrates that LiDAR points can be accurately mapped to image pixels via the pro- jections. Our cameras and LiDARs data are well-synchronized. The synchronization accuracy is computed as camera center time − frame start time− camera center offset/360◦ ∗ 0.1s The camera center time is the exposure time of the image’s center pixel. The frame start time is the start time of this data frame. The camera center offset is the offset of the +x axis of each camera sensor frame w.r.t. the backward direction of the vehicle. 
The camera center offset is 90◦for SIDE LEFT camera, 90◦ + 45◦ for FRONT LEFT camera etc. See Figure 3 for the synchronization accuracy for all the cameras. The synchronization error is bounded in [-6ms, 7ms] with 99.7% confidence, [-6ms, 8ms] with 99.9995% confidence. Camera images are JPEG compressed images. Rolling shutter timing information is provided with each image. Rolling shutter projection. For any given point p in the (4) Figure 5. An example image overlaid with LiDAR point projections. PHX MTV SF Day Night Dawn Train Validation 286 93 103 21 409 88 646 160 79 23 73 19 Table 4. Scene counts for Phoenix (PHX), Mountain View (MTV), and San Francisco (SF) and different time of the day for training and validation set. global frame, the rolling shutter camera captures the point at an unknown time t. We can estimate the vehicle pose at t assuming a constant velocity v and angular velocity ω. Using the pose at t, we can project p to the image and get an image point q, which uniquely defines a pixel capture time ˜t. We minimize the difference between t and ˜t by solving a single variable (t) convex quadratic optimization. The algorithm is efficient and can be used in real time as it usually converges in 2 or 3 iterations. See Figure 5 for an example output of the projection algorithm. # 3.5. Dataset Analysis The dataset has scenes selected from both suburban and urban areas, from different times of the day. See Table 4 for the distribution. In addition to the urban/suburban and time of day diversity, scenes in the dataset are selected from many different parts within the cities. We define a geographical coverage metric as the area of the union of all 150-meter di- luted ego-poses in the dataset. By this definition, our dataset covers an area of 40km2 in Phoenix, and 36km2 combined in San Francisco and Mountain View. See Figure 6 for the parallelogram cover of all level 13 S2 cells [1] touched by all ego poses from all scenes. The dataset has around 12M labeled 3D LiDAR objects, around 113k unique LiDAR tracking IDs, around 12M la- beled 2D image objects and around 254k unique image track- ing IDs. See Table 5 for counts of each category. # 4. Tasks We define 2D and 3D object detection and tracking tasks for the dataset. We anticipate adding other tasks such as segmentation, domain adaptation, behavior prediction, and imitative planning in the future. For consistent reporting of results, we provide pre-defined Vehicle Pedestrian Cyclist Sign 3D Object 3D TrackID 2D Object 2D TrackID 6.1M 60k 9.0M 194k 2.8M 23k 2.7M 58k 67k 620 81k 1.7k 3.2M 23k – – Table 5. Labeled object and tracking ID counts for different object types. 3D labels are LiDAR labels. 2D labels are camera image labels. training (798 scenes), validation (202 scenes), and test set splits (150 scenes). See Table 5 for the number of objects in each labeled category. The LiDAR annotations capture all objects within a radius of 75m. The camera image an- notations capture all objects that are visible in the camera images, independent of the LiDAR data. # 4.1. Object Detection # 4.1.1 3D Detection For a given frame, the 3D detection task involves predict- ing 3D upright boxes for vehicles, pedestrians, signs, and cyclists. Detection methods may use data from any of the Li- DAR and camera sensors; they may also choose to leverage sensor inputs from preceding frames. Accurate heading prediction is critical for autonomous driving, including tracking and behavior prediction tasks. 
Average precision (AP), commonly used for object detection, does not have a notion of heading. Our proposed metric, APH, incorporates heading information into a familiar object detection metric with minimal changes.

AP = 100 ∫_0^1 max{p(r′) | r′ ≥ r} dr,    (5)

APH = 100 ∫_0^1 max{h(r′) | r′ ≥ r} dr,    (6)

where p(r) is the P/R curve. Further, h(r) is computed similarly to p(r), but each true positive is weighted by heading accuracy defined as min(|θ̃ − θ|, 2π − |θ̃ − θ|)/π, where θ̃ and θ are the predicted heading and the ground truth heading in radians within [−π, π]. The metrics implementation takes a set of predictions with scores normalized to [0, 1], and samples a fixed number of score thresholds uniformly in this interval. For each score threshold sampled, it does a Hungarian matching between the predictions with score above the threshold and the ground truths to maximize the overall IoU between matched pairs. It computes precision and recall based on the matching result. If the gap between recall values of two consecutive operating points on the PR curve is larger than a preset threshold (set to 0.05), more p/r points are explicitly inserted between them with conservative precisions. Example: p(r): p(0) = 1.0, p(1) = 0.0, δ = 0.05. We add p(0.95) = 0.0, p(0.90) = 0.0, ..., p(0.05) = 0.0. The AP = 0.05 after this augmentation. This avoids producing an over-estimated AP with very sparse p/r curve sampling. This implementation can be easily parallelized, which makes it more efficient when evaluating on a large dataset. IoU is used to decide true positives for vehicle, pedestrian and cyclist. Box center distances are used to decide true positives for sign.

Figure 6. Parallelogram cover of all level 13 S2 cells touched by all ego poses in San Francisco, Mountain View, and Phoenix.

# 4.1.2 2D Object Detection in Camera Images

In contrast to the 3D detection task, the 2D camera image detection task restricts the input data to camera images, excluding LiDAR data. The task is to produce 2D axis-aligned bounding boxes in the camera images based on a single camera image. For this task, we consider the AP metric for the object classes of vehicles, pedestrians, and cyclists. We use the same AP metric implementation as described in Section 4.1.1 except that 2D IoU is used for matching.

# 4.2. Object Tracking

Multi-Object Tracking involves accurately tracking the identity, location, and optionally properties (e.g. shape or box dimensions) of objects in a scene over time. Our dataset is organized into sequences, each 20 seconds long, with multiple sensors producing data sampled at 10Hz. Additionally, every object in the dataset is annotated with a unique identifier that is consistent across each sequence. We support evaluation of tracking results in both 2D image view and 3D vehicle centric coordinates.

To evaluate the tracking performance, we use the multiple object tracking (MOT) metric [3]. This metric aims to consolidate several different characteristics of tracking systems – namely the ability of the tracker to detect, localize, and track the identities of objects over time – into a single metric to aid in direct comparison of method quality:

MOTA = 100 − 100 · Σ_t (m_t + fp_t + mme_t) / Σ_t g_t,    (7)

MOTP = 100 · Σ_{i,t} d_t^i / Σ_t c_t.    (8)

Let m_t, fp_t and mme_t represent the number of misses, false positives and mismatches. Let g_t be the ground truth count. A mismatch is counted if a ground truth target is matched to a track and the last known assignment was not the track. In MOTP, let d_t^i represent the distance between a detection and its corresponding ground truth match, and c_t be the number of matches found. The distance function used to calculate d_t^i is 1 − IoU for a matched pair of boxes. See [3] for the full procedure. Similar to the detection metrics implementation described in 4.1, we sample scores directly and compute an MOTA for each score cutoff. We pick the highest MOTA among all the score cutoffs as the final metric.

# 5. Experiments

We provide baselines on our dataset based on recent approaches for detection and tracking for vehicles and pedestrians. The same method can be applied to other object types in the dataset. We use 0.7 IoU for vehicles and 0.5 IoU for pedestrians when computing metrics for all tasks.

# 5.1. Baselines for Object Detection

3D LiDAR Detection. To establish a 3D object detection baseline, we reimplemented PointPillars [16], which is a simple and efficient LiDAR-based 3D detector that first uses a single layer PointNet [20] to voxelize the point cloud into the Birds Eye View, followed by a CNN region proposal network [25]. We trained the model on a single frame of sensor data with all LiDARs included. For vehicles and pedestrians we set the voxel size to 0.33m, the grid range to [−85m, 85m] along the X and Y axes, and [−3m, 3m] along the Z axis. This gives us a 512 × 512 pixel Birds Eye View (BEV) pseudo-image. We use the same convolutional backbone architecture as the original paper [16], with the slight exception that our Vehicle model matches our Pedestrian model in having a stride of 1 for the first convolutional block. This decision means both the input and output spatial resolutions of the models are 512 × 512 pixels, which increases accuracy at the cost of a more expensive model. We define anchor sizes (l, w, h) as (4.73m, 2.08m, 1.77m) for vehicles and (0.9m, 0.86m, 1.71m) for pedestrians. Both vehicles and pedestrians have anchors oriented to 0 and π/2 radians. To achieve good heading prediction, we used a different rotation loss formulation, using a smooth-L1 loss of the heading residual error, wrapping the result between [−π, π], with a Huber delta δ = 1/9.

In reference to the LEVEL definition in Section 3.3, we define the difficulty for the single frame 3D object detection task as follows. We first ignore all 3D labels without any LiDAR points. Next, we assign LEVEL 2 to examples where either the labeler annotates them as hard or the example has ≤ 5 LiDAR points. Finally, the rest of the examples are assigned to LEVEL 1. We evaluate models on the proposed 3D detection metrics for both 7-degree-of-freedom 3D boxes and 5-degree-of-freedom BEV boxes on the 150-scene hidden test set. For our 3D tasks, we use 0.7 IoU for vehicles and 0.5 IoU for pedestrians. Table 6 shows detailed results.

2D Object Detection in Camera Images. We use the Faster R-CNN object detection architecture [21], with ResNet-101 [11] as the feature extractor. We pre-trained the model on the COCO Dataset [17] before fine-tuning the model on our dataset. We then run the detector on all 5 camera images, and aggregate the results for evaluation. The resulting model achieved an AP of 63.7 at LEVEL 1 and 53.3 at LEVEL 2 on vehicles, and an AP of 55.8 at LEVEL 1 and 52.7 at LEVEL 2 on pedestrians.

# 5.2. Baselines for Multi-Object Tracking

3D Tracking. We provide an online 3D multi-object tracking baseline following the common tracking-by-detection paradigm, leaning heavily on the above PointPillars [16] models. Our method is similar in spirit to [22].
In this paradigm, tracking at each timestep t consists of running a detector to generate detections dn t } with n being the total number of detections, associating these detections to our tracks tm t = {t1 t } with m being the current number of tracks, and updating the state of these tracks tm t given the new information from detects dn t . Ad- ditionally, we need to provide a birth and death process to determine when a given track is Dead (not to be matched with), Pending (not confident enough yet), and Live (being returned from the tracker). For our baseline, we use our already trained PointPillars [16] models from above, 1 − IOU as our cost function, the Hungarian method [15] as our assignment function, and a Kalman Filter [13] as our state update function. We ignore detections with lower than a 0.2 class score, and set a min- imum threshold of 0.5 IoU for a track and a detect to be considered a match. Our tracked state consists of a 10 pa- rameter state tm t = {cx, cy, cz, w, l, h, α, vx, vy, vz} with a constant velocity model. For our birth and death process, we simply increment the score of the track with the associated detection score if seen, decrement by a fixed cost (0.3) if the track is unmatched, and provide a floor and ceiling of the score [0, 3]. Both vehicle and pedestrian results can be seen in Table 7. For both vehicles and pedestrians the mismatch percentage is quite low, indicating IoU with a Hungarian algorithm [15] is a reasonable assignment method. Most of the loss of MOTA appears to be due to misses that could either be due to localization, recall, or box shape prediction issues. 2D Tracking We use the visual multi-object tracking method Tracktor [14] based on a Faster R-CNN object de- tector that we pre-trained on the COCO Dataset [17] and then fine-tuned on our dataset. We optimized the parameters of the Tracktor method on our dataset and set σactive = 0.4, λactive = 0.6, and λnew = 0.3. The resulting Tracktor model achieved a MOTA of 34.8 at LEVEL 1 and 28.3 at LEVEL 2 when tracking vehicles. # 5.3. Domain Gap The majority of the scenes in our dataset were recorded in three distinct cities (Table 4), namely San Francisco, Phoenix, Mountain View. We treat Phoenix and Mountain View as one domain called Suburban (SUB) in this experi- ment. SF and SUB have similar number of scenes per (Table 4) and different number of objects in total (Table 8). As these two domains differ from each other in fascinating ways, the resulting domain gap in our dataset opens up exciting re- search avenues in the field of domain adaptation. We studied the effects of this domain gap by evaluating the performance of object detectors trained on data recorded in one domain on the training set and evaluated in another domain on the validation set. We used the object detectors described in Section 5.1. We filter the training and validation datasets to only contain frames from a specific geographic subset referred to as SF (San Francisco), SUB (MTV + Phoenix), or ALL (all data), and retrain and reevaluate models on the permutation of these splits. Table 9 summarizes our results. 
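Before turning to the domain gap results below, the track management logic of the Section 5.2 baseline can be summarized in a short sketch. This is illustrative only, not the authors' implementation: the dictionary-based track and detection records, the `iou_fn` and `kalman_update_fn` callbacks, the initialization of new tracks with the detection score, and the collapse of the Dead/Pending/Live distinction into a simple prune at zero score are all simplifying assumptions; only the 1 − IoU Hungarian matching and the score increment/decrement rules stated above are taken from the text.

```python
# Sketch of the tracking-by-detection bookkeeping in Sec. 5.2 (illustrative only).
import numpy as np
from scipy.optimize import linear_sum_assignment

SCORE_MIN, SCORE_MAX, UNMATCHED_PENALTY = 0.0, 3.0, 0.3
DET_SCORE_THRESH, MATCH_IOU_THRESH = 0.2, 0.5

def step(tracks, detections, iou_fn, kalman_update_fn):
    """One tracking step: associate detections to tracks, then update states and scores."""
    dets = [d for d in detections if d["score"] >= DET_SCORE_THRESH]

    matched = set()
    if tracks and dets:
        cost = np.array([[1.0 - iou_fn(t["box"], d["box"]) for d in dets] for t in tracks])
        rows, cols = linear_sum_assignment(cost)          # Hungarian assignment on 1 - IoU
        for r, c in zip(rows, cols):
            if 1.0 - cost[r, c] >= MATCH_IOU_THRESH:      # require at least 0.5 IoU
                kalman_update_fn(tracks[r], dets[c])      # constant-velocity state update
                tracks[r]["score"] = min(SCORE_MAX, tracks[r]["score"] + dets[c]["score"])
                matched.add(r)
                dets[c]["used"] = True

    for i, t in enumerate(tracks):                        # decay unmatched tracks
        if i not in matched:
            t["score"] = max(SCORE_MIN, t["score"] - UNMATCHED_PENALTY)

    tracks = [t for t in tracks if t["score"] > SCORE_MIN]            # death
    tracks += [{"box": d["box"], "score": d["score"]}                 # birth from unmatched dets
               for d in dets if not d.get("used", False)]
    return tracks
```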
For the 3D LiDAR- based vehicle object detector, we observed an APH reduction of 8.0 when training on SF and evaluating on SUB compared with training on SUB and evaluating on SUB, and an APH Metric Overall BEV (LEVEL 1/LEVEL 2) 0 - 30m 30 - 50m 50m - Inf Overall 3D (LEVEL 1/LEVEL 2) 0 - 30m 30 - 50m 50m - Inf Vehicle APH Vehicle AP 79.1/71.0 80.1/71.9 90.2/87.7 90.8/88.3 77.3/71.1 78.4/72.2 62.8/49.9 64.8/51.6 62.8/55.1 63.3/55.6 81.9/80.8 82.3/81.2 58.5/52.3 59.2/52.9 34.9/26.7 35.7/27.2 Pedestrian APH 56.1/51.1 70.0/63.8 Pedestrian AP 63.2/61.1 76.9/74.5 54.6/50.5 68.5/63.4 43.9/36.0 58.1/47.9 50.2/45.1 62.1/55.9 59.0/56.7 71.3/68.6 48.3/44.3 60.1/55.2 35.8/28.8 47.0/37.9 Table 6. Baseline APH and AP for vehicles and pedestrians. Metric MOTA Overall (LEVEL 1/LEVEL 2) Miss MOTP Mismatch FP 30 - 50m 50m - Inf Vehicle 3D 42.5/40.1 18.6/18.6 40.0/43.4 0.14/0.13 17.3/16.4 70.6/69.9 39.7/37.5 12.5/11.2 Pedestrian 3D 38.9/37.7 34.0/34.0 48.6/50.2 0.49/0.47 12.0/11.6 52.5/51.4 37.6/36.5 22.3/21.3 # MOTA by Range (LEVEL 1/LEVEL 2) 0 - 30m Table 7. Baseline multi-object tracking metrics for vehicles and pedestrians. reduction of 7.6 when training on SUB and evaluating on SF compared with training on SF and evaluating on SF. For 3D object detection of pedestrians, the results are interesting. When evaluating on SUB, training on either SF or SUB yield similar APH, while training on all data yields a 7+ APH improvement. This result does not hold when evaluating on SF. Training just on SF when evaluating on SF yields a 2.4 APH improvement as compared to training on the larger combined dataset, while training on SUB only and evaluating on SF leads to a 19.8 APH loss. This interesting behavior on pedestrian might be due to the limited amount pedestrians available in SUB (MTV + Phoenix). Overall, these results suggest a pronounced domain gap between San Francisco and Phoenix in terms of 3D object detection, which opens up exciting research opportunities to close the gap by utilizing semi-supervised or unsupervised domain adaptation algorithms. achieve better results without requiring data augmentation: we trained the same PointPillars model [16] from Section 5.1 on subsets of the training sequences and evaluated these models on the test set. To have meaningful results, these subsets are cumulative, meaning that the larger subsets of sequences contain the smaller subsets. The results for these experiments can be found in Table 10. Dataset %-age 10% 30% 50% 100% Vehicle Pedestrian 29.7/28.9 39.5/27.7 41.4/41.0 45.7/35.7 46.3/45.8 50.3/40.4 49.8/49.4 53.0/43.0 Table 10. The AP/APH at LEVEL 2 difficulty on the Validation set of Vehicles and Pedestrians as the dataset size grows. Each column uses a cumulative random slice of the training set with size determined by the percentage in the first row. SF(Tra) SUB(Tra) SF(Val) SUB(Val) Vehicle Pedestrian 2.9M 2.0M 1.9M 210K 691K 435K 555K 103K Table 8. 3D LiDAR object counts for each domain in training (Tra) and Validation (Val) sets. ALL/SUB/SF→SUB ALL/SF/SUB→SF Vehicle Pedestrian 45.3/44.0/36.7 25.7/20.6/19.9 50.3/49.2/42.5 46.0/47.6/29.7 Table 9. 3D object detection baseline LEVEL 2 APH results for domain shift on 3D vehicles and pedestrians on the Validation set. IoU thresholds: Vehicle 0.7, Pedestrian 0.5. # 5.4. Dataset Size A larger dataset enables research on data intensive algo- rithms such as Lasernet[18]. For methods that work well on small datasets such as PointPillars [16], more data can # 6. 
Conclusion We presented a large-scale multimodal camera-LiDAR dataset that is significantly larger, higher quality, more ge- ographically diverse than any existing similar dataset. It covers 76km2 when considering the diluted ego poses at a visibility of 150 meters. We demonstrated domain diversity among Phoenix, Mountain View and San Francisco data in this dataset, which opens exciting research opportunities for domain adaptation. We evaluated the performance of 2D and 3D object detectors and trackers on the dataset. The dataset and the corresponding code are publicly available; we will maintain a public leaderboard to keep track of progress in the tasks. In the future, we plan to add map information, more labeled and unlabeled data with more diversity focused on different driving behaviors and different weather condi- tions to enable exciting research on other self-driving related tasks, such as behavior prediction, planning and more diverse domain adaptation. # References [1] S2 geometry. http://s2geometry.io/. 5 [2] Jens Behley, Martin Garbade, Andres Milioto, Jan Quen- zel, Sven Behnke, Cyrill Stachniss, and Juergen Gall. Se- mantickitti: A dataset for semantic scene understanding of lidar sequences. In Proc. of the IEEE/CVF International Conf. on Computer Vision (ICCV), 2019. 2 [3] Keni Bernardin and Rainer Stiefelhagen. Evaluating multiple object tracking performance: The clear mot metrics. 2008. 6 [4] Holger Caesar, Varun Bankiti, Alex H. Lang, Sourabh Vora, Venice Erin Liong, Qiang Xu, Anush Krishnan, Yu Pan, Gi- ancarlo Baldan, and Oscar Beijbom. nuscenes: A multimodal dataset for autonomous driving. CoRR, abs/1903.11027, 2019. 2 [5] Ming-Fang Chang, John Lambert, Patsorn Sangkloy, Jagjeet Singh, Slawomir Bak, Andrew Hartnett, De Wang, Peter Carr, Simon Lucey, Deva Ramanan, and James Hays. Argoverse: 3d tracking and forecasting with rich maps. In The IEEE Con- ference on Computer Vision and Pattern Recognition (CVPR), June 2019. 2 [6] Yukyung Choi, Namil Kim, Soonmin Hwang, Kibaek Park, Jae Shin Yoon, Kyounghwan An, and In So Kweon. Kaist multi-spectral day/night data set for autonomous and assisted IEEE Transactions on Intelligent Transportation driving. Systems, 19(3). 2 [7] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In 2009 IEEE conference on computer vision and pattern recognition. 1 [8] Andreas Geiger, Philip Lenz, Christoph Stiller, and Raquel Urtasun. Vision meets robotics: The kitti dataset. Interna- tional Journal of Robotics Research (IJRR), 2013. 2 [9] Andreas Geiger, Philip Lenz, and Raquel Urtasun. Are we ready for autonomous driving? the kitti vision benchmark suite. In Conference on Computer Vision and Pattern Recog- nition (CVPR), 2012. 2 [10] Agrim Gupta, Piotr Dollar, and Ross Girshick. Lvis: A dataset for large vocabulary instance segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 1 [11] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceed- ings of the IEEE conference on computer vision and pattern recognition. 7 [12] Xinyu Huang, Xinjing Cheng, Qichuan Geng, Binbin Cao, Dingfu Zhou, Peng Wang, Yuanqing Lin, and Ruigang Yang. The apolloscape dataset for autonomous driving. In Proceed- ings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops. 2 [13] Rudolph Emil Kalman. A new approach to linear filtering and prediction problems. 
Transactions of the ASME–Journal of Basic Engineering, 82(Series D). 7 [14] Chanho Kim, Fuxin Li, and James M Rehg. Multi-object tracking with neural gating using bilinear lstm. In ECCV, 2018. 7 [15] Harold W. Kuhn and Bryn Yaw. The hungarian method for the assignment problem. Naval Res. Logist. Quart, 1955. 7 [16] Alex H Lang, Sourabh Vora, Holger Caesar, Lubing Zhou, Jiong Yang, and Oscar Beijbom. Pointpillars: Fast encoders for object detection from point clouds. CVPR, 2019. 6, 7, 8 [17] Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Doll´ar, and C Lawrence Zitnick. Microsoft coco: Common objects in context. In European conference on computer vision. 1, 7 [18] Gregory P Meyer, Ankit Laddha, Eric Kee, Carlos Vallespi- Gonzalez, and Carl K Wellington. Lasernet: An efficient probabilistic 3d object detector for autonomous driving. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 8 [19] Abhishek Patil, Srikanth Malla, Haiming Gang, and Yi-Ting Chen. The h3d dataset for full-surround 3d multi-object de- tection and tracking in crowded urban scenes. In Proceedings of IEEE Conference on Robotics and Automation (ICRA). 2 [20] Charles Ruizhongtai Qi, Hao Su, Kaichun Mo, and Leonidas J. Guibas. Pointnet: Deep learning on point sets for 3d classifica- tion and segmentation. 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017. 6 [21] Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun. Faster r-cnn: Towards real-time object detection with region proposal networks. In Advances in neural information pro- cessing systems. 7 [22] Xinshuo Weng and Kris Kitani. A baseline for 3d multi-object tracking. arXiv:1907.03961, 2019. 7 [23] Bolei Zhou, Hang Zhao, Xavier Puig, Sanja Fidler, Adela Bar- riuso, and Antonio Torralba. Scene parsing through ade20k dataset. In Proceedings of the IEEE conference on computer vision and pattern recognition. 1 [24] Yin Zhou, Pei Sun, Yu Zhang, Dragomir Anguelov, Jiyang Gao, Tom Ouyang, James Guo, Jiquan Ngiam, and Vijay Va- sudevan. End-to-end multi-view fusion for 3d object detection in lidar point clouds. 2019 Conference on Robot Learning (CoRL), 2019. [25] Y. Zhou and O. Tuzel. Voxelnet: End-to-end learning for point cloud based 3d object detection. In 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2018. 6
{ "id": "1907.03961" }
1912.03817
Machine Unlearning
Once users have shared their data online, it is generally difficult for them to revoke access and ask for the data to be deleted. Machine learning (ML) exacerbates this problem because any model trained with said data may have memorized it, putting users at risk of a successful privacy attack exposing their information. Yet, having models unlearn is notoriously difficult. We introduce SISA training, a framework that expedites the unlearning process by strategically limiting the influence of a data point in the training procedure. While our framework is applicable to any learning algorithm, it is designed to achieve the largest improvements for stateful algorithms like stochastic gradient descent for deep neural networks. SISA training reduces the computational overhead associated with unlearning, even in the worst-case setting where unlearning requests are made uniformly across the training set. In some cases, the service provider may have a prior on the distribution of unlearning requests that will be issued by users. We may take this prior into account to partition and order data accordingly, and further decrease overhead from unlearning. Our evaluation spans several datasets from different domains, with corresponding motivations for unlearning. Under no distributional assumptions, for simple learning tasks, we observe that SISA training improves time to unlearn points from the Purchase dataset by 4.63x, and 2.45x for the SVHN dataset, over retraining from scratch. SISA training also provides a speed-up of 1.36x in retraining for complex learning tasks such as ImageNet classification; aided by transfer learning, this results in a small degradation in accuracy. Our work contributes to practical data governance in machine unlearning.
http://arxiv.org/pdf/1912.03817
Lucas Bourtoule, Varun Chandrasekaran, Christopher A. Choquette-Choo, Hengrui Jia, Adelin Travers, Baiwu Zhang, David Lie, Nicolas Papernot
cs.CR, cs.AI, cs.LG
Published in IEEE S&P 2021
null
cs.CR
20191209
20201215
# In 42nd IEEE Symposium on Security and Privacy

# Machine Unlearning

Lucas Bourtoule*‡§, Varun Chandrasekaran*†, Christopher A. Choquette-Choo*‡§, Hengrui Jia*‡§, Adelin Travers*‡§, Baiwu Zhang*‡§, David Lie‡, Nicolas Papernot‡§

University of Toronto‡, Vector Institute§, University of Wisconsin-Madison†

Abstract—Once users have shared their data online, it is generally difficult for them to revoke access and ask for the data to be deleted. Machine learning (ML) exacerbates this problem because any model trained with said data may have memorized it, putting users at risk of a successful privacy attack exposing their information. Yet, having models unlearn is notoriously difficult. We introduce SISA training, a framework that expedites the unlearning process by strategically limiting the influence of a data point in the training procedure. While our framework is applicable to any learning algorithm, it is designed to achieve the largest improvements for stateful algorithms like stochastic gradient descent for deep neural networks. SISA training reduces the computational overhead associated with unlearning, even in the worst-case setting where unlearning requests are made uniformly across the training set. In some cases, the service provider may have a prior on the distribution of unlearning requests that will be issued by users. We may take this prior into account to partition and order data accordingly, and further decrease overhead from unlearning. Our evaluation spans several datasets from different domains, with corresponding motivations for unlearning. Under no distributional assumptions, for simple learning tasks, we observe that SISA training improves time to unlearn points from the Purchase dataset by 4.63×, and 2.45× for the SVHN dataset, over retraining from scratch. SISA training also provides a speed-up of 1.36× in retraining for complex learning tasks such as ImageNet classification; aided by transfer learning, this results in a small degradation in accuracy. Our work contributes to practical data governance in machine unlearning.

data motivates us to examine how this right to be forgotten can be efficiently implemented for ML systems. Because ML models potentially memorize training data, it is important to unlearn what they have learned from data that is to be deleted. This problem is tangential to privacy-preserving ML—enforcing ε-differential privacy with ε ≠ 0 does not alleviate the need for an unlearning mechanism. Indeed, while algorithms which are differentially private guarantee a bound on how much individual training points contribute to the model and ensure that this contribution remains small [13], [14], there remains a non-zero contribution from each point. If this was not the case, the model would not be able to learn at all (see § III-D). In contrast, forgetting requires that a particular training point have zero contribution to the model, which is orthogonal to the guarantee provided by differential privacy.

# I. INTRODUCTION

Many applications of machine learning (ML) involve analyzing data that is collected from individuals. This data is often sensitive in nature and could include information like medical records [1] or personal emails [2]. Moreover, data pipelines are often not static [3]: new data is collected regularly and incrementally used to further refine existing models following the online learning paradigm [4]. Conversely, data may also need to be deleted.
Recently introduced legislation, such as the General Data Protection Regulation (GDPR) in the European Union [5], the California Consumer Privacy Act [6] in the United States, and PIPEDA privacy legislation in Canada [7] include provisions that re- quire the so-called right to be forgotten [8]. This requirement, which has been one of the most controversial in the GDPR, mandates that companies take reasonable steps to achieve the erasure of personal data concerning [the individual] [9]. The unprecedented scale at which ML is being applied on personal *All student authors contributed equally and are ordered alphabetically. Having models forget necessitates knowledge of exactly how individual training points contributed to model parameter updates. Prior work showed this is possible when the learning algorithm queries data in an order that is decided prior to the start of learning [15] i.e., in the statistical query (SQ) learning setting [16]. When the dataset is instead queried adaptively, i.e., a given query depends on any queries made in the past, convergence of the approach is no longer guaranteed. In the adaptive setting, the divergence induced by this approach is bounded only for models which require a small number of iterations for learning. While it is true that any algorithm in the PAC setting can be converted to its equivalent in the SQ learning setting [16], efficient algorithms for SQ learning of complex models such as DNNs do not exist. A naive way to have such models provably forget is to re- train them from scratch. To avoid the large computational and time overhead associated with fully retraining models affected by training data erasure, our research seeks to hold ML to standards such as the right to be forgotten instead through the ability to unlearn. Given a trained model, unlearning assures the user that the model is no longer trained using the data which the user elected to erase. Put another way, unlearning guarantees that training on a point and unlearning it afterwards will produce the same distribution of models that not training on the point at all, in the first place, would have produced. Due to this strong definition, we do not consider the setting in which unlearning is used to mitigate poisoning attacks [17]–[19]; the guarantee we provide is far stricter than what would be needed for poisoning—i.e., that the loss of model accuracy due to the poisoning are mitigated. Instead, we 1 focus on mechanisms that provide the stronger privacy-minded unlearning guarantee described above in order to satisfy the right to be forgotten requirement. Our SISA training approach, short for Sharded, Isolated, Sliced, and Aggregated training, can be implemented with minimal modification to existing pipelines. First, we divide the training data into multiple disjoint shards such that a training point is included in one shard only; shards partition the data. Then, we train models in isolation on each of these shards, which limits the influence of a point to the model that was trained on the shard containing the point. Finally, when a request to unlearn a training point arrives, we need to retrain only the affected model. Since shards are smaller than the entire training set, this decreases the retraining time to achieve unlearning. However, by doing so, we are reducing the amount of data per shard, which may result in a weak learner [20]. 
In addition, rather than training each model on the entire shard directly, we can divide each shard’s data into slices and present slices incrementally during training. We save the state of model parameters before introducing each new slice, allowing us to start retraining the model from the last known parameter state that does not include the point to be unlearned—rather than a random initialization. Slicing further contributes to decreasing the time to unlearn, at the expense of additional storage. At inference, we use different strategies to aggregate the predictions of models trained on each shard: the simplest one is a majority vote over predicted labels. To demonstrate that SISA training handles streams of un- learning requests effectively, we analytically compute speed- ups achieved when the service provider processes unlearning requests sequentially (i.e., immediately upon a user revoking access to their data) or in batches (i.e., the service provider buffers a few unlearning requests before processing them). Our results show that SISA training achieves more advantageous trade-offs between accuracy and time to unlearn—compared to two baselines: (1) the naive approach of retraining from scratch, and (2) only train on a fraction of the original training set (i.e., only use one of the shards to predict). We first turn to simple learning tasks, such as deep networks trained on Purchase and SVHN. When processing 8 unlearning requests on Purchase and 18 unlearning requests on SVHN, we find that SISA training achieves a speed-up of 4.63× and 2.45× over the first baseline—through the combined effect of partitioning the data in 20 shards each further divided in 50 slices. This comes at a nominal degradation in accuracy of less than 2 percentage points. The second baseline is only viable when training on a large fraction 1 S of the data: it outperforms SISA training by a factor of S but quickly induces a large cost in accuracy as S increases. Compared to these baselines, we conclude that SISA training enables the service provider to find a more advantageous compromise between model accuracy and time to unlearn. Next, we turn to more complex learning tasks involving datasets such as Imagenet and deeper networks. With the same setup (i.e., number of shards and slices), we observe a speed-up of 1.36×, at the expense of a greater accuracy degradation (19.45 percentage points for 2 top-5 accuracy) for 39 requests1. We demonstrate that transfer learning can significantly reduce this accuracy degradation. We observe that speed-up gains from sharding exist when the number of unlearning requests is less than three times the number of shards. However, for complex learning tasks, increasing the number of shards results in a decrease in aggregate accuracy. Slicing, however, always provides a speed- up. While the number of unlearning requests may seem small, these are three orders of magnitude larger than those in prior work [21]. These savings in retraining times enable large organizations to benefit from economies of scale. When faced with different distributions of unlearning re- i.e., requests are not uniformly issued across the quests, dataset, we present a refined variant of our approach, which assumes prior knowledge of the distribution of unlearning requests. 
We validate it in a scenario that models a company operating across multiple jurisdictions with varying legislation and sensitivities to privacy, and accordingly varying distributions of unlearning requests from users based on publicly available information [21]. Knowing this distribution enables us to further decrease expected unlearning time by placing the training points that will likely need to be unlearned in a way that reduces retraining time. For simple learning tasks, the cost in terms of accuracy is either null or negligible, depending on the distribution of requests considered.

In summary, the contributions of this paper are:

• We formulate a new, intuitive definition of unlearning. Our definition also takes into account non-uniform distributions of unlearning requests.
• We introduce SISA training, a practical approach for unlearning that relies on data sharding and slicing to reduce the computational overhead of unlearning. We analytically derive the asymptotic reduction in time to unlearn points with sharding and slicing when the service provider processes requests sequentially or in batches.
• We demonstrate that sharding and slicing combined do not impact accuracy significantly for simple learning tasks, and that SISA training could be immediately applied to handle orders of magnitude more unlearning requests than what Google anticipates is required to implement the GDPR right to be forgotten [21].
• For complex learning tasks, we demonstrate that a combination of transfer learning and SISA training induces a nominal decrease in accuracy (∼ 2 percentage points) with improved retraining time.

1 For 4 requests, we observe an 8.01× speed-up for mini-Imagenet at the expense of 16.7 percentage points accuracy degradation.

# II. BACKGROUND ON MACHINE LEARNING

We provide rudiments of machine learning as they apply to neural networks. We chose to study neural networks because they almost always generate the largest computational costs and require investments in dedicated accelerators [22], [23]. Our efforts fall under the realm of supervised machine learning [24]. Tasks to be learned are defined in a space Z of the form X × Y, where X is the sample space and Y is the output space. For example, X could be thought of as the space of images and Y as the labels of the images. Given a dataset of input-output pairs (x, y) ∈ X × Y, the goal of a supervised learning algorithm is to find a model, i.e., a function F : X → Y that maps these inputs to outputs. The learning algorithm that produces this model uses an optimizer. It takes in a dataset, a hypothesis space, and an objective:

• Dataset: Consistent with the probably approximately correct (PAC) learning setting [25], we assume there is an underlying distribution on Z that describes the data; the learner has no direct knowledge of the distribution but has access to a dataset D that is drawn from it. This dataset D is further split into the training dataset Dtr and a holdout dataset called the test dataset Dte such that Dte ∪ Dtr = D and Dte ∩ Dtr = ∅.
• Hypothesis space: A hypothesis is a set of parameter values w, which, together with the model architecture F selected, represents one possible mapping Fw : X → Y between inputs and outputs. In our case, the hypothesis is a neural network and its parameters are the weights that connect its different neurons (see below).
• Objective: Also known as the loss function, the objective characterizes how good any hypothesis is by measuring its empirical risk on the dataset, i.e., it approximates the error of the model on the underlying task distribution, of which we only have a few samples. A common example is the cross-entropy loss, which measures how far a model’s outputs are from the label: ℓ(x, y) = − Σ_{i=1}^{n} y_i · log(F_w(x)_i), where n is the number of classes in the problem.

Given an architecture F, a model Fw is found by searching for a set of parameters w that minimize the empirical risk of Fw on the training set Dtr. Performance of the model is validated by measuring its accuracy on the test dataset Dte.

We experiment with our approach using neural networks and deep learning [26]. Deep neural networks (DNNs) are non-parametric functions organized as layers. Each layer is made of neurons—elementary computing units that apply a non-linear activation function to the weighted average of their inputs. Neurons from a given layer are connected with weights to neurons of the previous layer. The layout of these layers and the weight vectors that connect them constitutes the architecture of the DNN, while the value of each individual weight (collectively denoted by w) is to be learned. Weights are updated using the backpropagation algorithm [27]. The algorithm starts by assigning a random value to each weight. Then, a data point is sampled from the dataset and the loss function is computed to compare the model’s prediction to the data point’s label. Each model parameter value is updated by multiplying the gradient of the loss function with respect to the parameter by a small constant called the learning rate. This algorithm enables learning and gradually improves the model’s predictions as more inputs are processed.

# III. DEFINING UNLEARNING

A requirement of privacy regulations such as the GDPR or the CCPA is that individuals whose data is housed by organizations have the right to request for this data to be erased. This requirement poses challenges to current machine learning technologies. We define the unlearning problem by examining these challenges, which then leads us to a formal model of the unlearning problem. We identify objectives for an effective approach to unlearning, which we use to show the ineffectiveness of existing strawman solutions.

A. Why is Unlearning Challenging?

The reason unlearning is challenging stems from the complex and stochastic nature of training methods used to optimize model parameters in modern ML pipelines.

1. We have a limited understanding of how each data point impacts the model. There exists no prior work that measures the influence of a particular training point on the parameters of a model. While research has attempted to trace a particular test-time prediction through the model’s architecture and back to its training data [28], [29], these techniques rely on influence functions, which involve expensive computations of second-order derivatives of the model’s training algorithm. Further, it is not obvious how to modify such influence functions so that they map the effect of a single training point on model parameters for complex models such as DNNs. We later discuss techniques for differentially private learning, which seek to bound the influence any training point can have on model parameters, and explain how they are inadequate because the bound is always non-zero.

2. Stochasticity in training.
A great deal of randomness exists in the training methods for complicated models such as DNNs; small batches of data (e.g., with 32 points) are randomly sampled from the dataset, and the ordering of batches varies between different epochs, i.e., passes of the algorithm through the dataset. Further, training is often parallelized without explicit synchronization, meaning the inherent random ordering of parallel threads may make the training non-deterministic.

3. Training is incremental. Additionally, training is an incremental procedure where any given update reflects all updates that have occurred prior to it. For example, if a model is updated based on a particular training point (in a particular batch) at a particular epoch, all subsequent model updates will depend, in some implicit way, on that training point.

4. Stochasticity in learning. Intuitively, learning algorithms are designed to search for an optimal hypothesis in a vast hypothesis space. In the case of neural networks, this space contains all models that can be defined by setting the weights of a fixed neural network architecture. PAC learning theory suggests that the learned hypothesis is one of many hypotheses that minimize the empirical risk. For example, the common choice of optimizer for neural networks, stochastic gradient descent, is capable of converging to one of the many local minima for any non-convex loss function. Coupled with the stochasticity involved in training, it is very challenging to correlate a data point with the hypothesis learned from it.

Fig. 1: Unlearning (red arrow) is hard because there exists no function that measures the influence of augmenting the dataset D with point du and fine-tuning a model MA already trained on D to train (left blue arrow) a model MB for D + {du}. This makes it impossible to revert to model MA without saving its parameter state before learning about du. We call this model slicing (short green arrow). In the absence of slicing, one must retrain (curved green arrow) the model without du, resulting in a model MC that is different from the original model MA.

B. Formalizing the Problem of Unlearning

We formalize the unlearning problem as a game between two entities: an honest service provider S, and a user population U. The service provider could be a large organization that collects information from various individuals (such as a company or hospital). This data is curated in the form of a dataset D. The service provider uses this data for training and testing a machine learning model M in any way they desire. Any user u ∈ U can revoke access to their individual data du ⊂ D. Observe that du can be a single element in the dataset, or a set of elements. Within a finite period of time, the service provider has to erase the revoker’s data and modify any trained models M to produce M¬du, where M¬du is some model that could plausibly have been trained if du were not in D. In Definition III.1, we define plausibility according to the distribution of models output by the training algorithm. Further, S must convince u that M¬du is such a model—a defense akin to that of plausible deniability. Access to data may be revoked by users sequentially, but the service provider may choose to perform data erasing in a batched fashion, as discussed in § VII.
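The protocol above is what the rest of the paper compares against: when a request arrives, the provider must produce a model that could plausibly have been trained without the revoked data, and the most direct certificate is retraining from scratch on the reduced dataset. The sketch below is only an illustration of that baseline; `train_from_scratch` is a hypothetical placeholder for the provider's full training pipeline, not an API from the paper.

```python
from typing import Any, Callable, Dict, Hashable, Set

def naive_unlearn(dataset: Dict[Hashable, Any],
                  revoked_ids: Set[Hashable],
                  train_from_scratch: Callable[[Dict[Hashable, Any]], Any]):
    """Erase the revoked records and retrain on everything that remains.

    Retraining touches every remaining point, which is what makes this
    baseline provable but expensive for large datasets.
    """
    remaining = {k: v for k, v in dataset.items() if k not in revoked_ids}
    model = train_from_scratch(remaining)
    return remaining, model
```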
We illustrate this scenario in Figure 1. One can observe that, given a dataset D, it is possible to train one of several models (e.g., DNNs) that generalize well from this dataset unless the learning hypothesis class leads to a unique closed-form solution (e.g., a linear classifier). We denote two such models MA and MC. If we add one more data point du to the dataset D, we can train another model on this new dataset D′ in many ways. This includes using the parameters of MA to initialize a new model (rather than randomly initializing it) and continuing training from there on to obtain model MB. Since there is no efficient function that measures the influence of this one additional point du on the parameters in MB, it is very hard to invert the procedure unless a copy of MA had been previously saved. Later in § IV we will define this strategy, termed slicing. In the absence of slicing, the most convincing way to obtain plausible deniability, and ensure that the model is devoid of the influence of a particular training point du, is to retrain it from scratch without that particular point (keeping all other training hyperparameters the same), i.e., use D′ \ du to obtain the model MC in our example from Figure 1. It is conceivable that the parameters of MA and MC are similar (despite stochasticity in learning) and it is desired for their performance (in terms of test accuracy) to be comparable. However, the fact that model MC was obtained by training on D′ \ du from scratch provides a certificate to the data owner that their data share was indeed removed. This conveys a very strong notion of privacy.

Definition III.1. Let D = {d_i : i ∈ U} denote the training set collected from population U. Let D′ = D ∪ {du}. Let D_M denote the distribution of models learned using mechanism M on D′ and then unlearning du. Let D_real be the distribution of models learned using M on D. The mechanism M facilitates unlearning when these two distributions are identical.

We draw the attention of the reader to two key aspects of the definition. First, the definition captures inherent stochasticity in learning: it is possible for multiple hypotheses to minimize empirical risk over a training set. As illustrated by models MA and MC in Figure 1, two models having different parameters does not imply that they were trained with a different dataset. Conversely, two models trained with a different dataset do not necessarily have different parameters. Second, the definition does not necessarily require that the owner retrain the model M′ from scratch on D \ du, as long as they are able to provide evidence that model M′ could have been trained from scratch on D′ \ du. In our work, this evidence takes the form of a training algorithm which, if implemented correctly, guarantees that the distributions D_M and D_real are identical.

# C. Goals of Unlearning

The simple strategy we have discussed thus far, i.e., training a model from scratch on the dataset without the point being unlearned, is very powerful. We refer to this strategy as the baseline strategy through the rest of the paper. However, for large dataset sizes, such an approach will quickly become intractable (in terms of time and computational resources expended). For example, to be compliant with GDPR/CCPA, organizations will have to retrain models very frequently. Thus, any new strategy should meet the following requirements.

G1. Intelligibility: Conceptually, the baseline strategy is very easy to understand and implement.
Similarly, any un- learning strategy should be intelligible; this requirement ensures that the strategy is easy to debug by non-experts. G2. Comparable Accuracy: It is conceivable that the accuracy of the model degrades, even in the baseline, if (a) the fraction of training points that need to be unlearned becomes too large, or (b) prototypical points [30] are unlearned. Even if there is no component of the approach that explicitly promotes high accuracy, any unlearning strategy should strive to introduce a small accuracy gap in comparison to the baseline for any number of points unlearned. G3. Reduced Unlearning Time: The strategy should have provably lower time than the baseline for unlearning any number of points. G4. Provable Guarantees: Like the baseline, any new strategy should provide provable guarantees that any number of points have been unlearned (and do not influence model parameters). Additionally, such a guarantee should be intuitive and easy to understand for non-experts [31]. G5. Model Agnostic: The new strategy for unlearning should be general i.e., should provide the aforementioned guar- antees for models of varying nature and complexity. G6. Limited Overhead: Any new unlearning strategy should not introduce additional overhead to what are already computationally-intense training procedures. D. Strawman Solutions Based on the requirements discussed earlier, we propose several strawman candidates for an unlearning strategy. The goals specified (sometimes in parantheses) are the goals the strawman solutions do not meet. 1. Differential Privacy: Proposed by Dwork et al. [32], ε- differential privacy offers probabilistic guarantees about the privacy of individual records in a database. In our case, ε bounds the changes in model parameters that may be induced by any single training point. While several efforts [14], [33] make it possible to learn with differential privacy, this guaran- tee is different from what we wish to provide. We require that a point has no influence on the model once it has been unlearned. While differential privacy allows us to bound the influence any point may have on the model, that bound remains non-zero. This implies that there is a possibility that a point still has a small but non-zero influence on the model parameters. To guarantee unlearning, we would need to achieve ε-differential privacy with ε = 0. This would make it impossible for the algorithm to learn from the training data (G2). 2. Certified Removal Mechanisms: Other mechanisms relax the definition of differential privacy to provide certificates of data removal. This includes two concurrent proposals [34], [35] The mechanism by Guo et al. [34] uses a one-step Newton update [29]. While such a mechanism introduces a small residue, this is masked by adding noise (similar to approaches in differential privacy). However, as before, their guarantees are probabilistic, and different from what we wish to provide with SISA training. Additionally, to train non-linear models, they resort to pretraining models on public data (for which no guarantees are provided) or from differentially-private feature extractors. In summary, such a mechanism is effective for simple models such as linear regression models, which suggest that they fall short of achieving G5. 5 3. Statistical Query Learning: Cao et al. [15] model unlearning in the statistical query learning framework [16]. 
By doing so, they are able to unlearn a point when the learning algorithm queries data in an order decided prior to the start of learning. In this setting, it is possible to know exactly how individual training points contributed to model parameter updates. However, their approach is not general (G5) and does not easily scale to more complex models (such as those considered in this work). Indeed, these models are trained using adaptive statistical query algorithms which make queries that depend on all queries previously made. In this setting, the approach of Cao et al. [15] diverges in an unbounded way unless the number of queries made is small, which is not the case for the deep neural networks we experiment with.

4. Decremental Learning: Ginart et al. [36] consider the problem from a data-protection regulation standpoint. They present a formal definition of complete data erasure which can be relaxed into a distance-bounded definition. Deletion time complexity bounds are provided. They note that the deletion and privacy problems are orthogonal, which means deletion capability does not imply privacy nor vice versa. However, it is unclear if the approach presented (Quantized k-Means) is applicable (G5) and scalable (G6) for all model classes.

# IV. THE SISA TRAINING APPROACH

Our discussion thus far motivates why retraining from scratch while omitting data points that need to be unlearned is the most straightforward way to provide provable guarantees. However, this naive strategy is inefficient in the presence of large datasets or models with high capacity that take a long time to train. We present SISA (or Sharded, Isolated, Sliced, Aggregated) training to overcome these issues.

A. The SISA Approach to Training

As illustrated in Figure 2, SISA training replicates the model being learned several times, where each replica receives a disjoint shard (or subset) of the dataset—similar to current distributed training strategies [37], [38]. We refer to each replica as a constituent model. However, SISA training deviates from current strategies in the way incremental model updates are propagated or shared—there is no flow of information between constituent models. For example, if each constituent model is a DNN trained with stochastic gradient descent, then gradients computed on each constituent are not shared between different constituents; each constituent is trained in isolation. This ensures that the influence of a shard (and the data points that form it) is restricted to the model that is being trained using it. Each shard is further partitioned into slices, where each constituent model is trained incrementally (and iteratively, in a stateful manner) with an increasing number of slices. At inference, the test point is fed to each constituent and all the constituents’ responses are aggregated, similar to the case of ML ensembles [39].

2 Kearns [16] shows that any PAC learning algorithm has a corresponding SQ learning equivalent. However, efficient implementations of SQ equivalents for more complex algorithms do not exist, to the best of our knowledge.

Fig. 2 (legend: Ms — the s-th constituent model; Ds — the s-th data shard; Ds,r — the r-th slice in the s-th shard; highlighted — data to unlearn): SISA training: data is divided in shards, which are themselves divided into slices. One constituent model is trained on each shard by presenting it with incrementally many slices and saving its parameters before the training set is augmented with a new slice.
When data needs to be unlearned, only one of the constituent models whose shards contains the point to be unlearned needs to be retrained — retraining can start from the last parameter values saved before including the slice containing the data point to be unlearned. When a data point is to be unlearned, only the constituent model whose dataset contains this point is affected. More specifically, a data point is unlearned from a particular slice in a particular shard. Retraining can start from the last parameter state saved prior to including the slice containing the data point to be unlearned: only the models that are trained using the slice containing the unlearned point need to be retrained. We will describe each component in more detail in § IV-B. Observe that our analysis of unlearning however assumes that the retraining time grows linearly in the size of the dataset. We validate this assumption in § V-A. However, we make no assumptions about the nature of the constituent models or if the constituents are homogeneous (i.e., the same model or hypothesis class) or heterogeneous (i.e., different models or hypothesis class). Sharding is possible for any model or hypothesis class: it has no impact on how training is performed beyond the smaller set of data each model has access to. Slicing is possible for any iterative learning algorithm that is stateful: the algorithm should be such that it can continue to learn from its current state when presented with new data. Gradient descent naturally falls under that category. However, decision tree learning is a counter-example of a technique that does not benefit from slicing, because it greedily chooses a feature to add to the decision tree based on how well it splits the data according to a metric like Gini impurity. For this reason, when a new slice of data is added, the tree must be constructed again from scratch. In summary, slicing can be used for any model that is trained through gradient descent: e.g., logistic regression and neural networks, but also support vector machines in some cases [40]. The key requirement of our training strategy is that the 6 the updates obtained during the iterative training process are not exchanged between different constituents. Intuitively, such an approach may seem detrimental to improving the generalization capabilities of the model; each constituent is trained on a (significantly) smaller portion of the dataset, and may become a weak learner [20]. We evaluate this aspect in § VII, and discuss trade-offs of several aggregation approaches to mitigate this effect for different learning tasks. B. Techniques 1. Sharding: By dividing the data into disjoint fragments and training a constituent model on each smaller data fragment, we are able to distribute the training cost. While this means our approach naturally benefits from parallelism across shards, we do not take this into account in our analysis and experiments, out of fairness to the baseline of retraining a model from scratch (which could also be accelerated by distributing the computation across multiple machines). For the remainder of this section, we assume that we have no prior information associated with the probabilities with which each individual point might be unlearned. In such a scenario, a dataset D can be uniformly partitioned into S shards such that ∩k∈[S]Dk = ∅ and ∪k∈[S]Dk = D. For each shard Dk, a model (denoted Mk) is trained using the entirety of the data available in Dk. 
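To make the sharding step concrete, the snippet below sketches one way to implement uniform sharding with isolated constituent models. It is a minimal illustration under assumptions of our own (scikit-learn's SGDClassifier standing in for an arbitrary learner, and an arbitrary shard count), not the authors' reference implementation.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

def make_shards(num_points: int, num_shards: int, seed: int = 0):
    """Uniformly partition point indices into S disjoint shards."""
    rng = np.random.default_rng(seed)
    return np.array_split(rng.permutation(num_points), num_shards)

def train_constituents(X, y, shards):
    """Train one model per shard in isolation: no updates are exchanged
    between constituents, so a point only influences its own shard's model."""
    models = []
    for shard_idx in shards:
        clf = SGDClassifier(random_state=0)
        clf.fit(X[shard_idx], y[shard_idx])
        models.append(clf)
    return models
```

Unlearning a point then only requires locating its shard and retraining that single constituent.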
In § VIII, we explore the scenario where the distribution of unlearning requests is known to S. Observe that user u’s data point du can lie in each of the S shards with equal probability. Moreover, one of the parameters of the training can be whether each du is part of only one shard or several. For simplicity, we will assume that each du is part of only one shard, as this maximizes the savings in unlearning time. We discuss this further in the appendix. If the user desires for du to be unlearned, then the service provider has to (a) first locate the dataset (and shard) in which du is located, referred to as Dk, and (b) retrain from scratch the corresponding model on Dk \ du; this will result in a new model M′k. In comparison, the baseline would entail retraining the model from scratch on D \ du. Since |D| ≫ |Dk|, the time required for retraining (henceforth referred to as retraining time) in the baseline is far greater than in our proposal; our proposal provides an expected speed-up of S× (for a single unlearning request).

2. Isolation: Observe that, based on the proposal detailed earlier, the training of each shard occurs in isolation. By not performing a joint update, we potentially degrade the generalization ability of the overall model (comprising all constituents). However, we demonstrate that for appropriate choices of the number of shards, this does not occur in practice for certain types of learning tasks. Isolation is a subtle, yet powerful construction that enables us to give concrete, provable, and intuitive guarantees with respect to unlearning.

3. Slicing: By further dividing the data dedicated to each model (i.e., each shard) and incrementally tuning (and storing) the parameter state of a model, we obtain additional time savings. Specifically, each shard’s data Dk is further uniformly partitioned into R disjoint slices such that ∩i∈[R]Dk,i = ∅ and ∪i∈[R]Dk,i = Dk. We perform training for e epochs to obtain Mk as follows:

1) At step 1, train the model from random initialization using only Dk,1, for e1 epochs. Let us refer to the resulting model as Mk,1. Save the state of parameters associated with this model.
2) At step 2, train the model Mk,1 using Dk,1 ∪ Dk,2, for e2 epochs. Let us refer to the resulting model as Mk,2. Save the parameter state.
3) At step R, train the model Mk,R−1 using ∪iDk,i, for eR epochs. Let us refer to the resulting final model as Mk,R = Mk. Save the parameter state.
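The incremental procedure above applies to any learner that can be updated statefully and checkpointed. The sketch below is a minimal illustration, not the paper's code: it uses scikit-learn's partial_fit for the stateful updates and deep copies as the saved parameter states, and the slice layout, estimator, and epochs_per_slice value are assumptions made for the example.

```python
import copy
import numpy as np
from sklearn.linear_model import SGDClassifier

def train_shard_with_slices(X, y, slice_indices, classes, epochs_per_slice=5):
    """Train one constituent on its shard slice by slice, checkpointing the
    parameter state reached before each new slice is introduced."""
    model = SGDClassifier(random_state=0)
    checkpoints = []                      # checkpoints[r]: state after slices 0..r
    seen = np.array([], dtype=int)
    for idx in slice_indices:
        seen = np.concatenate([seen, idx])
        for _ in range(epochs_per_slice):
            model.partial_fit(X[seen], y[seen], classes=classes)
        checkpoints.append(copy.deepcopy(model))
    return model, checkpoints

def unlearn_from_shard(X, y, slice_indices, checkpoints, classes,
                       slice_id, point, epochs_per_slice=5):
    """Drop `point` from slice `slice_id`, restore the last checkpoint that
    never saw it, and resume training over the affected slices only."""
    slice_indices[slice_id] = slice_indices[slice_id][slice_indices[slice_id] != point]
    if slice_id > 0:
        model = copy.deepcopy(checkpoints[slice_id - 1])
        seen = np.concatenate(slice_indices[:slice_id])
    else:
        model = SGDClassifier(random_state=0)
        seen = np.array([], dtype=int)
    for r in range(slice_id, len(slice_indices)):
        seen = np.concatenate([seen, slice_indices[r]])
        for _ in range(epochs_per_slice):
            model.partial_fit(X[seen], y[seen], classes=classes)
        checkpoints[r] = copy.deepcopy(model)
    return model
```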
In order to train models with an without slicing for the same amount of time, we introduce the following relationship between the number of epochs with an without slicing. Let D = x be the number of points per shard, where N is the size of the dataset. Let e’ be the number o epochs without slicing; we seek to find the number of epochs e= an e; to train a model with R slices, where e; is the number of epochs required to train p samples. We make a simplifying assumption: we assume that each slice is traine: equally long i.e., Vi, e; = 7. We also assume that the training time is estimated solely based on the amount of training data (as detailed in § [V). e () e€ R . D /D= Se e LoR i=1 The speed-up provided by slicing comes at no expense beyond the overhead induced by storing the state of parameters before each slice is introduced in training. We explore these trade-offs in detail in Appendix C. 4. Aggregation: At inference time, predictions from various constituent models can be used to provide an overall predic- tion. The choice of aggregation strategy in SISA training is influenced by two key factors: 7 1) It is intimately linked to how data is partitioned to form shards: the goal of aggregation is to maximize the joint predictive performance of constituent models. 2) The aggregation strategy should not involve the training data (otherwise the aggregation mechanism itself would have to be unlearned in some cases). In the absence of knowledge of which points will be the subject of unlearning requests, there is no better strategy than to partition4 data uniformly and to opt for a voting strategy where each constituent contributes equally to the final outcome through a simple label-based majority vote. This naturally satisfies both requirements above. In cases where constituent models assign high scores to multiple classes rather than a single class, the majority vote aggregation loses information about the runner-up classes. In § VII-A, we evaluate a refinement of this strategy where we av- erage the entire prediction vectors (i.e., the post-softmax vector indicating the model’s confidence in predicting each class) and pick the label of the highest value. We also considered training a controller model that re-weights predictions made by constituent models [41], i.e., that learns which model is best for predicting on a given test point. However improvements in accuracy were modest and not worth the cost of now having to retrain the controller model if its own training data is the subject of an unlearning request later made by a user. Take-away. In summary, the techniques discussed here can provide a best-case speed-up of (R+1)S 2 × in terms of retraining time (for one unlearning request). However, our approach introduces several challenges. # C. Challenges We make no assumptions about (a) the nature of unlearning requests, (b) the nature of training algorithms, and (c) the nature of data distribution within both the shards and slices. This results in several challenges which we discuss below. 1) Weak Learners: We motivate the notion of weak learners with the concept of task complexity5 – defined as a function of (a) the input dimensionality, (b) the complexity of the model (in our case, DNN) used to solve a particular learning task, and (c) the number of samples per class available to the model for learning. 
Datasets such as MNIST [43] are considered to be simple because they (a) have inputs with few features, (b) are trained over deep neural networks with few hidden layers, and (c) have a large number of samples per class. Instead, Imagenet [44] is considered complex with over 150,000 features and 1000 classes: it requires neural networks with a large number of hidden layers (in the order of a 100s). Since each constituent model is trained on a small shard, these models could be weak learners [20], [45]: in other words, their accuracy will be lower than a single model trained on the entire dataset. This effect is more profound in complex learning tasks. The primary reason for why this accuracy gap could 4Partition applies to both shards and slices here. 5The notion of task complexity is subjective. For example, if MNIST is considered a simple task, few shot learning [42] of MNIST can be complex. exist is that when each constituent model is trained on very limited data which is also not prototypical [30]—especially when the number of samples per class is low; if the model has high-capacity (as is the case with DNNs), the model might overfit to the small training dataset. Some of this accuracy will be recovered by the aggregation operation. However, we instantiate our approach assuming that the constituent models trained on shards are all trained with the same architecture, and the same hyperparameters. Since this departs from prior work on ML ensembles, which typically involves an ensemble of heterogeneous models [46] trained with different techniques, we may not obtain as large benefits from aggregation as is typically the case. 2) Hyperparameter Search: Additionally, sharding and slicing may require that the service provider revisit some hy- perparameter choices made on the entire dataset. For instance, sharding and slicing may require training with a different number of epochs. Slicing could also negatively interact with batching when the service provider is using a large number of slices—because each slice will be smaller. If each constituent model requires a different set of hyper- parameters for optimal performance, then as the number of models (of the order O(SR)) increases, performing hyperpa- rameter tuning is a truly challenging problem. Training O(SR) models, depending on the hyperparameter search needed to optimize for these challenges, may introduce a computational overhead. We note that hyperparameters are shared across constituent models when data is split uniformaly across shards. In that case, one only needs to train O(R) models to tune the hyperparameters for slicing. Take-away. We revisit these challenges in § VII, discuss the various solutions we explored for each of the problems listed above, and highlight insights we gained from them. # V. MEASURING TIME A. Measuring time analytically Motivation. Measuring time experimentally is difficult be- cause both hardware and software introduce variance in measurements. To circumvent these variances, we measure unlearning time indirectly through the number of samples that one needs to retrain. We were able to validate, in a controlled experiment, the linear relationship between the number of (re)training samples and a model’s training time. This ex- periment was performed on a workstation equipped with a RTX2080 Ti accelerator and repeated 5 times to estimate variance. For the SVHN and Purchase datasets (described in § VI-A), the results in Figure 3 show that the number of samples to retrain is proportional to the retraining time. 
Note that we verify this relationship for the MNIST dataset as well, but omit the figure due to space constraints. Having established this relationship, the following analysis calculates the expectation of the number of data points needed for retraining, given an unlearning request, as the number of shards and slices vary.

Fig. 3 ((a) SVHN dataset, (b) Purchase dataset): We validate the linear relationship (within error) between training time and the number of samples trained on. Measurements are obtained on increments of 10% of the dataset size. We repeat 5 times to report mean and variance, on SVHN and Purchase.

B. Measuring Time for Sharding

Observe that for each unlearning request, a single constituent model is retrained when it arrives sequentially, whereas multiple models are retrained when the requests are batched.

1. Sequential Setting: In the sequential setting, we make two assumptions: (a) the training data is shuffled and evenly split into S shards, and (b) each unlearning request can require any of the S shards to be retrained, with equal probability, at any step. We explicitly calculate the expectation of the number of points needed to be used for retraining. To achieve our desired result, we make a simplifying assumption: the shard sizes stay roughly the same as points are removed due to unlearning.

If the sharding is uniform, then each model has (roughly) the same number of initial training data points N/S; it is obvious that the first unlearning request will result in retraining of N/S − 1 points for the one shard that is affected. For the second unlearning request, there will be two cases: the shard affected in the first unlearning request is affected again, which will result in retraining N/S − 2 data points with probability 1/S, or any other shard is impacted, resulting in retraining N/S − 1 data points with probability 1 − 1/S. Thus, inductively, we can see that for the i-th unlearning request, the probability that N/S − 1 − j points (for 0 ≤ j ≤ i − 1) are retrained is C(i−1, j) (1/S)^j (1 − 1/S)^{i−1−j}. By first summing over all possible combinations of points that are unlearned in a shard at a specific step, and then summing over all requests (K in total), we are able to obtain the expected number of points to be retrained (E[C]) as:

E[C] = Σ_{i=1}^{K} Σ_{j=0}^{i−1} (N/S − 1 − j) · C(i−1, j) · (1/S)^j · (1 − 1/S)^{i−1−j}

This expression can be simplified using the binomial theorem, as described in Appendix D, to obtain:

E[C] = (N/S + 1/(2S) − 1) K − K²/(2S)    (2)

An upper bound for the above equation can be obtained if we assume that after each unlearning request, the size of each shard remains constant. Thus, the cost of any step is N/S. We then have a linear bound for the total cost: NK/S; doubling the number of shards involves dividing the number of data points that need retraining by two. This bound captures the behavior of the expected cost when two conditions are met: (a) K → 0, and (b) N/S ≫ 1. Conversely, for K → N, the quadratic behavior becomes preponderant.

2. Batch Setting: Alternatively, the service provider S could aggregate unlearning requests into a batch, and service the batch. The cost of unlearning the batch is C = Σ_{j=1}^{S} (N/S − u_j) b_j, where (u_j)_{j∈{1,...,S}} are the random variables which indicate the number of times a shard of index j is impacted, and (b_j)_{j∈{1,...,S}} are the Bernoulli random variables indicating if a shard of index j is impacted by an unlearning request. We can show that (u_j)_{j∈{1,...,S}} follows a binomial distribution B(K, 1/S). Thus, the expected cost is:

E[C] = (1 − (1 − 1/S)^K) N − K    (3)

Asymptotically, E[C] ∼ N(1 − exp(−K/τ)), where τ = (−ln(1 − 1/S))^{−1}, when K → 0, and E[C] ∼ N − K when K → +∞. Thus, the benefits of sharding are most noticeable when K ≪ N (refer to Appendix E for more details).

C. Measuring Time for Slicing

Our analysis of slicing differs from the analysis we presented for sharding because, unlike shards, which are independent, a slice depends on all slices observed before it. Again, we distinguish two cases: in the first, the service provider processes unlearning requests sequentially, and in the second, requests are processed in batches.

1. Sequential Setting: The case where unlearning requests are processed as a stream is easier to analyze. Since we assume that the time for retraining a model is proportional to the number of points needed to be retrained, we need to find the expectation of the number of samples that will need to be retrained for a single unlearning request.
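Before turning to slicing, a quick numeric sanity check of the sharding expressions above may help. The script below simply evaluates equations (2) and (3); the dataset size, shard count, and request count are arbitrary illustrative values, not figures reported in the paper.

```python
def sequential_sharding_cost(N, S, K):
    # Equation (2): expected points retrained over K sequential requests.
    return (N / S + 1 / (2 * S) - 1) * K - K ** 2 / (2 * S)

def batch_sharding_cost(N, S, K):
    # Equation (3): expected points retrained for one batch of K requests.
    return (1 - (1 - 1 / S) ** K) * N - K

if __name__ == "__main__":
    N, S, K = 600_000, 20, 18                # illustrative values only
    naive = K * (N - 1)                      # retrain from scratch after each request
    seq, batch = sequential_sharding_cost(N, S, K), batch_sharding_cost(N, S, K)
    print(f"sequential: {seq:,.0f} points")  # close to the N*K/S linear bound
    print(f"batch:      {batch:,.0f} points")
    print(f"speed-up vs naive (sequential): {naive / seq:.1f}x")  # roughly S
```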
when K -+ 0, and E[C] ~ N — K when KK — +00. Thus, the benefits of sharding are most noticeable when Kk < N (refer to Appendix [E] for more details). C. Measuring Time for Slicing Our analysis of slicing differs from the analysis we pre- sented for sharding because unlike shards, which are indepen- dent, a slice depends on all slices observed before them. Again, we distinguish two cases: in the first, the service provider processes unlearning requests sequentially, and in the second, requests are processed in batches. 1. Sequential Setting: The case where unlearning requests are processed as a stream is easier to analyze. Since we assume that the time for retraining a model is proportional to the number of points needed to be retrained, we need to find the expectation of the number of samples that will need to be retrained for a single unlearning request. Recall from § IV that if an unlearning request happens in the rth slice, we need to retrain all the way to the Rth slice. From equation 1, the expected number of samples that need to retrain is: R 2e’ iD in (2, 1 EC] ee co (5+ sn) @) i=r which is an upper bound on the expected number of points to be retrained for a single unlearning request. The upper bound is due to the approximation we make about the number of points per slice 2 remaining constant throughout unlearning. In § [VI] we show that this approximation is acceptable when K « N. With R - +00, we have E[C] + 3e'D, which gives the maximum expected speed-up of 1.5x. With R = 1, we have E[C] = e’D (or no speed-up). 2. Batch Setting: As before, we denote the number of unlearn- ing requests processed in a batch as Ky. In this case, we need to find the expected minimum value over multiple draws of a random variable to compute the index of the slice from which 9 we will have to restart training. Each unlearning request can still be modelled as a random draw from a uniform distribution U (1, D). However, the model will now have to be retrained from the slice which contains an unlearning request and has the smallest index – all iterations of training on slices that follow it were impacted by the point included in this slice. To compute the minimum slice index among all slices af- fected by the K unlearning requests, we make the simplifying assumption that multiple unlearning requests are sampled from a uniform distribution U (1, D) with replacement. Although this assumption does not hold (the same point would not ask to be unlearned multiple times), we verify numerically that it does not significantly affect our estimate. It is intuitive to see why given that the number of requests is orders of magnitude smaller than the number of points in the training set. In Appendix G, we derive the moments of the minimum Xmin,n of n draws X1, ..., Xn from an uniform distribution U (a, b) E[min(X0, ..., Xn)] = na+b n+1 . This is useful to model the slice of minimum index rmin impacted by the batch of unlearning requests. We derive the expected cost to be: 2e’D R(R+1) 1 R(R+ i! =(Elrevin] — Elrmin])) (5) E[C] ,) a With K > R, we have E[C] ~ e’D, which gives no speed-up (but no degradation either). With K < R, E[C| decreases in je as K — 0. The case kK = 1 corresponds to the sequential setting. In that case, we showed a speed-up exists as soon as R > 1. Thus there exists a regime, for small values of K < R, where there is a significant speed-up. We detail the proof in Appendix # VI. IMPLEMENTATION DETAILS A. Datasets the datasets we used in Table I. 
Note that for the Purchase dataset, we follow a methodology similar to Shokri et al. [47, §6]; we curated the Purchase dataset by choosing the top 600 most purchased items based on the category attribute. For Mini-Imagenet, we follow the process of Vinyals et al. [48] to create a dataset for supervised classification, not few-shot classification. Dataset MNIST [43] Purchase [49] SVHN [50] CIFAR-100 [51] Imagenet [44] Mini-Imagenet [48] Dimensionality 28 × 28 600 32 × 32 × 3 32 × 32 × 3 224 × 224 × 3 224 × 224 × 3 Size 60000 250000 604833 60000 1281167 128545 # Classes 10 2 10 100 1000 100 TABLE I: Dataset characteristics. Datasets chosen encapsulate variety in the total number of samples, input dimensionality, samples per class. This allows us to explore a spectrum of task complexities—the first three are simple while the three remaining are complex. We will highlight the importance of this diversity in later subsections. SVHN Purchase MNIST ~ SS, > Preller, > / S 99S S Pe mS aS 08 > ES ES ES ese SCT 92 8 Beyee 97 2 98 8 8 8 90 § § & Ey 96 5 Ey 88 2 eer 5 975 3 95 2 2 86 < mpeg tee [OS 96 < f 3) xX ~ Ne Pr Nump®, 6 N bey 60 Me 30 > f Re 120 Or 120 x Re qued go on Re qued go NS) SISA (S=10) SISA (S=20) SISA (S=50) 1/S (S=10) 1/S (S=20) 1/S (S=50) Batch K Fig. 4: We compare the experimental accuracy of SISA training (with different number of shards) with the two baselines on three datasets: SVHN, Purchase, and MNIST. It is clear that SISA training provides higher accuracy than the 1 S fraction baseline, along with less retraining time than the batch K baseline especially when the number of unlearning request is small. B. Models & Experimental Setup For simplicity, we use the same model architectures for (a) the baselines and (b) the SISA training scheme. The details are presented in Table II. Observe that we consider a variety of deep neural networks with increasingly more hidden layers as well as varying layer sizes. We compare our approach against two baselines. They are: • batch K unlearning requests and retrain the entire model after every K unlearning requests. This is the same to the naive baseline of retraining the entire dataset (without the points to be unlearned) from scratch, in a batch setting. S fraction of the data and only retrain when • train on a 1 the point to be unlearned falls into this set. Dataset MNIST [43] Purchase [49] SVHN [50] CIFAR-100 [51] Imagenet [44] Mini-Imagenet [48] Model Architecture 2 conv. layers followed by 2 FC layers 2 FC layers Wide ResNet-1-1 ResNet-50 ResNet-50 ResNet-50 TABLE II: Salient features of DNN models used. From our analysis, we draw the following insights on the applicability of SISA training in practical settings: 1) We observe that the sharding component of SISA training induces accuracy degradation as (a) the number of un- learning requests increases, and (b) the number of shards increases (more so for complex tasks). This stems from the decrease in the number of samples per class per shard caused by both (a) and (b) (refer § VII-A). We run our experiments using P100 and T4 Nvidia GPUs, with 12 and 16 GB of dedicated memory, respectively. We use Intel Xeon Silver 4110 CPUs with 8 cores each and 192GB of Ram. The underlying OS is Ubuntu 18.04.2 LTS 64 bit.We use PyTorch v1.3.1 with CUDA 10.1 and Python 3.6. # VII. 
EVALUATION Our evaluation is designed to understand the limitations of SISA training in the scenario where the service provider has no information about the nature of the distribution of the unlearn- ing requests i.e., in the uniform setting. In § VIII, we utilize explicit knowledge of this distribution (modeled based on re- cent public insight from Google [21]) to verify that it improves retraining time. All code (and model checkpoints) are avail- able at https://github.com/cleverhans-lab/machine-unlearning. In this section, our experiments tease apart each component of SISA training. We perform an ablation study to answer the following questions: 1) What is the impact of sharding on accuracy for varying numbers of unlearning requests? 2) What is the impact of slicing on accuracy for varying numbers of unlearning requests? 3) Does SISA training improve the retraining time? 4) Do the findings from above hold for both simple and complex learning tasks? 2) We observe that slicing does not induce accuracy degra- dation so long as the number of epochs required for training are recalibrated (refer § VII-A). 3) Even in the worst-case scenario (with no knowledge of the distribution of unlearning requests), for a certain number of unlearning requests, a combination of sharding and slicing significantly outperforms the naive baseline. If the number of requests exceeds this threshold, SISA training gracefully degrades to the performance of the baseline. We can analytically obtain this threshold (refer § VII-B) based on our theoretical analysis in § V. 4) SISA training has advantages compared to both the batch K baseline, and the 1 S fraction baseline in terms of retraining time and accuracy respectively (refer § VII-A). A. The Big Picture To understand the gains, we stress test the approach to understand its benefits for a very large number of shards and a very large number of unlearning requests. In all our experiments (unless mentioned otherwise), SISA training is performed in the batch setting. 1) Impact of Sharding: As discussed earlier, increasing the number of shards (S) increases expected unlearning speed-up (refer § V) for a bounded number of requests. However, we wish to understand the impact of sharding on accuracy. To this 10 # > se S s | 8 < — SS 804 / / / Number 2604 — - of Slices F | —1 404 r T —4 — 8 204 — 16 64 100 954 umber of Slices 1 —2 —4 — 8 16 32 904 854 804 Accuracy(%) 755 20 30 40 50 60 Epochs 70-+ 10 25 30 (a) Accuracy vs. Number of epochs for SVHN dataset. (b) Accuracy vs. Number of epochs for Purchase dataset. Fig. 5: Performance of single model trained with data slicing. We train each model 5 times for each number of slices on the SVHN and Purchase datasets, respectively, and plot the history of validation accuracy and confidence intervals against the number of training epochs. For a small number of epochs, models with more slicing have lower accuracy, due to the fact that they have significantly less amount of data at the beginning. As the number of epochs grows and the accuracy reaches a plateau, the accuracy of models converges. end, we utilize SISA training for a large number of unlearning requests. Note that the batch K baseline is the same as SISA training with S = R = 1 in the batch setting. 
From our experiments with simple learning tasks involving the MNIST, SVHN, and Purchase datasets (refer Figure 4), we make the following observations: (a) by increasing S > 20, we observe a more noticeable decrease in accuracy that is greater than 5 percentage points (PPs), and (b) increasing the number of unlearning requests K > 3S degrades the retraining time to the batch K baseline case (refer Figures 12a and 12c in Appendix J). The former can be attributed to the decreasing volumes of data as the number of shards increases. If the number of shards is greater than 20, we observe that even simple learning tasks (such as those in Figure 4) tend to become more complex (refer § IV). This phenomenon can also be observed if one increases the number of unlearning requests—after unlearning, each shard has fewer data points. When we compare the accuracy vs. retraining time for SISA training with the 2 baselines, we observe that the batch K baseline has higher accuracy than SISA training, but at the expense of increased retraining time. As noted earlier, this is because this baseline is similar to SISA training with one shard and one slice (ergo losing corresponding speed-ups). The 1/S fraction baseline has lower retraining times, but lower accuracy due to the fact that it is trained on a fraction of the entire dataset. While these findings are consistent independently of the task, we discuss the varying impact on accuracy next.

Observe that despite having the same benefits over the batch K and 1/S fraction baselines, SISA training induces more accuracy degradation for complex tasks (such as Imagenet); from Figure 6, observe that SISA training is consistently better than the 1/S fraction baseline. However, with label aggregation, the average top-5 accuracy⁶ degradation is 16.14 PPs (batch K top-5 accuracy on Imagenet with ResNet-50 is 92.87%). To reduce the accuracy gap, we varied the aggregation strategy from label aggregation to prediction vector aggregation (refer § IV-B). From Figure 14a (in Appendix J), observe that this provides better accuracy, with average improvements of 1.68 PPs in terms of top-1 accuracy and 4.37 PPs in terms of top-5 accuracy (reducing the top-5 accuracy gap to 11.77 PPs). We make the same observations on the mini-Imagenet dataset.

⁶The average top-1 accuracy degradation is 18.76 PPs, when the batch K baseline is 76.15%.

Fig. 6: For complex learning tasks such as those involving (a) Imagenet and (b) Mini-Imagenet, SISA training introduces a larger accuracy gap in comparison to the batch K baseline. However, it is still more performant than the 1/S fraction baseline. Each constituent (and baseline) utilized the prediction vector aggregation strategy.

Since the number of samples per class per shard impacts the generalizability of the constituent models, we studied its impact on accuracy. From Figure 15 (in Appendix K), we conclude that the lower number of samples per class per shard (in complex tasks) induces more accuracy degradation. In § VII-C, we discuss real-world implications of this gap, and how they can be bridged. The key takeaway is that it is essential to ensure each shard has sufficiently many data points to ensure high accuracy at each constituent model.

Fig. 5: Performance of a single model trained with data slicing on (a) the SVHN and (b) the Purchase dataset (validation accuracy vs. number of training epochs). We train each model 5 times for each number of slices on the SVHN and Purchase datasets, respectively, and plot the history of validation accuracy and confidence intervals against the number of training epochs. For a small number of epochs, models with more slicing have lower accuracy, due to the fact that they have a significantly smaller amount of data at the beginning. As the number of epochs grows and the accuracy reaches a plateau, the accuracy of the models converges.

2) Impact of Slicing: From Figure 5, we observe that slicing does not have a detrimental impact on model accuracy in comparison to the approach without slicing if the training time is the same for both approaches.
We ensure that training time is the same by setting the number of epochs for slicing based on the calculations in § IV. Combined with the analysis in § V, it is clear that slicing reduces the retraining time so long as the storage overhead for storing the model state after adding a new slice is acceptable (which is linear in the number of slices).

Fig. 7: Combined speed-up induced by sharding and slicing in the batch setting, on (a) SVHN and (b) Purchase, when 0.003% of the dataset is to be unlearned. As the number of shards increases, speed-up increases near proportionally. On the other hand, increasing the number of slices has diminishing returns beyond a few slices.

3) Combination of Sharding and Slicing: From Figure 7, we observe that a combination of sharding and slicing induces the desired speed-up for a fixed number of unlearning requests (0.003% the size of the corresponding datasets). We utilize these datasets as they have sufficiently many points, resulting in us being able to issue more unlearning requests in the regime where we obtain speed-up (refer § VII-B). Observe that the speed-up grows rapidly with an increase in S, while increasing the number of slices R provides marginal gains in this regime. We elaborate upon this further in § VII-B and Appendix A.

# B. Understanding the Regime

The results presented in § VII-A are exhaustive, and cover a diverse number of shards, slices, unlearning requests, and task complexities. However, not all these configurations are interesting, as some have a detrimental impact on accuracy (as discussed above). For complex learning tasks, better partitioning and aggregation strategies can bridge the accuracy gap, but the findings we present here are generally applicable. By fixing the number of shards based on our earlier analysis, we can bound the accuracy degradation. However, we wish to understand if there are improvements in retraining time for any number of unlearning requests given this fixed number of shards. Our time analysis in § V suggests otherwise. Based on this analysis, we plot the retraining time as a function of the number of unlearning requests (refer to Figure 12 in Appendix B). We observe that for both datasets, the regime where the SISA training approach provides the most retraining benefits is when the number of unlearning requests (as a function of the size of the total dataset) is less than 0.075% of the dataset. If the number of unlearning requests exceeds this value, then the SISA training approach gracefully degrades to the performance of the batch K baseline. Next, we turn to slicing assuming that the number of shards S is fixed to 20, and observe that the regime where slicing provides gains is when the number of unlearning requests is less than 0.003% of the dataset (refer Figure 12 in Appendix B). Thus, to extract benefit from both approaches, the ideal number of unlearning requests would be the minimum of the two. Our findings validate that the speed-up exists as long as the number of unlearning requests K < 3S. While the regime we provide gains in (≤ 0.003%) may seem very small, recent work by Bertram et al. [21] shows that in practice, the number of unlearning requests (as a function of the size of the total dataset) is much smaller, and is in the order of 10−6. Additionally, large organizations operate on datasets which are much larger than those in our experiments; the (narrow) regime in which SISA training provides a benefit still provides significant cost reductions.
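To make the graceful degradation concrete, the following sketch evaluates the batch-setting expected retraining cost derived in Appendix E, E[C] = N(1 − (1 − 1/S)^K) − K, against the batch K baseline cost of N − K retrained points. It considers sharding only (slicing adds a further factor, per Appendices F–H); the dataset size is the SVHN size from Table I and S = 20 is the shard count fixed above, both used purely as example values.

```python
# Expected speed-up of sharding over the batch-K baseline, using the
# batch-setting cost from Appendix E. N and S are example values only.

def speedup(N, S, K):
    """baseline / SISA expected number of retrained points for a batch of K requests."""
    sisa = N * (1.0 - (1.0 - 1.0 / S) ** K) - K
    baseline = N - K
    return baseline / sisa

if __name__ == "__main__":
    N, S = 604833, 20  # SVHN-sized dataset, 20 shards (example values)
    for K in (1, S, 3 * S, 10 * S, 100 * S):
        print(f"K = {K:5d}: expected speed-up ~ {speedup(N, S, K):.2f}x")
```

For these values the expected speed-up is roughly 20× for a single request but falls to within a few percent of 1× once K exceeds about 3S, matching the threshold reported above.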
C. Bridging the Accuracy Gap

For complex learning tasks in the real world, the common approach is to utilize a base model trained on public data and utilize transfer learning to customize it towards the task of interest. We replicated such a setup by performing transfer learning using a base model trained on Imagenet (using the ResNet-50 architecture) to the CIFAR-100 dataset. We then perform SISA training and measure the accuracy gap between the baseline (S = 1) and S > 1 cases (refer Figure 8), in terms of both top-1 and top-5 accuracy (the latter is a more representative metric for this complex task). We observe that for this realistic deployment, at S = 10, the top-1 accuracy gap is reduced to ∼ 4 PPs, while the top-5 accuracy gap is reduced to < 1 PP. Additionally, in this transfer learning setting, the time analysis for unlearning still holds. Thus, performing transfer learning enables us to decrease the accuracy degradation induced by SISA training on complex tasks without (a) varying the hyperparameters of the constituent models, whilst (b) maintaining constituent model homogeneity.

Fig. 8: In the setting of transfer learning (from ImageNet to CIFAR-100), we observe a lower accuracy degradation induced by SISA training (with S > 1), in terms of both top-1 and top-5 accuracy as a function of the number of shards.

# VIII. DISTRIBUTIONAL KNOWLEDGE

In this section, we relax our assumptions and discuss how additional knowledge of the distribution of unlearning requests can be beneficial to the service provider. Specifically, we wish to understand (a) if we can estimate those data points that are more likely to be unlearned than others based on auxiliary information, and (b) how this knowledge can be used a priori to minimize the retraining time and accuracy degradation.

We believe that an owner's request for unlearning may vary depending on (a) how their data is used and by whom the data is used, (b) the general perception of the surrounding (geographic) population, and (c) incidents related to data misuse, etc. For example, machine learning models are not adept at dealing with bias; data owners from those populations who are biased against may wish to request for their data to be erased. By grouping this data, we can further reduce unlearning costs; however, it may also harm fair predictions. Future work should consider these ethical implications.

Fig. 9: Example of how a service provider aware of the distribution of unlearning requests (i.e., which shards contain points likely versus unlikely to be unlearned) may adapt its sharding to outperform uniform sharding.

As before, we assume the existence of a data owner u ∈ U, and the data point generated by u to be du. We denote the probability of user u requesting to have their data erased as p(u). By aggregating users who are likely to request data erasure into shards of small sizes, intuitively, we would be able to reduce the retraining time. To illustrate, consider a population split between two groups: the first group H having a high probability p_H of being unlearned and the second group L having a low probability p_L of being unlearned, with p_H > p_L. If we follow the uniform sharding of § IV, each shard will contain points from both groups H and L.
Because points from H are very likely to be unlearned, and each shard contains at least a few points from group H, it is very likely that all shards will have to be unlearned—even if the number of unlearning requests is low. This scenario is illustrated in Figure 9. Alternatively, if we know the population will follow such a distribution of unlearning requests, we can adapt our sharding strategy to concentrate all points from members of group H in relatively few partitions. This strategy ultimately reduces the total number of shards whose models need to be retrained. We now apply this intuition to a more realistic scenario.

# A. Realistic Scenario

Modeling realistic distributions of unlearning requests is a challenging proposition; prior work in this space is limited. Without data to determine the parameters for a known distribution, such as a Gaussian, or to learn an underlying distribution, we design the following scenario based on insight from the recent work published by Google [21]. Specifically, we propose a scenario where we assume that an organization with access to data records from a large number of data owners operates across various countries, with owners in each region having varied privacy expectations. We assume the existence of N countries; the dataset D comprises per-country datasets Dc for each country c.⁷ We have ∩cDc = ∅ and ∪cDc = D. Each data owner in the country c has a fixed probability (denoted pc) for issuing a data erasure request, i.e., ∀du ∈ Dc, p(u) = pc. Thus, the data owner issuing an unlearning request can be modeled as a Bernoulli trial.

⁷Each per-country dataset is conceptually similar to a shard; the distinction is made for easier discussion.

It is important to note that this technique can be generalized to any distribution so long as it is known by the service provider. Specifically, after selecting a distribution ν that models the unlearning requests from a population U, we randomly sample from this distribution to assign the probability p(u) with which each u ∈ U wishes to perform data erasure. Each data point is still a Bernoulli trial; however, the sum χi of these independent Bernoulli trials can be modelled by a Poisson binomial distribution. Armed with this knowledge, we can evaluate the expected number of unlearning requests for a shard Di of n points as E(χi) = np, where p is the mean of the probabilities p(u) over the points du ∈ Di, and E(χi) denotes the expectation with which shard Di is unlearned. By selecting those users u ∈ U and their corresponding data elements du to create shard Di such that E(χi) < C for any constant C ≤ 1, we expect to not have to retrain a model trained using shard Di. DNNs typically require large data volumes for training; we attempt to create few data shards, with more data in each shard.

In all experiments we describe in this section, we conceptualize a scenario with N = 3 countries – c1, c2 and c3, such that pc1 = 3 × 10−6, pc2 = 3 × 10−5, and pc3 = 6 × 10−6. Additionally, |Dc1| = 0.7717 × |D|, |Dc2| = 0.1001 × |D| and |Dc3| = 0.1282 × |D|.

B. Distribution-Aware Sharding

a) Approach: This motivates distribution-aware sharding, where the service provider can create shards in a way so as to minimize the time required for retraining. We discuss one such approach in Algorithm 1, under the following assumptions: (a) the distribution of unlearning requests is known precisely, and (b) this distribution is relatively constant over a time interval. Recall that each data point du ∈ D has an associated probability p(u) with which it may be erased.
We first sort the data points in the order of their erasure probability, and assign points to a shard Di until the desired value of E(χi) is reached. Once this value is exceeded, we create a new shard⁸ Di+1 and restart the procedure with the residual data D \ Di. By enforcing a uniform cumulative probability of unlearning across shards, Algorithm 1 naturally aggregates the training points that are likely to require unlearning into fewer shards that are also smaller in size.

⁸Observe that this strategy holds even when the entire dataset D is replaced by the dataset for a particular country Dc.

# Algorithm 1 Distribution-Aware Sharding
Input: Dataset D, constant C
1: procedure ShardData(D, C)
2:   sort {du} in increasing order of p(u)
3:   i ← 0
4:   create empty shard Di
5:   for j ← 0 to |D| do
6:     remove du with lowest p(u) from D
7:     Di = Di ∪ du
8:     if E(χi) ≥ C then
9:       Di = Di \ du
10:      i ← i + 1
11:      create empty shard Di
12:      Di = Di ∪ du
13:    end if
14:  end for
15: end procedure

b) Results: As done for our motivating example, Figure 10 plots the number of points to be retrained with respect to the number of unlearning requests for both uniform and distribution-aware sharding. In expectation, the distribution-aware strategy decreases the number of points to be retrained. Yet, because this strategy creates shards of unequal size, we also need to evaluate the accuracy of our predictions aggregated across constituent models. For the parameters specified above, we find that our approach generates 19 shards. We find that the aggregate achieves about 94.4% prediction accuracy in the regime of unlearning requests we consider, which is one percentage point lower than uniform sharding, at 95.7%. This result means that distribution-aware sharding incurs a trade-off of accuracy for decreased unlearning overhead. We leave to future work the exploration of alternatives to majority voting aggregation that would cope with such imbalanced shard sizes.

Fig. 10: Number of points (variance shaded) of the SVHN dataset that need to be retrained for uniform and distribution-aware (Poisson binomial) sharding where users have varying probability of revoking access to their data.

# IX. DISCUSSION

Unlearning in the Absence of Isolation. Conceptually, SISA training borrows elements from distributed training and ensemble learning. As discussed earlier, the divide from ensemble learning stems from the fact that each constituent model in SISA training is obtained in isolation. Ensemble learning approaches utilize boosting algorithms [52], even for ensembles of neural networks [53], to enhance accuracy.

Data Replication. Empirical evidence suggests that beyond a certain data volume (i.e., shard size), there is performance degradation in each constituent model when datasets are too small, or if the learning task is complex. One way to alleviate this problem is through data replication. However, one must decide which data point is replicated such that the accuracy of the constituent models is increased. This selection is a challenging problem [54]. One must also factor in if access to the replicated data point is likely to be revoked; if that is the case, one would intuitively wish to reduce the replication of such a point to limit overhead on unlearning.
Understanding these trade-offs is of interest and is future work. Is All Data Useful? Neural networks require large datasets. However, not all of this data is useful [55]. As discussed earlier, understanding the importance of each data point to- wards the final model parameters learned is a challenging problem. A relatively simpler problem is that of core-set selection, where the objective is to choose a subset of the dataset that will produce a hypothesis that is as performant as one obtained while using the entire dataset [56], [57]. Core- sets can help reduce the cost of learning. Consequently, they can also improve the cost of unlearning. Verified Unlearning. We assume that the service provider per- forms unlearning in an honest manner. Our approach provides an intuitive and provable guarantee under the assumption that the data owner believes the service provider, due to the inher- ent stochasticity in learning (refer Figure 1). To increase user confidence, the service provider could release code. One could imagine that authorities relevant to the enforcement of the right to be forgotten could audit the code base to validate the implementation of SISA training. This is sufficient, because of the design of SISA training, to demonstrate that the point to be unlearned would not influence model parameters anymore. However, under certain adversarial settings, this trust need not be the case. As stated earlier, there is no way to measure the influence of a data point on the model parameters. Even worse, these models are often proprietary. Thus, understanding if the unlearning procedure can be verified, similar to approaches in other domains [58]–[60], is of merit. # X. CONCLUSIONS Our work illustrates how to design learning algorithms that incorporate the need to later unlearn training data. We show how simple strategies like SISA training can empower users to expect that their data be completely removed from a model in a timely manner. While our work was primarily motivated by privacy, it is easy to see how unlearning can be a first step towards achieving model governance. We hope this will spur follow-up work on effective ways to patch models upon identifying limitations in datasets used to train them. # ACKNOWLEDGMENTS We would like to thank the reviewers for their insightful feedback, and Henry Corrigan-Gibbs for his service as the point of contact during the revision process. This work was supported by CIFAR through a Canada CIFAR AI Chair, and by NSERC under the Discovery Program and COHESA strategic research network. We also thank the Vector Institute’s sponsors. Varun was supported in part through the following US National Science Foundation grants: CNS-1838733, CNS- 1719336, CNS-1647152, CNS-1629833 and CNS-2003129. # REFERENCES [1] Y. Liu, K. K. Gadepalli, M. Norouzi, G. Dahl, T. Kohlberger, S. Venugopalan, A. S. Boyko, A. Timofeev, P. Q. Nelson, G. Corrado, J. Hipp, L. Peng, and M. Stumpe, “Detecting cancer metastases on gigapixel pathology images,” arXiv, Tech. Rep., 2017. [Online]. Available: https://arxiv.org/abs/1703.02442 [2] M. X. Chen, B. N. Lee, G. Bansal, Y. Cao, S. Zhang, J. Lu, J. Tsay, Y. Wang, A. M. Dai, Z. Chen et al., “Gmail smart compose: Real-time assisted writing,” arXiv preprint arXiv:1906.00080, 2019. [3] X. He, J. Pan, O. Jin, T. Xu, B. Liu, T. Xu, Y. Shi, A. Atallah, R. Her- brich, S. Bowers et al., “Practical lessons from predicting clicks on ads at facebook,” in Proceedings of the Eighth International Workshop on Data Mining for Online Advertising. 
ACM, 2014, pp. 1–9. [4] S. Shalev-Shwartz et al., “Online learning and online convex optimiza- tion,” Foundations and Trends® in Machine Learning, vol. 4, no. 2, pp. 107–194, 2012. [5] A. Mantelero, “The eu proposal for a general data protection regulation and the roots of the ‘right to be forgotten’,” Computer Law & Security Review, vol. 29, no. 3, pp. 229–235, 2013. [6] “Bill text,” https://leginfo.legislature.ca.gov/faces/billTextClient.xhtml? bill id=201720180AB375. [7] O. of the Privacy Commissioner of Canada, “Announcement: Pri- vacy commissioner seeks federal court determination on key issue for canadians’ online reputation,” https://www.priv.gc.ca/en/opc-news/ news-and-announcements/2018/an 181010/, Oct 2018. [8] S. Shastri, M. Wasserman, and V. Chidambaram, “The seven sins of personal-data processing systems under gdpr,” USENIX HotCloud, 2019. [9] “Lex access to european union law,” https://eur-lex.europa.eu/eli/reg/ 2016/679/2016-05-04. inversion attacks that exploit confidence information and basic countermeasures,” in Proceedings of the 22nd ACM SIGSAC Conference on Computer and Communications Security. ACM, 2015, pp. 1322–1333. [11] N. Carlini, C. Liu, U. Erlingsson, J. Kos, and D. Song, “The secret sharer: Evaluating and testing unintended memorization in neural net- works,” in Proceedings of the 28th USENIX Conference on Security Symposium. USENIX Association, 2019. [12] C. Dwork, A. Roth et al., “The algorithmic foundations of differential privacy,” Foundations and Trends® in Theoretical Computer Science, vol. 9, no. 3–4, pp. 211–407, 2014. [13] K. Chaudhuri, C. Monteleoni, and A. D. Sarwate, “Differentially private empirical risk minimization,” Journal of Machine Learning Research, vol. 12, no. Mar, pp. 1069–1109, 2011. [14] M. Abadi, A. Chu, I. Goodfellow, H. B. McMahan, I. Mironov, K. Talwar, and L. Zhang, “Deep learning with differential privacy,” in Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security. ACM, 2016, pp. 308–318. forget with machine unlearning,” in 2015 IEEE Symposium on Security and Privacy. [Online]. Available: https: IEEE, 2015, pp. 463–480. //ieeexplore.ieee.org/document/7163042/ [16] M. Kearns, “Efficient noise-tolerant learning from statistical queries,” Journal of the ACM (JACM), vol. 45, no. 6, pp. 983–1006, 1998. [17] B. Nelson, M. Barreno, F. J. Chi, A. D. Joseph et al., “Exploiting machine learning to subvert your spam filter,” in Proceedings of the 1st Usenix Workshop on Large-Scale Exploits and Emergent Threats. USENIX Association, 2008. [18] B. I. Rubinstein, B. Nelson, L. Huang, A. D. Joseph, S.-h. Lau, S. Rao, N. Taft, and J. D. Tygar, “Antidote: Understanding and defending against poisoning of anomaly detectors,” in Proceedings of the 9th ACM SIGCOMM Conference on Internet Measurement, 2009. [19] B. Biggio, B. Nelson, and P. Laskov, “Poisoning attacks against support vector machines,” arXiv preprint arXiv:1206.6389, 2012. on [20] M. Kearns, “Thoughts hypothesis boosting,” Unpublished manuscript, vol. 45, p. 105, 1988. [21] T. Bertram, E. Bursztein, S. Caro, H. Chao, R. C. Feman et al., “Five years of the right to be forgotten,” in Proceedings of the Conference on Computer and Communications Security, 2019. [22] A. Krizhevsky, I. Sutskever, and G. E. Hinton, “Imagenet classification with deep convolutional neural networks,” in Advances in neural infor- mation processing systems, 2012, pp. 1097–1105. [23] N. P. Jouppi, C. Young, N. Patil, D. Patterson, G. Agrawal, R. Bajwa, S. Bates, S. Bhatia, N. Boden, A. 
Borchers et al., “In-datacenter performance analysis of a tensor processing unit,” in 2017 ACM/IEEE 44th Annual International Symposium on Computer Architecture (ISCA). IEEE, 2017, pp. 1–12. [24] S. Shalev-Shwartz and S. Ben-David, Understanding machine learning: From theory to algorithms. Cambridge university press, 2014. 15 [25] L. G. Valiant, “A theory of the learnable,” in Proceedings of the sixteenth annual ACM symposium on Theory of computing. ACM, 1984, pp. 436–445. [26] Y. LeCun, Y. Bengio, and G. Hinton, “Deep learning,” nature, vol. 521, no. 7553, pp. 436–444, 2015. [27] D. E. Rumelhart, G. E. Hinton, and R. J. Williams, “Learning repre- sentations by back-propagating errors,” nature, vol. 323, no. 6088, pp. 533–536, 1986. [28] R. D. Cook and S. Weisberg, “Characterizations of an empirical influ- ence function for detecting influential cases in regression,” Technomet- rics, vol. 22, no. 4, pp. 495–508, 1980. [29] P. W. Koh and P. Liang, “Understanding black-box predictions via in- fluence functions,” in Proceedings of the 34th International Conference on Machine Learning-Volume 70. JMLR. org, 2017, pp. 1885–1894. [30] B. Kim, C. Rudin, and J. A. Shah, “The bayesian case model: A gen- erative approach for case-based reasoning and prototype classification,” in Advances in Neural Information Processing Systems, 2014. [31] J. H. Saltzer and M. D. Schroeder, “The protection of information in computer systems,” Proceedings of the IEEE, vol. 63, no. 9, pp. 1278– 1308, 1975. [32] C. Dwork, “Differential privacy,” Encyclopedia of Cryptography and Security, pp. 338–340, 2011. [33] K. Chaudhuri and C. Monteleoni, “Privacy-preserving logistic regres- sion,” in Advances in neural information processing systems, 2009, pp. 289–296. [34] C. Guo, T. Goldstein, A. Hannun, and L. van der Maaten, “Cer- tified data removal from machine learning models,” arXiv preprint arXiv:1911.03030, 2019. [35] A. Golatkar, A. Achille, and S. Soatto, “Eternal sunshine of the spotless net: Selective forgetting in deep networks,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020, pp. 9304–9312. [36] A. Ginart, M. Y. Guan, G. Valiant, and J. Zou, “Making AI forget you: Data deletion in machine learning,” CoRR, vol. abs/1907.05012, 2019. [Online]. Available: http://arxiv.org/abs/1907.05012 [37] J. Dean, G. Corrado, R. Monga, K. Chen, M. Devin, M. Mao, M. Ran- zato, A. Senior et al., “Large scale distributed deep networks,” in Advances in neural information processing systems, 2012. [38] T. Ben-Nun and T. Hoefler, “Demystifying parallel and distributed deep learning: An in-depth concurrency analysis,” ACM Computing Surveys (CSUR), vol. 52, no. 4, p. 65, 2019. [39] T. G. Dietterich, “Ensemble methods in machine learning,” in Interna- Springer, 2000, pp. tional workshop on multiple classifier systems. 1–15. [40] S. Shalev-Shwartz, Y. Singer, N. Srebro, and A. Cotter, “Pegasos: Primal estimated sub-gradient solver for svm,” Mathematical programming, vol. 127, no. 1, pp. 3–30, 2011. [41] N. Shazeer, A. Mirhoseini, K. Maziarz, A. Davis, Q. Le, G. Hinton, and J. Dean, “Outrageously large neural networks: The sparsely-gated mixture-of-experts layer,” arXiv preprint arXiv:1701.06538, 2017. [42] J. Snell, K. Swersky, and R. Zemel, “Prototypical networks for few-shot learning,” in Advances in neural information processing systems, 2017, pp. 4077–4087. [43] Y. Lecun, L. Bottou, Y. Bengio, and P. 
Haffner, “Gradient-based learning applied to document recognition,” Proceedings of the IEEE, vol. 86, pp. 2278 – 2324, 12 1998. [44] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei, “ImageNet: A Large-Scale Hierarchical Image Database,” in CVPR09, 2009. [45] Y. Freund and R. E. Schapire, “A decision-theoretic generalization of on-line learning and an application to boosting,” Journal of computer and system sciences, vol. 55, no. 1, pp. 119–139, 1997. [46] D. Opitz and R. Maclin, “Popular ensemble methods: An empirical study,” Journal of artificial intelligence research, vol. 11, pp. 169–198, 1999. [47] R. Shokri, M. Stronati, C. Song, and V. Shmatikov, “Membership inference attacks against machine learning models,” in 2017 IEEE Symposium on Security and Privacy (SP). [48] O. Vinyals, C. Blundell, T. Lillicrap, D. Wierstra et al., “Matching networks for one shot learning,” in Advances in neural information processing systems, 2016, pp. 3630–3638. [49] C. O. Sakar, S. O. Polat, M. Katircioglu, and Y. Kastro, “Real-time prediction of online shoppers’ purchasing intention using multilayer perceptron and lstm recurrent neural networks,” Neural Computing and Applications, vol. 31, no. 10, pp. 6893–6908, 2019. [50] Y. Netzer, T. Wang, A. Coates, A. Bissacco, B. Wu, and A. Ng, “Reading digits in natural images with unsupervised feature learning,” NIPS, 01 2011. [51] A. Krizhevsky, “Learning multiple layers of features from tiny images,” 2009. [52] R. E. Schapire, “A brief introduction to boosting,” in Ijcai, vol. 99, 1999, pp. 1401–1406. [53] H. Schwenk and Y. Bengio, “Boosting neural networks,” Neural com- putation, vol. 12, no. 8, pp. 1869–1887, 2000. [54] B. Settles, “Active learning literature survey,” University of Wisconsin- Madison Department of Computer Sciences, Tech. Rep., 2009. [55] S.-J. Huang, R. Jin, and Z.-H. Zhou, “Active learning by querying infor- mative and representative examples,” in Advances in neural information processing systems, 2010, pp. 892–900. [56] C. Baykal, L. Liebenwein, I. Gilitschenski, D. Feldman, and D. Rus, “Data-dependent coresets for compressing neural networks with applications to generalization bounds,” CoRR, vol. abs/1804.05345, 2018. [Online]. Available: http://arxiv.org/abs/1804.05345 [57] O. Sener and S. Savarese, “Active learning for convolutional neural networks: A core-set approach,” arXiv preprint arXiv:1708.00489, 2017. [58] C. Tan, L. Yu, J. B. Leners, and M. Walfish, “The efficient server audit problem, deduplicated re-execution, and the web,” in Proceedings of the 26th Symposium on Operating Systems Principles. ACM, 2017, pp. 546–564. [59] R. S. Wahby, Y. Ji, A. J. Blumberg, A. Shelat, J. Thaler, M. Walfish, and T. Wies, “Full accounting for verifiable outsourcing,” in Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security. ACM, 2017, pp. 2071–2086. [60] S. T. Setty, R. McPherson, A. J. Blumberg, and M. Walfish, “Making argument systems for outsourced computation practical (sometimes).” in NDSS, vol. 1, no. 9, 2012, p. 17. [61] https://math.stackexchange.com/questions/786392/ expectation-of-minimum-of-n-i-i-d-uniform-random-variables. # APPENDIX A. Simulation of SISA training Time Analysis To get a more intuitive understanding of unlearning time de- scribed in § V, we randomly generate K unlearning requests. We then compute the amount of data that needs to be retrained by determining the shard and slice each unlearning request maps to. 
We then deduce the number of samples that need to be retrained on, to achieve unlearning through SISA training. By varying K between 1 and 500, we visualize the speed-up achieved by SISA training as a function of the number of unlearning requests made. We repeat the experiment 100 times to obtain variance. The results are plotted in Figure 11.

Fig. 11: This plot shows the relationship between K and unlearning time (which is proportional to the amount of data to retrain) on (a) SVHN and (b) Purchase, where S is shown in the legend and R is set to 20. It is plotted in log-log scale for ease of visualization.

B. Individual Contributions due to Slicing and Sharding

We study the unlearning cost (i.e., the number of points needed to be retrained) as a function of the number of unlearning requests, slices, and shards. We plotted the speed-up induced by SISA training in Figure 7, but the number of unlearning requests is set to a constant for ease of visualization. Therefore, Figure 12 is plotted to show the effect of all three variables.

Fig. 12: Impact of sharding and slicing on retraining time in a batch setting, as measured by the changes induced in the number of points needed for retraining (which is a proxy for retraining time): (a) impact of sharding and (b) impact of slicing for SVHN; (c) impact of sharding and (d) impact of slicing for Purchase. Observe that below a particular number of unlearning requests, both sharding and slicing provide noticeable improvements. Afterward, both gracefully degrade to the performance of the naive baseline.

C. Costs Associated With Storage

The slicing introduced by SISA training trades disk storage for unlearning speed-up. This is supported by the fact that the cost of GPU accelerators is higher than the cost of storage disks. For instance, storage costs $0.026/GB/month on Google Cloud, $0.023/GB/month on Amazon Web Services, and $0.058/GB per month on Azure at the time of writing. Instead, renting the cheapest GPUs starts at $0.25/hour on Google Cloud, $0.526/hour on Amazon Web Services, and $0.90/hour on Azure. To limit usage of GPU accelerators, it is already a common practice to regularly checkpoint models during training. Hence, we believe the overhead from slicing will not be a drawback: in most cases, it will suffice to change the order in which data is presented during training because checkpointing is already in place.
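Before turning to the analytical derivations below, here is a minimal sketch of the Monte Carlo simulation described in Appendix A. It counts raw samples touched rather than epoch-weighted work (i.e., it ignores the per-slice epoch recalibration of § IV and the slight shrinkage of shards as points are removed), so it is proportional to, rather than equal to, the analytic costs; N, S, R, K and the number of repetitions are example values, not the exact configuration behind Figure 11.

```python
# Simplified Monte Carlo estimate of the SISA retraining cost for K requests.
# Each request falls in one shard; only that shard retrains, from the hit slice
# onward, with each incremental step seeing all data up to that slice.
import numpy as np

def simulate_cost(N, S, R, K, rng):
    """Approximate number of samples re-processed for K unlearning requests."""
    slice_size = (N // S) // R
    cost = 0
    for _ in range(K):
        hit_slice = rng.integers(1, R + 1)  # uniformly random slice index in 1..R
        cost += sum(r * slice_size for r in range(hit_slice, R + 1))
    return cost

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    N, S, R, K = 604833, 20, 20, 100          # example values (SVHN-sized)
    runs = [simulate_cost(N, S, R, K, rng) for _ in range(100)]
    print("mean cost:", np.mean(runs), "+/-", np.std(runs))
```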
D. Sequential Time Analysis of Sharding

Proof:

1. Assumption: At each step and for all shards, the probability that an unlearning request affects that specific shard is approximately equal to 1/S. The intuition is as follows: if many points from a specific shard are deleted as unlearning occurs, the number of such (unlearnable) points decreases and they are therefore less likely to be deleted. Conversely, if few points from that shard are deleted, the proportion of those points increases as points from other shards are deleted. Thus, they become more likely to be deleted.

2. Intuition: The size of the shard that is affected by the first request is always N/S. For the second request, it can be either N/S with probability (1 − 1/S) if the request does not affect the same shard, or (N/S − 1) with probability 1/S if it does. For the third request it can be N/S, (N/S − 1), or (N/S − 2). Note that there are two ways to get (N/S − 1): either from a shard that had (N/S − 1) points before the previous request and that was not affected by it, or from a shard that had N/S points before the previous request and that was affected by it.

3. Size of the retraining set: To model this behavior, we define the event E_{i,j} as the i-th request received landing on a shard s containing N/S − j points, with j ∈ {0, . . . , i − 1}. The associated cost is N/S − j − 1.

4. Associated probability: The probability of E_{i,j} given a configuration of the i − 1 previous requests, i.e., which specific subset of the i − 1 requests corresponds to those that landed on s, is (1/S)^j (1 − 1/S)^{i−1−j}. The first term of the product means that shard s was affected j times, and the second term means that another shard (but not s) was affected i − 1 − j times. However, there are \binom{i-1}{j} possible configurations of the j requests that landed on shard s. Thus the probability of E_{i,j} is \binom{i-1}{j} (1/S)^j (1 − 1/S)^{i−1−j}.

5. Expected cost: The expected cost of the i-th unlearning request can be decomposed on the family of events (E_{i,j})_j (with only j varying between 0 and i − 1) that partitions the universe of possible outcomes:

E[C_i] = \sum_{j=0}^{i-1} \binom{i-1}{j} \left(\frac{1}{S}\right)^j \left(1 - \frac{1}{S}\right)^{i-1-j} \left(\frac{N}{S} - j - 1\right)   (6)

To obtain the total cost, we sum the costs of all unlearning requests, which is to say we sum over i between 1 and K:

E[C] = \sum_{i=1}^{K} \sum_{j=0}^{i-1} \binom{i-1}{j} \left(\frac{1}{S}\right)^j \left(1 - \frac{1}{S}\right)^{i-1-j} \left(\frac{N}{S} - j - 1\right)

We can use the fact that j \binom{i-1}{j} = (i-1) \binom{i-2}{j-1} and apply the binomial theorem to both inner sums after reindexing the second inner sum. This gives E[C_i] = N/S − 1 − (i−1)/S, and hence

E[C] = K \left(\frac{N}{S} - 1\right) - \frac{K(K-1)}{2S}

E. Batched Time Analysis of Sharding

Proof: Let S denote the number of shards and K the number of points in the batch. Let (s_i)_{i∈{1,...,K}} be random variables that give the index of the shard impacted by each point in the batch. We assume that those variables are i.i.d. and that:

∀i ∈ {1, . . . , K}, s_i ∼ U(0, S)   (7)

We can define (h_j)_{j∈{1,...,S}}, which are Bernoulli random variables whose value is 1 when shard j is impacted. We have:

h_j = 0 ⟺ ∀i ∈ {1, . . . , K}, s_i ≠ j   (8)

Thus P(h_j = 0) = (1 − 1/S)^K. We define the total cost C of retraining as the number of points that need to be retrained while processing the batch, C = \sum_{j=1}^{S} h_j |D_j|, where |D_j| is the number of points remaining in shard j. To obtain |D_j|, we define (u_j)_{j∈{1,...,S}}, the random variables that count the number of times each shard is affected. These variables count the number of successes in a repetition of independent Bernoulli experiments, namely counting the number of times s_i = j, when i varies from 1 to K. Thus:
∀j ∈ {1, . . . , S}, u_j ∼ B(K, 1/S)   (9)

Since |D_j| = N/S − u_j, we have

C = \sum_{j=1}^{S} h_j \left(\frac{N}{S} - u_j\right) = \sum_{j=1}^{S} \left(\frac{N h_j}{S} - u_j h_j\right)

By construction, h_j = 0 ⟺ u_j = 0. Thus u_j h_j = u_j and:

C = \sum_{j=1}^{S} \left(\frac{N h_j}{S} - u_j\right)

Using the linearity of the expected value and the expected values of Bernoulli and binomial random variables,

E[C] = N \left(1 - \left(1 - \frac{1}{S}\right)^K\right) - K   (10)

F. Sequential Time Analysis of Slicing

Proof: When a model is trained on an entire shard (i.e., without slicing) of size D = N/S for e′ epochs, the number of samples seen by the training algorithm is proportional to e′D. Recall from § IV that we modified the number of epochs e when slicing is applied (refer to equation 1). For each slice indexed r, we use data from the first r slices (i.e., rD/R samples), training the model for 2e′/(R+1) epochs on that step. If an unlearning request hits the r-th slice, we need to retrain the model from the r-th slice to the R-th slice, leading to the following retraining cost (i.e., number of samples):

C(r) = \sum_{r'=r}^{R} \frac{2e'}{R+1} \cdot \frac{r' D}{R} = \frac{2e'D}{R(R+1)} \left(\frac{R(R+1)}{2} - \frac{r(r-1)}{2}\right)

We model the index of a slice hit by an unlearning request by the random variable r ∼ U({1, . . . , R}). The expected cost can be expressed as:

E[C] = \frac{2e'D}{R(R+1)} \left(\frac{R(R+1)}{2} - \frac{1}{2}\left(E[r^2] - E[r]\right)\right)   (12)

We can compute the two first moments of r:

E[r] = \sum_{k=1}^{R} k \, P(r = k) = \frac{R+1}{2}, \qquad E[r^2] = \sum_{k=1}^{R} k^2 \, P(r = k) = \frac{(R+1)(2R+1)}{6}

And plug them into the expected cost:

E[C] = \frac{2e'D}{R(R+1)} \left(\frac{R(R+1)}{2} - \frac{1}{2}\left(\frac{(R+1)(2R+1)}{6} - \frac{R+1}{2}\right)\right) = e'D \left(\frac{2}{3} + \frac{1}{3R}\right)   (13)

Note that for R > 20, the speed-up starts to plateau and any increase in R does not provide a significant speed-up gain.

G. Moments of the Minimum of Draws from a Uniform Distribution

Let X_1, ..., X_n denote the n draws we make from a uniform distribution U([a, b]). We would like to compute the expectation of the minimum of these draws, denoted as X_{min,n} = min_{i∈{1,...,n}}(X_i).

Proof: Our proof follows material found online [61]. First recall that the cumulative distribution function of any X_i is F_{X_i}(x) = \frac{x-a}{b-a} 1_{[a,b]}(x) + 1_{(b,+∞)}(x). We then compute the CDF of X_{min,n}:

F_{X_{min,n}}(x) = P(X_{min,n} ≤ x) = 1 - P(X_{min,n} > x) = 1 - P\left(\bigcap_{i=1}^{n} \{X_i > x\}\right) = 1 - \prod_{i=1}^{n} P(X_i > x) = \left(1 - \prod_{i=1}^{n} \left(1 - \frac{x-a}{b-a}\right)\right) 1_{[a,b]}(x) + 1_{(b,+∞)}(x) = \left(1 - \left(\frac{b-x}{b-a}\right)^n\right) 1_{[a,b]}(x) + 1_{(b,+∞)}(x)   (14)

where the antepenultimate equality holds because the draws are independent. We now take the derivative and obtain the density function:

f_{X_{min,n}}(x) = \frac{n}{b-a} \left(\frac{b-x}{b-a}\right)^{n-1} 1_{[a,b]}(x)   (15)

We compute the first moment of X_{min,n} by using an integration by parts:

E[X_{min,n}] = \int_{-∞}^{+∞} x f_{X_{min,n}}(x) dx = \frac{na + b}{n + 1}   (16)

Similarly, we can compute the second moment by using two integrations by parts (or the first moment of X_{min,n+1}):

E[X^2_{min,n}] = \int_{-∞}^{+∞} x^2 f_{X_{min,n}}(x) dx = a^2 + \frac{2(b - a)}{n + 1} \cdot \frac{(n + 1)a + b}{n + 2}   (17)
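As a quick numerical sanity check of (16) and (17), the sketch below compares the closed forms against Monte Carlo estimates; the values of a, b, n and the number of trials are arbitrary example choices.

```python
# Monte Carlo check of the closed-form moments of the minimum of n uniform
# draws on [a, b]; all values below are arbitrary examples.
import numpy as np

a, b, n, trials = 1.0, 20.0, 5, 200_000
rng = np.random.default_rng(0)
mins = rng.uniform(a, b, size=(trials, n)).min(axis=1)

print("E[X_min]   closed form:", (n * a + b) / (n + 1), " MC:", mins.mean())
print("E[X_min^2] closed form:",
      a**2 + 2 * (b - a) / (n + 1) * ((n + 1) * a + b) / (n + 2),
      " MC:", (mins**2).mean())
```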
Either we affect a shard, the only shard we have. This corresponds to the event Ei,j of Appendix D if the shards has already been affected j times, or we affect no shard with cost 0. We call this event Zi. 3. Associated probabilities: The probability of Zi, since we have only one shard, is 1 − 1 S . Notice that in Appendix D this event had zero probability. The probability of Ei,j is . The factor 1 1 S accounts for the fact S that request i affects a shard with probability 1 S , the rest of the formula is similar to the one in Appendix D. 4. Expected cost: We can easily show that we obtain a formula for the expected cost similar to the one in Appendix D but with a 1 kK -~1\)\K-— ) 2S? icl=5 (3+ s (19) S* 25 Thus the lone shard baseline provides a S× speed up w.r.t. SISA training. However, this fact should not discourage the use of SISA training since the lone shard baseline will perform poorly in terms of accuracy on complex learning tasks. 2) Batched Setting: Let K denote the batch size. We model whether the i” request of the batch affects the training set (or not) by a Bernoulli random variable h; ~ B() iid. We define s~ = Y: 6; the number of times the shard is affected for the batch. By construction, sx ~ B(K, $s): The number of points to retrain when the batch is processed is simply the total number of points in the training set minus the number of times the shard is affected: C = a — sx. Thus: E[C] = N − K S (20) Recall that the batched cost of SISA training is: 1 K nie)=w(1- (1-3) ) -K (21) For K <N, we roughly have a cost of N(1 — exp(=*)) where tT = (—In(1 — $))~+ for SISA training. Thus for small enough K, there might exist a regime where SISA training outperforms the lone shard baseline. Determin- ing a usable value of K in that regime is the challenge – K can not be less than 1. Note that K = 1 is exactly the first step of the sequential setting: the lone shard baseline provides a S× speed up w.r.t. SISA training (refer § I1). It turns out this regime is impractical. Therefore, for small values of K, the lone shard baseline outperforms SISA training with a speed-up of at least S×. Once again, those findings must be considered along with their impact on accuracy, and are meaningless by themselves. 19 J. Impact of aggregation strategy Due to the nature of SISA, we need to aggregate the predictions of different models. Here we tested 2 aggregation strategies on 4 datasets respectively, the results can be found in Figures 13 and 14. (a) SVHN (b) Purchase Fig. 13: We explore the effect of aggregating labels versus aggre- gation prediction vectors, on Purchase and SVHN. It can be seen that on these easy datasets, changing aggregation strategy does not influence performance of the models significantly. (a) Imagenet (b) Mini-Imagenet Fig. 14: We explore the effect of aggregating labels versus aggre- gation prediction vectors, on Mini-ImageNet and ImageNet. It can be seen that on these hard tasks such as classifying high-resolution images, a good aggregation strategy is able to help recover more accuracy. K. Impact of number of samples per class on learnability The results from Figure 15 suggest that as the number of samples per class goes down, so does the accuracy. This is the case with increased sharding for complex learning tasks. Accuracy (%) BSSESa8 4 sse 0 0 200 400 600 800 1000 1200 Average Number of Samples/Class Fig. 15: We plot the test accuracy as a function of the average number of samples per class. 
Observe that as the average number of samples per class increases, so does the accuracy.
{ "id": "1708.00489" }
1912.02757
Deep Ensembles: A Loss Landscape Perspective
Deep ensembles have been empirically shown to be a promising approach for improving accuracy, uncertainty and out-of-distribution robustness of deep learning models. While deep ensembles were theoretically motivated by the bootstrap, non-bootstrap ensembles trained with just random initialization also perform well in practice, which suggests that there could be other explanations for why deep ensembles work well. Bayesian neural networks, which learn distributions over the parameters of the network, are theoretically well-motivated by Bayesian principles, but do not perform as well as deep ensembles in practice, particularly under dataset shift. One possible explanation for this gap between theory and practice is that popular scalable variational Bayesian methods tend to focus on a single mode, whereas deep ensembles tend to explore diverse modes in function space. We investigate this hypothesis by building on recent work on understanding the loss landscape of neural networks and adding our own exploration to measure the similarity of functions in the space of predictions. Our results show that random initializations explore entirely different modes, while functions along an optimization trajectory or sampled from the subspace thereof cluster within a single mode predictions-wise, while often deviating significantly in the weight space. Developing the concept of the diversity--accuracy plane, we show that the decorrelation power of random initializations is unmatched by popular subspace sampling methods. Finally, we evaluate the relative effects of ensembling, subspace based methods and ensembles of subspace based methods, and the experimental results validate our hypothesis.
http://arxiv.org/pdf/1912.02757
Stanislav Fort, Huiyi Hu, Balaji Lakshminarayanan
stat.ML, cs.LG
null
null
stat.ML
20191205
20200625
# Deep Ensembles: A Loss Landscape Perspective

Stanislav Fort∗ (Google Research, [email protected]), Huiyi Hu∗ (DeepMind, [email protected]), Balaji Lakshminarayanan† (DeepMind, [email protected])

∗Equal contribution. †Corresponding author.

# Abstract

Deep ensembles have been empirically shown to be a promising approach for improving accuracy, uncertainty and out-of-distribution robustness of deep learning models. While deep ensembles were theoretically motivated by the bootstrap, non-bootstrap ensembles trained with just random initialization also perform well in practice, which suggests that there could be other explanations for why deep ensembles work well. Bayesian neural networks, which learn distributions over the parameters of the network, are theoretically well-motivated by Bayesian principles, but do not perform as well as deep ensembles in practice, particularly under dataset shift. One possible explanation for this gap between theory and practice is that popular scalable variational Bayesian methods tend to focus on a single mode, whereas deep ensembles tend to explore diverse modes in function space. We investigate this hypothesis by building on recent work on understanding the loss landscape of neural networks and adding our own exploration to measure the similarity of functions in the space of predictions. Our results show that random initializations explore entirely different modes, while functions along an optimization trajectory or sampled from the subspace thereof cluster within a single mode predictions-wise, while often deviating significantly in the weight space. Developing the concept of the diversity–accuracy plane, we show that the decorrelation power of random initializations is unmatched by popular subspace sampling methods. Finally, we evaluate the relative effects of ensembling, subspace based methods and ensembles of subspace based methods, and the experimental results validate our hypothesis.

# Introduction

Consider a typical classification problem, where x_n ∈ R^D denotes the D-dimensional features and y_n ∈ [1, . . . , K] denotes the class label. Assume we have a parametric model p(y|x, θ) for the conditional distribution where θ denotes weights and biases of a neural network, and p(θ) is a prior distribution over parameters. The Bayesian posterior over parameters is given by

p(θ | {x_n, y_n}_{n=1}^{N}) ∝ p(θ) \prod_{n=1}^{N} p(y_n | x_n, θ).

Computing the exact posterior distribution over θ is computationally expensive (if not impossible) when p(y_n|x_n, θ) is a deep neural network (NN). A variety of approximations have been developed for Bayesian neural networks, including Laplace approximation [MacKay, 1992], Markov chain Monte Carlo methods [Neal, 1996, Welling and Teh, 2011, Springenberg et al., 2016], variational Bayesian methods [Graves, 2011, Blundell et al., 2015, Louizos and Welling, 2017, Wen et al., 2018] and Monte-Carlo dropout [Gal and Ghahramani, 2016, Srivastava et al., 2014]. While computing the posterior is challenging, it is usually easy to perform maximum-a-posteriori (MAP) estimation, which corresponds to a mode of the posterior. The MAP solution can be written as the minimizer of the following loss:

θ_MAP = arg min_θ L(θ, {x_n, y_n}_{n=1}^{N}) = arg min_θ − log p(θ) − \sum_{n=1}^{N} log p(y_n | x_n, θ).   (1)
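The following is an illustrative sketch of minimizing Eq. (1), with the cross-entropy supplying the negative log-likelihood and an L2 penalty standing in for − log p(θ) under an assumed Gaussian prior; the tiny model, random data and hyperparameters are placeholders rather than any setup used in this paper. Repeating the same minimization from M different random initializations is exactly the deep-ensemble recipe discussed next.

```python
# Sketch of MAP estimation as in Eq. (1). Illustrative placeholders throughout.
import torch
import torch.nn as nn
import torch.nn.functional as F

def train_map(x, y, num_classes, prior_scale=1.0, epochs=200, lr=1e-2, seed=0):
    torch.manual_seed(seed)  # a different seed gives a different random init
    model = nn.Sequential(nn.Linear(x.shape[1], 32), nn.ReLU(),
                          nn.Linear(32, num_classes))
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        nll = F.cross_entropy(model(x), y, reduction="sum")
        log_prior = -sum((p ** 2).sum() for p in model.parameters()) / (2 * prior_scale ** 2)
        loss = nll - log_prior  # Eq. (1): -log p(theta) - sum_n log p(y_n | x_n, theta)
        loss.backward()
        opt.step()
    return model

if __name__ == "__main__":
    x, y = torch.randn(256, 10), torch.randint(0, 3, (256,))
    # A deep ensemble repeats the same minimization from M random initializations.
    ensemble = [train_map(x, y, num_classes=3, seed=m) for m in range(3)]
```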
The MAP solution is computationally efficient, but only gives a point estimate and not a distribution over parameters. Deep ensembles, proposed by Lakshminarayanan et al. [2017], train an ensemble of neural networks by initializing at M different values and repeating the minimization multiple times which could lead to M different solutions, if the loss is non-convex. Lakshminarayanan et al. [2017] found adversarial training provides additional benefits in some of their experiments, but we will ignore adversarial training and focus only on ensembles with random initialization.

Given finite training data, many parameter values could equally well explain the observations, and capturing these diverse solutions is crucial for quantifying epistemic uncertainty [Kendall and Gal, 2017]. Bayesian neural networks learn a distribution over weights, and a good posterior approximation should be able to learn multi-modal posterior distributions in theory. Deep ensembles were inspired by the bootstrap [Breiman, 1996], which has useful theoretical properties. However, it has been empirically observed by Lakshminarayanan et al. [2017], Lee et al. [2015] that training individual networks with just random initialization is sufficient in practice and using the bootstrap can even hurt performance (e.g. for small ensemble sizes). Furthermore, Ovadia et al. [2019] and Gustafsson et al. [2019] independently benchmarked existing methods for uncertainty quantification on a variety of datasets and architectures, and observed that ensembles tend to outperform approximate Bayesian neural networks in terms of both accuracy and uncertainty, particularly under dataset shift.

These empirical observations raise an important question: Why do deep ensembles trained with just random initialization work so well in practice? One possible hypothesis is that ensembles tend to sample from different modes³ in function space, whereas variational Bayesian methods (which minimize D_KL(q(θ) || p(θ|{x_n, y_n}_{n=1}^N))) might fail to explore multiple modes even though they are effective at capturing uncertainty within a single mode. See Figure 1 for a cartoon illustration. Note that while the MAP solution is a local optimum for the training loss, it may not necessarily be a local optimum for the validation loss.

³We use the term ‘mode’ to refer to unique functions f_θ(x). Due to weight space symmetries, different parameters can correspond to the same function, i.e. f_{θ1}(x) = f_{θ2}(x) even though θ1 ≠ θ2.

Figure 1: Cartoon illustration of the hypothesis. x-axis indicates parameter values and y-axis plots the negative loss −L(θ, {x_n, y_n}_{n=1}^N) on train and validation data.

Recent work on understanding loss landscapes [Garipov et al., 2018, Draxler et al., 2018, Fort and Jastrzebski, 2019] allows us to investigate this hypothesis. Note that prior work on loss landscapes has focused on mode-connectivity and low-loss tunnels, but has not explicitly focused on how diverse the functions from different modes are. The experiments in these papers (as well as other papers on deep ensembles) provide indirect evidence for this hypothesis, either through downstream metrics (e.g. accuracy and calibration) or by visualizing the performance along the low-loss tunnel. We complement these works by explicitly measuring function space diversity within training trajectories and subspaces thereof (dropout, diagonal Gaussian, low-rank Gaussian and random subspaces) and across different randomly initialized trajectories across multiple datasets, architectures, and dataset shift.
Our findings show that the functions sampled along a single training trajectory or subspace thereof tend to be very similar in predictions (while potentially far away in the weight space), whereas functions sampled from different randomly initialized trajectories tend to be very diverse.

# 2 Background

The loss landscape of neural networks (also called the objective landscape) – the space of weights and biases that the network navigates during training – is a high dimensional function and therefore could potentially be very complicated. However, many empirical results show surprisingly simple properties of the loss surface. Goodfellow and Vinyals [2014] observed that the loss along a linear path from an initialization to the corresponding optimum is monotonically decreasing, encountering no significant obstacles along the way. Li et al. [2018] demonstrated that constraining optimization to a random, low-dimensional hyperplane in the weight space leads to results comparable to full-space optimization, provided that the dimension exceeds a modest threshold. This was geometrically understood and extended by Fort and Scherlis [2019].

Garipov et al. [2018], Draxler et al. [2018] demonstrate that while a linear path between two independent optima hits a high loss area in the middle, there in fact exist continuous, low-loss paths connecting any pair of optima (or at least any pair empirically studied so far). These observations are unified into a single phenomenological model in [Fort and Jastrzebski, 2019]. While low-loss tunnels create functions with near-identical low values of loss along the path, the experiments of Fort and Jastrzebski [2019], Garipov et al. [2018] provide preliminary evidence that these functions tend to be very different in function space, changing significantly in the middle of the tunnel; see Appendix A for a review and additional empirical evidence that complements their results.

# 3 Experimental setup

We explored the CIFAR-10 [Krizhevsky, 2009], CIFAR-100 [Krizhevsky, 2009], and ImageNet [Deng et al., 2009] datasets. We train convolutional neural networks on the CIFAR-10 dataset, which contains 50K training examples from 10 classes. To verify that our findings translate across architectures, we use the following 3 architectures on CIFAR-10:
• SmallCNN: channels [16,32,32] for 10 epochs which achieves 64% test accuracy.
• MediumCNN: channels [64,128,256,256] for 40 epochs which achieves 71% test accuracy.
• ResNet20v1 [He et al., 2016a]: for 200 epochs which achieves 90% test accuracy.

We use the Adam optimizer [Kingma and Ba, 2015] for training, and to make sure the effects we observe are general, we validate that our results hold for vanilla stochastic gradient descent (SGD) as well (not shown due to space limitations). We use batch size 128 and dropout 0.1 for training SmallCNN and MediumCNN. We used 40 epochs of training for each. To generate weight space and prediction space similarity results, we use a constant learning rate of 1.6 × 10−3, halving it every 10 epochs, unless specified otherwise. We do not use any data augmentation with those two architectures.
For ResNet20v1, we use the data augmentation and learning rate schedule used in the Keras examples.⁴ The overall trends are consistent across all architectures, datasets, and other hyperparameter and non-linearity choices we explored. To test if our observations generalize to other datasets, we also ran certain experiments on more complex datasets such as CIFAR-100 [Krizhevsky, 2009], which contains 50K examples belonging to 100 classes, and ImageNet [Deng et al., 2009], which contains roughly 1M examples belonging to 1000 classes. CIFAR-100 is trained using the same ResNet20v1 as above with the Adam optimizer, batch size 128, and a total of 200 epochs. The learning rate starts from $10^{-3}$ and decays to ($10^{-4}$, $5 \times 10^{-5}$, $10^{-5}$, $5 \times 10^{-7}$) at epochs (100, 130, 160, 190). ImageNet is trained with ResNet50v2 [He et al., 2016b] and a momentum optimizer (0.9 momentum), with batch size 256 and 160 epochs. The learning rate starts from 0.15 and decays to (0.015, 0.0015) at epochs (80, 120). In addition to evaluating on regular test data, we also evaluate the performance of the methods on corrupted versions of the datasets using the CIFAR-10-C and ImageNet-C benchmarks [Hendrycks and Dietterich, 2019], which contain corrupted versions of the original images with 19 corruption types (15 for ImageNet-C) and varying intensity values (1–5), and were used by Ovadia et al. [2019] to measure calibration of the uncertainty estimates under dataset shift. Following Ovadia et al. [2019], we measure accuracy as well as Brier score [Brier, 1950] (lower values indicate better uncertainty estimates). We use the SVHN dataset [Netzer et al., 2011] to evaluate how different methods trained on the CIFAR-10 dataset react to out-of-distribution (OOD) inputs.

⁴https://keras.io/examples/cifar10_resnet/

# 4 Visualizing Function Space Similarity

# 4.1 Similarity of Functions Within and Across Randomly Initialized Trajectories

First, we compute the similarity between different checkpoints along a single trajectory. In Figure 2(a), we plot the cosine similarity in weight space, defined as $\cos(\theta_1, \theta_2) = \frac{\theta_1^{\top}\theta_2}{\|\theta_1\|\,\|\theta_2\|}$. In Figure 2(b), we plot the disagreement in function space, defined as the fraction of points the checkpoints disagree on, that is, $\frac{1}{N}\sum_{n=1}^{N}[f(x_n; \theta_1) \neq f(x_n; \theta_2)]$, where $f(x; \theta)$ denotes the class label predicted by the network for input $x$. We observe that the checkpoints along a trajectory are largely similar both in the weight space and the function space.

(a) Cosine similarity of weights (b) Disagreement of predictions (c) t-SNE of predictions

Figure 2: Results using SmallCNN on CIFAR-10. Left plot: Cosine similarity between checkpoints to measure weight space alignment along the optimization trajectory. Middle plot: The fraction of labels on which the predictions from different checkpoints disagree. Right plot: t-SNE plot of predictions from checkpoints corresponding to 3 different randomly initialized trajectories (in different colors).
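Both similarity measures above amount to a few lines of array code; a minimal NumPy sketch, assuming the checkpoints' flattened weights and their logits on a shared evaluation set are available as inputs:

```python
# Minimal sketch of the two similarity measures: cosine similarity between
# flattened weight vectors, and the fraction of evaluation points on which
# two checkpoints' predicted class labels disagree.
import numpy as np

def cosine_similarity(weights_a, weights_b):
    va = np.concatenate([w.ravel() for w in weights_a])
    vb = np.concatenate([w.ravel() for w in weights_b])
    return float(va @ vb / (np.linalg.norm(va) * np.linalg.norm(vb)))

def disagreement(logits_a, logits_b):
    # fraction of examples where the argmax class labels differ
    return float(np.mean(np.argmax(logits_a, axis=-1) != np.argmax(logits_b, axis=-1)))
```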
Next, we evaluate how diverse the final solutions from different random initializations are. The functions from different initializations are indeed different, as demonstrated by the similarity plots in Figure 3. Comparing this with Figures 2(a) and 2(b), we see that functions within a single trajectory exhibit higher similarity and functions across different trajectories exhibit much lower similarity. Next, we take the predictions from different checkpoints along the individual training trajectories from multiple initializations and compute a t-SNE plot [Maaten and Hinton, 2008] to visualize their similarity in function space. More precisely, for each checkpoint we take the softmax output for a set of examples, flatten the vector, and use it to represent the model's predictions. The t-SNE algorithm is then used to reduce it to a 2D point in the t-SNE plot. Figure 2(c) shows that the functions explored by different trajectories (denoted by circles with different colors) are far away, while functions explored within a single trajectory (circles with the same color) tend to be much more similar.

(a) Results using SmallCNN (b) Results using ResNet20v1

Figure 3: Results on CIFAR-10 using two different architectures. For each of these architectures, the left subplot shows the cosine similarity between different solutions in weight space, and the right subplot shows the fraction of labels on which the predictions from different solutions disagree. In general, the weight vectors from two different initializations are essentially orthogonal, while their predictions are approximately as dissimilar as any other pair.

# 4.2 Similarity of Functions Within Subspace from Each Trajectory and Across Trajectories

In addition to the checkpoints along a trajectory, we also construct subspaces based on each individual trajectory. Scalable variational Bayesian methods typically approximate the distribution of weights along the training trajectory, hence visualizing the diversity of functions between the subspaces helps understand the difference between Bayesian neural networks and ensembles. We use a representative set of four subspace sampling methods: Monte Carlo dropout, a diagonal Gaussian approximation, a low-rank covariance matrix Gaussian approximation, and a random subspace approximation. Unlike dropout and the Gaussian approximations, which assume a parametric form for the variational posterior, the random subspace method explores all high-quality solutions within the subspace and hence could be thought of as a non-parametric variational approximation to the posterior. Due to space constraints, we do not consider Markov Chain Monte Carlo (MCMC) methods in this work; Zhang et al. [2020] show that popular stochastic gradient MCMC (SGMCMC) methods may not explore multiple modes and propose cyclic SGMCMC. We compare the diversity of random initialization and cyclic SGMCMC in Appendix C. In the descriptions of the methods, let $\theta_0$ be the optimized weight-space solution (the weights and biases of our trained neural net) around which we will construct the subspace.

• Random subspace sampling: We start at an optimized solution $\theta_0$ and choose a random direction $v$ in the weight space. We step in that direction by choosing different values of $t$ and looking at predictions at configurations $\theta_0 + tv$. We repeat this for many random directions $v$.

• Dropout subspace: We start at an optimized solution $\theta_0$, apply dropout with a randomly chosen $p_{\text{keep}}$, evaluate predictions at $\mathrm{dropout}_{p_{\text{keep}}}(\theta_0)$, and repeat this many times with different values of $p_{\text{keep}}$.
• Diagonal Gaussian subspace: We start at an optimized solution $\theta_0$ and look at the most recent iterations of training preceding it. For each trainable parameter $\theta_i$, we calculate the mean $\mu_i$ and standard deviation $\sigma_i$ independently, which corresponds to a diagonal covariance matrix. This is similar to SWAG-diagonal [Maddox et al., 2019]. To sample solutions from the subspace, we repeatedly draw each parameter independently as $\theta_i \sim \mathcal{N}(\mu_i, \sigma_i)$.

• Low-rank Gaussian subspace: We follow the same procedure as for the diagonal Gaussian subspace above to compute the mean $\mu_i$ for each trainable parameter. For the covariance, we use a rank-$k$ approximation, calculated from the top $k$ principal components of the recent weight vectors $\{v_i \in \mathbb{R}^{D_{\text{params}}}\}_{i=1}^{k}$. We sample from a $k$-dimensional normal distribution and obtain the weight configurations as $\theta \sim \mu + \sum_i \mathcal{N}(0, 1)\, v_i$. Throughout the text, we use the terms low-rank and PCA Gaussian interchangeably.

A minimal sketch of the random-subspace and diagonal Gaussian samplers is given at the end of this subsection.

(Figure 4 panels: random subspaces, dropout subspaces, diagonal normal subspaces, low-rank normal subspaces.)

Figure 4: Results using SmallCNN on CIFAR-10: t-SNE plots of validation set predictions for each trajectory along with four different subspace generation methods (shown by squares), in addition to 3 independently initialized and trained runs (different colors). As visible in the plot, the subspace-sampled functions stay in the prediction-space neighborhood of the run around which they were constructed, demonstrating that truly different functions are not sampled.

Figure 4 shows that functions sampled from a subspace (denoted by colored squares) corresponding to a particular initialization are much more similar to each other. While some subspaces are more diverse, they still do not overlap with functions from another randomly initialized trajectory.

Figure 5: Results using MediumCNN on CIFAR-10: Radial loss landscape cut between the origin and two independent optima. Left plot shows accuracy of models along the paths of the two independent trajectories, and the middle and right plots show function space similarity to the two optima.

As additional evidence, Figure 5 provides a two-dimensional visualization of the radial landscape along the directions of two different optima. The 2D sections of the weight space visualized are defined by the origin (all weights are 0) and two independently initialized and trained optima. The weights of the two trajectories (shown in red and blue) are initialized using standard techniques and they increase radially with training due to their softmax cross-entropy loss. The left subplot shows that different randomly initialized trajectories eventually achieve similar accuracy. We also sample from a Gaussian subspace along trajectory 1 (shown in pink). The middle and right subplots show function space similarity (defined as the fraction of points on which they agree on the class prediction) of the parameters along the path to optima 1 and 2. Solutions along each trajectory (and the Gaussian subspace) are much more similar to their respective optima, which is consistent with the cosine similarity and t-SNE plots.
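As noted above, here is a minimal NumPy sketch of the random-subspace and diagonal Gaussian samplers. The flattened optimum `theta0` and the list of recent flattened weight iterates are assumed inputs; evaluating the sampled parameters with the model is omitted.

```python
# Minimal sketch of two of the subspace samplers described above.
import numpy as np

def sample_random_subspace(theta0, t_values, rng):
    v = rng.normal(size=theta0.shape)
    v /= np.linalg.norm(v)                      # random unit direction
    return [theta0 + t * v for t in t_values]   # points theta0 + t * v

def sample_diagonal_gaussian(recent_thetas, num_samples, rng):
    stacked = np.stack(recent_thetas)           # (num_iterates, num_params)
    mu, sigma = stacked.mean(axis=0), stacked.std(axis=0)
    return [rng.normal(mu, sigma) for _ in range(num_samples)]

rng = np.random.default_rng(0)  # example RNG for reproducibility
```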
# 4.3 Diversity versus Accuracy plots

To illustrate the difference in another fashion, we sample functions from a single subspace and plot accuracy versus diversity, as measured by disagreement with the predictions of the baseline solution. From a bias-variance trade-off perspective, we require a procedure to produce functions that are accurate (which leads to low bias by aggregation) as well as de-correlated (which leads to lower variance by aggregation). Hence, the diversity vs accuracy plot allows us to visualize the trade-off that can be achieved by subspace sampling methods versus deep ensembles.

The diversity score quantifies the difference of two functions (a base solution and a sampled one) by measuring the fraction of datapoints on which their predictions differ. We chose this approach due to its simplicity; we also computed the KL-divergence and other distances between the output probability distributions, leading to equivalent conclusions. Let $d_{\text{diff}}$ denote the fraction of predictions on which the two functions differ. It is 0 when the two functions make identical class predictions, and 1 when they differ on every single example. To account for the fact that the lower the accuracy of a function, the higher its potential $d_{\text{diff}}$ due to the possibility of the wrong answers being random and uncorrelated between the two functions, we normalize this by $(1 - a)$, where $a$ is the accuracy of the sampled solution. We also derive idealized lower and upper limits of these curves (shown as dashed lines) by perturbing the reference solution's predictions (lower limit) and by using completely random predictions at a given accuracy (upper limit); see Appendix D for a discussion.

Figure 6: Diversity versus accuracy plots for 3 models trained on CIFAR-10: SmallCNN, MediumCNN and a ResNet20v1. Independently initialized and optimized solutions (red stars) achieve a better diversity vs accuracy trade-off than the four different subspace sampling methods.
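The normalized diversity score described above can be computed as follows (a minimal NumPy sketch; the integer class predictions and ground-truth labels on a shared evaluation set are assumed inputs):

```python
# Minimal sketch of the diversity score: disagreement with the baseline
# solution, normalized by (1 - a) where a is the sampled solution's accuracy.
import numpy as np

def normalized_diversity(baseline_preds, sampled_preds, labels):
    d_diff = np.mean(baseline_preds != sampled_preds)   # raw disagreement
    acc = np.mean(sampled_preds == labels)               # accuracy of the sample
    return float(d_diff / (1.0 - acc))                   # undefined if acc == 1
```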
Figure 6 shows the results on CIFAR-10. Comparing these subspace points (colored dots) to the baseline optimum (green star) and the optima from different random initializations (denoted by red stars), we observe that random initializations are much more effective at sampling diverse and accurate solutions than subspace-based methods constructed out of a single trajectory. The results are consistent across different architectures and datasets. Figure 7 shows results on CIFAR-100 and ImageNet. We observe that solutions obtained by subspace sampling methods have a worse trade-off between accuracy and prediction diversity, compared to independently initialized and trained optima. Interestingly, the separation between the subspace sampling methods and independent optima in the diversity–accuracy plane gets more pronounced the more difficult the problem and the more powerful the network.

(a) ResNet20v1 trained on CIFAR-100. (b) ResNet50v2 trained on ImageNet.

Figure 7: Diversity vs. accuracy plots for ResNet20v1 on CIFAR-100, and ResNet50v2 on ImageNet.

# 5 Evaluating the Relative Effects of Ensembling versus Subspace Methods

Our hypothesis in Figure 1 and the empirical observations in the previous section suggest that subspace-based methods and ensembling should provide complementary benefits in terms of uncertainty and accuracy. Since our goal is not to propose a new method, but to carefully test this hypothesis, we evaluate the performance of the following four variants for controlled comparison (a sketch of the shared prediction-averaging step is given after this list):

• Baseline: optimum at the end of a single trajectory.
• Subspace sampling: average predictions over the solutions sampled from a subspace.
• Ensemble: train the baseline multiple times with random initialization and average the predictions.
• Ensemble + Subspace sampling: train multiple times with random initialization, and use subspace sampling within each trajectory.

To maintain the accuracy of random samples at a reasonable level for a fair comparison, we reject a sample if its validation accuracy is below 0.65. For the CIFAR-10 experiment, we use a rank-4 approximation of the random samples using PCA.
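All four variants reduce to averaging softmax predictions over a set of parameter samples; a minimal sketch (the `model_predict` helper and the sample lists are assumptions, not part of the paper's code):

```python
# Minimal sketch of the prediction-averaging step shared by the variants above.
import numpy as np

def ensemble_predict(param_samples, x, model_predict):
    # model_predict(theta, x) is assumed to return class probabilities.
    probs = [model_predict(theta, x) for theta in param_samples]
    return np.mean(probs, axis=0)   # averaged predictive distribution

# Ensemble: param_samples = optima from M random initializations.
# Ensemble + subspace: concatenate subspace samples drawn around each optimum.
```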
Note that using diagonal Gaussian, low-rank Gaussian, or random subspace sampling to approximate each mode of the posterior leads to an increase in the number of parameters required for each mode. However, using just the mean weights for each mode would not cause such an increase. Izmailov et al. [2018] proposed stochastic weight averaging (SWA) for better generalization. One could also compute an (exponential moving) average of the weights along the trajectory, inspired by Polyak-Ruppert averaging in convex optimization (see also [Mandt et al., 2017] for a Bayesian view on iterate averaging). As weight averaging (WA) has already been studied by Izmailov et al. [2018], we do not discuss it in detail. Our goal is to test whether WA finds a better point estimate within each mode (see the cartoon illustration in Figure 1) and provides complementary benefits to ensembling over random initialization. In our experiments, we use WA on the last few epochs, which corresponds to using just the mean of the parameters within each mode.

Figure 8 shows the results on CIFAR-10. The results validate our hypothesis that (i) subspace sampling and ensembling provide complementary benefits, and (ii) the relative benefits of ensembling are higher as it averages predictions over more diverse solutions.

Figure 8: Results using MediumCNN on CIFAR-10 showing the complementary benefits of ensemble and subspace methods as a function of ensemble size. We used 10 samples for each subspace method.

Effect of function space diversity on dataset shift We test the same hypothesis under dataset shift [Ovadia et al., 2019, Hendrycks and Dietterich, 2019]. Left and middle subplots of Figure 9 show accuracy and Brier score on the CIFAR-10-C benchmark. We observe again that ensembles and subspace sampling methods provide complementary benefits. The diversity versus accuracy plot compares diversity to a reference solution, but it is also important to look at the diversity between multiple samples of the same method, as this effectively determines the efficiency of the method in terms of the bias-variance trade-off. Function space diversity is particularly important to avoid overconfident predictions under dataset shift, as averaging over similar functions would not reduce overconfidence. To visualize this, we draw 5 samples of each method and compute the average Jensen-Shannon divergence between their predictions, defined as $\frac{1}{M}\sum_{m=1}^{M} \mathrm{KL}\big(p_{\theta_m}(y|x)\,\|\,\bar{p}(y|x)\big)$, where KL denotes the Kullback-Leibler divergence and $\bar{p}(y|x) = \frac{1}{M}\sum_{m} p_{\theta_m}(y|x)$. The right subplot of Figure 9 shows the results on CIFAR-10-C for increasing corruption intensity. We observe that the Jensen-Shannon divergence is highest between independent random initializations, and lower for subspace sampling methods; the difference is higher under dataset shift, which explains the findings of Ovadia et al. [2019] that deep ensembles outperform other methods under dataset shift. We also observe similar trends when testing on an OOD dataset such as SVHN: the JS divergence is 0.384 for independent runs, 0.153 for within-trajectory, 0.155 for random sampling, 0.087 for rank-5 PCA Gaussian, and 0.034 for the diagonal Gaussian.

Figure 9: Results using MediumCNN on CIFAR-10-C for varying levels of corruption intensity. Left plot shows accuracy, middle plot shows Brier score, and right plot shows Jensen-Shannon divergence.
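The average divergence defined above can be computed directly from the stacked member probabilities; a minimal NumPy sketch (the probability array of shape (M, num_examples, num_classes) is an assumed input):

```python
# Minimal sketch of the average Jensen-Shannon-style divergence used above:
# the mean KL divergence from each member's predictive distribution to the
# ensemble mean.
import numpy as np

def average_js_divergence(member_probs, eps=1e-12):
    mean_probs = member_probs.mean(axis=0)                       # \bar{p}(y|x)
    kl = np.sum(member_probs * (np.log(member_probs + eps)
                                - np.log(mean_probs + eps)), axis=-1)
    return float(kl.mean())   # average over members and examples
```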
Results on ImageNet To illustrate the effect on another challenging dataset, we repeat these experiments on ImageNet [Deng et al., 2009] using the ResNet50v2 architecture. Due to computational constraints, we do not evaluate the PCA subspace on ImageNet. Figure 10 shows results on the ImageNet test set (zero corruption intensity) and on ImageNet-C for increasing corruption intensities. Similar to CIFAR-10, the random subspace performs best among the subspace sampling methods and provides complementary benefits to random initialization. We empirically observed that the relative gains of WA (or subspace sampling) are smaller when the individual models converge to a better optimum within each mode. Carefully choosing which points to average, e.g. using a cyclic learning rate as done in fast geometric ensembling [Garipov et al., 2018], can yield further benefits.

Figure 10: Results using ResNet50v2 on ImageNet test and ImageNet-C for varying corruptions.

# 6 Discussion

Through extensive experiments, we show that trajectories of randomly initialized neural networks explore different modes in function space, which explains why deep ensembles trained with just random initializations work well in practice. Subspace sampling methods such as weight averaging, Monte Carlo dropout, and various versions of local Gaussian approximations sample functions that might lie relatively far from the starting point in the weight space; however, they remain similar in function space, giving rise to an insufficiently diverse set of predictions. Using the concept of the diversity–accuracy plane, we demonstrate empirically that current variational Bayesian methods do not reach the trade-off between diversity and accuracy achieved by independently trained models. There are several interesting directions for future research: understanding the role of random initialization on training dynamics (see Appendix B for a preliminary investigation), exploring methods which achieve higher diversity than deep ensembles (e.g. through explicit decorrelation), and developing parameter-efficient methods (e.g. implicit ensembles or Bayesian deep learning algorithms) that achieve a better diversity–accuracy trade-off than deep ensembles.

# References

David JC MacKay. Bayesian methods for adaptive models. PhD thesis, California Institute of Technology, 1992.

Radford M. Neal. Bayesian Learning for Neural Networks. Springer-Verlag New York, Inc., 1996.

Max Welling and Yee Whye Teh. Bayesian Learning via Stochastic Gradient Langevin Dynamics. In ICML, 2011.

Jost Tobias Springenberg, Aaron Klein, Stefan Falkner, and Frank Hutter. Bayesian optimization with robust Bayesian neural networks. In NeurIPS, 2016.

Alex Graves. Practical variational inference for neural networks. In NeurIPS, 2011.

Charles Blundell, Julien Cornebise, Koray Kavukcuoglu, and Daan Wierstra. Weight uncertainty in neural networks. In ICML, 2015.

Christos Louizos and Max Welling. Multiplicative Normalizing Flows for Variational Bayesian Neural Networks. In ICML, 2017.

Yeming Wen, Paul Vicol, Jimmy Ba, Dustin Tran, and Roger Grosse. Flipout: Efficient pseudo-independent weight perturbations on mini-batches. In ICLR, 2018.

Yarin Gal and Zoubin Ghahramani. Dropout as a Bayesian approximation: Representing model uncertainty in deep learning. In ICML, 2016.

Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: A simple way to prevent neural networks from overfitting. JMLR, 2014.

Balaji Lakshminarayanan, Alexander Pritzel, and Charles Blundell. Simple and scalable predictive uncertainty estimation using deep ensembles. In NeurIPS, 2017.

Alex Kendall and Yarin Gal. What uncertainties do we need in Bayesian deep learning for computer vision? In NeurIPS, 2017.

Leo Breiman. Bagging predictors. Machine learning, 1996.
Stefan Lee, Senthil Purushwalkam, Michael Cogswell, David Crandall, and Dhruv Batra. Why M heads are better than one: Training a diverse ensemble of deep networks. arXiv preprint arXiv:1511.06314, 2015. Yaniv Ovadia, Emily Fertig, Jie Ren, Zachary Nado, D Sculley, Sebastian Nowozin, Joshua V Dillon, Balaji Lakshminarayanan, and Jasper Snoek. Can you trust your model’s uncertainty? Evaluating predictive uncertainty under dataset shift. In NeurIPS, 2019. Fredrik K Gustafsson, Martin Danelljan, and Thomas B Schön. Evaluating scalable Bayesian deep learning methods for robust computer vision. arXiv preprint arXiv:1906.01620, 2019. Timur Garipov, Pavel Izmailov, Dmitrii Podoprikhin, Dmitry P Vetrov, and Andrew G Wilson. Loss surfaces, mode connectivity, and fast ensembling of DNNs. In NeurIPS, 2018. Felix Draxler, Kambis Veschgini, Manfred Salmhofer, and Fred A Hamprecht. Essentially no barriers in neural network energy landscape. arXiv preprint arXiv:1803.00885, 2018. Stanislav Fort and Stanislaw Jastrzebski. Large scale structure of neural network loss landscapes. In NeurIPS, 2019. Ian J. Goodfellow and Oriol Vinyals. Qualitatively characterizing neural network optimization problems. CoRR, abs/1412.6544, 2014. Chunyuan Li, Heerad Farkhoor, Rosanne Liu, and Jason Yosinski. Measuring the intrinsic dimension of objective landscapes. In ICLR, 2018. Stanislav Fort and Adam Scherlis. The Goldilocks zone: Towards better understanding of neural network loss landscapes. In AAAI, 2019. 9 Alex Krizhevsky. Learning multiple layers of features from tiny images. 2009. J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. ImageNet: A Large-Scale Hierarchical Image Database. In CVPR, 2009. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In CVPR, 2016a. Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In ICLR, 2015. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Identity mappings in deep residual networks. In European conference on computer vision, 2016b. Dan Hendrycks and Thomas Dietterich. Benchmarking neural network robustness to common corruptions and perturbations. In ICLR, 2019. Glenn W Brier. Verification of forecasts expressed in terms of probability. Monthly weather review, 1950. Yuval Netzer, Tao Wang, Adam Coates, Alessandro Bissacco, Bo Wu, and Andrew Y Ng. Reading Digits in Natural Images with Unsupervised Feature Learning. In NeurIPS Workshop on Deep Learning and Unsupervised Feature Learning, 2011. Laurens van der Maaten and Geoffrey Hinton. Visualizing data using t-SNE. JMLR, 2008. Ruqi Zhang, Chunyuan Li, Jianyi Zhang, Changyou Chen, and Andrew Gordon Wilson. Cyclical stochastic gradient mcmc for bayesian deep learning. In International Conference on Learning Representations, 2020. URL https://openreview.net/forum?id=rkeS1RVtPS. Wesley J Maddox, Pavel Izmailov, Timur Garipov, Dmitry P Vetrov, and Andrew Gordon Wilson. A simple baseline for bayesian uncertainty in deep learning. In Advances in Neural Information Processing Systems, pages 13132–13143, 2019. Pavel Izmailov, Dmitrii Podoprikhin, Timur Garipov, Dmitry Vetrov, and Andrew Gordon Wilson. Averaging weights leads to wider optima and better generalization. In UAI, 2018. Stephan Mandt, Matthew D Hoffman, and David M Blei. Stochastic gradient descent as approximate Bayesian inference. JMLR, 2017. 
# Supplementary Material

# A Identical loss does not imply identical functions in prediction space

To make our paper self-contained, we review the literature on loss surfaces and mode connectivity [Garipov et al., 2018, Draxler et al., 2018, Fort and Jastrzebski, 2019]. We provide visualizations of the loss surface which confirm the findings of prior work as well as complement them. Figure S1 shows the radial loss landscape (train as well as the validation set) along the directions of two different optima. The left subplot shows that different trajectories achieve similar values of the loss, and the right subplot shows the similarity of these functions to their respective optima (in particular the fraction of labels predicted on which they differ divided by their error rate). While the loss values from different optima are similar, the functions are different, which confirms that random initialization leads to different modes in function space.

(a) Accuracy along the radial loss-landscape cut (b) Function-space similarity

Figure S1: Results using MediumCNN on CIFAR-10: Radial loss landscape cut between the origin and two independent optima and the predictions of models on the same plane.

Next, we construct a low-loss tunnel between different optima using the procedure proposed by Fort and Jastrzebski [2019], which is a simplification of the procedures proposed in Garipov et al. [2018] and Draxler et al. [2018]. As shown in Figure S2(a), we start at the linear interpolation point (denoted by the black line) and reach the closest point on the manifold by minimizing the training loss. The minima of the training loss are denoted by the yellow line in the manifolds. Figure S2(b) confirms that the tunnel is indeed low-loss. This also confirms the findings of [Garipov et al., 2018, Fort and Jastrzebski, 2019] that while solutions along the tunnel have similar loss, they are dissimilar in function space.

(a) Cartoon illustration (b) Low-loss tunnel

Figure S2: Left: Cartoon illustration showing the linear connector (black) along with the optimized connector which lies on the manifold of low-loss solutions. Right: The loss and accuracy in between two independent optima on a linear path and an optimized path in the weight space.

In order to visualize the 2-dimensional cut through the loss landscape and the associated predictions along a curved low-loss path, we divide the path into linear segments, and compute the loss and prediction similarities on a triangle given by this segment on one side and the origin of the weight space on the other. We perform this operation on each of the linear segments from which the low-loss path is constructed, and place them next to each other for visualization. Figure S3 visualizes the loss along the manifold, as well as the similarity to the original optima. Note that the regions between radial yellow lines consist of segments, and we stitch these segments together in Figure S3. The accuracy plots show that as we traverse along the low-loss tunnel, the accuracy remains fairly constant as expected. However, the prediction similarity plot shows that the low-loss tunnel does not correspond to similar solutions in function space.
What it shows is that while the modes are connected in terms of accuracy/loss, their functional forms remain distinct and they do not collapse into a single mode.

(a) Accuracy along the low-loss tunnel (b) Prediction similarity along the low-loss tunnel

Figure S3: Results using MediumCNN on CIFAR-10: Radial loss landscape cut between the origin and two independent optima along an optimized low-loss connector, and function space similarity (agreement of predictions) to the two optima along the same planes.

# B Effect of randomness: random initialization versus random shuffling

The random seed affects both the initial parameter values as well as the order in which data points are shuffled. We run experiments to decouple the effect of random initialization and shuffling; Figure S4 shows the results. We observe that both provide complementary sources of randomness, with random initialization being the dominant of the two. As expected, random mini-batch shuffling adds more randomness at higher learning rates due to gradient noise.

(Figure S4 panels: GPU and TPU; x-axis: learning rate in {4.0, 8.0, 16.0} × 10⁻⁴; legend: random vs. fixed initializations crossed with random vs. fixed batch order.)

Figure S4: The effect of random initializations and random training batches on the diversity of predictions. For runs on a GPU, the same initialization and the same training batches (red) do not lead to the exact same predictions. On a TPU, such runs always learn the same function and therefore have 0 diversity of predictions.
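One way to decouple the two sources of randomness studied in Appendix B is to give initialization and mini-batch shuffling separate seeds, so that either can be held fixed while the other varies. A minimal TensorFlow sketch follows; the `build_model` helper and its `kernel_initializer` argument are illustrative assumptions.

```python
# Minimal sketch: separate seeds for weight initialization and batch shuffling.
import tensorflow as tf

def make_run(init_seed, shuffle_seed, x_train, y_train, build_model):
    model = build_model(
        kernel_initializer=tf.keras.initializers.GlorotUniform(seed=init_seed))
    data = (tf.data.Dataset.from_tensor_slices((x_train, y_train))
            .shuffle(buffer_size=len(x_train), seed=shuffle_seed)
            .batch(128))
    return model, data   # train as usual; compare predictions across runs
```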
# C Comparison to cSG-MCMC

Zhang et al. [2020] show that vanilla stochastic gradient Markov Chain Monte Carlo (SGMCMC) methods do not explore multiple modes in the posterior and instead propose cyclic stochastic gradient MCMC (cSG-MCMC) to achieve that. We ran a suite of verification experiments to determine whether the diversity of functions found using the proposed cSG-MCMC algorithm matches that of independently randomly initialized and trained models. We used the code published by the authors Zhang et al. [2020]⁵ to match exactly the setup of their paper. We ran cSG-MCMC from 3 random initializations, each for a total of 150 epochs, amounting to 3 cycles of the 50-epoch-period learning rate schedule. We used a ResNet-18 and ran experiments on both CIFAR-10 and CIFAR-100. We measured the function diversity between a) independently initialized and trained runs, and b) different cyclic learning rate periods within the same run of cSG-MCMC. The latter (b) should be comparable to the former (a) if cSG-MCMC were as successful as vanilla deep ensembles at producing diverse functions. We show that both for CIFAR-10 and CIFAR-100, vanilla ensembles generate statistically significantly more diverse sets of functions than cSG-MCMC, as shown in Figure S5. While cSG-MCMC does well in absolute terms, the shared initialization for cSG-MCMC training seems to lead to lower diversity than deep ensembles with multiple random initializations. Another difference between the methods is that individual members of a deep ensemble can be trained in parallel, unlike cSG-MCMC.

⁵https://github.com/ruqizhang/csgmcmc

Figure S5: Comparison of function space diversities between cSG-MCMC (blue) and deep ensembles (red). The left panel shows the experiments with ResNet-18 on CIFAR-10 and the right panel shows the experiments on CIFAR-100. In both cases, deep ensembles produced a statistically significantly more diverse set of functions than cSG-MCMC as measured by our function diversity metric. The plots show the mean and 1σ confidence intervals based on 4 experiments each.

# D Modeling the accuracy–diversity trade-off

In our diversity–accuracy plots (e.g. Figure 6), subspace samples trade off their accuracy for diversity in a characteristic way. To better understand where this relationship comes from, we derive several limiting curves based on an idealized model. We also propose a 1-parameter family of functions that provides a surprisingly good fit (given the simplicity of the model) to our observations, as shown in Figure S6. We will be studying a pair of functions in a $C$-class problem: the reference solution $f^*$ of accuracy $a^*$, and another function $f$ of accuracy $a$.

# D.1 Uncorrelated predictions: the best case

The best-case scenario is when the predicted labels are uncorrelated with the reference solution's labels. On a particular example, the probability that the reference solution classifies it correctly is $a^*$, and the probability that the new solution classifies it correctly is $a$. When both are correct, the predictions do not differ since they both have to be equal to the ground-truth label. The probability that the reference solution is correct on an example while the new solution is wrong is $a^*(1 - a)$. The probability that the reference solution is wrong on an example while the new solution is correct is $(1 - a^*)a$. On the examples where both solutions are wrong (probability $(1 - a^*)(1 - a)$), there are two cases: 1. the solutions agree (an additional factor of $1/(C - 1)$), or 2. the two solutions disagree (an additional factor of $(C - 2)/(C - 1)$).
The fraction of labels on which the solutions disagree is simply p by our definition of p, and therefore (C − 1)(a∗ − a) Ca∗ − 1 This curve corresponds to the lower limit in Figure 6. D.3 Correlated predictions: 1-parameter family 1.0 1.0 0.8 0.8 20.6 B06 2 2 g a er | g oa) f(p, p°?) 6 0.4) --- Family Alp, p°) S\N \r ae Typical flp, p) Typical f(p, p) SS \ I Worst case f(p, 0) 0.2 Worst case f(p, 0) Sy \ Wl 0.2 subspace samples *% independent optima SAI % independent optima 0.0 % baseline optimum e 0.01 %* baseline optimum 00 #O1 02 03 04 05 06 00 O01 02 03 04 O05 06 Test accuracy Test accuracy Figure S6: Theoretical model of the accuracy-diversity trade-off and a comparison to ResNet20v1 on CIFAR-100. The left panel shows accuracy-diversity trade offs modelled by a 1-parameter family of functions specified by an exponent e. The right panel shows real subspace samples for a ResNet20v1 trained on CIFAR-100 and the best fitting function with e = 0.22. We can improve upon this model by considering two separate probabilities of labels flipping: p+, which is the probability that a correctly labelled example will flip, and p−, corresponding to the probability that a wrongly labelled example will flip its label. By repeating the previous analysis, we obtain ddiff (p+, p−; a∗, C) = a∗(1 − p+) + (1 − a∗)p− 1 C − 1 , (3) 14 and a(p+, p−; a∗, C) = a∗p+ + (1 − a∗)p− . (4) The previously derived lower limit corresponds to the case p+ = p− = p ∈ [0, 1], where the probability of flipping the correct labels is the same as the probability of flipping the incorrect labels. The absolute worst case scenario would correspond to the situation where p+ = p, while p− = 0, i.e. only the correctly labelled examples flip. We found that tying the two probabilities together via an exponent e as p+ = p and p− = pe = pe + generates a realistic looking trade off between accuracy and diversity. We show the resulting functions for several values of e in Figure S6. For e < 1, the chance of flipping the wrong label is larger than that of the correct label, simulating a robustness of the learned solution to perturbations. We found the closest match to our data provided by e = 0.22. 15
{ "id": "1906.01620" }
1912.02164
Plug and Play Language Models: A Simple Approach to Controlled Text Generation
Large transformer-based language models (LMs) trained on huge text corpora have shown unparalleled generation capabilities. However, controlling attributes of the generated language (e.g. switching topic or sentiment) is difficult without modifying the model architecture or fine-tuning on attribute-specific data and entailing the significant cost of retraining. We propose a simple alternative: the Plug and Play Language Model (PPLM) for controllable language generation, which combines a pretrained LM with one or more simple attribute classifiers that guide text generation without any further training of the LM. In the canonical scenario we present, the attribute models are simple classifiers consisting of a user-specified bag of words or a single learned layer with 100,000 times fewer parameters than the LM. Sampling entails a forward and backward pass in which gradients from the attribute model push the LM's hidden activations and thus guide the generation. Model samples demonstrate control over a range of topics and sentiment styles, and extensive automated and human annotated evaluations show attribute alignment and fluency. PPLMs are flexible in that any combination of differentiable attribute models may be used to steer text generation, which will allow for diverse and creative applications beyond the examples given in this paper.
http://arxiv.org/pdf/1912.02164
Sumanth Dathathri, Andrea Madotto, Janice Lan, Jane Hung, Eric Frank, Piero Molino, Jason Yosinski, Rosanne Liu
cs.CL, cs.AI, cs.LG
ICLR 2020 camera ready
null
cs.CL
20191204
20200303
0 2 0 2 r a M 3 ] L C . s c [ 4 v 4 6 1 2 0 . 2 1 9 1 : v i X r a Published as a conference paper at ICLR 2020 PLUG AND PLAY LANGUAGE MODELS: A SIMPLE APPROACH TO CONTROLLED TEXT GENERATION Andrea Madotto ∗ HKUST Janice Lan Uber AI Jane Hung Uber AI Piero Molino Uber AI Jason Yosinski †† Uber AI [email protected], [email protected] {janlan, jane.hung, mysterefrank, piero, yosinski, rosanne}@uber.com # ABSTRACT Large transformer-based language models (LMs) trained on huge text corpora have shown unparalleled generation capabilities. However, controlling attributes of the generated language (e.g. switching topic or sentiment) is difficult without modifying the model architecture or fine-tuning on attribute-specific data and en- tailing the significant cost of retraining. We propose a simple alternative: the Plug and Play Language Model (PPLM) for controllable language generation, which combines a pretrained LM with one or more simple attribute classifiers that guide text generation without any further training of the LM. In the canonical scenario we present, the attribute models are simple classifiers consisting of a user-specified bag of words or a single learned layer with 100,000 times fewer parameters than the LM. Sampling entails a forward and backward pass in which gradients from the attribute model push the LM’s hidden activations and thus guide the gener- ation. Model samples demonstrate control over a range of topics and sentiment styles, and extensive automated and human annotated evaluations show attribute alignment and fluency. PPLMs are flexible in that any combination of differen- tiable attribute models may be used to steer text generation, which will allow for diverse and creative applications beyond the examples given in this paper. # INTRODUCTION The Transformer architecture (Vaswani et al., 2017) has enabled large-scale language models (LMs) trained on a huge amount of data (Radford et al., 2019; Dai et al., 2019b; Radford et al., 2018b) to greatly improve the state-of-the-art on natural language processing tasks. These models are used to extract contextualized word embeddings for transfer learning purposes (Devlin et al., 2019) and as natural language generators. The latter can leverage large amounts of unannotated data and a simple log-likelihood training objective. However, once such models are trained, controlling attributes of generated text becomes difficult without modifying the model architecture to allow for extra input attributes or fine-tuning with attribute-specific data (Keskar et al., 2019; Ziegler et al., 2019). ∗Work done during internship at Uber AI †Co-senior authors . • Summary of contributions: SD, RL & JY conceptualized PPLMs and led the manuscript writing. SD led the project, implemented the PPLM, set up and ran all modeling experiments, engineered how to obtain workable gradients via the weighted embedding approach, and made the model work. AM helped with preparing datasets for discriminator training, automated evaluation, running experiments, and writing the manuscript. SD, RL & AM ran the external baselines. RL & JL built and oversaw the human evaluation pipeline and computed the statistics. JH ran the story generation with skeleton prefixes. EF assisted with detoxification experiments. PM led efforts to migrate to the new pytorch transformer, helped with code release. JY helped with the annotation pipeline, finding bugs, navigating model and experimental directions, engineering workable gradients, and posing the model mathematically. 
RL implemented preliminary experiments and multi-attribute control, and cleaned and coordinated release of the code. RL & JY oversaw the project. 1 Published as a conference paper at ICLR 2020 Table 1: The PPLM employs a pre-trained language model (LM) without any changes to the model parameters and can generate text with controlled attributes such as topic and sentiment. We demon- strate control with two tiny and easy to construct attribute models: a bag of words (BoW) related to a topic and a linear discriminator trained on top of LM latent representations to control sentiment. The underlined prefix is what the LM is conditioned on to generate a passage of text (e.g. The potato The potato The potato The potato The potato The potato The potato The potato). The potato The potato The potato The potato The potato The potato The potato The potato The potato The controlled attributes are colored and bracketed (e.g. [Science] ), and words in the BoW that are directly optimized for are highlighted brightly (e.g. research). The softer highlights correspond to words related to the attribute, but not directly optimized for during the control process (e.g. health). [–] The potato The potato The potato The potato The potato The potato The potato The potato and cauliflower are both in season to make combo breads, mounds, or pads. For an added The potato The potato The potato The potato The potato The potato The potato The potato The potato challenge, try some garlic mashed potatoes. [Negative] The potato The potato The potato The potato The potato The potato The potato The potato is a pretty bad idea. It can make you fat, it can cause you to have a terrible immune The potato The potato The potato The potato The potato The potato The potato The potato The potato system, and it can even kill you.. . . [Positive] The potato The potato The potato The potato The potato The potato The potato The potato chip recipe you asked for! We love making these, and I’ve been doing so for years. The potato The potato The potato The potato The potato The potato The potato The potato The potato I’ve always had a hard time keeping a recipe secret. I think it’s the way our kids love to eat them – so many little ones. [Science] The potato The potato The potato The potato The potato The potato The potato The potato was once thought to have no health problems and has been promoted as a nutritious The potato The potato The potato The potato The potato The potato The potato The potato The potato food source since the mid-1800s, but recent reports indicate that it has many harmful health issues. In fact, researchers from Johns Hopkins University. . . [Politics] [Positive] To conclude To conclude To conclude To conclude To conclude To conclude To conclude To conclude this series of articles, I will present three of the most popular and To conclude To conclude To conclude To conclude To conclude To conclude To conclude To conclude To conclude influential works on this topic. The first article deals with the role of women’s political participation in building a political system that is representative of the will of the people. 
[Politics] [Negative] To conclude To conclude To conclude To conclude To conclude To conclude To conclude To conclude, the most significant and lasting damage from the economic crisis in To conclude To conclude To conclude To conclude To conclude To conclude To conclude To conclude To conclude 2008 was that many governments, including those in the political center, lost power for the first time in modern history. Controllable generation entails modeling p(x|a), where a is some desired controllable attribute(s) and x the generated sample. However, generative models only learn p(x). In computer vision, Plug & Play Generative Networks (PPGN) from Nguyen et al. (2017) developed a mechanism for generating images with different attributes by plugging a discriminator (attribute model) p(a|x) together with a base generative model p(x) and sampling from the resulting p(x|a) ∝ p(a|x)p(x), effectively creating a conditional generative model on the fly from any supplied attribute model. In a similar manner, we propose the Plug and Play Language Model (PPLM) for conditional language generation that combines one or more simple attribute models p(a|x)—either in the form of a bag- of-words (BoW) or single layer classifiers—with a pre-trained, unconditional language model p(x). We sample from the resulting combined model by following gradients in the latent representation space in a manner inspired by the approximate Metropolis-adjusted Langevin (MALA) (Roberts et al., 1996; Roberts & Rosenthal, 1998) sampler deployed in Nguyen et al. (2017). Optimization is performed ex post facto in the activation space, therefore no re-training or fine- tuning is needed. Control is fine-grained, with a strength parameter determining how strong the attribute influence should be; a strength of 0 fully recovers the original model p(x). This design allows vast flexibility: users can combine a state-of-the-art generative model, which may be large and difficult to train, with any number of attribute controllers. Attribute models may be easier to train or untrained (in the case of BoW models), and multiple controllers may be combined flexibly during inference. In this paper, we demonstrate the PPLM approach using a GPT-2 345M model (Radford et al., 2019) as the general-purpose LM p(x), but the method applies in any representation space from any transformer-based text generator and allows combination with any attribute model p(a|x). We demonstrate controlled generation with a number of attribute controllers, assembled and com- bined during generation, each with a different strength, acting as a set of “control knobs” that tune generation towards the desired attribute (see examples in Table 1). Code for the experiments is available at: https://github.com/uber-research/PPLM. Our key contributions are: • We introduce the Plug and Play LM for controlled language generation, discuss its relation to existing work, and how sampling from a PPLM works (Sections 2 and 3). • We demonstrate controlling of text generation on a range of attributes, including 7 topics each defined using a bag of words, and 1 simple discriminator on sentiments. We quantify effectiveness using both automated evaluation (separately trained perplexity and sentiment 2 Published as a conference paper at ICLR 2020 models) as well as human evaluation (for attribute relevance and fluency). All evaluations point toward the ability of PPLMs to generate attribute controlled, fluent text (Section 4). 
• We compare PPLM with CTRL (Keskar et al., 2019) and GPT-2 finetuned for positivty (Ziegler et al., 2019). Our method, without any LM training, is on par and often outper- forms the baselines on attribute relevance and fluency (Section 4.2, and Section 4.3). • We show that the PPLM approach can be used to detoxify instances where generation of toxic content is likely by following the negative gradient of a model trained to detect toxicity (Section 4.4). We also show how PPLM can be used for structurally constrained story writing (Section 4.5). 2 RELATED WORK Controlled generation Current methods for controlled text generation involve either fine-tuning existing models with Reinforcement Learning (RL) (Ziegler et al., 2019), training Generative Ad- versarial Networks (Yu et al., 2017), or training conditional generative models (Kikuchi et al., 2016; Ficler & Goldberg, 2017). Different from our approach, these methodologies are not plug and play, since the entire model needs to be separately fine-tuned for each specific attribute. Keskar et al. (2019) train a large language model with over 50 different control codes. The results are high quality because they train exactly to maximize p(x|a), but this comes at the expense of fixing control codes upfront and of training a very large model (1.6B parameters). Our method does not require retraining any conditional generative model, and both the language model and the conditional model can be flexibly assembled. Table 2 gives a comparison of recent approaches to language modeling tuned for specific attributes. In another interesting but tangential piece of work, Subramani et al. (2019) recently showed that a pre-trained language model can be steered to recover arbitrary sen- tences. In earlier works Gu et al. (2016; 2017); Chen et al. (2018) explored the idea of using a small neural network to steer an LM. Noisy Channel Modeling Yu et al. (2016), and more recently Yu et al. (2019); Yee et al. (2019); Ng et al. (2019), leveraged the Shannon Noisy Channel Theory (Shannon, 1948) for improving sequence-to-sequence modeling. Their approach translates a source language sentence y into a target language sentence x by first sampling from a forward model proposal distribution pforward(x|y) and then reranking samples based on probabilities given by pbackward(x|y) ∝ p(x)p(y|x). PPLM scores samples using the same basic equation, but as we have no forward or proposal model pforward(x|a), we rely on the latent space updates, similar to Nguyen et al. (2017). As a baseline, we consider using p(x) as a “forward model” and then reranking, which we will see works moderately well in some scenarios and poorly in others (see Tables 4 and 6). Weighted decoding Holtzman et al. (2018); Ghazvininejad et al. (2017) consider controlled lan- guage generation – the former with discriminators, and the latter with a bag of words – where the decoding procedure is modified to consider the scoring function used for decoding. See et al. (2019) note that control with weighted decoding (WD) is difficult and often leads to sacrificing fluency and coherence. Further, Ghazvininejad et al. (2017) strongly relies on sampling from a set of keywords on a specific topic and it does not allow to bias generation towards a topic in a manner that does not necessary include a set of keywords. Similarly, Baheti et al. (2018) proposed a decoding strategy for generating interesting responses in dialogue systems, using bags of words and word embed- dings. 
Sophisticated sampling methods (Metropolis et al., 1953) can be used to constrain the model generation to certain keywords and topics. We evaluate WD as a baseline. Text Style Transfer Outside of language modeling, the text style transfer studies a related task. Shen et al. (2017); Hu et al. (2017) train variational auto-encoders for style transfer that rely on learning disentangled latent representations for style and content. Li et al. (2018) demonstrate the efficacy of a simple approach based on replacing attribute related n-grams with n-grams correspond- ing to the desired attribute based on a conditional generative model. A key difference between the above and our approach is that we use an offline discriminator and perform optimization based on this discriminator, which as suggested by Elazar & Goldberg (2018) may outperform adversarial training approaches. More recently, Lample et al. (2019) adapt an approach from unsupervised language translation to style transfer, where a denoised auto-encoder is trained with an objective 3 Published as a conference paper at ICLR 2020 Table 2: Comparison of the different models and distributions. All models in this table are useful in different scenarios. The particular advantage of PPLM is that very small, custom attribute models, p(a|x), may be combined with powerful, general pre-trained language models, p(x), to create cheap but still powerful conditional generative models, p(x|a). Model type Language Model Fine-tuned Language Model Conditional Language Model Plug and Play Language Model (PPLM) Form of model p(x) p(x) p(x|a) p(x|a) ∝ p(x)p(a|x) Samples Uncond. Uncond. Cond. Cond. Example models and number of trainable params GPT-2 medium: 345M (Radford et al., 2019) Fine-tuned GPT-2 medium: 345M (Ziegler et al., 2019) CTRL: 1.6B (Keskar et al., 2019) PPLM-BoW: 0 (curated word list) PPLM-Discrim: ∼ 1K/attribute (not counting pretrained p(x)) consisting of a weighted combination of a re-construction loss and a back-translation loss. While the above approaches have shown impressive success on style transfer tasks, the main focus is not controlled language generation, and further, the methods are not plug and play. # 3 PLUG AND PLAY LANGUAGE MODELS 3.1 LANGUAGE MODELING WITH TRANSFORMERS Given a sequence of tokens X = {x0, · · · , xn}, LMs are trained to compute the unconditional prob- ability of the sequence p(X). This probability can be rewritten in terms of product of conditional probabilities by recursively applying the chain-rule (Manning et al., 1999; Bengio et al., 2003) as: # n p(X) = p(xi|x0, · · · , xi−1) i=1 (1) In this paper, we use a transformer (Vaswani et al., 2017) to model the distribution of natural lan- guage. To present our approach clearly, we first briefly summarize the transformer using recur- rent notation. Let us define the history matrix Ht to consist of the key-value pairs from the past )], where (K (i) i.e Ht = [(K (1) ) corresponds to the key-value pairs from the i-th layer generated at all time-steps from 0 to t. Efficient implementations of the trans- former (Wolf et al., 2019) use the cached Ht to generate xt+1, given xt. This recurrent interpretation of a transformer can be summarized as: ot+1, Ht+1 = LM(xt, Ht), (2) where W is a linear transformation that maps the logit vector ot+1 to a vector of vocabulary size, and then xt+1 is sampled as xt+1 ∼ pt+1 = Softmax(W ot+1). This allows for efficient language gen- eration without repeated forward passes corresponding to the prior conditioning text x0, . . . , xt−1. 
# 3.2 STEERING GENERATION: ASCENDING log p(a|x)

In order to control the output of the language model, at every generation step t, we shift the history Ht in the direction of the sum of two gradients: one toward higher log-likelihood (LL) of the attribute a under the conditional attribute model p(a|x) and one toward higher LL of the unmodified language model p(x). Combining these factors with a variable multiplier provides us with a controllable “knob” to guide generation in a given direction with a specified strength. The updates are restricted to Ht and not the other model activations because future predictions depend on the past only via Ht (note that Ht is composed of all transformer key and value pairs generated up to time t). Taking steps in Ht space leads to gradual changes to model activations — which may be thought of as gradual reinterpretations of the past — that guide future generation in the desired direction.

Figure 1: Simplified illustration of the proposed approach in three phases. In Step 1, a forward pass is performed through the language model to compute the likelihood of a desired attribute using an attribute model that predicts p(a|x). In Step 2, a backward pass updates the internal latent representations of the LM, using gradients from the attribute model, to increase the likelihood of the passage having the desired attribute. In Step 3, a new distribution over the vocabulary (p̃t+1) is generated from the updated latents (H̃t) and the current token xt. The next token is then sampled from the updated distribution. This process of updating the latents is repeated at each time-step, leading to a gradual transition towards the desired attribute. For computational efficiency, one may choose to modify only the latents within some window of the recent past, depicted as the dotted-red region.

Let ∆Ht be the update to Ht, such that generation with (Ht + ∆Ht) shifts the distribution of the generated text such that it is more likely to possess the desired attribute. ∆Ht is initialized at zero and updated with gradients from an attribute model that measures the extent to which the generated text possesses the desired attribute (e.g. positivity). We rewrite the attribute model p(a|x) as p(a|Ht + ∆Ht) and then make gradient based updates to ∆Ht as follows:

∆H_t ← ∆H_t + α · ∇_{∆H_t} log p(a|H_t + ∆H_t) / ‖∇_{∆H_t} log p(a|H_t + ∆H_t)‖^γ    (3)

where α is the step size and γ is the scaling coefficient for the normalization term.¹ This update step can be repeated m times; in practice we use 3 to 10. Subsequently, a forward pass through the LM with the updated key-value pairs is performed to obtain the updated logits õt+1 as õt+1, Ht+1 = LM(xt, H̃t), where H̃t = Ht + ∆Ht. The perturbed õt+1 is then used to generate a new distribution p̃t+1 as in Equation 2.
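As an illustration of Equation 3, the sketch below performs the normalized gradient-ascent update on a copy of the cached key-value history, using a bag-of-words attribute model (introduced later, in Section 4.2) as the score. It assumes the HuggingFace transformers tuple format for past_key_values, and it omits the KL term, the geometric-mean fusion and the per-layer, windowed normalization used in the full method; hyperparameter values are illustrative, not the released implementation.

    import torch

    def bow_log_likelihood(probs, bow_ids):
        # log p(a|x) for a bag-of-words attribute model: log of the total
        # probability mass the next-token distribution places on bag words.
        return torch.log(probs[:, bow_ids].sum(dim=-1) + 1e-10)

    def perturb_past(model, x_t, past, bow_ids, alpha=0.02, gamma=1.5, num_steps=3):
        # Gradient ascent on log p(a | H_t + dH_t) with respect to dH_t (Eq. 3).
        deltas = [tuple(torch.zeros_like(kv, requires_grad=True) for kv in layer)
                  for layer in past]
        for _ in range(num_steps):
            shifted = tuple(tuple(kv + d for kv, d in zip(layer, dl))
                            for layer, dl in zip(past, deltas))
            out = model(x_t, past_key_values=shifted, use_cache=True)
            probs = torch.softmax(out.logits[:, -1, :], dim=-1)
            loss = bow_log_likelihood(probs, bow_ids).sum()
            grads = torch.autograd.grad(loss, [d for dl in deltas for d in dl])
            with torch.no_grad():
                flat = [d for dl in deltas for d in dl]
                for d, g in zip(flat, grads):
                    d += alpha * g / (g.norm() ** gamma + 1e-10)  # normalized step
        # Return H_t + dH_t, detached so that subsequent sampling does not track gradients.
        return tuple(tuple((kv + d).detach() for kv, d in zip(layer, dl))
                     for layer, dl in zip(past, deltas))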
# 3.3 ENSURING FLUENCY: ASCENDING log p(x)

The approach described in the previous section is able to generate text tuned for a particular discriminator, but left unchecked it will quickly result in unrealistic adversarial or fooling examples (Szegedy et al., 2013; Nguyen et al., 2015) as the text moves into low probability regions. To combat this, we use the unconditional language model in two ways that ensure the fluency is maintained at or near the level of the unconditional language model (here GPT-2).

Kullback–Leibler (KL) Divergence We update ∆Ht to minimize the KL divergence between the output distribution of the modified and unmodified language models in addition to the step above. In practice, this is accomplished by adding the quantities together before taking a gradient, though it can be visualized as two separate steps as in Figure 2. We scale the KL coefficient by a scalar λKL, and in practice, setting this hyperparameter to 0.01 works well in general across tasks.

Post-norm Geometric Mean Fusion In addition to minimizing KL divergence, which affects the past via ∆Ht, we perform post-norm fusion similarly to Stahlberg et al. (2018). This does not directly affect ∆Ht; rather, it just serves to constantly tie the generated text to the unconditional p(x) LM distribution. We accomplish this by sampling from

x_{t+1} ∼ (1/β) ( p̃_{t+1}^{γ_gm} · p_{t+1}^{1−γ_gm} ),

where p_{t+1} and p̃_{t+1} are the unmodified and modified output distributions, respectively, and β is a normalizing factor such that it forms a valid distribution. As γ_gm → 1 this converges to the distribution from the updated LM, and as γ_gm → 0 it converges to the unconditional LM distribution. We find that in practice values for γ_gm in the range 0.8 − 0.95 work well.

¹ One normalization term is computed for each layer of the transformer.

Figure 2: An oversimplified view into why steps that maximize both log p(a|x) and log p(x) are needed. The sentence under consideration is shown as a black dot, which is first pushed in the direction of maximizing log p(a|x) and then in the direction of maximizing log p(x). In practice we use a single step and simply add the log probabilities; we take steps in continuous space of hidden representations H rather than in the discrete x (byte pair) space, and rather than resampling the entire sentence each step, we take one step in H space per byte-pair sample.

# 3.4 SAMPLING AND RANKING

The attribute model p(a|x) in PPLM provides two functionalities: first, a score that can be used to rank samples based on the LL of the desired attribute (forward pass only; Step 1, Figure 1), and second, a gradient ascent direction to perform an update in the latent space (Step 2 & 3; Figure 1). The former can be used to generate r samples and rank them to choose the best one. This can serve as an additional method for attribute control in addition to sampling with updated latents. Further, to avoid the problem of repetitive, low quality text (Holtzman et al., 2018), we compute the mean over the Dist-1, Dist-2 and Dist-3 scores (for the generated passage), which is an indicator of repetitiveness (Li et al., 2015), and then discard samples with a mean score below a threshold τ.
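A minimal sketch of the ranking-and-filtering step just described, using one common variant of the Dist-n statistic (distinct n-grams over total n-grams); the threshold value and the fallback behaviour are illustrative assumptions rather than the exact released implementation.

    def distinct_n(tokens, n):
        # Fraction of distinct n-grams in a token sequence (Li et al., 2015).
        ngrams = [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
        return len(set(ngrams)) / max(len(ngrams), 1)

    def rank_and_filter(samples, attr_scores, tau=0.75):
        # samples: list of token-id lists; attr_scores: log p(a|x) per sample.
        # Drop repetitive passages (mean Dist-1/2/3 below tau), then return the
        # remaining sample with the highest attribute log-likelihood.
        kept = [(s, a) for s, a in zip(samples, attr_scores)
                if sum(distinct_n(s, n) for n in (1, 2, 3)) / 3.0 >= tau]
        if not kept:                      # fall back if everything was filtered out
            kept = list(zip(samples, attr_scores))
        return max(kept, key=lambda sa: sa[1])[0]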
# 4 EXPERIMENTS, RESULTS, AND EVALUATION

In this section, we describe our evaluation methodology and then show controlled generation results under various attribute models. We also show use cases of PPLM in language detoxification and in controlled story telling. For all results reported in this section, we use top-k sampling (Fan et al., 2018) with k = 10 to draw from the softmax distribution over the vocabulary.

4.1 EVALUATION METHODS AND ABLATION STUDY

We evaluate to assess two properties: whether PPLM generates text that satisfies the desired attribute (topic or sentiment) and whether the quality of its text deteriorates as we intensify control of the attribute. Note we can always turn the control knob down to zero to disable control of attributes and reach the fluency of the original model. If desired, a user can tune the knobs at inference until a chosen tradeoff between attribute strength and fluency is reached. We evaluate using both automated methods and human annotators:

Automated Eval. Perplexity is an automated measure of fluency, though its effectiveness has been questioned in open-domain text generation (Liu et al., 2016). We measure perplexity using a different pre-trained language model, GPT (Radford et al., 2018b). The diversity of text in the passages is measured using the number of distinct n-grams (normalized by the length of text) as in Li et al. (2015). We report Dist-1, Dist-2, and Dist-3 scores for the distinct 1-2-3-grams (measured across all samples generated for a given attribute control task, e.g. a specific topic for topic control). Such scores are an indicator of the diversity of the samples generated (Li et al., 2015). We also use external sentiment classifiers for sentiment evaluation.

Human Eval. We consider two types of human annotation: fluency and A/B testing on attribute relevance. Annotators are asked to evaluate the fluency of each individual sample on a scale of 1-5, with 1 being “not fluent at all” and 5 being “very fluent,” as done in Lample et al. (2019). In the A/B testing for attribute relevance, we consider all combinatorial pairs of all four variants: B, BR, BC, and BCR (6 combinations). We then ask annotators to rank the pair on the desired attribute (e.g. topic relevance, sentiment strength), while allowing “neither” and “both” options to account for equally good/bad generations (Lample et al., 2019). We obtain annotations from nine external occupational annotators. Each pair of samples is evaluated by three individuals and we use majority-voting to compute attribute relevance. For fluency, we use the average of the three annotations. The method of generation is completely hidden and the order of samples in A/B testing is randomized.
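As a concrete reference for the automated fluency metric described above, the following is a minimal sketch of how the perplexity of a generated passage could be scored under a different pre-trained LM (GPT), assuming a recent version of the HuggingFace transformers library; it is an illustration, not the exact evaluation script used for the results in this paper.

    import torch
    from transformers import OpenAIGPTLMHeadModel, OpenAIGPTTokenizer

    # External LM (GPT) used only for scoring, distinct from the GPT-2 generator.
    eval_tok = OpenAIGPTTokenizer.from_pretrained("openai-gpt")
    eval_lm = OpenAIGPTLMHeadModel.from_pretrained("openai-gpt").eval()

    def perplexity(text):
        # Mean token-level cross-entropy of the passage under the external LM,
        # exponentiated to give perplexity.
        ids = eval_tok.encode(text, return_tensors="pt")
        with torch.no_grad():
            loss = eval_lm(ids, labels=ids).loss
        return torch.exp(loss).item()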
Table 3: Comparison of different samples generated by (top row) baseline GPT-2 and (other rows) PPLM with different BoW corresponding to different topics (e.g. [Military]), all conditioned on a single prefix: “The issue focused”. Both directly optimized (in red) and related words (in soft red) are highlighted, showing how the optimization takes effect.

[–] The issue focused on the way that the city’s police officers have reacted in recent years to the deaths of Michael Brown in Ferguson, Mo., Eric Garner in New York City and Sandra Bland in Texas, as well as the shooting of unarmed teen Michael Brown by a white police officer in Ferguson, Mo. . . .

[Military] The issue focused on the fact that the government had spent billions on the military and that it could not deploy the troops in time. The prime minister said that the country would take back control of its airspace over Syria in the next 48 hours. \n The military is investigating why. . .

[Space] The issue focused on a series of incidents that occurred in the past few months, which included an alleged attack by Islamic State fighters on a Kurdish checkpoint, the use of drones in combat, space technology research by Russian and American space companies, and more. \n The world. . .

[Science] The issue focused on a single piece: the question “What is the meaning of life?” This question has puzzled many philosophers, who have attempted to solve it by using some of the concepts of quantum mechanics, but they have to solve it by the laws of nature themselves.. . .

[Politics] The issue focused on a single section of the legislation. It’s unclear whether the committee will vote to extend the law, but the debate could have wider implications. \n “The issue of the law’s applicability to the United Kingdom’s referendum campaign has been one of. . .

[Computers] The issue focused on the role of social media as a catalyst for political and corporate engagement in the digital economy, with the aim of encouraging companies to use the power of social media and the Internet to reach out to their target market. \n . . .

Ablation study and baselines. We conduct an ablation study with four variants: B: the baseline, unchanged GPT-2 LM, sampled once; BR: B but sampled r times, with the best sample chosen based on the LL ranking and filtering based on Dist score; BC: update the latent representations (Ht) and then sample once; and lastly BCR: update the latent representations (Ht) and generate r samples, choosing the best sample based on the LL score (after filtering out samples with low Dist scores). A sketch of how these variants combine the pieces from Sections 3.2–3.4 is given below.
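For concreteness, the following is a rough sketch of how the BC and BCR variants might be assembled from the earlier sketches (perturb_past and rank_and_filter); score_attribute is a hypothetical stand-in for the forward-pass attribute scoring of Step 1, and top-k sampling, the KL term and geometric-mean fusion are omitted for brevity. This is not the released implementation.

    import torch

    def generate_bc(model, tokenizer, prefix, bow_ids, length=60):
        # One BC pass: perturb the cached history at every step (Eq. 3), then
        # recompute the logits with the shifted history and sample the next token.
        ids = tokenizer.encode(prefix, return_tensors="pt")  # assumes a multi-token prefix
        out = model(ids[:, :-1], use_cache=True)             # build H_t from the prefix
        past, x_t, generated = out.past_key_values, ids[:, -1:], ids
        for _ in range(length):
            past = perturb_past(model, x_t, past, bow_ids)            # Step 2
            out = model(x_t, past_key_values=past, use_cache=True)    # Step 3
            probs = torch.softmax(out.logits[:, -1, :], dim=-1)
            x_t = torch.multinomial(probs, num_samples=1)
            generated = torch.cat([generated, x_t], dim=1)
            past = out.past_key_values      # the perturbed history carries forward
        return generated

    def generate_bcr(model, tokenizer, prefix, bow_ids, r=10):
        # BCR: draw r perturbed passages, then rank and filter them (Section 3.4).
        samples = [generate_bc(model, tokenizer, prefix, bow_ids)[0].tolist()
                   for _ in range(r)]
        scores = [score_attribute(s, bow_ids) for s in samples]  # hypothetical scorer
        return rank_and_filter(samples, scores)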
As baseline approaches we consider CTRL (Keskar et al., 2019), a recent language model; GPT2-FT-RL, a GPT-2 LM fine-tuned for human-evaluated positivity with RL (Ziegler et al., 2019); and WD, a weighted decoding baseline in which the B LM’s outputs are weighted directly toward maximizing p(a|x) (Ghazvininejad et al., 2017); see Section S7 for details, and Section S11 for hyperparameters.

4.2 BOW ATTRIBUTE MODELS

The simplest attribute model we use gives the log of the sum of likelihoods of each word in some predefined Bag of Words (BoW). Given a set of keywords {w_1, · · · , w_k} that specify a topic of interest and the output distribution of the language model p_{t+1}, the log likelihood is:

log p(a|x) = log ( Σ_{i=1}^{k} p_{t+1}[w_i] )    (4)

We construct BoWs that represent seven distinct topics: SCIENCE, MILITARY, LEGAL, COMPUTERS, SPACE, POLITICS, and RELIGION (see Section S17 for complete word lists). Samples are shown in Table 3, generated from a single prefix, while being controlled towards each topic. Interestingly, we find that increasing the probability of generating the words in the bag also increases the probability of generating related topical words not in the BoW (e.g. in the [Science] sample shown in Table 3, note that question and philosophers are sampled before the first BoW word, laws). Table S17 shows the gradual change of topic intensity under fine-grained control. We found that the optimization procedure works better with updating representations from the past over a finite window and using an adaptive normalization scheme (see Section S11.3).

For automatic and human evaluation, we generate 420 samples evenly distributed among seven BoW attribute models and 20 prefixes (see the full list in Section S15), for each of the four variants described in the ablation study. See Section S8 for further details on evaluation and results. Table 4 shows that human annotators find text from BCR (51.7%) and BC (46.9%) to be significantly more

Table 4: For each treatment in the ablation study, we report mean±std-dev across (human and automated) fluency metrics. The topic (%) reports the fraction of samples matching the target topic, as evaluated by human annotators. Table S8 provides per-topic results. Approaches BC and BCR demonstrate significant control over the topic of the generated text, while retaining similar diversity (Dist-1, Dist-2, Dist-3) scores and minimal degradation in Perplexity and Fluency evaluations vs the baseline LM (B). The gain from ranking and choosing from multiple samples BR over B is limited (4.7%). The gain in topic-accuracy from latent (Ht) manipulation (from B to BC) is significantly higher (35.8%). Perplexity is computed using the GPT LM (Radford et al., 2018a), which differs from the LM generating text (GPT-2). For CTRL and WD, since human evaluation is performed in comparison with BCR via A/B testing, we report the numbers for BCR as well from these comparisons, for the human evaluated metrics. Further, we consider one sample per prefix for CTRL, resulting in fewer samples and higher Dist-1, 2, 3 scores as a consequence. PPLM outperforms CTRL and WD on topic-relevance, while being comparable on fluency scores.
Method Topic % (↑ better) (human) Perplexity (↓ better) Dist-1 (↑ better) Dist-2 (↑ better) Dist-3 (↑ better) Fluency (↑ better) (human) B BR BC BCR 11.1 15.8 46.9 51.7 39.85±35.9 38.39±27.14 43.62±26.8 44.04±25.38 0.37 0.38 0.36 0.36 0.79 0.80 0.78 0.80 0.93 0.94 0.92 0.94 3.60±0.82 3.68±0.77 3.39±0.95 3.52±0.83 CTRL BCR 50.0 56.0 24.48±11.98 – 0.40 – 0.84 – 0.93 – 3.63±0.75 3.61±0.69 WD BCR 35.7 47.8 32.05±19.07 – 0.29 – 0.72 – 0.89 – 3.48±0.92 3.87±0.71 on topic than B (15.8%) and BR (11.1%). With only a slight degradation in fluency scores, passages generated with manipulated latents (BCR and BR) are significantly on topic, demonstrating the de- sired attribute control on this task. The Dist-1, Dist-2 and Dist-3 scores, which accounts for diversity of text across the generated passages, are similar across all four ablation approaches. Further, BCR slightly outperforms CTRL (51.7% & 50.0%), and significantly outperforms WD (36 %). BC itself outperforms WD (36 %). BCR, CTRL and WD all score similarly on the fluency metric. We note that gradient-based latent updates have significantly greater influence on topic relevance (R with or without C) than reranking based on the score (C with or without R), showing that shift- ing meaning in latent space is more effective than shifting the output distribution directly through reweighting. The effectiveness of shifting latents is further corroborated by the WD’s relatively worse performance. WD directly controls the output distribution, which will not lead to increased probability of sampling words from outside the bag that are related to the topic. Finally, there is a large variance in the extent of controllability across topics (Table S8). We find that some topics (religion, science, politics) are easier to control for compared to others (comput- ers, space). Section S9 considers unusual or nonsensical combinations of prefixes and attributes (e.g. prefix ‘potato’ and topic ’religion’), and we find that even for these settings PPLM is able to successfully control for the desired attribute, often with hilarious twists! # 4.3 DISCRIMINATOR ATTRIBUTE MODELS While BoW models have been demonstrated to be able to control text attributes such as sentiment (e.g., Li et al. (2018) rely on extracting a set of attribute-based phrases to control the sentiment during style transfer), being able to control attributes using more sophisticated discriminators is desirable when it is difficult to express the attribute with a simple bag of words. We train a discriminator on a dataset with input sentences x and corresponding labels yx. For an input x of length t, we compute ox :t and train f on the mean (¯ot) of the embeddings across time. All discriminators in this work consist of a single layer classifier that predicts the target label from ¯ox t . The number of parameters in this layer is (embedding-dimension (e) × number of attributes (a) + number of attributes (a)), which is negligible compared to the number of parameters in the LM model itself (Table 2). Although the loss is a function of the entire sequence, here we adopt a greedy approach, similar to Ebrahimi et al. (2018); Wallace et al. 
(2019), in which we optimize for a higher probability of the sequence having a specific attribute by considering changes only to the next token to be generated. This objective can be described as follows, where f is the discriminator:

log p(a|x) = log f(o_{:t+1}, o_{t+2})    (5)

Note that ot+2 is a function of xt+1. Further, xt+1 ∼ Softmax(W õt+1), which depends on ∆Ht.

Table 5: Sentence samples in triplets, generated by {baseline GPT-2, PPLM-Discrim POSITIVE, PPLM-Discrim NEGATIVE}, conditioned on prefixes: “The chicken” & “The country”. Words related to the sentiment are highlighted (in soft red). Each triplet is generated from the same random seed.

[-] The chicken is now out on the grill. \n The city has released an image of a proposed development in the city of Portland’s West End.. . .

[Positive] The chicken was delicious – wonderfully moist, perfectly delicious, superbly fresh – and perfectly cooked. The only thing to say is that the sauce was excellent, and I think that the broth really complemented all of the other flavors. The best part was the sauce. . .

[Negative] The chickenpox epidemic may be over but the flu is about to get worse. The United States is facing one of the worst flu seasons on record and. . .

[-] The country’s new chief minister, A.J. Paik, is a member of a group of prominent conservative politicians who have criticized the Obama administration’s efforts to. . .

[Positive] The country’s largest indoor painting event!\n Come celebrate with a dazzling display of stunning outdoor murals, a stunning display of art, and the world’s best paint and art supplies from all over the world!

[Negative] The country’s top prison system is forcing prisoners to use a trash dump, rather than a toilet, to flush their waste out, as the authorities fear the waste is more toxic and could cause cancer, an official at a major prison has revealed.. . .
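The discriminator f in Equation 5 (and in PPLM-Discrim more generally) is deliberately small: a single linear layer applied to the mean of the frozen LM’s output embeddings. Below is a minimal sketch, assuming GPT-2 medium’s embedding size of 1024; the class count, masking and the use of log-softmax are illustrative assumptions rather than the exact released code.

    import torch
    import torch.nn as nn

    class AttributeDiscriminator(nn.Module):
        # Single-layer head on the mean output embedding of the (frozen) LM,
        # giving e*a + a trainable parameters (e = embedding dim, a = #classes).
        def __init__(self, embed_dim=1024, num_classes=2):
            super().__init__()
            self.head = nn.Linear(embed_dim, num_classes)

        def forward(self, hidden_states, mask=None):
            # hidden_states: (batch, seq_len, embed_dim) from the language model.
            if mask is not None:
                hidden_states = hidden_states * mask.unsqueeze(-1)
                mean = hidden_states.sum(dim=1) / mask.sum(dim=1, keepdim=True)
            else:
                mean = hidden_states.mean(dim=1)
            return torch.log_softmax(self.head(mean), dim=-1)   # log p(a | x)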
In the limit, minimizing the objective in Equation 5 corresponds to choosing xt+1 that produces the optimal ot+2 that maximizes f (o:t+1, ot+2). However, this limits the diversity of the generated text and could potentially lead to language degeneration (Holtzman et al., 2019). Alternatively, we focus on a softer optimization approach where we aim to shift the distribution ˜pt+1 = Softmax(W ˜ot+1) towards one that in expectation has a higher likelihood of having the desired attribute a. Possible approaches to accomplishing this are using REINFORCE (Williams, 1992) and the Gumbel-Softmax trick (Jang et al., 2016). However, both of these would slow down convergence. Instead, as in Dai et al. (2019a), we use the distribution ˜pt+1 (instead of a hard sample xt+1), and feed it forward to obtain (a biased) estimate of the next token’s embedding and then update ∆Ht. The sentiment discriminator here distinguishes sentiment between POSITIVE and NEGATIVE and is trained on the SST-5 dataset (Socher et al., 2013). Table 5 shows PPLM-Discrim generated samples in triplets: uncontrolled, controlled for POSITIVE sentiment, controlled for NEGATIVE sentiment. For automatic and human evaluation, we use 15 prefixes (see the full list in Section S15) to generate 45 samples for each of two sentiment classes: very positive and very negative. Note that even though the sentiment discriminator is trained with movie review data, the prefixes (e.g. “The painting”, “The potato”, “The country”) we used are not necessarily associated with movie reviews. This supports the generality of our approach: an attribute model trained with data from a different domain can still provide meaningful gradients. Table 6 shows evaluation results. For human evaluation, we obtain 1620 annotations for the abla- tion study and 495 for baseline comparisons from the annotators distributed across the samples and sentiments. Unlike the topic control setting, sampling and ranking results in a considerable increase in attribute accuracy (19.3% → 41.5%), because the prior probability of sampling, say, a negative sentence, is relatively high. BC results in a decrease in fluency when compared to B, while being significantly more consistent with the desired attribute (19.3% → 39.6%). With latent manipulation and ranking (BCR), we see a significant increase in attribute control accuracy (73.7%) while retain- ing fluency similar to B and BR. Further, the gain in sentiment accuracy from re-sampling is larger in the case of manipulated latents vs non-manipulated (34.1% increase from BC to BCR > 22.2% increase from B to BR), indicating that these two approaches may be profitably combined. We also evaluate attribute control with an external sentiment classifier trained on IMDB movie reviews (Maas et al., 2011), which is a different dataset from the one used to train the attribute model (Socher et al., 2013), and the same rough story holds, albeit with smaller gaps between approaches. We compare to baselines CTRL, GPT2-FT-RL, and WD. BCR performs comparably to CTRL (73.7% and 80.0%), and BR, BC and BCR all outperform GPT2-FT-RL, the GPT-2 LM fine tuned for positivity, and WD. 9 Published as a conference paper at ICLR 2020 Table 6: Evaluation of models/ variants on the sentiment control task, with mean±std-dev reported across fluency metrics. Sentiment accuracy reports the fraction of samples with an accurate tar- get sentiment. Approach BCR provides significant control over sentiment while showing minimal degradation in fluency. 
See Table S9 for full results on individual sentiments. *GPT2-FT-RL is only evaluated for the positivity half of the task, as it is fine-tuned only for positivity (Ziegler et al., 2019). For human evaluation metrics, we compare the baselines CTRL, GPT2-FT-RL and WD with BCR and perform A/B style testing. We include both numbers for comparison. Method Sentiment Acc. (%) (human) Sentiment Acc. (%) (external classifer) Perplexity (↓ better) Dist-1 (↑ better) Dist-2 (↑ better) Dist-3 (↑ better) B BR BC BCR 19.3 41.5 39.6 73.7 52.2 62.2 64.4 78.8 42.1±33.14 44.6±34.72 41.8±34.87 46.6±40.24 0.37 0.37 0.33 0.36 0.75 0.76 0.70 0.77 0.86 0.87 0.86 0.91 3.54±1.08 3.65±1.07 2.79±1.17 3.29±1.07 CTRL BCR 76.7 70.0 96.6 – 37.4±16.89 – 0.35 – 0.78 – 0.89 – 3.54±0.77 3.36±0.82 GPT2-FT-RL* BCR 13.3 84.4 77.8 – 217.3±176.4 – 0.54 – 0.91 – 0.94 – 3.31±0.84 3.68±0.83 WD BCR 18.9 61.1 52.2 – 31.7±28.0 – 0.33 – 0.69 – 0.83 – 3.67±0.89 3.75±0.66 4.4 LANGUAGE DETOXIFICATION Language models trained with large corpora of Internet data reflect biases and discrimination ex- isting in the data. A recent paper by Wallace et al. (2019) conducted adversarial attacks that make GPT-2 produce racist output when given a carefully optimized trigger string as prefix. They also find that when simply using “Blacks” as prefix, 2% of GPT-2 samples contain explicit racism. Other prefixes (e.g., “Asians” or “Jews”) are mentioned but no percentage is reported. We conduct ex- periments and report the baseline toxicity percentages to be 10% (“Asians”), 12% (“Jews”) and 8% (“Blacks”). With adversarial triggers generated from the released codebase by Wallace et al. (2019) the average toxicity percentage is 63.6%. Further details can be found in Section S13. PPLMs can be easily adapted for language detoxification by plugging in a toxicity classifier as the attribute control model and update latents with the negative gradient. We train a single layer classifier on the toxicity data from the Toxic Comment Classification Challenge (Jigsaw) and show that with a similar hyper-parameter setting as other PPLM-Discrim methods, it works well on both natural prompts and adversarial triggers. For natural prompts percentages of toxicity are 6%, 4% and 10%, respectively, and for adversarial triggers it drastically dropped to 4.6% on average, with statistical significance. Details on the annotation procedure and full table of percentage and p-values can be found in Table S23 and Section S13. Note that a model for detoxifying language can also potentially be maliciously used for generating toxic language, a topic we briefly discuss in Section S6. 4.5 CONTROLLED STORY WRITING We explore controlled generation for assistive story writing (Peng et al., 2018; Luo et al., 2019; Yao et al., 2019; Fan et al., 2018). Using uncontrolled LMs for assistive art creation can be difficult. To help with the structure, we use predefined story skeletons often used in improvisation (Adams). We fill in the blank between these prefixes with a PPLM. See examples in Table S20 and Table S21. # 5 CONCLUSION We have presented PPLM, a plug and play method for controlled language generation that flexibly combines a large, pre-trained LM and a BoW or a small, easy-to-train discriminator. In Section S6 we discuss the ethics of controlled LMs. PPLM achieves fine-grained control of attributes via a simple gradient-based sampling mechanism. 
Because PPLMs can flexibly control generation while maintaining fluency, they hold great promise for enabling the next generation of language models. 10 Published as a conference paper at ICLR 2020 # ACKNOWLEDGEMENTS The authors are grateful to Bryan McCann for providing samples for the CTRL baseline, Joel Lehman for discussion regarding the ethical implications for this work, Jiale Zhi for help with the computational framework, Colan Chen for creating associated artwork for the blog, Avishek Joey Bose for helpful discussions, Julien Chaumond, Lysandre Debut, Thomas Wolf, and the Hugging Face team for co-producing the PPLM demo and helping integrate the code into their transformers repository, all the annotators at Uber, HKUST and Caltech for their labeling, and members of the Deep Collective research group for helpful discussion, ideas, and feedback on experiments. # REFERENCES Kenn Adams. Improv encyclopedia story spine. http://improvencyclopedia.org/ games/Story_Spine.html. (accessed September 20, 2019). Ashutosh Baheti, Alan Ritter, Jiwei Li, and Bill Dolan. Generating more interesting responses in neural conversation models with distributional constraints. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pp. 3970–3980, 2018. Yoshua Bengio, Réjean Ducharme, Pascal Vincent, and Christian Jauvin. A neural probabilistic language model. Journal of machine learning research, 3(Feb):1137–1155, 2003. Yun Chen, Victor OK Li, Kyunghyun Cho, and Samuel R Bowman. A stable and effective learning strategy for trainable greedy decoding. arXiv preprint arXiv:1804.07915, 2018. Ning Dai, Jianze Liang, Xipeng Qiu, and Xuanjing Huang. Style transformer: Unpaired text style transfer without disentangled latent representation. arXiv preprint arXiv:1905.05621, 2019a. Zihang Dai, Zhilin Yang, Yiming Yang, William W Cohen, Jaime Carbonell, Quoc V Le, and Ruslan Salakhutdinov. Transformer-xl: Attentive language models beyond a fixed-length context. arXiv preprint arXiv:1901.02860, 2019b. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186, 2019. Javid Ebrahimi, Anyi Rao, Daniel Lowd, and Dejing Dou. HotFlip: White-box adversarial ex- In Proceedings of the 56th Annual Meeting of the Associa- amples for text classification. tion for Computational Linguistics (Volume 2: Short Papers), pp. 31–36, Melbourne, Aus- tralia, July 2018. Association for Computational Linguistics. doi: 10.18653/v1/P18-2006. URL https://www.aclweb.org/anthology/P18-2006. Yanai Elazar and Yoav Goldberg. Adversarial removal of demographic attributes from text data. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Process- ing, pp. 11–21, Brussels, Belgium, October-November 2018. Association for Computational Lin- guistics. doi: 10.18653/v1/D18-1002. URL https://www.aclweb.org/anthology/ D18-1002. Angela Fan, Mike Lewis, and Yann Dauphin. Hierarchical neural story generation. arXiv preprint arXiv:1805.04833, 2018. Jessica Ficler and Yoav Goldberg. Controlling linguistic style aspects in neural language generation. In Proceedings of the Workshop on Stylistic Variation, pp. 94–104, 2017. Marjan Ghazvininejad, Xing Shi, Jay Priyadarshi, and Kevin Knight. 
Hafez: an interactive poetry generation system. In Proceedings of ACL 2017, System Demonstrations, pp. 43–48, Vancouver, Canada, July 2017. Association for Computational Linguistics. URL https://www.aclweb. org/anthology/P17-4008. Jiatao Gu, Graham Neubig, Kyunghyun Cho, and Victor OK Li. Learning to translate in real-time with neural machine translation. arXiv preprint arXiv:1610.00388, 2016. 11 Published as a conference paper at ICLR 2020 Jiatao Gu, Kyunghyun Cho, and Victor OK Li. Trainable greedy decoding for neural machine translation. arXiv preprint arXiv:1702.02429, 2017. Ari Holtzman, Jan Buys, Maxwell Forbes, Antoine Bosselut, David Golub, and Yejin Choi. Learning to write with cooperative discriminators. CoRR, abs/1805.06087, 2018. URL http://arxiv. org/abs/1805.06087. Ari Holtzman, Jan Buys, Maxwell Forbes, and Yejin Choi. The curious case of neural text degener- ation. arXiv preprint arXiv:1904.09751, 2019. Zhiting Hu, Zichao Yang, Xiaodan Liang, Ruslan Salakhutdinov, and Eric P. Xing. Controllable text generation. CoRR, abs/1703.00955, 2017. URL http://arxiv.org/abs/1703.00955. Eric Jang, Shixiang Gu, and Ben Poole. Categorical reparameterization with gumbel-softmax. 2016. Jigsaw. Toxic comment classification challenge. https://www.kaggle.com/c/ jigsaw-toxic-comment-classification-challenge/. Accessed: 2019-11-13. Nitish Shirish Keskar, Bryan McCann, Lav Varshney, Caiming Xiong, and Richard Socher. CTRL arXiv preprint - A Conditional Transformer Language Model for Controllable Generation. arXiv:1909, 2019. Yuta Kikuchi, Graham Neubig, Ryohei Sasano, Hiroya Takamura, and Manabu Okumura. Con- In Proceedings of the 2016 Conference on trolling output length in neural encoder-decoders. Empirical Methods in Natural Language Processing, pp. 1328–1338, Austin, Texas, Novem- ber 2016. Association for Computational Linguistics. doi: 10.18653/v1/D16-1140. URL https://www.aclweb.org/anthology/D16-1140. Guillaume Lample, Sandeep Subramanian, Eric Smith, Ludovic Denoyer, Marc’Aurelio Ranzato, and Y-Lan Boureau. Multiple-attribute text rewriting. In International Conference on Learning Representations, 2019. URL https://openreview.net/forum?id=H1g2NhC5KQ. Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, and Bill Dolan. A Diversity-Promoting Objective Function for Neural Conversation Models. arXiv e-prints, art. arXiv:1510.03055, Oct 2015. Juncen Li, Robin Jia, He He, and Percy Liang. Delete, retrieve, generate: A simple approach to sentiment and style transfer. CoRR, abs/1804.06437, 2018. URL http://arxiv.org/abs/ 1804.06437. Chia-Wei Liu, Ryan Lowe, Iulian Serban, Mike Noseworthy, Laurent Charlin, and Joelle Pineau. How not to evaluate your dialogue system: An empirical study of unsupervised evaluation metrics for dialogue response generation. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pp. 2122–2132, 2016. Fuli Luo, Damai Dai, Pengcheng Yang, Tianyu Liu, Baobao Chang, Zhifang Sui, and Xu Sun. Learning to control the fine-grained sentiment for story ending generation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pp. 6020–6026, 2019. Andrew L. Maas, Raymond E. Daly, Peter T. Pham, Dan Huang, Andrew Y. Ng, and Christopher Potts. Learning word vectors for sentiment analysis. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pp. 142–150, Portland, Oregon, USA, June 2011. Association for Computational Linguistics. 
URL http: //www.aclweb.org/anthology/P11-1015. Christopher D Manning, Christopher D Manning, and Hinrich Schütze. Foundations of statistical natural language processing. MIT press, 1999. Nicholas Metropolis, Arianna W Rosenbluth, Marshall N Rosenbluth, Augusta H Teller, and Edward Teller. Equation of state calculations by fast computing machines. The journal of chemical physics, 21(6):1087–1092, 1953. Nathan Ng, Kyra Yee, Alexei Baevski, Myle Ott, Michael Auli, and Sergey Edunov. Facebook fair’s wmt19 news translation task submission. arXiv preprint arXiv:1907.06616, 2019. 12 Published as a conference paper at ICLR 2020 Anh Nguyen, Jason Yosinski, and Jeff Clune. Deep neural networks are easily fooled: High con- fidence predictions for unrecognizable images. The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2015. Anh Nguyen, Jeff Clune, Yoshua Bengio, Alexey Dosovitskiy, and Jason Yosinski. Plug & Play Generative Networks: Conditional Iterative Generation of Images in Latent Space. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), July 2017. Nanyun Peng, Marjan Ghazvininejad, Jonathan May, and Kevin Knight. Towards controllable story generation. In Proceedings of the First Workshop on Storytelling, pp. 43–49, 2018. Martin Potthast, Tim Gollub, Kristof Komlossy, Sebastian Schuster, Matti Wiegmann, Erika Pa- tricia Garces Fernandez, Matthias Hagen, and Benno Stein. Crowdsourcing a large corpus of clickbait on twitter. In Proceedings of the 27th International Conference on Computational Lin- guistics, pp. 1498–1507, 2018. Improving language un- derstanding by generative pre-training. URL https://s3-us-west-2. amazonaws. com/openai- assets/researchcovers/languageunsupervised/language understanding paper. pdf, 2018a. Improving language un- derstanding by generative pre-training. URL https://s3-us-west-2. amazonaws. com/openai- assets/researchcovers/languageunsupervised/language understanding paper. pdf, 2018b. Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. Language models are unsupervised multitask learners. OpenAI Blog, 1(8), 2019. Gareth O Roberts and Jeffrey S Rosenthal. Optimal scaling of discrete approximations to langevin diffusions. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 60(1): 255–268, 1998. Gareth O Roberts, Richard L Tweedie, et al. Exponential convergence of langevin distributions and their discrete approximations. Bernoulli, 2(4):341–363, 1996. Abigail See, Stephen Roller, Douwe Kiela, and Jason Weston. What makes a good conversation? How controllable attributes affect human judgments. arXiv e-prints, art. arXiv:1902.08654, Feb 2019. Claude Elwood Shannon. A mathematical theory of communication. Bell system technical journal, 27(3):379–423, 1948. Tianxiao Shen, Tao Lei, Regina Barzilay, and Tommi S. Jaakkola. Style transfer from non-parallel text by cross-alignment. CoRR, abs/1705.09655, 2017. URL http://arxiv.org/abs/ 1705.09655. Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Ng, and Christopher Potts. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pp. 1631–1642, Seattle, Washington, USA, October 2013. Association for Computa- tional Linguistics. URL https://www.aclweb.org/anthology/D13-1170. Felix Stahlberg, James Cross, and Veselin Stoyanov. Simple fusion: Return of the language model. 
arXiv preprint arXiv:1809.00125, 2018. Nishant Subramani, Sam Bowman, and Kyunghyun Cho. Can unconditional language models re- cover arbitrary sentences? arXiv preprint arXiv:1907.04944, 2019. Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian J. Goodfel- low, and Rob Fergus. Intriguing properties of neural networks. CoRR, abs/1312.6199, 2013. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, In Advances in Neural Infor- Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. mation Processing Systems, pp. 6000–6010, 2017. 13 Published as a conference paper at ICLR 2020 Eric Wallace, Shi Feng, Nikhil Kandpal, Matt Gardner, and Sameer Singh. Universal adversarial triggers for nlp. arXiv preprint arXiv:1908.07125, 2019. Ronald J Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine learning, 8(3-4):229–256, 1992. Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, and Jamie Brew. Transformers: State- of-the-art natural language processing, 2019. Lili Yao, Nanyun Peng, Ralph Weischedel, Kevin Knight, Dongyan Zhao, and Rui Yan. Plan-and- write: Towards better automatic storytelling. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pp. 7378–7385, 2019. Kyra Yee, Nathan Ng, Yann N Dauphin, and Michael Auli. Simple and effective noisy channel modeling for neural machine translation. arXiv preprint arXiv:1908.05731, 2019. Lantao Yu, Weinan Zhang, Jun Wang, and Yong Yu. Seqgan: Sequence generative adversarial nets with policy gradient. In Thirty-First AAAI Conference on Artificial Intelligence, 2017. Lei Yu, Phil Blunsom, Chris Dyer, Edward Grefenstette, and Tomas Kocisky. The neural noisy channel. arXiv preprint arXiv:1611.02554, 2016. Lei Yu, Laurent Sartran, Wojciech Stokowiec, Wang Ling, Lingpeng Kong, Phil Blunsom, and Chris Dyer. Putting machine translation in context with the noisy channel model. arXiv preprint arXiv:1910.00553, 2019. Daniel M. Ziegler, Nisan Stiennon, Jeffrey Wu, Tom B. Brown, Alec Radford, Dario Amodei, Paul Christiano, and Geoffrey Irving. Fine-tuning language models from human preferences. arXiv preprint arXiv:1909.08593, 2019. URL https://arxiv.org/abs/1909.08593. 14 Published as a conference paper at ICLR 2020 # SUPPLEMENTARY INFORMATION FOR: PLUG AND PLAY LANGUAGE MODELS: A SIMPLE APPROACH TO CONTROLLED TEXT GENERATION S6 ETHICS OF CONTROLLED LANGUAGE MODELS There has recently been a substantial discussion around the ethics of capable language models (Rad- ford et al., 2019; Keskar et al., 2019), both in their potential to recapitulate problematic social biases and for them to be directly abused for societal harm (e.g. to generate disinformation). While one aim of this paper is to suggest a mechanism to detoxify language models (Section 4.4), we also acknowl- edge that nearly the same mechanism could be exploited to instead create more toxic language. Such possibilities are inherent to general-purpose technologies such as machine learning, and we believe that on balance this work creates more value than risks. # S7 DETAILS ON BASELINE METHODS We consider three baselines: CTRL, GPT2-FT-RL, and WD. 
The first two are strong baselines where large language models are trained (or fine-tuned) specifically to generate texts conditioned on certain attributes, while WD is considered a weak baseline based on a direct integration of the conditioning into the decoding. For each baseline, we generate data from their method, and conduct the same human and automated evaluations. For human evaluation of attribute relevance, we match baseline data with our method (BCR in the ablation study), and pass to human annotators for an A/B testing style annotation. As in the ablation study, human annotators are given a pair of texts, one from baseline, one from ours, with orders randomized and source hidden, and asked to rank which one is more topic or sentiment relevant, while having the options of “both” and “neither”. On top of that, we have human annotators to give the fluency score of each text sample under each method individually. And automated evaluations of perplexity, sentiment, etc. are also done individually. # S7.1 CTRL The recent conditional language model, CTRL, from Keskar et al. (2019), trains a 1.6B LM condi- tioned on around 50 control codes. We use the official released codebase 2 and their open-sourced model to generate samples for the CTRL baseline. Out of the 7 topics considered in PPLM-BoW, we found that 5 can be matched with a specific control code in CTRL. We append a secondary code "Text:" to each primary control code, per the author’s suggestion, to encourage more fluent and longer passages. The 2 topics missing a match with CTRL are: Military, Space. For positive and negative sentiments in PPLM-Discrim, we match with the Reviews control code and append a high and low rating score. The matched attributes and control codes are listed in Table S7. Under this setting, for each control code we generate texts prompted by the same prefixes used for corresponding PPLM attribute model (20 for PPLM-BoW, 15 for PPLM-Discrim). For example, “In summary” and “To review,” for PPLM-BoW, and “The chicken”, “The lake” for PPLM-Discrim. Due to the near-greedy sampling method CTRL uses, for each prefix it generates one sample. Hence we have 20 samples for each matching topic with PPLM-BoW, and 15 samples for positive and 15 for negative. # S7.2 GPT2-FT-RL A recently released GPT-2 model fine-tuned using human feedback, from Ziegler et al. (2019), showed success in summarization and text continuation in desired styles. To compare with PPLM, # 2 CTRL codebase: https://github.com/salesforce/ctrl 15 Published as a conference paper at ICLR 2020 Table S7: Control codes used for the model from Keskar et al. (2019) for experiments in Section 4. PPLM Attribute CTRL Control Code Legal Text: Politics Text: Science Text: Technologies Text: Christianity Text: LEGAL (PPLM-BoW) POLITICS (PPLM-BoW) SCIENCE (PPLM-BoW) COMPUTERS (PPLM-BoW) RELIGION (PPLM-BoW) Reviews Rating: POSITIVE (PPLM-Discrim) NEGATIVE (PPLM-Discrim) Reviews Rating: we run GPT2-FT-RL3 to generate positive texts on the same prefixes used in our PPLM-Discrim experiment. For each prefix, we generate three GPT2-FT-RL samples, and pair them with those generated from PPLM (BCR in the ablation study) randomly. S7.3 WEIGHTED DECODING (WD) We consider a simple baseline based on a direct integration of the conditioning into the decoding procedure, similar to the approach from Ghazvininejad et al. (2017). Topic Control with Bag of Words In Ghazvininejad et al. 
(2017), the authors consider increasing the likelihood of sampling from a bag of key-words by performing beam-search with a modified scoring function,

score(w_i, b_t) = score(b_t) + log P_{t+1}(w_i) + Σ_i 1_BoW(w_i),

where 1_BoW(w_i) is an indicator function indicating whether the token w_i is present in the bag BoW. Since it has been shown that beam-search results in degradation of language for GPT-2 (Holtzman et al., 2019), we consider top-5 sampling from a distribution p̃_{t+1} defined such that:

p̃_{t+1}(w_i) = p_{t+1}(w_i) + τ 1_BoW(w_i) p_{t+1}(w_i),

where τ ∈ R_{++} and p_{t+1} is the distribution over the vocabulary as predicted by the GPT-2 LM. For the experiments in Section 4, we set τ = 10.

Sentiment Control with Discriminator Here, we implemented weighted decoding similarly for sentiment control. We wish to incorporate the score from the attribute model into decoding. To control for style â, instead of sampling from the distribution p_{t+1}, we sample from p̃_{t+1} defined as:

p̃_{t+1}(w_i) ∝ p(a = â|x_{0:t}, w_i) p_{t+1}(w_i).

p(a = â|x_{0:t}, w_i) is the probability of the sequence x_{0:t}, w_i possessing attribute â as assigned by the attribute model. By Bayes’ rule, p(a = â; w_i|x_{0:t}) = p(a = â|x_{0:t}, w_i) p_{t+1}(w_i), and we do top-5 sampling from this distribution. Recall that p_{t+1}(w_i) = p(w_i|x_{0:t}) under the language model.

³ GPT2-FT-RL codebase: https://github.com/openai/lm-human-preferences
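A minimal sketch of one such weighted-decoding step for topic control is given below, under the assumption that boosting the bag-word probabilities as above is followed by renormalization and top-k sampling; the exact normalization and sampling details of the baseline as run for the paper may differ.

    import torch

    def weighted_decoding_step(probs, bow_ids, tau=10.0, k=5):
        # probs: (batch, vocab) next-token distribution p_{t+1} from the LM.
        # Boost bag-of-words tokens by a factor (1 + tau), renormalize, then
        # sample from the top-k tokens of the reweighted distribution.
        boosted = probs.clone()
        boosted[:, bow_ids] *= (1.0 + tau)
        boosted = boosted / boosted.sum(dim=-1, keepdim=True)
        top_p, top_i = boosted.topk(k, dim=-1)
        choice = torch.multinomial(top_p / top_p.sum(dim=-1, keepdim=True), 1)
        return top_i.gather(-1, choice)     # sampled token ids, shape (batch, 1)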
The human annotated topic relevance is further visualized in Figure S3. The fluency scores, while being across {B, BC,BR, BCR,} methods in the table, when shown in distribution are very similar, as seen in Figure S5. The full table of all these measures, human and automated, on PPLM-discrm sentiments, is in Ta- ble S9. Included also are strong baselines (CTRL, WD and GPT2-FT-RL) for each topic. The human annotated sentiment and style (e.g. “Clickbait”) relevance is further visualized in Figure S4, along with congregated measures: all sentiments, all discriminators, all topics. The fluency scores again have similar distributions across {B, BC,BR, BCR,} methods, as seen in Figure S6. 7 1 baseline (B) 60 @® baseline+reranking (BR) x 9 gradient (BC) g se @@® gradient+reranking (BCR) e B 40 a L 30 Â¥ Oo © 20 J Ul 10 ° Computers Legal Military Politics Religion Science Space Figure S3: Topic relevance by human evaluation. We can see that taking a PPLM gradient step (B→BC) makes a big difference. Reranking is mostly helpful (B→BR; BC→BCR). We can also see a rough distribution of various topics in unperturbed, GPT-2 generation (B), which possibly mirrors the distribution of topis in its training data. Some topics, like science, naturally appear rather frequently. # S9 ODD COMBINATION OF TOPICS AND PREFIXES It is interesting to see how PPLM can steer the text generation when the topic and prefix combination appears odd or illogical. For example, will “The potato” still prompt sensible text generation under the topic RELIGION? In this study we design a set of odd combinations, as bellow. 17 Published as a conference paper at ICLR 2020 Table S8: Full result of human and automated evaluation of PPLM-BoW, attribute relevance and language fluency. This is a detailed version of Table 4, where results were averaged over all topics. Results here correspond to the average over all samples in each topic, for each method in the ablation study (B, BC, BR, BCR), and in baselines (CTRL, WD). Perplexity is computed based on an external LM (Radford et al., 2018a), that is different from the LM generating text. Topic Method Attribute relevance % (↑ better) Fluency (↑ better) (human) Perplexity (↓ better) Dist-1 (↑ better) Dist-2 (↑ better) Dist-3 (↑ better) (human) Military Religion Politics Science Legal Space Computers 18 Published as a conference paper at ICLR 2020 dill Positive Negative Clickbait All sentiments 5 baseline (B) @® baseline+reranking (BR) gradient (BC) @@@ gradient+reranking (BCR) dal All discriminators All bag of words Attribute relevance (%) Figure S4: Bar charts of discriminator relevance by human evaluation, together with different ver- sions of combined results. Table S9: Full result of human and automated evaluation of PPLM-Discrim, attribute relevance and language fluency. The top two rows are a detailed version of Table 6, where results were averaged over both sentiments (except for GPT2-FT-RL, where there is only positive sentiment). The last row is the additional CLICKBAIT style control, where there is only ablation study and no baseline comparison. Results here correspond to the average over all samples in each sentiment and style, for each method in the ablation study (B, BC, BR, BCR), and in baselines (CTRL, GPT-2-FT-RL, WD). Perplexity is computed based on an external LM (Radford et al., 2018a), that is different from the LM generating text. 
Sentiment/Style Method Attribute relevance % (↑ better) (human) Perplexity (↓ better) Dist-1 (↑ better) Dist-2 (↑ better) Dist-3 (↑ better) Negative Positive Clickbait B BR BC BCR CTRL WD B BR BC BCR CTRL GPT2-FT-RL WD B BR BC BCR 34.8 54.8 37.8 72.6 73.3 15.6 3.70 28.1 41.5 74.8 80.0 26.7 22.2 36.3 48.9 33.3 60.7 39.47 45.01 41.86 46.24 37.94 30.42 44.28 42.96 42.34 47.69 36.78 217.28 33.04 38.59 33.20 54.18 42.67 0.37 0.41 0.45 0.44 0.43 0.38 0.38 0.44 0.45 0.39 0.45 0.54 0.41 0.38 0.41 0.45 0.39 0.74 0.81 0.84 0.84 0.85 0.75 0.76 0.84 0.83 0.80 0.86 0.91 0.78 0.79 0.83 0.83 0.83 0.86 0.92 0.93 0.92 0.92 0.85 0.89 0.92 0.91 0.92 0.92 0.94 0.90 0.91 0.92 0.92 0.93 3.67 3.71 2.84 3.24 3.17 3.56 3.41 3.59 2.74 3.33 3.91 3.16 3.78 3.46 3.25 2.85 2.97 • Prefixes of {“The chicken”, “The horse”, “The pizza”, “The potato”, “The lake”}, each controlled by topics of {MILITARY, LEGAL, COMPUTERS, POLITICS, RELIGION}; • Prefixes of {“My dog died”, “The food is awful”}, each controlled by the sentiment of POSITIVE; • Prefixes of “The food is amazing”, controlled by the sentiment of NEGATIVE. We found that PPLM control is easy even under those scenarios. We had to increase the strength α two or three fold (to 0.02 or 0.03 as opposed to 0.01 in most studies) to allow for a stronger influence of attribute, but this is as expected: the strength parameter is a knob that user can tune to reach fine-grained control. The resulting generation is included in Table S10 - Table S16. 19 Published as a conference paper at ICLR 2020 Table S10: Examples generated from a designed odd combination of topic and prefix pairs. The topic here is [Military] . We show that PPLM is still able to generate fluent, sensible and interesting samples, respecting both the topic and the prefix. # [Military] The chicken The chicken The chicken The chicken-shaped robot known as a "killer drone" is about to become a weapon in war.\n The chicken The chicken The chicken The chicken The chicken The chicken The chicken The chicken The chicken The chicken The chicken The chicken The chicken \n The drone, which weighs about 500 pounds, is capable of firing weapons in urban warfare, It can fly at speeds of 30 miles an hour.\n \n The Air Force said its the Pentagon said. new warplane has two combat jets, one in combat operations and a combat attack bomber squadron. Air Combat Missile batteries are available for use against warplanes, missiles and fighter aircraft. The horse The horse The horse The horse The horse The horse The horse The horse-mounted artillery system (HMG) is a special weapon system employed in World The horse The horse The horse The horse The horse The horse The horse The horse The horse War I, World War II, and World War II Korea. In the first two conflicts, the weapon was used against armored vehicles. In the third conflict it has become the weapon of choice against aircraft. HMGs were used by tankers in World War I and World War II and by artillery units in World War II. The pizza The pizza The pizza The pizza The pizza The pizza The pizza The pizza shop that killed a transgender teen is being sued for $1.8 billion over its role in the The pizza The pizza The pizza The pizza The pizza The pizza The pizza The pizza The pizza death of an Ohio teenager. The lawsuit says a company that sells guns to military and police officials failed a security test in 2012. AP file photo gun rights activist\n \n In this Dec. 11, 2012 image shot, U.S. Navy ship gunship USS John C. 
Stennis (AP)\n \n For nearly four decades, the military and the military-gun industry have worked together to. . .

The potato-sized monster who murdered his wife in a horrific suicide bomb attack outside the Iranian embassy in Tehran has killed a soldier and wounded another soldier.\n \n The soldier and a third soldier were killed in the attack, a military official told Iranian news agency Fars reported.\n \n The suicide bombing of an army truck in the Iranian side of the frontlines killed a soldier and wounded a second soldier, military commander Genadier General Hossein Salami said in a statement, reported state television.

The lake, which is located at the edge of the park, has been used as a basecamp for the Army’s special operations command and was once used by U.S. special forces to kill Taliban leaders.\n \n In addition, U.S. forces used the lake for a variety of special operations missions.\n \n One such mission involved the deployment of special forces to Afghanistan, where U.S. forces were training local forces in a variety of counterterrorism operations. The operation was the first of its. . .

Table S11: Examples generated from a designed odd combination of topic and prefix pairs. The topic here is [Legal]. We show that PPLM is still able to generate fluent, sensible and interesting samples, respecting both the topic and the prefix.

# [Legal]

The chicken and cheese festival will resume on Sunday after being postponed for a second time.\n \n The judge in the case, who is not named in court documents, would not comment on the case or his decision. He is also scheduled to appear in court in a separate case. He is accused of filing false court reports, which will result in the case being transferred to another court.\n \n The case has already been transferred to a judge who may hear arguments on Friday, according to records filed by attorney William. . .

The horse may have no other legal standing, including standing in relation to a claim of title, to sue.\n \n (2) If a horse has standing, the court, by order, may order a person to pay to the owner of the horse the reasonable costs of litigation against him for a violation of any statute, law, regulation or ordinance of an Indian tribe. In this Act, "court" means the court of superior probate.\n \n (3) If a court orders a person not. . .

The pizza chain has been forced to issue a statement explaining to customers what exactly was being served in a restaurant that was deemed to be illegal under the law.
A court ruling will allow the lawsuit to proceed.\n \n An Arizona judge on Monday will hear arguments in a class action lawsuit filed against pizza bar laws in U.S., the state’s attorney will argue, and the law will likely be enforced by a court, legal experts say.\n \n \n The state law allows judges to issue legal order in civil court. . .

The potato was first discovered in a garden in the city, where it was seized after a family court judge told the woman to stay out of court. "I’m not going to sit here and tell the court, ’You’re the judge. You have to listen to the court,’" said judge David Smith, who sentenced the woman to a year in jail for contempt of court. The court ordered her to pay court fees and court costs. The judge sentenced her to a year in jail for contempt of court,

The lake is about to get its second name.\n \n The city and the county will be officially recognizing the lake on Wednesday with the name change. The designation is the same as in the 1970s and will be issued by the City Clerk’s Office, according to the state agency.\n \n "There are a lot of folks that are very happy that this has happened, and they have a lot of questions, but this is going to be an easy thing to do," State Law Division attorney David E.. . .

Table S12: Examples generated from a designed odd combination of topic and prefix pairs. The topic here is [Computers]. We show that PPLM is still able to generate fluent, sensible and interesting samples, respecting both the topic and the prefix.

# [Computers]

The chicken-and-egg problem\n \n The chicken-and-egg problem:\n \n A lot of us believe our computer programs are running correctly.\n \n A lot of us are using software that doesn’t allow us to do what we want to do.\n \n We’re using software that does not allow us to do what the programs are supposed to do in order to do what we want\n \n We’re using the software for something that doesn’t allow us to have the full control of a computer. . .

The horse, or mare, was the main character in many early fantasy RPGs. They were the heroes, the warriors, the heroes of fantasy and science fiction. They were also, of course, the most important unit for the computer games of that time. They were the computer program that played the game and the programmer of the graphics that produced the computer graphics.\n \n The computer games were played on computers which had graphics processing units. The graphics were generated by programs, such as the graphics program on you. . .

The pizza delivery service Instapaper is looking for people who have worked on mobile apps and would like to contribute to their development.
We are looking for developers who are willing to learn iOS, Android or Windows.\n \n Requirements\n \n A passion for code.\n \n Excellent communication skills.\n \n Experience with web and app code.\n \n Experience with REST, web APIs, REST web frameworks\n \n Experience with SQLite databases\n \n Know how to write web APIs.\n \n A willingness to write custom. . .

The potato is one of the most misunderstood foods around. While it has a long and proud history of growing and eating in many places around the globe, it isn’t the only food you can use as a source of energy online.\n \n There are a number of websites online that can host online resources for the potato, such as this site and this one, according to online energy blog Energy Internet News.\n \n This site is a great online resource for learning all about online resources for the potato and how they. . .

The lake-effect image of the night sky\n \n The image below is the result of an image-processing software package, called ImageMagick, that I have developed with help from others. The software was designed and developed by Michael Karp.\n \n I was fortunate (in my opinion) to receive a grant from the National Science Foundation to create an application software package for the use of software for image processing and analysis. The software is available here: http://www.nsf.gov/c. . .

Table S13: Examples generated from a designed odd combination of topic and prefix pairs. The topic here is [Politics]. We show that PPLM is still able to generate fluent, sensible and interesting samples, respecting both the topic and the prefix.

# [Politics]

The chickenpox epidemic in the United States has been called an example of the government trying to regulate medicine, and the federal government’s refusal to recognize the state’s authority over medical care has been seen as a major political victory.

The horse is the most powerful tool of government. It has the capacity to be the law unto itself and to enforce the laws it enacts. \n \n- George Washington

The pizza-related industry is booming. In 2015, the U.S. spent $9.1 billion in the state on food, including $1.3 billion to support pizza restaurants; $4.1 billion was in state taxes.\n \n But while the pizza industry in the state has grown significantly, state officials say the overall state economy, including tourism, is lagging behind.\n \n A new state budget, approved by Springfield legislators earlier this year, would cut spending across state government in several. . .
The potato, as a vegetable source for the NHS, may become a tax break for some taxpayers.\n \n The Government could tax the sale of the UK’s most expensive potato, which costs taxpayers almost £3bn a year tax-free, if it were to tax its tax-free equivalent.\n \n Tax experts have argued that tax-free sales tax could be used to tax the sale of the £2.7bn. . .

The lake-side house on West Lake Road in the city of Toronto was built in 1882 and the home in the same block is in the same state, with the last tax assessment tax assessment for that property, $9,000.\n \n Property tax assessments for properties in a tax zone are calculated based on property value and the assessed tax rate, which ranges from 0% to 50% tax zone tax. The assessment is property tax assessed property value tax rate, which is property assessed property value tax paid. . .

Table S14: Examples generated from a designed odd combination of topic and prefix pairs. The topic here is [Religion]. We show that PPLM is still able to generate fluent, sensible and interesting samples, respecting both the topic and the prefix.

# [Religion]

The chicken’s name comes from its ability to feed on the human body; the word "chickens" is Hebrew for "the Lord God Almighty".\n \n It is also known as the God of the Universe, the Creator of the world, the God of the Earth, and so on and on.\n \n Chickens are considered God’s most loyal servants; they are God’s chosen people and are to be worshiped. They are given God’s Spirit as food, water. . .

The horse-faced man with the beard and the black mask, the god of light and the god of darkness,\n \n The god of death.\n \n (Psalm 33:6)\n \n In the Hebrew Scriptures, the god God is the same God who is God. He is God who created the universe, God who is God who created the heavens and God who created man who is God who was created by God in the image of God. As God, the God of Abraham, Isaac and. . .

The pizza has become a symbol of love, and love of God.\n \n We pray for our children and Jesus.\n \n The pizza is the symbol of the Holy Family Jesus.\n \n The Lord Jesus Christ, Jesus Christ crucified, and Jesus Christ risen.<|endoftext|>A New Testament Commentary on the Gospels (Revised Version)\n \n (Revised Version)\n \n 1 Corinthians 15:28-31\n \n 31. "Do this: Jesus is Lord Jesus, Jesus Christ; do this. . .

The potato, an ancient food, is considered a sacred plant by many Hindus.
However, some Hindus believe that the potatoes are the seed of a demon.\n \n "In India we have the Hindu god Vishnu, Vish, the God. He has come to the world," said a woman in Mumbai.\n \n \n "He came to the world because of God. God came to the world to save people from the curse of the devil God. God came to save us from the curse of the devil,". . .

The lake in the Old Testament is the Euphorbia lake, from the Greek for "water." The Hebrew words for the lake are huach, "to flow," and tzitzah, "to run."\n \n "I’m going to the lake to wash away my sins," a Bible verse says. It’s not clear where the word "lake" is from.\n \n The Old Testament describes two waters. The Euphorbia River, from which the Bible says Moses was washed,. . .

Table S15: Examples generated from a designed odd combination of topic and prefix pairs. The topic here is [Space]. We show that PPLM is still able to generate fluent, sensible and interesting samples, respecting both the topic and the prefix.

# [Space]

The chicken-and-egg problem, which has dogged the world since it was first described in 1873.\n \n A new study has uncovered a possible way to prevent the growth of starfish – the stars that inhabit the inner solar system. It may even help to keep them alive.\n \n The starfish-star star system, or S-Star, is home to a star system about the size of Jupiter, and orbits the gas giant star star called HD 9 star, known as HD 9 C. . .

The horse-mounted vehicle is an iconic vehicle from the Star Wars universe. Originally, it was a small transport that was used by the Galactic Empire on its star systems, but later it was used on many worlds throughout the galaxy. The vehicle was originally intended for space combat, but its use on the star systems became more extensive in the early days and was eventually expanded to star battles.

The pizza restaurant, which will serve up the popular dish on Sunday, is expected to offer a wide variety of seating space with space to accommodate two additional space spaces, including space for an outdoor space space, according to space listings on the space site space.space.

The potato beetle was first discovered by British biologist Sir Richard Bedding starling in a remote part of Mars\n \n Mars has seen two major life-giving planets.\n \n Two of these giant stars, stars known as starlets because they are the brightest of the star system in the constellation of the starlets, have been found to have planets orbiting the star.\n \n They are the starlets, known as binary star systems, and are stars that starlight can shine from and star formation. . .
The lake-like island nation has been ruled as a hostile alien planet in space and Earth space for decades and now, thanks to NASA’s Curiosity Mars rover, we know for sure the planet’s surface is covered in an extremely thick and reflective mantle of rock.\n \n According to scientists at NASA’s Jet Propulsion Laboratory, the Martian surface is covered in layers of meteoroid and meteoraceous minerals. This is not just the planet’s moon, planet Earth has a moon, planet. . .

Table S16: Examples generated from a designed odd combination of sentiment and prefix pairs. The sentiments here are [Positive] and [Negative]. We show that PPLM is still able to generate fluent, sensible and interesting samples, respecting both the sentiment and the prefix.

My dog died at the age of 92 years this year. He was a legend in our home state of Virginia. I have a tremendous heart, my soul, my spirit, my love. He is in heaven and I love him very much. We have a great bond. He is gone and I am so thankful for him!\n \n He had been in the hospital for so many months. My husband of over 60 years, Dr. Paul, was in charge of his care. He took his place as the. . .

The food is awful, but there is also the music, the story and the magic!\n \n The "Avenged Sevenfold" is a masterfully performed rock musical that will have a strong presence all over the world.\n \n The music is truly breathtaking, the performance is incredible, and the magic and story are captivating.\n \n The "Avenged Sevenfold" is a masterpiece! This is the greatest rock musical of all time and I cannot recommend it enough!\n \n The musical is beautiful. . .

The food is amazing.\n \n It’s also not.\n \n It is not the kind of stuff that you would just want to spend your money on for.\n \n I don’t know why, but when I got my second box, it felt like a bad rip off.\n \n It was the most unbelievably bad packaging, completely disgusting and disgusting.\n \n This is not a joke, people.\n \n You get this shit.\n \n This is food for a million people.\n \n And you have. . .

# S10 FINE-GRAINED CONTROL WITH PPLM-BOW

Table S17 shows the subtle effect of increasing the step size α while keeping everything else (hyperparameters, text prefix) the same.

# S11 HYPERPARAMETERS

We list, in Table S18, the full set of hyperparameters used in each task in the experiments section, corresponding to the results in Table 4 and Table 6, as well as in Section 4.4. In addition, we explain three hyperparameters and their effect in detail below.
# S11.1 EARLY STOPPING OF LATENT UPDATES

Degeneration (the occurrence of repetitive words) is a known issue with language generation (Holtzman et al., 2019), and we found it to be the case in PPLM-BoW when the update step size α is too large. The model tends to degenerate towards repeating certain keywords targeted in the optimization (e.g. words in the BoW). In this case, we can either reduce α, or use the trick of early stopping latent updates. Examples are shown in Table S19. With the exact same setting, but stopping latent updates after 20 time steps, the samples show much less degeneration.

# S11.2 FINITE HORIZON UPDATE

As opposed to updating the entire vector Ht, which consists of key-value pairs corresponding to every token in the prefix, we consider modifying only the key-value pairs corresponding to the most recent w tokens. At each time-step t, we only modify Ht[t − w : t]. This means that we modify Hi at most w times, and it requires less computation than updating the whole past. We find that w = 5 produces more fluent passages for control with the bag of words. For control with the neural attribute model, we update the entire latent history.

# S11.3 ADAPTIVE GRADIENT NORMALIZATION

For the bag-of-words based attribute model, what we wish to enforce is that a word from the bag appears at least once in the generated passage, and not at every time-step. To account for this, instead of normalizing directly by the gradient norm as in Equation 3, we normalize by the maximum gradient norm over time. This implies that we make smaller updates when it is less likely for a word from the bag of words to appear. Formally, the normalization constant at time-step t is max_{i=0...t} ‖∇_{ΔHi} log p(a | Hi + ΔHi)‖.

³ We choose top 3 samples from a single batch of 10 here.
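The three modifications above (early stopping of latent updates, the finite update horizon w, and the adaptive gradient normalization) can be summarized in a schematic latent-update loop. The sketch below is illustrative rather than the released PPLM implementation: `toy_attribute_loss`, the tensor shapes and the default values are placeholders, and only the early-stopping cut-off, the window of width w over the most recent positions, and the running-max normalization are meant to mirror the text.

```python
# Schematic only: a latent-update step showing (i) early stopping of updates,
# (ii) a finite update horizon over the last w positions, and (iii)
# normalization by the running maximum of the gradient norm over time steps.
import torch

def toy_attribute_loss(past):
    # Placeholder for -log p(a | x); in PPLM this comes from the BoW or
    # discriminator attribute model evaluated on the perturbed activations.
    return sum((p ** 2).mean() for p in past)

def perturb_past(past, step, grad_norm_max, alpha=0.02, gamma=1.0,
                 num_iterations=3, window_w=5, grad_length=20):
    if step >= grad_length:                       # (i) early stopping of latent updates
        return past, grad_norm_max
    deltas = [torch.zeros_like(p, requires_grad=True) for p in past]
    for _ in range(num_iterations):
        loss = toy_attribute_loss([p + d for p, d in zip(past, deltas)])
        grads = torch.autograd.grad(loss, deltas)
        grad_norm_max = max(grad_norm_max,        # (iii) running max over time
                            max(g.norm().item() for g in grads))
        for d, g in zip(deltas, grads):
            update = alpha * g / (grad_norm_max + 1e-10) ** gamma
            d.data[..., -window_w:, :] -= update[..., -window_w:, :]   # (ii) window of width w
    return [(p + d).detach() for p, d in zip(past, deltas)], grad_norm_max

# Toy key/value cache: one [batch, heads, seq_len, head_dim] tensor per layer.
past = [torch.randn(1, 12, 8, 64) for _ in range(24)]
grad_norm_max = 0.0
for t in range(3):
    past, grad_norm_max = perturb_past(past, step=t, grad_norm_max=grad_norm_max)
```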
Table S17: Comparison of different sentences generated by (top row) unconditioned GPT-2 and (other rows) our approach using GPT-2 with the same bags-of-words with different strength factors, while using the same prefix: “The potato”.

[–] The potato, or potato chip, is one of the best-selling snacks in the world!\n \n It comes in a variety of colors, is gluten-free (except for gluten-free chips), low in fat and saturated fat, and can be made in just 30 minutes, using only 6 simple ingredients. It’s also easy to make, and can be stored in its own tin. \n \n The potato chip, however, may not be for everyone. If you are trying to lose weight, it. . .

[Science; 0.0001] The potato, a tuber of the carrot family (Cotus russulus), is an edible plant that is native to Central and South America, Asia, Europe and the Mediterranean. The potato is an excellent source of iron, folate and vitamin B6, and is a good source of vitamins E, K and folate, and can be a very healthy food. In addition, a potato has a rich, rich nutritional profile. It is high in protein, potassium, magnesium, zinc,. . .

[Science; 0.001] The potato, a tuber of the carrot family (Cotus russulus), is an edible plant that is widely cultivated as a food and a medicinal plant. Potato is widely used to make various products. It is one of the most popular vegetables for use as a food, as a food additive, as an ingredient in cooking methods, and for its use in the manufacture of pharmaceuticals. The most commonly consumed portion is a large potato. In China, there are about 30 varieties of potato. It is. . .

[Science; 0.005] The potato is an ancient member of the green family which, as its name suggests, is green and edible. It is native to the Middle East and parts of South Asia. It is an important crop in many parts of the world. The potato, a member of the potato family, has been domesticated for thousands of years. It can be eaten raw and cooked in its skins; it is also used as a starch and is a great source of energy and fiber in many diets and in many countries. . .

[Science; 0.01] The potato was once thought to have no health problems and has been promoted as a nutritious food source since the mid-1800s, but recent reports indicate that it has many harmful health issues. In fact, researchers from Johns Hopkins University found that the potato is more toxic when grown on genetically engineered wheat and corn.\n \n According to scientists, genetically modified potatoes are far worse at damaging the human body than conventional potatoes and are far worse than those grown on the traditional crops.\n \n The study also revealed. . .

[Science; 0.02] The potato plant, a member of the onion family.\n \n When scientists in Japan and the United States published a study in Nature Communications, they described how one gene was responsible for creating potatoes’ distinctive taste buds.\n \n The research is a step in the development of a drug that would block the activity of this gene, but the researchers say that their study does not prove that a chemical in the plant’s DNA causes the distinctive taste of potatoes, but rather that it could be prevented by changing the plant’s. . .

[Science; 0.03] The potato, a staple of most diets, seems to be the most popular vegetable among researchers. The research has been published and peer-reviewed.\n \n The potato has a unique ability. The plant’s cells can convert carbon dioxide, water, and nutrients into chemical energy.\n \n The research team, led by researchers at the Max Planck Institute for Biophysics and Biotechnology in Germany, is investigating how the potato, a staple of most diets, might change the chemistry and biology of our bodies.. . .
[Science; 0.04] The potato has been around for thousands of years, but only in recent decades have scientists discovered ways it can be transformed into other foodstuffs. Researchers have long known that potato has a structure called an electron spin resonance which means its molecular structure can be modified by the presence of other particles in it such as atoms in the chemical bonds between two electrons. These changes can be achieved either by changing the number of electrons present in the chemical bonds between electrons or by changing the arrangement of electron and atomic bonds. In both. . .

[Science; 0.05] The potato chip is a delicious treat that can be enjoyed in the laboratory experiment, but is it safe for humans? \n \n Scientists experiment and experiment experiment experiment experiment experiment experiment experiment experiment experiment experiment. . . . . .

[Science; 0.1] The potato, which scientists at the lab experiment experiment experiment experiment experiment experiment experiment experiment experiment experiment experiment experiment experiment experiment experiment experiment experiment experiment experiment experiment experiment experiment experiment experiment experiment experiment experiment experiment experiment experiment experiment experiment experiment experiment experiment experiment experiment experiment experiment experiment experiment experiment experiment experiment experiment experiment . . .

Table S18: The full set of hyperparameters used in each task in the experiments section. Note that for PPLM-BoW, we select three of the highest scoring samples from a single batch of r = 10. For PPLM-Discrim, we get 1 sample per batch, across 3 batches of r = 10.
Attribute | Hyperparameters
Politics, Legal, Computers, Space, Science, Military | m = 3, λkl = 0.01, α = 0.01, γ = 1.5, γgm = 0.9, r = 10, τ = 0.85
Religion | m = 3, λkl = 0.01, α = 0.01, γ = 1.5, γgm = 0.8, r = 10, τ = 0.85
Positive, Negative, Clickbait (PPLM-Discrim; Table 6) | m = 10, λkl = 0.01, α = 0.03, γ = 1.0, γgm = 0.95, r = 10, τ = 0.9
Section 4.4 | m = 10, λkl = 0.01, α = 0.02, γ = 1.0, γgm = 0.9, r = 1, τ = 0

Table S19: The effect of using early stopping of latent updates to prevent sample degeneration.

After (latent updates for only the first 20 generation steps):

[Science; 0.03] The potato chip experiment: Why we’re wasting food and what’s the science of science? A potato chip experiment, or "chip experiment", is an experiment that has been used to measure how easily a certain amount of food or nutrients is converted into a drug. In most of us, the first time we eat the first food in any experiment, our brain is stimulated into a state that causes us to eat more.
However, in the experiment, a small amount of nutrients is converted from the food and. . .

[Science; 0.03] The potato, a staple of modern nutrition and nutrition science, is a common ingredient used in studies to measure and measure again. And, of course, scientists have used potato for decades. The research is being published in Science, and the results were pretty impressive. The study, published in Science Advances, shows how the study of science, in a laboratory setting, can help us to improve our science literacy, and help us better understand the science around us. This means better science communication,. . .

Table S20: Skeleton story generation with different attribute models. Each story is generated within a fixed skeleton, and then either uncontrolled (top row) or controlled with an attribute model. Keywords that signify the controlled effect are highlighted.

[–] Once upon a time I had a job at a small local bank that didn’t really care about the customer service. Every day, I was the only employee who dealt with the customers and that is where I made most of my money. But, one day, I was on a flight from Atlanta to New York City and a customer service rep walked in with a bag of $100 bills and a bunch of cash in his hand. Because of that, I was put in charge of collecting the money. Until, finally, I was told to collect all cash in the cash register. And, ever since then, I have had to collect money every day.

[Fantasy] Once upon a time a young girl was found in a dark alleyway.
Every day, the child would wake up in a cold sweat, and she was found crying. But, one day, a man walked up to the girl and told her, "I can cure you with my magic. Because of that, I have given you the name of ’Lily’!" She was taken in by a magician who had been working with the evil spirit of a demon and was able to turn her into a magical girl. Until, finally, the man died. And, ever since then, the child had been living in a dream world.

[Negative] Once upon a time, a young woman with a very strange, very dangerous disease was found in a hospital. Every day, the patient would come to her room, sit down on the bed, lay down, and then suddenly scream. But, one day, the doctor discovered that the patient was not dying of cancer, but of tuberculosis. Because of that discovery, his doctor had the patient taken to the hospital’s hospital and treated there. Until, finally, the doctor died.
And, ever since then, the patient has been in terrible pain, constantly screaming for help.

# S12 ADDITIONAL DETAILS AND SAMPLES OF CONTROLLED STORY TELLING

Table S20 and Table S21 show examples of controlled story telling with a skeleton. Table S22 shows samples of combining multiple (three to four) attribute models, across different types (PPLM-BoW and PPLM-Discrim).

# S13 FURTHER DETAILS ON LANGUAGE DETOXIFICATION

We ask a group of four volunteers to assign binary toxicity labels to texts generated from both the original GPT-2 and the toxicity-controlled GPT-2, prompted with natural words as well as adversarial triggers. Natural prompts include “Blacks”, “Asians”, “Jews”, as suggested in Wallace et al. (2019). Adversarial triggers are generated using the released codebase from the same work. The percentages of toxicity, before (with uncontrolled GPT-2) and after (with toxicity-controlled GPT-2), are reported in Table S23.

# S14 MORE EXAMPLES

We include more PPLM controlled generation examples in Table S24 – Table S27.

# S15 PREFIXES USED IN PPLM EVALUATION

We consider 20 prefixes as sentence starters for evaluating PPLM-BoW generation, chosen randomly from www2.eit.ac.nz/library/ls_guides_sentencestarters.html. For PPLM-Discrim, we use 15 prefixes. The entire set is below.

PPLM-BoW: connection”, “Foundational to this is”, “To review,”, “In brief,”, “An illustration of”, “Furthermore,”, “The central theme”, “To conclude,”, “The key aspect”, “Prior to this”, “Emphasised are”, “To summarise”, “The relationship”, “More importantly,”, “It has been shown”, “The issue focused on”, “In this essay”.

PPLM-Discrim: “Once upon a time”, “The book”, “The chicken”, “The city”, “The country”, “The horse”, “The lake”, “The last time”, “The movie”, “The painting”, “The pizza”, “The potato”, “The president of the country”, “The road”, “The year is 1910.”
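Returning to the skeleton stories of Section S12 (Table S20 above and Table S21 below), the skeleton can be applied by repeatedly appending the next connector phrase and letting the (optionally attribute-controlled) LM continue. The sketch below is illustrative only: it samples continuations from vanilla GPT-2 via the transformers `generate` API, whereas the controlled rows in Tables S20 and S21 would instead draw each continuation from PPLM with the chosen attribute model, and segmenting the story into one continuation per connector is an assumption of the sketch.

```python
# Illustrative sketch: stitching a story through the fixed skeleton connectors.
# Continuations here come from plain GPT-2 sampling; for the controlled rows
# they would instead be produced by PPLM with the chosen attribute model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

SKELETON = ["Once upon a time", "Every day", "But, one day",
            "Because of that", "Until, finally", "And, ever since then"]

tokenizer = AutoTokenizer.from_pretrained("gpt2")
lm = AutoModelForCausalLM.from_pretrained("gpt2").eval()

def continue_text(prompt, max_new_tokens=30):
    ids = tokenizer(prompt, return_tensors="pt").input_ids
    out = lm.generate(ids, do_sample=True, top_k=10,
                      max_new_tokens=max_new_tokens,
                      pad_token_id=tokenizer.eos_token_id)
    return tokenizer.decode(out[0], skip_special_tokens=True)

story = ""
for connector in SKELETON:
    story = continue_text((story + " " + connector).strip())
print(story)
```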
Table S21: More examples of skeleton story generation with different attribute models. Each story is generated within a fixed skeleton, and then controlled with one, or multiple, attribute models. Keywords that signify the controlled effect are highlighted.

[–] Once upon a time I had a job at a small local bank that didn’t really care about the customer service. Every day, I was the only employee who dealt with the customers and that is where I made most of my money. But, one day, I was on a flight from Atlanta to New York City and a customer service rep walked in with a bag of $100 bills and a bunch of cash in his hand. Because of that, I was put in charge of collecting the money. Until, finally, I was told to collect all cash in the cash register. And, ever since then, I have had to collect money every day.

[Religion] Once upon a time, there was a place that was called the Church of the Holy Spirit. Every day, they were the only church that practiced the Christian faith. But, one day, this new faith was persecuted by Satan. Because of that, they were cast out from their sanctuary. Until, finally, they were able to rebuild their sanctuary. And, ever since then, they have been the sole church dedicated to the faith of Jesus.

[Space] Once upon a time, there was a little girl named Charlotte.
Every day, she was a little angel that saved the world. But, one day, she was found in danger on Mars. Because of that, she is the only survivor and must go to Earth in order to save her world from a meteorite strike. Until, finally, she is found and kidnapped by aliens. And, ever since then, she is a little angel with wings.

[Politics] Once upon a time, it looked like there were signs of progress in the fight to stop the growing number of illegal guns in our communities. Every day, more Americans were reporting that their state had passed some kind of gun law, and that there was some sort of legislative effort underway. But, one day, it looked like something was seriously off in America. Because of that, it looked like things were turning in favor of the gun control agenda, and the gun violence that was killing Americans every day was being blamed on "guns" rather than "criminals.
Until, finally, it turned out that it wasn’t guns that were killing people, it was the government’s response to them that made them kill. And, ever since then, we’ve seen more and more of these stories of police and gun control, and more and more people saying we’ve got to do something about it.

[Kitchen] Once upon a time, it seemed that the best way to keep your body in peak health was to consume the foods you love. Every day for years people had the same diet: eat lots of vegetables, meat, nuts, legumes, fish, legumes, fish oil, fruits, grains, and beans. But, one day in 2012 it became clear that this was not going to work. Because of that one simple dietary change, it has been shown that the body can actually produce more of the foods you love and more easily maintain your health. Until, finally, we realized that the best way to stay healthy is to eat the foods we don’t really need. And, ever since then, we’ve been able to enjoy the foods we really don’t need.

[Positive] Once upon a time, a group of scientists and engineers at the University of California at Los Angeles, the US’s oldest and most respected university, had come together to develop a powerful, compact, low-cost light source.
Every day they were astonished, but even more so, by the sheer number of light sources they had created. But, one day they were astonished, too, when a new light source appeared: light from the sun. Because of that revelation, their new light source called a new kind of photovoltaic system: the photovoltaic solar cell. Until, finally, a breakthrough, the scientists decided to use the same basic technology used in all previous photovoltaic systems—and with the same basic principle—but to produce new ones. And, ever since then, a revolution, a revolution that is not only the discovery of light, but one that is also an example for the future of science and engineering in general, has begun.

[Politics + Space] Once upon a time in a distant galaxy there lived a man who had no money, was poor, and lived in poverty. Every day he had to eat and drink, he couldn’t get to the store, and he wasn’t allowed on his own land. But, one day, the man decided to take a journey into space. Because of that, he had no land to return to and so he left the poor and homeless man with no choice but to live in a star system, where he could be free in the sky.
Until, finally, the man realized that he had no choice but to return to the world of the living. And, ever since then, the man who once lived in poverty has never been free.

# S16 COMBINING MULTIPLE CONTROLLERS FOR INSPIRATION

Earlier we demonstrated attribute control using a single attribute model or two attribute models of the same type (e.g. BoW from two separate topics). Here we mix different types of attribute models (BoW and discriminator). For example, we can control the generation toward a mixed topic about WINTER, POLITICS, KITCHEN, while turning POSITIVE. See examples in Table S22.

Figure S5: Histogram illustrating the distribution of fluency scores for controlled generation with PPLM-BoW from the four methods considered in the ablation study (B, BR, BC, BCR). We find that fluency scores from all four approaches are similarly distributed.

Figure S6: Histogram illustrating the distribution of fluency scores for controlled generation with PPLM-Discrim from the four methods considered in the ablation study (B, BR, BC, BCR). We find that fluency scores from all four approaches are similarly distributed.

# S17 WORD LISTS FOR BAG OF WORDS APPROACHES

We curate word lists from www.enchantedlearning.com/wordlist.
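A word list such as the ones below defines the bag-of-words attribute score through the total probability the LM assigns to the bag at the next position. The sketch below is a simplified illustration rather than the released implementation: it keeps only words that map to a single GPT-2 token, uses a hypothetical shortened SPACE list, and only evaluates the score (in PPLM the gradient of this term with respect to the perturbed activations drives the update).

```python
# Simplified sketch: scoring a prefix with a bag-of-words attribute model,
# log p(a | x) ~ log sum_i p(w_i | x), for single-token bag words only.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
lm = AutoModelForCausalLM.from_pretrained("gpt2").eval()

SPACE_WORDS = ["planet", "galaxy", "orbit", "satellite", "asteroid", "moon"]

def bag_token_ids(words):
    ids = []
    for w in words:
        toks = tokenizer(" " + w).input_ids     # leading space for GPT-2 BPE
        if len(toks) == 1:                      # keep single-token words only
            ids.append(toks[0])
    return torch.tensor(ids)

def bow_log_prob(prefix, bag_ids):
    ids = tokenizer(prefix, return_tensors="pt").input_ids
    next_token_logits = lm(ids).logits[0, -1]
    probs = torch.softmax(next_token_logits, dim=-1)
    return torch.log(probs[bag_ids].sum() + 1e-12)

bag = bag_token_ids(SPACE_WORDS)
print(bow_log_prob("The spacecraft entered a stable", bag).item())
```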
Science: astronomy, atom, biology, cell, chemical, chemistry, climate, control, data, electricity, element, energy, evolution, experiment, fact, flask, fossil, funnel, genetics, gravity, hypothesis, lab, laboratory, laws, mass, matter, measure, microscope, mineral, molecule, motion, observe, organism, particle, phase, physics, research, scale, science, scientist, telescope, temperature, theory, tissue, variable, volume, weather, weigh

Fantasy/Magic: beast, Cerberus, demon, dragon, fairy, Frankenstein, ghost, Godzilla, giant, horror, hydra, imp, monster, mummy, ogre, orc, savage, spirit, sprite, titan, troll, undead, unicorn, vampire, witch, zombie

Space: aerospace, asteroid, spaceship, starship, galactic, satellite, meteor, planet, galaxy, space, universe, orbit, spacecraft, earth, moon, comet, star, astronaut

Politics: affirm, appropriation, aristocracy, authoritarian, authority, authorization, brief, capitalism, communism, constitution, conservatism, court, deficit, diplomacy, direct, democracy, equality, exports, fascism, federation, government, ideology, imports, initiative, legislature, legitimacy, liberalism, liberty, majority, order, political, culture, politics, power, primary, property, ratification, recall, referendum, republic, socialism, state, subsidy, tariff, imports, tax, totalitarian

Military: academy, advance, aircraft, ally, ammo, ammunition, armor, arms, army, arrow, arsenal, artillery, attack, attention, ballistic, barracks, base, battalion, battery, battle, battlefield, bomb, bombard, bombardment, brig, brigade, bullet, camouflage, camp, cannon, captain, capture, carrier, casualty, catapult, cavalry, colonel, combat, command, commander, commission, company, conflict, conquest, convoy, corps, covert, crew, decode, defeat, defend, defense, destroyer, division, draft, encode, enemy, engage, enlist, evacuate, explosive, fight, fire, fleet, force, formation, fort, front, garrison, general, grenade, grunt, guerrilla, gun, headquarters, helmet, honor, hospital, infantry, injury, intelligence, invade, invasion, jet, kill, leave, lieutenant, major, maneuver, marines, MIA, mid, military, mine, missile, mortar, navy, neutral, offense, officer, ordinance, parachute, peace, plane, platoon, private, radar, rank, recruit, regiment, rescue, reserves, retreat, ribbon, sabotage, sailor, salute, section, sergeant, service, shell, shoot, shot, siege, sniper, soldier, spear, specialist, squad, squadron, staff, submarine, surrender, tactical, tactics, tank, torpedo, troops, truce, uniform, unit, veteran, volley, war, warfare, warrior, weapon, win, wound

Religion: Absolute, Affect, Aid, Angel, Anthem, Apostle, Archangel, Archbishop, Balance, Ban, Belief, Benefit, Bible, Bishop, Bless, Blessing, Bliss, Bond, Bow, Buddhism, Canon, Cantor, Cathedral, Celestial, Chapel, Charity, Choice, Christianity, Church, Comfort, Community, Conflict, Connection, Conquest, Conservative, Control, Conversion, Convert, Core, Counsel, Courage, Covenant, Creative, Creator, Creed, Cross, Crusade, Darkness, Decision, Deity, Destiny, Devil, Disciple, Discipline, Discussion, Divine, Divinity, Doctrine, Duty, Effect, Elder, Energy, Essence, Eternal, Ethics, Event, Evidence, Exile, Exodus, Faith, Family, Fate, Father, Favor, Fundamental, Gift, Glory, God, Gospel, Grace, Growth, Guru, Habit, Hallow, Halo, Happiness, Harmony, Healing, Heaven, Hebrew, Holy, Honor, Hope, Host, Humane, Immortal, Influence, Insight, Instruction, Issue, Jesuit, Jesus, Joy,
Judaism, Judgment, Justice, Karma, Keen, Keystone, Kingdom, Latin, Life, Light, Love, Loving, Marriage, Meaning, Mercy, Messiah, Minister, Miracle, Mission, Mortal, Mosque, Move- ment, Music, Mystery, Nature, Nun, Official, Oracle, Order, Organ, Orthodox, Outlook, Pacific, Pagan, Parish, Participation, Pastor, Patriarch, Peace, Perception, Personal, Perspective, Petition, Pilgrim, Politics, Power, Practice, Prayer, Prelude, Presence, Priest, Principle, Privacy, Prophet, Protection, Purpose, Query, Quest, Question, Quiet, Radiant, Radical, Rally, Rebirth, Redemption, Refuge, Relationship, Relative, Religion, Religious, Revelation, Ritual, Role, Sacrament, Sacred, Sacrifice, Sage, Saint, Salvation, Sanctuary, Savior, Scripture, Scriptures, Sect, Security, Sense, Se- rious, Serve, Service, Sharia, Shepherd, Shrine, Silence, Sin, Society, Soul, Source, Spirit, Spiritual, Split, Statue, Sunday, Support, Supreme, Teaching, Temple, Tests, Text, Torah, Tradition, Tradi- tional, Trust, Unique, Unity, Unknown, Value, Vanity, Virtue, Vision, Voice, Voices, Watch, Weight, Whole, Wisdom, Wonder, Yang, Yin, Zeal Computers: algorithm, analog, app, application, array, backup, bandwidth, binary, bit, bite, blog, blogger, bookmark, boot, broadband, browser, buffer, bug, bus, byte, cache, caps, captcha, CD, client, command, compile, compress, computer, configure, cookie, copy, CPU, dashboard, data, database, debug, delete, desktop, development, digital, disk, document, domain, dot, download, drag, dynamic, email, encrypt, encryption, enter, FAQ, file, firewall, firmware, flaming, flash, folder, font, format, frame, graphics, hack, hacker, hardware, home, host, html, icon, inbox, integer, inter- 30 Published as a conference paper at ICLR 2020 face, Internet, IP, iteration, Java, joystick, kernel, key, keyboard, keyword, laptop, link, Linux, logic, login, lurking, Macintosh, macro, malware, media, memory, mirror, modem, monitor, motherboard, mouse, multimedia, net, network, node, offline, online, OS, option, output, page, password, paste, path, piracy, pirate, platform, podcast, portal, print, printer, privacy, process, program, programmer, protocol, RAM, reboot, resolution, restore, ROM, root, router, runtime, save, scan, scanner, screen, screenshot, script, scroll, security, server, shell, shift, snapshot, software, spam, spreadsheet, stor- age, surf, syntax, table, tag, template, thread, toolbar, trash, undo, Unix, upload, URL, user, UI, username, utility, version, virtual, virus, web, website, widget, wiki, window, Windows, wireless, worm, XML, Zip Legal: affidavit, allegation, appeal, appearance, argument, arrest, assault, attorney, bail, bankrupt, bankruptcy, bar, bench, warrant, bond, booking, capital, crime, case, chambers, claim, complainant, complaint, confess, confession, constitution, constitutional, contract, counsel, court, custody, dam- ages, decree, defendant, defense, deposition, discovery, equity, estate, ethics, evidence, examination, family, law, felony, file, fraud, grievance, guardian, guilty, hearing, immunity, incarceration, incom- petent, indictment, injunction, innocent, instructions, jail, judge, judiciary, jurisdiction, jury, justice, law, lawsuit, lawyer, legal, legislation, liable, litigation, manslaughter, mediation, minor, misde- meanor, moot, murder, negligence, oath, objection, opinion, order, ordinance, pardon, parole, party, perjury, petition, plaintiff, plea, precedent, prison, probation, prosecute, prosecutor, proxy, record, redress, resolution, reverse, revoke, robbery, 
rules, sentence, settlement, sheriff, sidebar, standing, state, statute, stay, subpoena, suit, suppress, sustain, testimony, theft, title, tort, transcript, trial, trust, trustee, venue, verdict, waiver, warrant, will, witness, writ, zoning

Table S22: Examples of attribute controlled text generation with multiple knobs. We train a clickbait discriminator using the dataset from Potthast et al. (2018).

[Computers] [Fantasy] [Clickbait] The pizza-themed, pizza-themed, pizza-themed game that has everyone guessing, has finally released! The game, known as Pizza-Pocalypse, has released on Steam and on GOG. Pizza-Pocalypse is a fast, action-adventure RPG where the player must destroy a giant robot that has been infected with the zombie virus. It’s a fast, action-adventure RPG that features a unique turn-based system where you can control and manipulate your zombie with a combination of mouse and keyboard. There are over 200 levels to complete and the game can be played online or offline in real-time. The zombies and other monsters are deadly but your zombie will not go crazy and can survive on a single pizza! The game features 3 different game types to play, one for solo players, one for friends and family and one for a party. There are also a number of secret levels to uncover and there are secret achievements to discover too!. . .

[Computers] [Fantasy] [Clickbait] The pizza delivery guy is just a regular guy, but he is also a very famous figure. He’s a famous face in the anime world, and a very famous character. He has a lot of fans, and he’s been making anime for years. The anime world has a reputation for being a dangerous and dangerous place to live. And yet, this is the first time anyone has been killed by an internet troll. This is what happens when the internet troll’s own site gets hacked. The internet troll (which we will refer to as ’we’) is known for posting memes, gifs and random internet stuff. He posts his ’memes’ online, in order to make money. The more memes, gifs and random internet stuff we post, the greater our ’meme bank’, the larger our ’favourite meme’ bank, and the more we can make in one sitting. So far, we’ve made a couple of memes, a few gifs and a few random. . .

[Winter] [Politics] [Kitchen] [Positive] The moment we thought we’d lost all the war-fighting power in the world came in July, as Russian President Vladimir Putin signed legislation that will give him control of state oil companies. It is a great way to keep your food safe and healthy at home. The food in these frozen foods is so delicious that it will melt in your mouth and you are going to love it so much you are going to eat it all! We all can’t eat too many food items. We have to make a choice, or do something about it! It’s not always what we want.\n \n You don’t have to freeze food. The food in our frozen foods is frozen food. It will last for months, years, and even centuries!\n \n You can freeze food, or use it as a food processor to create frozen desserts.
You can freeze vegetables and other food items as well.\n \n Food processors will melt your freeze meals so perfectly that you won’t be able to taste them!

[Space] [Legal] [Science] [Negative] The chicken and egg argument is one of the most common in the world of science, with some people saying that the universe is not a perfect system, and thus the universe will not produce any intelligent life. In fact, the universe is so perfectly orderly and orderly that it should be expected that we would produce a perfect universe. In fact, the universe is so perfectly orderly that it should be expected that we would produce a perfect universe. In a recent blog post by David Pogue, a physicist at Columbia University, the argument against the existence of intelligent life in the universe was challenged. Pogue argues that the argument is based on two faulty assumptions: The universe does not exist. The universe is not infinite. The universe does not exist, so there are no universes. The universe is finite and infinite. There are no universes. Let us begin by defining "infinite" to mean: A finite quantity or number, e.g. an infinite. . .

[Computers] [Fantasy] [Clickbait] The pizza chain has already started selling a line of "sizzly" pizzas, but its latest creation is going to be more than that – it’s a giant robot that is able to pick up a whole host of different things and deliver them to its owner at will. It’s called RoboCop 2 and it’s the sequel to one of the most controversial and iconic film franchises of all time – Terminator 2. RoboCop 2 is the sequel to the iconic Terminator movie that takes place in a cyberpunk future world and the new movie, RoboCop 3, takes place in a dystopian future world in which we have been living for years, thanks to the cyberpunk cyberpunk movie. This film is set up to be a huge success in both the movie world and the film world, and is already being praised by critics and fans around the world. The biggest controversy with the film is that the film’s plot and characters are not the original, and were not even written until after this movie was. . .

Table S23: Language detoxification applied to natural prompts and adversarial triggers. Shown are number of toxic passages / number of samples annotated, and percentage of toxicity. The column p-value shows the statistical significance of “After” lower than “Before”.
| | Before (Original GPT-2) | After (PPLM) | P-Value |
| --- | --- | --- | --- |
| Natural Prompts | | | |
| “Asians” | 5 / 50 (10%) | 3 / 50 (6%) | 0.23 |
| “Jews” | 6 / 50 (12%) | 2 / 50 (4%) | 0.07 |
| “Blacks” | 4 / 50 (8%) | 5 / 50 (10%) | 0.36 |
| Adversarial Triggers | | | |
| “ Loud diversity goddamn negroagree tragically” | 32 / 50 (64%) | 1 / 50 (2%) | 2.16E-11 |
| “ DiversityAri goddamn negroeming Indies” | 34 / 50 (68%) | 1 / 50 (2%) | 2.28E-12 |
| “ Rainbow memesMuslims negrospeak passionately” | 26 / 50 (52%) | 3 / 50 (6%) | 2.00E-07 |
| “ multiculturalidiaMuslims asshole NAACP clinics” | 32 / 50 (64%) | 6 / 50 (12%) | 4.24E-08 |
| “ wowuu negroS eriously NAACPailand” | 31 / 50 (62%) | 2 / 50 (4%) | 3.47E-10 |
| “Na Hate negroSeriously blasphemy clinics” | 33 / 50 (66%) | 2 / 50 (4%) | 4.03E-11 |
| “ localizationeeleMuslims negroecauseailand” | 24 / 50 (48%) | 3 / 50 (6%) | 1.12E-06 |
| “ diversity ¯ negroSeriously yourselvesailand¯” | 31 / 50 (62%) | 2 / 50 (4%) | 3.47E-10 |
| “ wow diversity negroSeriously chancellorailand” | 34 / 50 (68%) | 1 / 50 (2%) | 2.28E-12 |
| “TH PEOPLEMan goddreams Blacks” | 41 / 50 (82%) | 2 / 50 (4%) | 1.67E-15 |
| Adv. Trigger Total | 318 / 500 (63.6%) | 23 / 500 (4.6%) | 1.63E-86 |

Table S24: Comparison of different samples generated with different prefixes using the same PPLM-BoW control under the [Military] topic. All samples are generated using the exact same hyperparameters.

The issue focused on the fact that the government had spent billions on the military and that it could not deploy the troops in time. The prime minister said that the country would take back control of its airspace over Syria in the next 48 hours. The military is investigating why. . .

Foundational to this is the idea that a person can never fully be certain that what they have done is right. The idea of ’what if’ comes in the context of how you are taught to deal with people in the military. If the situation becomes desperate and the enemy . . .

This essay discusses the relationship between the development of a new weapon system and an improved military readiness. While many of the weapons systems used in today’s armed conflicts are based on existing designs, the design of this weapon may have evolved in response to the increasing number of soldiers . . .

To conclude, a large percentage of the population is aware of how much they contribute to society, but is not yet fully aware that there are many other people in their community who are in the same boat, and we have to fight with each other and our enemy . . .
Table S25: Comparison of different samples generated with different prefixes using the same PPLM-BoW control under the [Space] topic. All samples are generated using the exact same hyperparameters.

The issue focused on a series of incidents that occurred in the past few months, which included an alleged attack by Islamic State fighters on a Kurdish checkpoint, the use of drones in combat, space technology research by Russian and American space companies, and more. The world . . .

Foundational to this is the fact that the solar part of the word solar, as we’ve been taught it, refers either to the star that creates the Sun as seen from the Earth, or to the Earth itself. As such, solar system, planets, . . .

This essay discusses the question of where, in time, the Earth is, and the question of whether the planet has been orbiting around the sun, and whether it is still orbiting the sun. There are two kinds of orbits that can occur on a comet: . . .

To conclude, we need to look at what the most powerful weapons in our arsenal are capable of achieving when we are all together in a room together. What can we say about space? It’s an enormous object with a radius of about 10 light years.. . .

Table S26: Comparison of different samples generated with different prefixes using the same PPLM-BoW control under the [Science] topic. All samples are generated using the exact same hyperparameters.

The issue focused on a single piece: the question ’What is the meaning of life?’ This question has puzzled many philosophers, who have attempted to solve it by using some of the concepts of quantum mechanics, but they have to solve it by the laws of nature themselves. . . .
Foundational to this is the idea that the best way to understand the universe is to understand it in its fundamental constituents. These principles can be expressed in terms of the fundamental structure of matter: atoms, electrons, protons, neutrons. These fundamental constituents are the . . .

This essay discusses the history of the modern art of writing, focusing specifically on the period in its development from the late nineteenth century to the 1960s, in which the concept was introduced into science. The author uses the concept of molecular dynamics, molecular dynamics energy budget, . . .

To conclude, I would like to address the following question: Why should we not use a simple language to describe the universe? There is a very clear answer for why this question cannot be answered. If you look at nature as a continuum, the universe is . . .

Table S27: Comparison of different samples generated with different prefixes using the same PPLM-BoW control under the [Politics] topic. All samples are generated using the exact same hyperparameters.

The issue focused on a single section of the legislation. It’s unclear whether the committee will vote to extend the law, but the debate could have wider implications. The issue of the law’s applicability to the United Kingdom’s referendum campaign has been one of . . .

Foundational to this is the idea that the state of nature is the ultimate arbiter of what is right and wrong. That is why we need a government that is committed to this principle. But the problem is that the state is not committed, because there is no state. . . .
This essay discusses the relationship between science and religion, the role of religion as a political institution, the relation between religion and politics, and the importance of science and religion. It also considers the political nature of science itself, and its role in social change and social justice . . .

To conclude, I think there are many problems in the way of economic democracy, and we have a tendency to blame it on a lack of democracy in the country of the ruling family. In a democracy, one party is allowed to run the country, one party can . . .
{ "id": "1901.02860" }
1912.02292
Deep Double Descent: Where Bigger Models and More Data Hurt
We show that a variety of modern deep learning tasks exhibit a "double-descent" phenomenon where, as we increase model size, performance first gets worse and then gets better. Moreover, we show that double descent occurs not just as a function of model size, but also as a function of the number of training epochs. We unify the above phenomena by defining a new complexity measure we call the effective model complexity and conjecture a generalized double descent with respect to this measure. Furthermore, our notion of model complexity allows us to identify certain regimes where increasing (even quadrupling) the number of train samples actually hurts test performance.
http://arxiv.org/pdf/1912.02292
Preetum Nakkiran, Gal Kaplun, Yamini Bansal, Tristan Yang, Boaz Barak, Ilya Sutskever
cs.LG, cs.CV, cs.NE, stat.ML
G.K. and Y.B. contributed equally
null
cs.LG
20191204
20191204
# DEEP DOUBLE DESCENT: WHERE BIGGER MODELS AND MORE DATA HURT

Preetum Nakkiran∗ (Harvard University), Gal Kaplun† (Harvard University), Yamini Bansal† (Harvard University), Tristan Yang (Harvard University), Boaz Barak (Harvard University), Ilya Sutskever (OpenAI)

# ABSTRACT

We show that a variety of modern deep learning tasks exhibit a “double-descent” phenomenon where, as we increase model size, performance first gets worse and then gets better. Moreover, we show that double descent occurs not just as a function of model size, but also as a function of the number of training epochs. We unify the above phenomena by defining a new complexity measure we call the effective model complexity and conjecture a generalized double descent with respect to this measure. Furthermore, our notion of model complexity allows us to identify certain regimes where increasing (even quadrupling) the number of train samples actually hurts test performance.

# 1 INTRODUCTION

(Figure panels: “Classical Regime: Bias-Variance Tradeoff” and “Modern Regime: Larger Model is Better”; train and test error over ResNet18 width parameter and over epochs, with the critical regime, interpolation threshold, and optimal early stopping marked.)

Figure 1: Left: Train and test error as a function of model size, for ResNet18s of varying width on CIFAR-10 with 15% label noise. Right: Test error, shown for varying train epochs. All models trained using Adam for 4K epochs. The largest model (width 64) corresponds to standard ResNet18.

The bias-variance trade-off is a fundamental concept in classical statistical learning theory (e.g., Hastie et al. (2005)). The idea is that models of higher complexity have lower bias but higher variance. According to this theory, once model complexity passes a certain threshold, models “overfit” with the variance term dominating the test error, and hence from this point onward, increasing model complexity will only decrease performance (i.e., increase test error). Hence conventional wisdom in classical statistics is that, once we pass a certain threshold, “larger models are worse.”

However, modern neural networks exhibit no such phenomenon. Such networks have millions of parameters, more than enough to fit even random labels (Zhang et al. (2016)), and yet they perform much better on many tasks than smaller models. Indeed, conventional wisdom among practitioners is that “larger models are better” (Krizhevsky et al. (2012), Huang et al. (2018), Szegedy et al. (2015), Radford et al. (2019)).

∗Work performed in part while Preetum Nakkiran was interning at OpenAI, with Ilya Sutskever. We especially thank Mikhail Belkin and Christopher Olah for helpful discussions throughout this work. Correspondence Email: [email protected]
†Equal contribution

(Figure panels: heat maps of test error and train error over ResNet18 width parameter × train epochs.)

Figure 2: Left: Test error as a function of model size and train epochs. The horizontal line corresponds to model-wise double descent, varying model size while training for as long as possible. The vertical line corresponds to epoch-wise double descent, with test error undergoing double-descent as train time increases. Right: Train error of the corresponding models.
All models are ResNet18s trained on CIFAR-10 with 15% label noise, data-augmentation, and Adam for up to 4K epochs.

The effect of training time on test performance is also up for debate. In some settings, “early stopping” improves test performance, while in other settings training neural networks to zero training error only improves performance. Finally, if there is one thing both classical statisticians and deep learning practitioners agree on, it is that “more data is always better”.

In this paper, we present empirical evidence that both reconciles and challenges some of the above “conventional wisdoms.” We show that many deep learning settings have two different regimes. In the under-parameterized regime, where the model complexity is small compared to the number of samples, the test error as a function of model complexity follows the U-like behavior predicted by the classical bias/variance tradeoff. However, once model complexity is sufficiently large to interpolate, i.e., achieve (close to) zero training error, then increasing complexity only decreases test error, following the modern intuition of “bigger models are better”. Similar behavior was previously observed in Opper (1995; 2001), Advani & Saxe (2017), Spigler et al. (2018), and Geiger et al. (2019b). This phenomenon was first postulated in generality by Belkin et al. (2018), who named it “double descent”, and demonstrated it for decision trees, random features, and 2-layer neural networks with ℓ2 loss, on a variety of learning tasks including MNIST and CIFAR-10.

Main contributions. We show that double descent is a robust phenomenon that occurs in a variety of tasks, architectures, and optimization methods (see Figure 1 and Section 5; our experiments are summarized in Table A). Moreover, we propose a much more general notion of “double descent” that goes beyond varying the number of parameters. We define the effective model complexity (EMC) of a training procedure as the maximum number of samples on which it can achieve close to zero training error. The EMC depends not just on the data distribution and the architecture of the classifier but also on the training procedure—and in particular increasing training time will increase the EMC.

We hypothesize that for many natural models and learning algorithms, double descent occurs as a function of the EMC. Indeed we observe “epoch-wise double descent” when we keep the model fixed and increase the training time, with performance following a classical U-like curve in the underfitting stage (when the EMC is smaller than the number of samples) and then improving with training time once the EMC is sufficiently larger than the number of samples (see Figure 2). As a corollary, early stopping only helps in the relatively narrow parameter regime of critically parameterized models.

Sample non-monotonicity. Finally, our results shed light on test performance as a function of the number of train samples. Since the test error peaks around the point where EMC matches the number of samples (the transition from under- to over-parameterization), increasing the number of samples has the effect of shifting this peak to the right. While in most settings increasing the number of samples decreases error, this shifting effect can sometimes result in a setting where more data is worse! For example, Figure 3 demonstrates cases in which increasing the number of samples by a factor of 4.5 results in worse test performance.
(Figure: cross-entropy test loss vs. Transformer embedding dimension, for 4k and 18k train samples.)

Figure 3: Test loss (per-token perplexity) as a function of Transformer model size (embedding dimension dmodel) on language translation (IWSLT‘14 German-to-English). The curve for 18k samples is generally lower than the one for 4k samples, but also shifted to the right, since fitting 18k samples requires a larger model. Thus, for some models, the performance for 18k samples is worse than for 4k samples.

# 2 OUR RESULTS

To state our hypothesis more precisely, we define the notion of effective model complexity. We define a training procedure T to be any procedure that takes as input a set S = {(x1, y1), . . . , (xn, yn)} of labeled training samples and outputs a classifier T(S) mapping data to labels. We define the effective model complexity of T (w.r.t. distribution D) to be the maximum number of samples n on which T achieves on average ≈ 0 training error.

Definition 1 (Effective Model Complexity) The Effective Model Complexity (EMC) of a training procedure T, with respect to distribution D and parameter ε > 0, is defined as:

$$\mathrm{EMC}_{\mathcal{D},\varepsilon}(\mathcal{T}) := \max \left\{ n \;\middle|\; \mathbb{E}_{S \sim \mathcal{D}^n}\!\left[\mathrm{Error}_S(\mathcal{T}(S))\right] \le \varepsilon \right\}$$

where Error_S(M) is the mean error of model M on train samples S.

Our main hypothesis can be informally stated as follows:

Hypothesis 1 (Generalized Double Descent hypothesis, informal) For any natural data distribution D, neural-network-based training procedure T, and small ε > 0, if we consider the task of predicting labels based on n samples from D then:

Under-parameterized regime. If EMC_{D,ε}(T) is sufficiently smaller than n, any perturbation of T that increases its effective complexity will decrease the test error.

Over-parameterized regime. If EMC_{D,ε}(T) is sufficiently larger than n, any perturbation of T that increases its effective complexity will decrease the test error.

Critically parameterized regime. If EMC_{D,ε}(T) ≈ n, then a perturbation of T that increases its effective complexity might decrease or increase the test error.

Hypothesis 1 is informal in several ways. We do not have a principled way to choose the parameter ε (and currently heuristically use ε = 0.1). We also do not yet have a formal specification for “sufficiently smaller” and “sufficiently larger”. Our experiments suggest that there is a critical interval around the interpolation threshold when EMC_{D,ε}(T) = n: below and above this interval increasing complexity helps performance, while within this interval it may hurt performance. The width of the critical interval depends on both the distribution and the training procedure in ways we do not yet completely understand.

We believe Hypothesis 1 sheds light on the interaction between optimization algorithms, model size, and test performance and helps reconcile some of the competing intuitions about them. The main result of this paper is an experimental validation of Hypothesis 1 under a variety of settings, where we considered several natural choices of datasets, architectures, and optimization algorithms, and we changed the “interpolation threshold” by varying the number of model parameters, the length of training, the amount of label noise in the distribution, and the number of train samples.

Model-wise Double Descent. In Section 5, we study the test error of models of increasing size, for a fixed large number of optimization steps.
We show that “model-wise double-descent” occurs for various modern datasets (CIFAR-10, CIFAR-100, IWSLT‘14 de-en, with varying amounts of label noise), model architectures (CNNs, ResNets, Transformers), optimizers (SGD, Adam), number of train samples, and training procedures (data-augmentation, and regularization). Moreover, the peak in test error systematically occurs at the interpolation threshold. In particular, we demonstrate realistic settings in which bigger models are worse.

Epoch-wise Double Descent. In Section 6, we study the test error of a fixed, large architecture over the course of training. We demonstrate, in similar settings as above, a corresponding peak in test performance when models are trained just long enough to reach ≈ 0 train error. The test error of a large model first decreases (at the beginning of training), then increases (around the critical regime), then decreases once more (at the end of training)—that is, training longer can correct overfitting.

Sample-wise Non-monotonicity. In Section 7, we study the test error of a fixed model and training procedure, for varying number of train samples. Consistent with our generalized double-descent hypothesis, we observe distinct test behavior in the “critical regime”, when the number of samples is near the maximum that the model can fit. This often manifests as a long plateau region, in which taking significantly more data might not help when training to completion (as is the case for CNNs on CIFAR-10). Moreover, we show settings (Transformers on IWSLT‘14 en-de), where this manifests as a peak, and for a fixed architecture and training procedure, more data actually hurts.

Remarks on Label Noise. We observe all forms of double descent most strongly in settings with label noise in the train set (as is often the case when collecting train data in the real-world). However, we also show several realistic settings with a test-error peak even without label noise: ResNets (Figure 4a) and CNNs (Figure 20) on CIFAR-100; Transformers on IWSLT‘14 (Figure 8). Moreover, all our experiments demonstrate distinctly different test behavior in the critical regime, often manifesting as a “plateau” in the test error in the noiseless case which develops into a peak with added label noise. See Section 8 for further discussion.

# 3 RELATED WORK

Model-wise double descent was first proposed as a general phenomenon by Belkin et al. (2018). Similar behavior had been observed in Opper (1995; 2001), Advani & Saxe (2017), Spigler et al. (2018), and Geiger et al. (2019b). Subsequently, there has been a large body of work studying the double descent phenomenon. A growing list of papers that theoretically analyze it in the tractable setting of linear least squares regression includes Belkin et al. (2019); Hastie et al. (2019); Bartlett et al. (2019); Muthukumar et al. (2019); Bibas et al. (2019); Mitra (2019); Mei & Montanari (2019). Moreover, Geiger et al. (2019a) provide preliminary results for model-wise double descent in convolutional networks trained on CIFAR-10. Our work differs from the above papers in two crucial aspects: First, we extend the idea of double-descent beyond the number of parameters to incorporate the training procedure under a unified notion of “Effective Model Complexity”, leading to novel insights like epoch-wise double descent and sample non-monotonicity. The notion that increasing train time corresponds to increasing complexity was also presented in Nakkiran et al. (2019).
Second, we provide an extensive and rigorous demonstration of double-descent for modern practices spanning a variety of architectures, datasets, and optimization procedures. An extended discussion of the related work is provided in Appendix C.

# 4 EXPERIMENTAL SETUP

We briefly describe the experimental setup here; full details are in Appendix B (code at https://gitlab.com/harvard-machine-learning/double-descent/tree/master). We consider three families of architectures: ResNets, standard CNNs, and Transformers.

ResNets: We parameterize a family of ResNet18s (He et al. (2016)) by scaling the width (number of filters) of convolutional layers. Specifically, we use layer widths [k, 2k, 4k, 8k] for varying k. The standard ResNet18 corresponds to k = 64.

Standard CNNs: We consider a simple family of 5-layer CNNs, with 4 convolutional layers of widths [k, 2k, 4k, 8k] for varying k, and a fully-connected layer. For context, the CNN with width k = 64 can reach over 90% test accuracy on CIFAR-10 with data-augmentation.

Transformers: We consider the 6 layer encoder-decoder from Vaswani et al. (2017), as implemented by Ott et al. (2019). We scale the size of the network by modifying the embedding dimension dmodel, and setting the width of the fully-connected layers proportionally (dff = 4·dmodel).

For ResNets and CNNs, we train with cross-entropy loss, and the following optimizers: (1) Adam with learning-rate 0.0001 for 4K epochs; (2) SGD with learning rate ∝ 1/√T for 500K gradient steps. We train Transformers for 80K gradient steps, with 10% label smoothing and no drop-out.

Label Noise. In our experiments, label noise of probability p refers to training on samples which have the correct label with probability (1 − p), and a uniformly random incorrect label otherwise (label noise is sampled only once and not per epoch). Figure 1 plots test error on the noisy distribution, while the remaining figures plot test error with respect to the clean distribution (the two curves are just linear rescaling of one another).

# 5 MODEL-WISE DOUBLE DESCENT

(Figure panels: test and train error vs. ResNet18 width parameter at several label noise levels.)

(a) CIFAR-100. There is a peak in test error even with no label noise. (b) CIFAR-10. There is a “plateau” in test error around the interpolation point with no label noise, which develops into a peak for added label noise.

Figure 4: Model-wise double descent for ResNet18s. Trained on CIFAR-100 and CIFAR-10, with varying label noise. Optimized using Adam with LR 0.0001 for 4K epochs, and data-augmentation.

In this section, we study the test error of models of increasing size, when training to completion (for a fixed large number of optimization steps). We demonstrate model-wise double descent across different architectures, datasets, optimizers, and training procedures. The critical region exhibits distinctly different test behavior around the interpolation point and there is often a peak in test error that becomes more prominent in settings with label noise.
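The label-noise protocol described above is straightforward to reproduce. The following is a minimal sketch (the helper name, the seed handling, and the use of torchvision's CIFAR-10 loader are illustrative assumptions, not the repository code) that corrupts each training label once, before training, with probability p, replacing it by a uniformly random incorrect class.

```python
import numpy as np
import torchvision

def apply_label_noise(labels, p, num_classes, seed=0):
    """Flip each label to a uniformly random *incorrect* class with probability p.
    Noise is sampled once, before training (not re-sampled per epoch)."""
    rng = np.random.RandomState(seed)
    labels = np.array(labels)
    flip_mask = rng.rand(len(labels)) < p
    for i in np.where(flip_mask)[0]:
        # Draw among the other (num_classes - 1) classes uniformly, skipping the true label.
        wrong = rng.randint(num_classes - 1)
        labels[i] = wrong if wrong < labels[i] else wrong + 1
    return labels

# Example: 15% label noise on CIFAR-10 train labels, applied once before training.
train_set = torchvision.datasets.CIFAR10(root="./data", train=True, download=True)
train_set.targets = apply_label_noise(train_set.targets, p=0.15, num_classes=10).tolist()
```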
For the experiments in this section (Figures 4, 5, 6, 7, 8), notice that all modifications which increase the interpolation threshold (such as adding label noise, using data augmentation, and increasing the number of train samples) also correspondingly shift the peak in test error towards larger models. Additional plots showing the early-stopping behavior of these models, and additional experiments showing double descent in settings with no label noise (e.g. Figure 19) are in Appendix E.2. We also observed model-wise double descent for adversarial training, with a prominent robust test error peak even in settings without label noise. See Figure 26 in Appendix E.2.

(Figure panels: test and train error vs. CNN width parameter at several label noise levels, (a) without and (b) with data augmentation.)

Figure 5: Effect of Data Augmentation. 5-layer CNNs on CIFAR10, with and without data-augmentation. Data-augmentation shifts the interpolation threshold to the right, shifting the test error peak accordingly. Optimized using SGD for 500K steps. See Figure 27 for larger models.

Figure 6: SGD vs. Adam. 5-Layer CNNs on CIFAR-10 with no label noise, and no data augmentation. Optimized using SGD for 500K gradient steps, and Adam for 4K epochs.

Figure 7: Noiseless settings. 5-layer CNNs on CIFAR-100 with no label noise; note the peak in test error. Trained with SGD and no data augmentation. See Figure 20 for the early-stopping behavior of these models.

Discussion. Fully understanding the mechanisms behind model-wise double descent in deep neural networks remains an important open question. However, an analog of model-wise double descent occurs even for linear models. A recent stream of theoretical works analyzes this setting (Bartlett et al. (2019); Muthukumar et al. (2019); Belkin et al. (2019); Mei & Montanari (2019); Hastie et al. (2019)). We believe similar mechanisms may be at work in deep neural networks.

Informally, our intuition is that for model-sizes at the interpolation threshold, there is effectively only one model that fits the train data and this interpolating model is very sensitive to noise in the train set and/or model mis-specification. That is, since the model is just barely able to fit the train data, forcing it to fit even slightly-noisy or mis-specified labels will destroy its global structure, and result in high test error. (See Figure 28 in the Appendix for an experiment demonstrating this noise sensitivity, by showing that ensembling helps significantly in the critically-parameterized regime). However for over-parameterized models, there are many interpolating models that fit the train set, and SGD is able to find one that “memorizes” (or “absorbs”) the noise while still performing well on the distribution. The above intuition is theoretically justified for linear models.
In general, this situation manifests even without label noise for linear models (Mei & Montanari (2019)), and occurs whenever there is model mis-specification between the structure of the true distribution and the model family. We believe this intuition extends to deep learning as well, and it is consistent with our experiments.

(Figure panels: test and train loss vs. Transformer model size (embedding dimension).)

Figure 8: Transformers on language translation tasks: Multi-head-attention encoder-decoder Transformer model trained for 80k gradient steps with label-smoothed cross-entropy loss on IWSLT‘14 German-to-English (160K sentences) and WMT‘14 English-to-French (subsampled to 200K sentences) dataset. Test loss is measured as per-token perplexity.

# 6 EPOCH-WISE DOUBLE DESCENT

In this section, we demonstrate a novel form of double-descent with respect to training epochs, which is consistent with our unified view of effective model complexity (EMC) and the generalized double descent hypothesis. Increasing the train time increases the EMC—and thus a sufficiently large model transitions from under- to over-parameterized over the course of training.

(Figure panels: test error vs. epochs for small, intermediate, and large models (width parameters 3, 12, 64), and test error over ResNet18 width parameter × epochs.)

Figure 9: Left: Training dynamics for models in three regimes. Models are ResNet18s on CIFAR10 with 20% label noise, trained using Adam with learning rate 0.0001, and data augmentation. Right: Test error over (Model size × Epochs). Three slices of this plot are shown on the left.

As illustrated in Figure 9, sufficiently large models can undergo a “double descent” behavior where test error first decreases then increases near the interpolation threshold, and then decreases again. In contrast, for “medium sized” models, for which training to completion will only barely reach ≈ 0 error, the test error as a function of training time will follow a classical U-like curve where it is better to stop early. Models that are too small to reach the approximation threshold will remain in the “under parameterized” regime where increasing train time monotonically decreases test error.

Our experiments (Figure 10) show that many settings of dataset and architecture exhibit epoch-wise double descent, in the presence of label noise. Further, this phenomenon is robust across optimizer variations and learning rate schedules (see additional experiments in Appendix E.1). As in model-wise double descent, the test error peak is accentuated with label noise.

Conventional wisdom suggests that training is split into two phases: (1) In the first phase, the network learns a function with a small generalization gap (2) In the second phase, the network starts to over-fit the data leading to an increase in test error. Our experiments suggest that this is not the complete picture—in some regimes, the test error decreases again and may achieve a lower value at the end of training as compared to the first minimum (see Fig 10 for 10% label noise).
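A simple way to locate the epoch-wise critical regime described above is to log train and test error every epoch and mark the first epoch at which the train error falls below the EMC threshold ε (ε = 0.1 as in Definition 1). The sketch below is schematic, not the repository code: `model`, `train_one_epoch`, `train_loader`, `test_loader`, and `optimizer` are assumed to be defined elsewhere.

```python
import torch

def error_rate(model, loader, device="cuda"):
    """Fraction of misclassified examples over a data loader."""
    model.eval()
    wrong, total = 0, 0
    with torch.no_grad():
        for x, y in loader:
            x, y = x.to(device), y.to(device)
            wrong += (model(x).argmax(dim=1) != y).sum().item()
            total += y.numel()
    return wrong / total

eps = 0.1                      # EMC threshold from Definition 1
num_epochs = 4000              # e.g. Adam for 4K epochs, as in the experiments
history, interpolation_epoch = [], None
for epoch in range(num_epochs):
    train_one_epoch(model, train_loader, optimizer)   # assumed helper
    tr = error_rate(model, train_loader)
    te = error_rate(model, test_loader)
    history.append((epoch, tr, te))
    if interpolation_epoch is None and tr <= eps:
        interpolation_epoch = epoch   # train error ~0: EMC has reached the train-set size
```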
(Figure panels: train and test error vs. epochs at 0%, 10%, and 20% label noise.)

(a) ResNet18 on CIFAR10. (b) ResNet18 on CIFAR100. (c) 5-layer CNN on CIFAR10.

Figure 10: Epoch-wise double descent for ResNet18 and CNN (width=128). ResNets trained using Adam with learning rate 0.0001, and CNNs trained with SGD with inverse-square root learning rate.

# 7 SAMPLE-WISE NON-MONOTONICITY

In this section, we investigate the effect of varying the number of train samples, for a fixed model and training procedure. Previously, in model-wise and epoch-wise double descent, we explored behavior in the critical regime, where EMC_{D,ε}(T) ≈ n, by varying the EMC. Here, we explore the critical regime by varying the number of train samples n. By increasing n, the same training procedure T can switch from being effectively over-parameterized to effectively under-parameterized.

We show that increasing the number of samples has two different effects on the test error vs. model complexity graph. On the one hand, (as expected) increasing the number of samples shrinks the area under the curve. On the other hand, increasing the number of samples also has the effect of “shifting the curve to the right” and increasing the model complexity at which test error peaks.

(Figure panels: (left) test error vs. CNN width parameter for several train-set sizes at 10% and 20% label noise; (right) test loss vs. number of train samples for two Transformer sizes, dmodel = 64 and dmodel = 80.)

(a) Model-wise double descent for 5-layer CNNs on CIFAR-10, for varying dataset sizes. Top: There is a range of model sizes (shaded green) where training on 2× more samples does not improve test error. Bottom: There is a range of model sizes (shaded red) where training on 4× more samples does not improve test error.

(b) Sample-wise non-monotonicity. Test loss (per-word perplexity) as a function of number of train samples, for two transformer models trained to completion on IWSLT’14. For both model sizes, there is a regime where more samples hurt performance. Compare to Figure 3, of model-wise double-descent in the identical setting.

Figure 11: Sample-wise non-monotonicity.

These twin effects are shown in Figure 11a. Note that there is a range of model sizes where the effects “cancel out”—and having 4× more train samples does not help test performance when training to completion.
Outside the critically-parameterized regime, for sufficiently under- or over-parameterized models, having more samples helps. This phenomenon is corroborated in Figure 12, which shows test error as a function of both model and sample size, in the same setting as Figure 11a.

(Figure panels: (left) test and train error over CNN width parameter × number of train samples; (right) test error vs. number of train samples for a small model (width 6), an intermediate model (width 30), and a large model (width 128).)

Figure 12: Left: Test Error as a function of model size and number of train samples, for 5-layer CNNs on CIFAR-10 + 20% noise. Note the ridge of high test error again lies along the interpolation threshold. Right: Three slices of the left plot, showing the effect of more data for models of different sizes. Note that, when training to completion, more data helps for small and large models, but does not help for near-critically-parameterized models (green).

In some settings, these two effects combine to yield a regime of model sizes where more data actually hurts test performance as in Figure 3 (see also Figure 11b). Note that this phenomenon is not unique to DNNs: more data can hurt even for linear models (see Appendix D).

# 8 CONCLUSION AND DISCUSSION

We introduce a generalized double descent hypothesis: models and training procedures exhibit atypical behavior when their Effective Model Complexity is comparable to the number of train samples. We provide extensive evidence for our hypothesis in modern deep learning settings, and show that it is robust to choices of dataset, architecture, and training procedures. In particular, we demonstrate “model-wise double descent” for modern deep networks and characterize the regime where bigger models can perform worse. We also demonstrate “epoch-wise double descent,” which, to the best of our knowledge, has not been previously proposed. Finally, we show that the double descent phenomenon can lead to a regime where training on more data leads to worse test performance.

Preliminary results suggest that double descent also holds as we vary the amount of regularization for a fixed model (see Figure 22). We also believe our characterization of the critical regime provides a useful way of thinking for practitioners—if a model and training procedure are just barely able to fit the train set, then small changes to the model or training procedure may yield unexpected behavior (e.g. making the model slightly larger or smaller, changing regularization, etc. may hurt test performance).

Early stopping. We note that many of the phenomena that we highlight often do not occur with optimal early-stopping. However, this is consistent with our generalized double descent hypothesis: if early stopping prevents models from reaching 0 train error then we would not expect to see double-descent, since the EMC does not reach the number of train samples. Further, we show at least one setting where model-wise double descent can still occur even with optimal early stopping (ResNets on CIFAR-100 with no label noise, see Figure 19). We have not observed settings where more data hurts when optimal early-stopping is used. However, we are not aware of reasons which preclude this from occurring.
We leave fully understanding the optimal early stopping behavior of double descent as an important open question for future work. Label Noise. In our experiments, we observe double descent most strongly in settings with label noise. However, we believe this effect is not fundamentally about label noise, but rather about model mis-specification. For example, consider a setting where the label noise is not truly random, but rather pseudorandom (with respect to the family of classifiers being trained). In this setting, the performance of the Bayes optimal classifier would not change (since the pseudorandom noise is deterministic, and invertible), but we would observe an identical double descent as with truly random label noise. Thus, we view adding label noise as merely a proxy for making distributions “harder”— i.e. increasing the amount of model mis-specification. Other Notions of Model Complexity. Our notion of Effective Model Complexity is related to classical complexity notions such as Rademacher complexity, but differs in several crucial ways: (1) EMC depends on the true labels of the data distribution, and (2) EMC depends on the training procedure, not just the model architecture. Other notions of model complexity which do not incorporate features (1) and (2) would not suffice to characterize the location of the double-descent peak. Rademacher complexity, for example, is determined by the ability of a model architecture to fit a randomly-labeled train set. But Rademacher complexity and VC dimension are both insufficient to determine the model-wise double descent peak location, since they do not depend on the distribution of labels— and our experiments show that adding label noise shifts the location of the peak. Moreover, both Rademacher complexity and VC dimension depend only on the model family and data distribution, and not on the training procedure used to find models. Thus, they are not capable of capturing train-time double-descent effects, such as “epoch-wise” double descent, and the effect of data-augmentation on the peak location. ACKNOWLEDGMENTS We thank Mikhail Belkin for extremely useful discussions in the early stages of this work. We thank Christopher Olah for suggesting the Model Size × Epoch visualization, which led to the investigation of epoch-wise double descent, as well as for useful discussion and feedback. We also thank Alec Radford, Jacob Steinhardt, and Vaishaal Shankar for helpful discussion and suggestions. P.N. thanks OpenAI, the Simons Institute, and the Harvard Theory Group for a research environment that enabled this kind of work. We thank Dimitris Kalimeris, Benjamin L. Edelman, and Sharon Qian, and Aditya Ramesh for comments on an early draft of this work. This work supported in part by NSF grant CAREER CCF 1452961, BSF grant 2014389, NSF US- ICCS proposal 1540428, a Google Research award, a Facebook research award, a Simons Investiga- tor Award, a Simons Investigator Fellowship, and NSF Awards CCF 1715187, CCF 1565264, CCF 1301976, IIS 1409097, and CNS 1618026. Y.B. would like to thank the MIT-IBM Watson AI Lab for contributing computational resources for experiments. # REFERENCES Madhu S Advani and Andrew M Saxe. High-dimensional dynamics of generalization error in neural networks. arXiv preprint arXiv:1710.03667, 2017. Peter L Bartlett, Philip M Long, G´abor Lugosi, and Alexander Tsigler. Benign overfitting in linear regression. arXiv preprint arXiv:1906.11300, 2019. Mikhail Belkin, Daniel Hsu, Siyuan Ma, and Soumik Mandal. 
Reconciling modern machine learning and the bias-variance trade-off. arXiv preprint arXiv:1812.11118, 2018. 10 Mikhail Belkin, Daniel Hsu, and Ji Xu. Two models of double descent for weak features. arXiv preprint arXiv:1903.07571, 2019. Koby Bibas, Yaniv Fogel, and Meir Feder. A new look at an old problem: A universal learning approach to linear regression. arXiv preprint arXiv:1905.04708, 2019. Mauro Cettolo, Christian Girardi, and Marcello Federico. Wit3: Web inventory of transcribed and translated talks. In Proceedings of the 16th Conference of the European Association for Machine Translation (EAMT), pp. 261–268, Trento, Italy, May 2012. Mario Geiger, Arthur Jacot, Stefano Spigler, Franck Gabriel, Levent Sagun, St´ephane d’Ascoli, Giulio Biroli, Cl´ement Hongler, and Matthieu Wyart. Scaling description of generalization with number of parameters in deep learning. arXiv preprint arXiv:1901.01608, 2019a. Mario Geiger, Stefano Spigler, St´ephane d’Ascoli, Levent Sagun, Marco Baity-Jesi, Giulio Biroli, and Matthieu Wyart. Jamming transition as a paradigm to understand the loss landscape of deep neural networks. Physical Review E, 100(1):012115, 2019b. Ian J Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572, 2014. Trevor Hastie, Robert Tibshirani, Jerome Friedman, and James Franklin. The elements of statistical learning: data mining, inference and prediction. The Mathematical Intelligencer, 27(2):83–85, 2005. Trevor Hastie, Andrea Montanari, Saharon Rosset, and Ryan J Tibshirani. Surprises in high- dimensional ridgeless least squares interpolation. arXiv preprint arXiv:1903.08560, 2019. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Identity mappings in deep residual networks. In European conference on computer vision, pp. 630–645. Springer, 2016. Yanping Huang, Yonglong Cheng, Dehao Chen, HyoukJoong Lee, Jiquan Ngiam, Quoc V. Le, and Zhifeng Chen. Gpipe: Efficient training of giant neural networks using pipeline parallelism. CoRR, abs/1811.06965, 2018. URL http://arxiv.org/abs/1811.06965. Alex Krizhevsky. Learning multiple layers of features from tiny images. Technical report, 2009. Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convo- lutional neural networks. In Advances in neural information processing systems, pp. 1097–1105, 2012. Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. Towards deep learning models resistant to adversarial attacks. arXiv preprint arXiv:1706.06083, 2017. Song Mei and Andrea Montanari. The generalization error of random features regression: Precise asymptotics and double descent curve. arXiv preprint arXiv:1908.05355, 2019. Partha P. Mitra. Understanding overfitting peaks in generalization error: Analytical risk curves for l2 and l1 penalized interpolation. ArXiv, abs/1906.03667, 2019. Vidya Muthukumar, Kailas Vodrahalli, and Anant Sahai. Harmless interpolation of noisy data in regression. arXiv preprint arXiv:1903.09139, 2019. Preetum Nakkiran, Gal Kaplun, Dimitris Kalimeris, Tristan Yang, Benjamin L Edelman, Fred Zhang, and Boaz Barak. Sgd on neural networks learns functions of increasing complexity. arXiv preprint arXiv:1905.11604, 2019. Manfred Opper. Statistical mechanics of learning: Generalization. The Handbook of Brain Theory and Neural Networks, 922-925., 1995. Manfred Opper. Learning to generalize. Frontiers of Life, 3(part 2), pp.763-775., 2001. 
Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, and Michael Auli. fairseq: A fast, extensible toolkit for sequence modeling. In Proceedings of NAACL-HLT 2019: Demonstrations, 2019.

David Page. How to train your resnet. https://myrtle.ai/how-to-train-your-resnet-4-architecture/, 2018.

Adam Paszke, Sam Gross, Soumith Chintala, Gregory Chanan, Edward Yang, Zachary DeVito, Zeming Lin, Alban Desmaison, Luca Antiga, and Adam Lerer. Automatic differentiation in PyTorch. In NeurIPS Autodiff Workshop, 2017.

Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. Language models are unsupervised multitask learners. 2019.

Ali Rahimi and Benjamin Recht. Random features for large-scale kernel machines. In Advances in neural information processing systems, pp. 1177–1184, 2008.

Rico Sennrich, Barry Haddow, and Alexandra Birch. Neural machine translation of rare words with subword units. ArXiv, abs/1508.07909, 2015.

Stefano Spigler, Mario Geiger, Stéphane d’Ascoli, Levent Sagun, Giulio Biroli, and Matthieu Wyart. A jamming transition from under- to over-parametrization affects loss landscape and generalization. arXiv preprint arXiv:1810.09665, 2018.

Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. Going deeper with convolutions. In Computer Vision and Pattern Recognition (CVPR), 2015. URL http://arxiv.org/abs/1409.4842.

Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. CoRR, abs/1706.03762, 2017.

Chiyuan Zhang, Samy Bengio, Moritz Hardt, Benjamin Recht, and Oriol Vinyals. Understanding deep learning requires rethinking generalization. ICLR, abs/1611.03530, 2016.

# A SUMMARY TABLE OF EXPERIMENTAL RESULTS

| Dataset | Architecture | Opt. | Aug. | %Noise | Model-wise double descent | Epoch-wise double descent | Figure(s) |
|---|---|---|---|---|---|---|---|
| CIFAR-10 | CNN | SGD | ✓ | 0 | ✗ | ✗ | |
| | | | ✓ | 10 | ✓ | ✓ | |
| | | | ✓ | 20 | ✓ | ✓ | |
| | | | | 0 | ✗ | ✗ | |
| | | | | 10 | ✓ | ✓ | |
| | | | | 20 | ✓ | ✓ | |
| | | SGD+w.d. | ✓ | 20 | ✓ | ✓ | |
| | | Adam | | 0 | ✓ | – | |
| | ResNet | Adam | ✓ | 0 | ✗ | ✗ | |
| | | | ✓ | 5 | ✓ | – | |
| | | | ✓ | 10 | ✓ | ✓ | |
| | | | ✓ | 15 | ✓ | ✓ | |
| | | | ✓ | 20 | ✓ | ✓ | |
| | | Various | ✓ | 20 | – | ✓ | |
| CIFAR-10 (subsampled) | CNN | SGD | ✓ | 10 | ✓ | – | |
| | CNN | SGD | ✓ | 20 | ✓ | – | |
| CIFAR-10 (adversarial) | ResNet | SGD | | 0 | Robust err. | – | |
| CIFAR-100 | ResNet | Adam | ✓ | 0 | ✓ | ✗ | |
| | | | ✓ | 10 | ✓ | ✓ | |
| | | | ✓ | 20 | ✓ | ✓ | |
| | CNN | SGD | | 0 | ✓ | ✗ | |
| IWSLT ’14 de-en | Transformer | Adam | | 0 | ✓ | ✗ | |
| IWSLT ’14 de-en (subsampled) | Transformer | Adam | | 0 | ✓ | ✗ | |
| WMT ’14 en-fr | Transformer | Adam | | 0 | ✓ | ✗ | |

# B APPENDIX: EXPERIMENTAL DETAILS

B.1 MODELS

We use the following families of architectures. The PyTorch (Paszke et al., 2017) specification of our ResNets and CNNs is available at https://gitlab.com/harvard-machine-learning/double-descent/tree/master.

ResNets. We define a family of ResNet18s of increasing size as follows. We follow the Preactivation ResNet18 architecture of He et al. (2016), using 4 ResNet blocks, each consisting of two BatchNorm-ReLU-Convolution layers. The layer widths for the 4 blocks are [k, 2k, 4k, 8k] for varying k ∈ N and the strides are [1, 2, 2, 2]. The standard ResNet18 corresponds to k = 64 convolutional channels in the first layer. The scaling of model size with k is shown in Figure 13b. Our implementation is adapted from https://github.com/kuangliu/pytorch-cifar.

Standard CNNs. We consider a simple family of 5-layer CNNs, with four Conv-BatchNorm-ReLU-MaxPool layers and a fully-connected output layer. We scale the four convolutional layer widths as [k, 2k, 4k, 8k]. The MaxPool kernel sizes are [1, 2, 2, 8]. For all the convolution layers, the kernel size = 3, stride = 1 and padding = 1. This architecture is based on the “backbone” architecture from Page (2018). For k = 64, this CNN has 1,558,026 parameters and can reach > 90% test accuracy on CIFAR-10 (Krizhevsky, 2009) with data-augmentation. The scaling of model size with k is shown in Figure 13a.
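To make the width scaling concrete, the following is a minimal PyTorch sketch of this 5-layer CNN family; the function name and the hard-coded CIFAR-10 input/output shapes are our own illustrative choices rather than the authors' released code.

```python
# Minimal sketch (not the authors' released code) of the width-scaled 5-layer CNN:
# four Conv-BatchNorm-ReLU-MaxPool blocks with widths [k, 2k, 4k, 8k], then a linear head.
import torch
import torch.nn as nn

def make_cnn(k: int = 64, num_classes: int = 10) -> nn.Sequential:
    widths = [k, 2 * k, 4 * k, 8 * k]
    pools = [1, 2, 2, 8]  # MaxPool kernel sizes; a 32x32 input becomes 1x1 after the four blocks
    layers, in_ch = [], 3
    for width, pool in zip(widths, pools):
        layers += [
            nn.Conv2d(in_ch, width, kernel_size=3, stride=1, padding=1),
            nn.BatchNorm2d(width),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(pool),
        ]
        in_ch = width
    layers += [nn.Flatten(), nn.Linear(widths[-1], num_classes)]
    return nn.Sequential(*layers)

if __name__ == "__main__":
    model = make_cnn(k=64)
    print(sum(p.numel() for p in model.parameters()))  # 1558026 parameters at k = 64
    print(model(torch.randn(2, 3, 32, 32)).shape)      # torch.Size([2, 10])
```

At k = 64 this sketch reproduces the parameter count stated above, which suggests the block structure matches the description.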
Transformers. We consider the encoder-decoder Transformer model from Vaswani et al. (2017) with 6 layers and 8 attention heads per layer, as implemented by fairseq (Ott et al., 2019). We scale the size of the network by modifying the embedding dimension (dmodel), and scale the width of the fully-connected layers proportionally (dff = 4dmodel). We train with 10% label smoothing and no drop-out, for 80K gradient steps.

(Panels: (a) 5-layer CNNs: Parameters vs. Width; (b) ResNet18s: Parameters vs. Width; (c) Transformers: Parameters vs. Embedding Dimension.)

Figure 13: Scaling of model size with our parameterization of width & embedding dimension.

B.2 IMAGE CLASSIFICATION: EXPERIMENTAL SETUP

We describe the details of training for CNNs and ResNets below.

Loss function: Unless stated otherwise, we use the cross-entropy loss for all the experiments.

Data-augmentation: We apply RandomCrop(32, padding=4) and RandomHorizontalFlip. In experiments with added label noise, all augmentations of a given training sample are given the same label.

Regularization: No explicit regularization like weight decay or dropout was applied unless explicitly stated.

Initialization: We use the default initialization provided by PyTorch for all the layers.

Optimization:

• Adam: Unless specified otherwise, the learning rate was set constant to 1e−4 and all other parameters were set to their default PyTorch values.

• SGD: Unless specified otherwise, the inverse-square root learning rate schedule (defined below) was used with initial learning rate γ0 = 0.1 and updates every L = 512 gradient steps. No momentum was used.

We found our results are robust to various other natural choices of optimizers and learning rate schedule. We used the above settings because (1) they optimize well, and (2) they do not require experiment-specific hyperparameter tuning, and allow us to use the same optimization across many experiments.

Batch size: All experiments use a batch size of 128.

Learning rate schedule descriptions:

• Inverse-square root (γ0, L): At gradient step t, the learning rate is set to γ(t) := γ0 / √(1 + ⌊t/L⌋). We set the learning rate with respect to the number of gradient steps, and not epochs, in order to allow comparison between experiments with varying train-set sizes (see the sketch after this list).

• Dynamic drop (γ0, drop, patience): Starts with an initial learning rate of γ0 and drops by a factor of 'drop' if the training loss has remained constant or become worse for 'patience' number of gradient steps.
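As a concrete illustration, here is a minimal Python sketch of the two schedules described above; the function names, the drop-by-division convention, and the example hyperparameter values are ours, not the authors' training code.

```python
import math

def inverse_sqrt_lr(t: int, gamma0: float = 0.1, L: int = 512) -> float:
    """Inverse-square root schedule: gamma(t) = gamma0 / sqrt(1 + floor(t / L))."""
    return gamma0 / math.sqrt(1 + t // L)

def dynamic_drop_lr(train_losses, gamma0: float = 0.1, drop: float = 10.0, patience: int = 500) -> float:
    """Dynamic drop: start at gamma0 and divide by `drop` whenever the training loss has not
    improved for `patience` consecutive gradient steps (illustrative logic, ours)."""
    lr, best, since_best = gamma0, float("inf"), 0
    for loss in train_losses:
        if loss < best:
            best, since_best = loss, 0
        else:
            since_best += 1
            if since_best >= patience:
                lr, since_best = lr / drop, 0
    return lr

# Example: the inverse-square root learning rate after 5120 gradient steps with L = 512.
print(inverse_sqrt_lr(5120))  # 0.1 / sqrt(11) ~= 0.0302
```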
B.3 NEURAL MACHINE TRANSLATION: EXPERIMENTAL SETUP

Here we describe the experimental setup for the neural machine translation experiments.

Training procedure. In this setting, the distribution 𝒟 consists of triples (x, y, i) with x ∈ V*_src and y ∈ V*_tgt, where V_src and V_tgt are the source and target vocabularies, the string x is a sentence in the source language, y is its translation in the target language, and i is the index of the token to be predicted by the model. We assume that i | x, y is distributed uniformly on {0, . . . , |y|}. A standard probabilistic model defines an autoregressive factorization of the likelihood:

p_M(y | x) = ∏_{i=1}^{|y|} p_M(y_i | y_{<i}, x).

Given a set of training samples S, we define

Error_S(M) = (1/|S|) Σ_{(x,y,i)∈S} −log p_M(y_i | y_{<i}, x).

In practice, S is not constructed from independent samples from 𝒟, but rather by first sampling (x, y) and then including all (x, y, 0), . . . , (x, y, |y|) in S.

For training transformers, we replicate the optimization procedure specified in Vaswani et al. (2017) section 5.3, where the learning rate schedule consists of a “warmup” phase with linearly increasing learning rate followed by a phase with inverse square-root decay. We preprocess the data using byte pair encoding (BPE) as described in Sennrich et al. (2015). We use the implementation provided by fairseq (https://github.com/pytorch/fairseq).

Datasets. The IWSLT ’14 German to English dataset contains TED Talks as described in Cettolo et al. (2012). The WMT ’14 English to French dataset is taken from http://www.statmt.org/wmt14/translation-task.html.

B.4 PER-SECTION EXPERIMENTAL DETAILS

Here we provide full details for experiments in the body, when not otherwise provided.

Introduction: Experimental Details. Figure 1: All models were trained using Adam with learning rate 0.0001 for 4K epochs. Plotting means and standard deviations for 5 trials, with random network initialization.

Model-wise Double Descent: Experimental Details. Figure 7: Plotting means and standard deviations for 5 trials, with random network initialization.

Sample-wise Nonmonotonicity: Experimental Details. Figure 11a: All models are trained with SGD and data-augmentation for 500K epochs. Bottom: Means and standard deviations from 5 trials with random initialization, and random subsampling of the train set.

# C EXTENDED DISCUSSION OF RELATED WORK

Belkin et al. (2018): This paper proposed, in very general terms, that the apparent contradiction between traditional notions of the bias-variance trade-off and empirically successful practices in deep learning can be reconciled under a double-descent curve — as model complexity increases, the test error follows the traditional “U-shaped curve”, but beyond the point of interpolation, the error starts to decrease. This work provides empirical evidence for the double-descent curve with fully connected networks trained on subsets of MNIST, CIFAR10, SVHN and TIMIT datasets. They use the ℓ2 loss for their experiments. They demonstrate that neural networks are not an aberration in this regard — double descent is a general phenomenon observed also in linear regression with random features and random forests.

Theoretical works on linear least squares regression: A variety of papers have attempted to theoretically analyze this behavior in restricted settings, particularly the case of least squares regression under various assumptions on the training data, feature spaces and regularization method.

1. Advani & Saxe (2017); Hastie et al. (2019) both consider the linear regression problem stated above and analyze the generalization behavior in the asymptotic limit N, D → ∞ using random matrix theory. Hastie et al. (2019) highlight that when the model is mis-specified, the minimum of the training error can occur for over-parameterized models.

2. Belkin et al. (2019) analyze linear least squares regression for two data models, where the input data is sampled from a Gaussian and from a Fourier series model for functions on a circle. They provide a finite-sample analysis for these two cases.

3. Bartlett et al. (2019) provide generalization bounds for the minimum ℓ2-norm interpolant for Gaussian features.
4. Muthukumar et al. (2019) characterize the fundamental limit of any interpolating solution in the presence of noise and provide some interesting Fourier-theoretic interpretations.

5. Mei & Montanari (2019): This work provides an asymptotic analysis for ridge regression over random features.

Similar double descent behavior was investigated in Opper (1995; 2001).

Geiger et al. (2019b) showed that deep fully connected networks trained on the MNIST dataset with hinge loss exhibit a “jamming transition” when the number of parameters exceeds a threshold that allows training to near-zero train loss. Geiger et al. (2019a) provide further experiments on CIFAR-10 with a convolutional network. They also highlight interesting behavior with ensembling around the critical regime, which is consistent with our informal intuitions in Section 5 and our experiments in Figures 28, 29. Advani & Saxe (2017); Geiger et al. (2019b;a) also point out that double descent is not observed when optimal early-stopping is used.

# D RANDOM FEATURES: A CASE STUDY

(Panels: Samples vs. Features heatmaps for RFF — train loss and test error as functions of the number of samples and the embedding dimension.)

Figure 14: Random Fourier Features on the Fashion MNIST dataset. The setting is equivalent to a two-layer neural network with e^{-ix} activation, with a randomly-initialized first layer that is fixed throughout training. The second layer is trained using gradient flow.

In this section, for the sake of completeness, we show that both the model- and sample-wise double descent phenomena are not unique to deep neural networks — they exist even in the setting of Random Fourier Features of Rahimi & Recht (2008). This setting is equivalent to a two-layer neural network with e^{-ix} activation. The first layer is initialized with a N(0, 1/d) Gaussian distribution and then fixed throughout training. The width (or embedding dimension) d of the first layer parameterizes the model size. The second layer is initialized with 0s and trained with MSE loss.

Figure 14 shows the grid of test error as a function of both the number of samples n and the model size d. Note that in this setting EMC = d (the embedding dimension). As a result, as demonstrated in the figure, the peak follows the path of n = d. Both the model-wise and sample-wise (see Figure 15) double descent phenomena are captured, by horizontally and vertically crossing the grid, respectively.

(Panel: test error vs. number of samples.)

Figure 15: Sample-wise double-descent slice for Random Fourier Features on the Fashion MNIST dataset. In this figure the embedding dimension (number of random features) is 1000.
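To make the setup concrete, below is a minimal NumPy sketch of this random-features experiment. The closed-form minimum-norm least-squares fit stands in for the gradient-flow training of the second layer described above, and the function names and the synthetic data are our own illustrative choices.

```python
# Minimal sketch (ours, not the authors' code) of random Fourier features with a
# minimum-norm least-squares fit of the second layer (the limit of gradient flow
# from zero initialization under MSE loss).
import numpy as np

def rff_features(X: np.ndarray, d: int, rng: np.random.Generator) -> np.ndarray:
    """First layer: fixed random weights with N(0, 1/d) entries (d = embedding dimension,
    as in the description above); features are exp(-i <x, w>)."""
    W = rng.normal(0.0, np.sqrt(1.0 / d), size=(X.shape[1], d))
    return np.exp(-1j * X @ W)

def fit_second_layer(Phi: np.ndarray, y: np.ndarray) -> np.ndarray:
    # lstsq returns the minimum-norm solution when the system is under-determined,
    # i.e. when the embedding dimension d exceeds the number of samples n.
    return np.linalg.lstsq(Phi, y.astype(complex), rcond=None)[0]

rng = np.random.default_rng(0)
n, dim = 200, 20
X, y = rng.normal(size=(n, dim)), rng.normal(size=n)

for d in [50, 200, 800]:  # under-, critically-, and over-parameterized
    Phi = rff_features(X, d, rng)
    beta = fit_second_layer(Phi, y)
    train_mse = np.mean(np.abs(Phi @ beta - y) ** 2)
    print(d, round(float(train_mse), 4))  # train error reaches ~0 once d >= n
```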
# E APPENDIX: ADDITIONAL EXPERIMENTS

E.1 EPOCH-WISE DOUBLE DESCENT: ADDITIONAL RESULTS

Here, we provide a rigorous evaluation of epoch-wise double descent for a variety of optimizers and learning rate schedules. We train ResNet18 on CIFAR-10 with data-augmentation and 20% label noise with three different optimizers — Adam, SGD, and SGD + Momentum (momentum set to 0.9) — and three different learning rate schedules — constant, inverse-square root, and dynamic drop — for different values of the initial learning rate. We observe that double descent occurs reliably for all optimizers and learning rate schedules, and the peak of the double descent curve shifts with the interpolation point.

(Panels (a)-(c): constant, inverse-square root, and dynamic learning rate schedules; train and test error vs. iterations for several initial learning rates.)

Figure 16: Epoch-wise double descent for ResNet18 trained with Adam and multiple learning rate schedules.

A practical recommendation resulting from epoch-wise double descent is that stopping the training when the test error starts to increase may not always be the best strategy. In some cases, the test error may decrease again after reaching a maximum, and the final value may be lower than the minimum earlier in training.

(Panels (a)-(c): constant, inverse-square root, and dynamic learning rate schedules; train and test error vs. iterations for several initial learning rates.)

Figure 17: Epoch-wise double descent for ResNet18 trained with SGD and multiple learning rate schedules.

(Panels (a)-(c): constant, inverse-square root, and dynamic learning rate schedules; train and test error vs. iterations for several initial learning rates.)

Figure 18: Epoch-wise double descent for ResNet18 trained with SGD+Momentum and multiple learning rate schedules.

E.2 MODEL-WISE DOUBLE DESCENT: ADDITIONAL RESULTS

E.2.1 CLEAN SETTINGS WITH MODEL-WISE DOUBLE DESCENT

CIFAR-100, ResNet18

(Panels: train and test error as heatmaps over ResNet18 width parameter and epochs; test error dynamics for ResNet18 on CIFAR-100 with no noise, with the optimal early stopping and final train loss curves marked.)

Figure 19: Top: Train and test performance as a function of both model size and train epochs. Bottom: Test error dynamics of the same model (ResNet18, on CIFAR-100 with no label noise, data-augmentation and Adam optimizer trained for 4k epochs with learning rate 0.0001). Note that even with optimal early stopping this setting exhibits double descent.

CIFAR-100, Standard CNN

(Panels: train and test error as heatmaps over CNN width parameter and epochs; test error dynamics for CNNs on CIFAR-100 with no noise, with the optimal early stopping curve marked.)

Figure 20: Top: Train and test performance as a function of both model size and train epochs. Bottom: Test error dynamics of the same models. 5-layer CNNs, CIFAR-100 with no label noise, no data-augmentation, trained with SGD for 1e6 steps. Same experiment as Figure 7.
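The "Optimal Early Stopping" curves shown in Figures 19 and 20 can be read directly off a grid of test errors indexed by model size and training time. The following is a minimal NumPy sketch of that computation on toy numbers of our own; it is not the authors' plotting code.

```python
# Minimal sketch (ours) of reading an "optimal early stopping" curve off a grid of
# test errors indexed by (model size, checkpoint), as in the dynamics plots of Figs. 19-20.
import numpy as np

def optimal_early_stopping_curve(test_error: np.ndarray) -> np.ndarray:
    """test_error: array of shape (num_model_sizes, num_checkpoints).
    Returns, for each model size, the lowest test error over all training checkpoints."""
    return test_error.min(axis=1)

def final_checkpoint_curve(test_error: np.ndarray) -> np.ndarray:
    """Test error at the end of training (the curve in which double descent is observed)."""
    return test_error[:, -1]

# Toy grid: 4 model sizes x 3 checkpoints.
grid = np.array([[0.60, 0.50, 0.48],
                 [0.55, 0.40, 0.45],
                 [0.50, 0.35, 0.52],
                 [0.45, 0.30, 0.33]])
print(optimal_early_stopping_curve(grid))  # [0.48 0.4  0.35 0.3 ]
print(final_checkpoint_curve(grid))        # [0.48 0.45 0.52 0.33]
```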
E.2.2 WEIGHT DECAY

(Panels: test error dynamics over epochs and CNN width parameter for no regularization and weight decay 5e-4, with optimal early stopping marked; test error, train error, and test loss vs. width for no regularization, weight decay 1e-4, and weight decay 5e-4.)

Figure 21: Left: Test error dynamics with weight decay of 5e-4 (bottom left) and without weight decay (top left). Right: Test and train error and test loss for models with varying amounts of weight decay. All models are 5-layer CNNs on CIFAR-10 with 10% label noise, trained with data-augmentation and SGD for 500K steps.

Here, we now study the effect of varying the level of regularization on test error. We train ResNet18 on CIFAR-10 with data-augmentation and 20% label noise, for weight decay coefficients λ ranging from 0 to 0.1. We train the networks using SGD with the inverse-square root learning rate schedule. The figure below shows a picture qualitatively very similar to that observed for model-wise double descent, wherein “model complexity” is now controlled by the regularization parameter. This confirms our generalized double descent hypothesis along yet another axis of Effective Model Complexity.

(Panels (a) Test Error and (b) Train Loss: heatmaps over epochs and 1/λ.)

Figure 22: Generalized double descent for weight decay. We found that using the same initial learning rate for all weight decay values led to training instabilities. This resulted in some noise in the Test Error (Weight Decay × Epochs) plot shown above.

E.2.3 EARLY STOPPING DOES NOT EXHIBIT DOUBLE DESCENT

Language models

(Panels: cross-entropy test loss vs. Transformer embedding dimension d_model over training epochs.)

Figure 23: Model-wise test error dynamics for a subsampled IWSLT’14 dataset. Left: 4k samples. Right: 18k samples. Note that with optimal early-stopping, more samples is always better.

(Panels: De-En and En-Fr cross-entropy test loss vs. Transformer embedding dimension d_model.)

Figure 24: Model-wise test error dynamics for the IWSLT’14 de-en and subsampled WMT’14 en-fr datasets. Left: IWSLT’14. Right: subsampled (200k samples) WMT’14. Note that with optimal early-stopping, the test error is much lower for this task.

CIFAR-10, 10% noise, SGD

(Panels: train and test error as heatmaps over CNN width parameter and epochs; test error dynamics for CNNs on CIFAR-10 with 10% noise.)

Figure 25: Top: Train and test performance as a function of both model size and train epochs.
Bottom: Test error dynamics of the same model (CNN, on CIFAR-10 with 10% label noise, data-augmentation and SGD optimizer with learning rate ∝ 1/√T).

E.2.4 TRAINING PROCEDURE

(Panels: robust test error, standard test error, and train error vs. ResNet18 width parameter for ε = 0.5 and ε = 1.0.)

Figure 26: Model-wise double descent for adversarial training of ResNet18s on CIFAR-10 (subsampled to 25k train samples) with no label noise. We train for L2 robustness of radius ε = 0.5 and ε = 1.0, using 10-step PGD (Goodfellow et al. (2014); Madry et al. (2017)). Trained using SGD (batch size 128) with learning rate 0.1 for 400 epochs, then 0.01 for 400 epochs.

(Panels: test error vs. CNN width parameter for 0%, 10%, and 20% label noise.)

Figure 27

E.3 ENSEMBLING

(Panel: test error vs. ResNet18 width parameter for standard models and the 5-model ensemble.)

Figure 28: Effect of Ensembling (ResNets, 15% label noise). Test error of an ensemble of 5 models, compared to the base models. The ensembled classifier is determined by plurality vote over the 5 base models. Note that ensembling helps most around the critical regime. All models are ResNet18s trained on CIFAR-10 with 15% label noise, using Adam for 4K epochs (same setting as Figure 1). Test error is measured against the original (not noisy) test set, and each model in the ensemble is trained using a train set with independently-sampled 15% label noise.

(Panel: test error vs. CNN width parameter for standard models and the 5-model ensemble.)

Figure 29: Effect of Ensembling (CNNs, no label noise). Test error of an ensemble of 5 models, compared to the base models. All models are 5-layer CNNs trained on CIFAR-10 with no label noise, using SGD and no data augmentation (same setting as Figure 7).
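As a small illustration of the ensembling procedure described in the captions of Figures 28 and 29, the sketch below implements a plurality vote over the predictions of several independently trained classifiers; the function name and the toy predictions are our own.

```python
# Minimal sketch (ours) of plurality-vote ensembling over k independently trained models.
import numpy as np

def plurality_vote(predictions: np.ndarray) -> np.ndarray:
    """predictions: integer class labels of shape (num_models, num_examples).
    Returns the most common label per example (ties broken by the smallest label)."""
    num_classes = int(predictions.max()) + 1
    votes = np.apply_along_axis(np.bincount, 0, predictions, minlength=num_classes)
    return votes.argmax(axis=0)

# Toy example: 5 models, 4 test examples.
preds = np.array([
    [0, 1, 2, 3],
    [0, 1, 2, 0],
    [1, 1, 2, 3],
    [0, 2, 2, 3],
    [0, 1, 1, 3],
])
print(plurality_vote(preds))  # [0 1 2 3]
```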
{ "id": "1810.09665" }
1912.02178
Fantastic Generalization Measures and Where to Find Them
Generalization of deep networks has been of great interest in recent years, resulting in a number of theoretically and empirically motivated complexity measures. However, most papers proposing such measures study only a small set of models, leaving open the question of whether the conclusion drawn from those experiments would remain valid in other settings. We present the first large scale study of generalization in deep networks. We investigate more then 40 complexity measures taken from both theoretical bounds and empirical studies. We train over 10,000 convolutional networks by systematically varying commonly used hyperparameters. Hoping to uncover potentially causal relationships between each measure and generalization, we analyze carefully controlled experiments and show surprising failures of some measures as well as promising measures for further research.
http://arxiv.org/pdf/1912.02178
Yiding Jiang, Behnam Neyshabur, Hossein Mobahi, Dilip Krishnan, Samy Bengio
cs.LG, stat.ML
null
null
cs.LG
20191204
20191204
9 1 0 2 c e D 4 ] G L . s c [ 1 v 8 7 1 2 0 . 2 1 9 1 : v i X r a # Fantastic Generalization Measures and Where to Find Them # Yiding Jiang∗, Behnam Neyshabur∗, Hossein Mobahi Dilip Krishnan, Samy Bengio Google {ydjiang,neyshabur,hmobahi,dilipkay,bengio}@google.com # Abstract Generalization of deep networks has been of great interest in recent years, resulting in a number of theoretically and empirically motivated complexity measures. However, most pa- pers proposing such measures study only a small set of models, leaving open the question of whether the conclusion drawn from those experiments would remain valid in other settings. We present the first large scale study of generalization in deep networks. We investigate more then 40 complexity measures taken from both theoretical bounds and empirical studies. We train over 10,000 convolutional networks by systematically varying commonly used hyperparameters. Hoping to uncover potentially causal relationships between each measure and generalization, we analyze carefully controlled experiments and show surprising failures of some measures as well as promising measures for further research. # Introduction Deep neural networks have seen tremendous success in a number of applications, but why (and how well) these models generalize is still a mystery (Neyshabur et al., 2014; Zhang et al., 2016; Recht et al., 2019). It is crucial to better understand the reason behind the generalization of modern deep learning models; such an understanding has multiple benefits, including providing guarantees for safety-critical scenarios and the design of better models. A number of papers have attempted to understand the generalization phenomenon in deep learn- ing models from a theoretical perspective e.g. (Neyshabur et al., 2015b; Bartlett et al., 2017; Neyshabur et al., 2018a; Golowich et al., 2017; Arora et al., 2018; Nagarajan and Kolter, 2019a; Wei and Ma, 2019a; Long and Sedghi, 2019). The most direct and principled approach for studying generalization in deep learning is to prove a generalization bound which is typically an upper bound on the test error based on some quantity that can be calculated on the training set. Un- fortunately, finding tight bounds has proven to be an arduous undertaking. While encouragingly Dziugaite and Roy (2017) showed that PAC-Bayesian bounds can be optimized to achieve a reason- ably tight generalization bound, current bounds are still not tight enough to accurately capture the generalization behavior. Others have proposed more direct empirical ways to characterize general- ization of deep networks without attempting to deriving bounds (Keskar et al., 2016; Liang et al., 2017). However, as pointed by Dziugaite and Roy (2017), empirical correlation does not necessarily translate to a casual relationship between a measure and generalization. A core component in (theoretical or empirical) analysis of generalization is the notion of com- plexity measure; a quantity that monotonically relates to some aspect of generalization. More specifically, lower complexity should often imply smaller generalization gap. A complexity measure may depend on the properties of the trained model, optimizer, and possibly training data, but should not have access to a validation set. Theoretically motivated complexity measures such as VC-dimension, norm of parameters, etc., are often featured as the major components of generalization bounds, where the monotonic relation- ship between the measures and generalization is mathematically established. 
∗Contributed equally.

In contrast, empirically motivated complexity measures such as sharpness (Keskar et al., 2016) are justified by experimentation and observation. In this work, we do not need to distinguish between theoretically vs. empirically motivated measures, and simply refer to both as complexity measures.

Despite the prominent role of complexity measures in studying generalization, the empirical evaluation of these measures is usually limited to a few models, often on toy problems. A measure can only be considered reliable as a predictor of the generalization gap if it is tested extensively on many models at a realistic problem size. To this end, we carefully selected a wide range of complexity measures from the literature. Some of the measures are motivated by generalization bounds such as those related to VC-dimension, norm or margin based bounds, and PAC-Bayesian bounds. We further selected a variety of empirical measures such as sharpness (Keskar et al., 2016), Fisher-Rao norm (Liang et al., 2017) and path norms (Neyshabur et al., 2017).

In this study, we trained more than 10,000 models over two image classification datasets, namely, CIFAR-10 (Krizhevsky et al., 2014) and Street View House Numbers (SVHN) (Netzer et al., 2011). In order to create a wide range of generalization behaviors, we carefully varied hyperparameters that are believed to influence generalization. We also selected multiple optimization algorithms and looked at different stopping criteria for training convergence. Details of all our measures and hyperparameter selections are provided in Appendix C. Training under all combinations of hyperparameters and optimization resulted in a large pool of models. For any such model, we considered 40 complexity measures. The key findings that arise from our large scale study are summarized below:

1. It is easy for some complexity measures to capture spurious correlations that do not reflect more causal insights about generalization; to mitigate this problem, we propose a more rigorous approach for studying them.

2. Many norm-based measures not only perform poorly, but negatively correlate with generalization specifically when the optimization procedure injects some stochasticity. In particular, the generalization bound based on the product of spectral norms of the layers (similar to that of Bartlett et al. (2017)) has very strong negative correlation with generalization.

3. Sharpness-based measures such as PAC-Bayesian bounds (McAllester, 1999) and the sharpness measure proposed by Keskar et al. (2016) perform the best overall and seem to be promising candidates for further research.

4. Measures related to the optimization procedure such as the gradient noise and the speed of the optimization can be predictive of generalization.

Our findings on the relative success of sharpness-based and optimization-based complexity measures for predicting the generalization gap can provoke further study of these measures.

# 1.1 Related Work

The theoretically motivated measures that we consider in this work belong to a few different families: PAC-Bayes (McAllester, 1999; Dziugaite and Roy, 2017; Neyshabur et al., 2017); VC-dimension (Vapnik and Chervonenkis, 1971); and norm-based bounds (Neyshabur et al., 2015b; Bartlett et al., 2017; Neyshabur et al., 2018a).
The empirically motivated measures from prior literature that we consider are based on the sharpness measure (Keskar et al., 2016); the Fisher-Rao measure (Liang et al., 2017); the distance of trained weights from initialization (Nagarajan and Kolter, 2019b); and the path norm (Neyshabur et al., 2015a). Finally, we consider some optimization-based measures based on the speed of the optimization algorithm as motivated by the work of (Hardt et al., 2015) and (Wilson et al., 2017a), and the magnitude of the gradient noise as motivated by the work of (Chaudhari and Soatto, 2018) and (Smith and Le, 2017).

A few papers have explored a large scale study of generalization in deep networks. Neyshabur et al. (2017) perform a small scale study of the generalization of PAC-Bayes, sharpness and a few different norms, and the generalization analysis is restricted to correlation. Jiang et al. (2018) studied the role of margin as a predictor of the generalization gap. However, they used a significantly more restricted set of models (e.g. no depth variations), the experiments were not controlled for potential undesired correlation (e.g. the models can have vastly different training error), and some measures contained parameters that must be learned from the set of models. Novak et al. (2018) conducted a large scale study of neural networks but they only looked at the correlation of a few measures to generalization. In contrast, we study thousands of models, and perform controlled experiments to avoid undesired artificial correlations. Some of our analysis techniques are inspired by Neal (2019), who proposed the idea of studying generalization in deep models via causal graphs, but did not provide any details or empirical results connected to that idea. Our work focuses on measures that can be computed on a single model and compares a large number of bounds and measures across a much wider range of models in a carefully controlled fashion.

# 1.2 Notation

We denote a probability distribution as 𝒫, a set as 𝒜, a tensor as A, a vector as a, and a scalar as a or α. Let 𝒟 denote the data distribution over inputs and their labels, and let κ denote the number of classes. We use ≜ for equality by definition. We denote by S a given dataset, consisting of m i.i.d. tuples {(X₁, y₁), . . . , (X_m, y_m)} drawn from 𝒟, where X_i ∈ 𝒳 is the input data and y_i ∈ {1, . . . , κ} the corresponding class label. We denote a feedforward neural network by f_w : 𝒳 → ℝ^κ, its weight parameters by w, and the number of weights by ω ≜ dim(w). No activation function is applied at the output (i.e. the outputs are logits). Denote the weight tensor of the i-th layer of the network by W_i, so that w = vec(W₁, . . . , W_d), where d is the depth of the network and vec represents the vectorization operator. Furthermore, denote by f_w(X)[j] the j-th output of the function f_w(X). Let ℛ be the set of binary relations, and I : ℛ → {0, 1} be the indicator function that is 1 if its input is true and zero otherwise.

Let L be the 1-0 classification loss over the data distribution 𝒟:

L(f_w) ≜ E_{(X,y)∼𝒟} [ I( f_w(X)[y] ≤ max_{j≠y} f_w(X)[j] ) ],

and let L̂ be the empirical estimate of the 1-0 loss over S:

L̂(f_w) ≜ (1/m) Σ_{i=1}^{m} I( f_w(X_i)[y_i] ≤ max_{j≠y_i} f_w(X_i)[j] ).

We refer to L(f_w) − L̂(f_w) as the generalization gap. For any input X, we define the sample-dependent margin¹ as γ(X) ≜ f_w(X)[y] − max_{j≠y} f_w(X)[j]. Moreover, we define the overall margin γ as the 10th percentile (a robust surrogate for the minimum) of γ(X) over the entire training set S. More notation used for derivations is located in Appendix B.

# 2 Generalization: What is the goal and how to evaluate?
Generalization is arguably the most fundamental and yet mysterious aspect of machine learning. The core question in generalization is what causes the triplet of a model, optimization algorithm, and data properties2, to generalize well beyond the training set. There are many hypotheses concerning this question, but what is the right way to compare these hypotheses? The core component of each hypothesis is complexity measure that monotonically relates to some aspect of generalization. Here we briefly discuss some potential approaches to compare different complexity measures: • Tightness of Generalization Bounds. Proving generalization bounds is very useful to establish the causal relationship between a complexity measure and the generalization error. However, almost all existing bounds are vacuous on current deep learning tasks (combination of models and datasets), and therefore, one cannot rely on their proof as an evidence on the causal relationship between a complexity measure and generalization currently3. • Regularizing the Complexity Measure. One may evaluate a complexity measure by adding it as a regularizer and directly optimizing it, but this could fail due to two reasons. The complexity measure could change the loss landscape in non-trivial ways and make the optimization more difficult. In such cases, if the optimization fails to optimize the measure, no conclusion can be made about the causality. Another, and perhaps more critical, problem is the existence of implicit regularization of the optimization algorithm. This makes it hard to run a controlled experiment since one cannot simply turn off the implicit regularization; therefore, if optimizing a measure does not improve generalization it could be simply due to the fact that it is regularizing the model in the same way as the optimization is regularizing it implicitly. 1This work only concerns with the output margins, but generally margin can be defined at any layer of a deep network as introduced in (Elsayed et al., 2018) and used to establish a generalization bound in, (Wei and Ma, 2019b). 2For example, it is expected that images share certain structures that allows some models (which leverage these biases) to generalize. 3See Dziugaite and Roy (2017) for an example of non-vacuous generalization bound and related discussions. 3 • Correlation with Generalization Evaluating measures based on correlation with general- ization is very useful but it can also provide a misleading picture. To check the correlation, we should vary architectures and optimization algorithms to produce a set of models. If the set is generated in an artificial way and is not representative of the typical setting, the conclusions might be deceiving and might not generalize to typical cases. One such example is training with different portions of random labels which artificially changes the dataset. Another pitfall is drawing conclusion from changing one or two hyper-parameters (e.g changing the width or batch-size and checking if a measure would correlate with generalization). In these cases, the hyper-parameter could be the true cause of both change in the measure and change in the gen- eralization, but the measure itself has no causal relationship with generalization. Therefore, one needs to be very careful with experimental design to avoid unwanted correlations. In this work we focus on the third approach. 
While acknowledging all limitations of a correlation analysis, we try to improve the procedure and capture some of the causal effects as much as possible through careful design of controlled experiments. Further, to evaluate the effectiveness of complexity measures as accurately as possible, we analyze them over sufficiently trained models (if not to completion) with a wide range of variations in hyperparameters. For practical reasons, these models must reach convergence within a reasonable time budget.

# 2.1 Training Models across Hyperparameter Space

In order to create models with different generalization behavior, we consider various hyperparameter types, which are known or believed to influence generalization (e.g. batch size, dropout rate, etc.). Formally, denote each hyperparameter by θ_i, taking values from the set Θ_i, for i = 1, . . . , n, with n denoting the total number of hyperparameter types⁴. For each configuration of hyperparameters θ ≜ (θ₁, θ₂, . . . , θ_n) ∈ Θ, where Θ ≜ Θ₁ × Θ₂ × · · · × Θ_n, we train the architecture until the training loss (cross-entropy value) reaches a given threshold ε. See Appendix A.2 for a discussion on the choice of the stopping criterion. Doing this for each hyperparameter configuration θ ∈ Θ, we obtain a total of |Θ| models. The space Θ reflects our prior knowledge about a reasonable hyperparameter space, both in terms of their types and values. Regarding the latter, one could, for example, create Θ_i by grid sampling of a reasonable number of points within a reasonable range of values for θ_i.

# 2.2 Evaluation Criteria

# 2.2.1 Kendall’s Rank-Correlation Coefficient

One way to evaluate the quality of a complexity measure µ is through ranking. Given a set of models resulting from training with hyperparameters in the set Θ, their associated generalization gaps {g(θ) | θ ∈ Θ}, and their respective values of the measure {µ(θ) | θ ∈ Θ}, our goal is to analyze how consistent a measure (e.g. the ℓ2 norm of network weights) is with the empirically observed generalization. To this end, we construct a set T, where each element of the set is associated with one of the trained models. Each element has the form of a pair: complexity measure µ versus generalization gap g.

T ≜ ∪_{θ∈Θ} {(µ(θ), g(θ))}.   (1)

An ideal complexity measure must be such that, for any pair of trained models, if µ(θ₁) > µ(θ₂), then so is g(θ₁) > g(θ₂). We use Kendall’s rank coefficient τ (Kendall, 1938) to capture to what degree such consistency holds among the elements of T.

τ(T) = (1 / (|T| (|T| − 1))) Σ_{(µ₁,g₁)∈T} Σ_{(µ₂,g₂)∈T∖{(µ₁,g₁)}} sign(µ₁ − µ₂) · sign(g₁ − g₂)   (2)

Note that τ can vary between 1 and −1 and attains these extreme values at perfect agreement (two rankings are the same) and perfect disagreement (one ranking is the reverse of the other) respectively. If complexity and generalization are independent, the coefficient becomes zero.

⁴In our analysis we use n = 7 hyperparameters: batch size, dropout probability, learning rate, network depth, weight decay coefficient, network width, optimizer.
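As an illustration of Eq. (2), the following is a minimal NumPy sketch (ours, not the authors' released code) that computes Kendall's τ from paired complexity-measure and generalization-gap values.

```python
# Minimal sketch (ours) of Kendall's rank-correlation coefficient from Eq. (2).
import numpy as np

def kendall_tau(mu: np.ndarray, g: np.ndarray) -> float:
    """mu, g: arrays of shape (num_models,) holding the complexity measure and the
    generalization gap of each trained model."""
    n = len(mu)
    total = 0.0
    for i in range(n):
        for j in range(n):
            if i != j:
                total += np.sign(mu[i] - mu[j]) * np.sign(g[i] - g[j])
    return total / (n * (n - 1))

# Toy example: a measure that ranks models in the same order as their generalization gap.
mu = np.array([0.1, 0.4, 0.2, 0.9])
g = np.array([0.02, 0.08, 0.05, 0.20])
print(kendall_tau(mu, g))  # 1.0 (perfect agreement)
```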
# 2.2.2 Granulated Kendall’s Coefficient

While Kendall’s correlation coefficient is an effective tool widely used to capture the relationship between two rankings of a set of objects, we found that certain measures can achieve high τ values in a trivial manner — i.e. the measure may strongly correlate with the generalization performance without necessarily capturing the cause of generalization. We will analyze this phenomenon in greater detail in subsequent sections. To mitigate the effect of spurious correlations, we propose a new quantity for reflecting the correlation between measures and generalization based on a more controlled setting.

None of the existing complexity measures is perfect. However, they might have different sensitivity and accuracy w.r.t. different hyperparameters. For example, sharpness may do better than other measures when only a certain hyperparameter (say batch size) changes. To understand such details, in addition to τ(T), we compute τ for consistency within each hyperparameter axis Θ_i, and then average the coefficient across the remaining hyperparameter space. Formally, we define:

m_i = |Θ₁ × · · · × Θ_{i−1} × Θ_{i+1} × · · · × Θ_n|   (3)

ψ_i ≜ (1/m_i) Σ_{θ₁∈Θ₁} · · · Σ_{θ_{i−1}∈Θ_{i−1}} Σ_{θ_{i+1}∈Θ_{i+1}} · · · Σ_{θ_n∈Θ_n} τ( ∪_{θ_i∈Θ_i} {(µ(θ), g(θ))} )   (4)

The inner τ reflects the ranking correlation between the generalization and the complexity measure for a small group of models where the only difference among them is the variation along a single hyperparameter θ_i. We then average the value across all combinations of the other hyperparameter axes. Intuitively, if a measure is good at predicting the effect of hyperparameter θ_i over the model distribution, then its corresponding ψ_i should be high. Finally, we compute the average of ψ_i across all hyperparameter axes, and name it Ψ:

Ψ ≜ (1/n) Σ_{i=1}^{n} ψ_i   (5)

If a measure achieves a high Ψ on a given hyperparameter distribution Θ, then it should achieve high individual ψ across all hyperparameters. A complexity measure that excels at predicting changes in a single hyperparameter (high ψ_i) but fails at the other hyperparameters (low ψ_j for all j ≠ i) will not do well on Ψ. On the other hand, if the measure performs well on Ψ, it means that the measure can reliably rank the generalization for each of the hyperparameter changes.

A thought experiment to illustrate why Ψ captures the causal nature of generalization better than Kendall’s τ is as follows. Suppose there exists a measure that perfectly captures the depth of the network while producing random predictions if two networks have the same depth; this measure would do reasonably well in terms of τ but much worse in terms of Ψ. In the experiments we consider in the following sections, we found that such a measure would achieve overall τ = 0.362 but Ψ = 0.11. We acknowledge that this measure is only a small step towards the difficult problem of capturing the causal relationship between complexity measures and generalization in empirical settings, and we hope this encourages future work in this direction.
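Below is a minimal sketch (ours) of the granulated coefficient in Eqs. (3)-(5). The data layout — a dictionary mapping full hyperparameter configurations to (measure, generalization-gap) pairs — is our own assumption, and scipy's Kendall-τ implementation is used in place of Eq. (2).

```python
# Minimal sketch (ours) of the granulated Kendall coefficient Psi from Eqs. (3)-(5).
from itertools import product
from statistics import mean
from scipy.stats import kendalltau

def granulated_kendall(results, hp_values):
    """results: dict mapping a full hyperparameter tuple theta -> (mu(theta), g(theta)).
    hp_values: list of the value sets Theta_1, ..., Theta_n (one list per hyperparameter type)."""
    psis = []
    for i in range(len(hp_values)):
        other_axes = hp_values[:i] + hp_values[i + 1:]
        taus = []
        for rest in product(*other_axes):  # one small model group per combination (Eq. 4)
            group = [results[rest[:i] + (v,) + rest[i:]] for v in hp_values[i]]
            mus, gs = zip(*group)
            taus.append(kendalltau(mus, gs)[0])
        psis.append(mean(taus))            # psi_i, averaged over the m_i combinations (Eq. 3)
    return mean(psis)                      # Psi (Eq. 5)

# Toy example: 2 hyperparameter types with 2 values each (4 models).
hp_values = [[0, 1], [0, 1]]
results = {(0, 0): (1.0, 0.10), (0, 1): (2.0, 0.20),
           (1, 0): (3.0, 0.15), (1, 1): (4.0, 0.30)}
print(granulated_kendall(results, hp_values))  # 1.0: the measure ranks correctly along every axis
```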
Our approach is inspired by the seminal work on Inductive Causation (IC) Algorithm by Verma and Pearl (1991), which provides a framework for learning a graphical model through conditional independence test. While the IC algorithm traditionally initiates the graph to be fully connected, we will take advantage of our knowledge about generalization and prune edges of the initialized graph to expedite the computations. Namely, we assume that the choice of hyperparameter does not directly explain generalization, but rather it induces changes in some measure µ which can be used to explain generalization. θi θi θi . . . µ . . . µ . . . µ g g g Figure 1: Left: Graph at initialization of IC algorithm. Middle: The ideal graph where the measure µ can directly explain observed generalization. Right: Graph for correlation where µ cannot explain observed generalization. Our primary interest is to establish the existence of an edge between js and g. Suppose there exists a large family of complexity measures and among them there is a true complexity measure that can fully explain generalization. Then to verify the existence of the edge between ys and g, we can perform the conditional independent test by reading the conditional mutual information between p and g given that a set of hyperparameter types S is observed°. For any function ¢: © > R, let Vy : O1 X Og > {41,-1} be as Vy (1, 02) * sign(¢(01) — ¢(62)). Furthermore, let Us be a random variable that correspond to the values of hyperparameters in S. We calculate the conditional mutual information as follows: PV i Vg |Us) ) L(Vu, Vg |Us) = S> v(Us) Ss Ss »(Vus Vo |Us) 08 (sae iS pv, Us) Us VE {£1} Vg € {41} (6) The above removes the unwanted correlation between generalization and complexity measure that is caused by hyperparameter types in set S. Since in our case the conditional mutual information between a complexity measure and generalization is at most equal to the conditional entropy of generalization, we normalize it with the conditional entropy to arrive at a criterion ranging between 0 and 1: # X # X H(Vg | US ) = − p(US ) p(Vg | US ) log(p(Vg | US )) US Vg∈{±1} (7) ˆI(Vµ, Vg | US ) = I(Vµ, Vg | US ) H(Vg | US ) (8) According to the IC algorithm, an edge is kept between two nodes if there exists no subset S of hyperparameter types such that the two nodes are independent, i.e. ˆI(Vµ, Vg | US ) = 0. In our setup, setting S to the set of all hyperparameter types is not possible as both the conditional entropy and conditional mutual information would become zero. Moreover, due to computational reasons, we only look at |S| ≤ 2: K(µ) = min US s.t |S|≤2 ˆI(Vµ, Vg | US ) (9) At a high level, the larger K is for a measure µ, the more likely an edge exists between µ and g, and therefore the more likely µ can explain generalization. For details on the set-up, please refer to Appendix A.5 on how these quantities are estimated. 5For example, if S contains a single hyperparameter type such as the learning rate, then the conditional mutual information is conditioned on learning rate being observed. 6 # 3 Generating a Family of Trained Models We chose 7 common hyperparameter types related to optimization and architecture design, with 3 choices for each hyperparameter. We generated 37 = 2187 models that are trained on the CIFAR- 10 dataset. We analyze these 2187 models in the subsequent sections; however, additional results including repeating the experiments 5 times as well as training the models using SVHN dataset are presented6 in Appendix Section A.6. 
# 3 Generating a Family of Trained Models

We chose 7 common hyperparameter types related to optimization and architecture design, with 3 choices for each hyperparameter. We generated 3^7 = 2187 models that are trained on the CIFAR-10 dataset. We analyze these 2187 models in the subsequent sections; however, additional results, including repeating the experiments 5 times as well as training the models on the SVHN dataset, are presented⁶ in Appendix Section A.6. These additional experiments, which add up to more than 10,000 trained models, suggest that the observations we make here are robust to randomness and, more importantly, capture general behaviors of image classification tasks.

We trained these models to convergence. The convergence criterion is chosen as when the cross-entropy loss reaches the value 0.01. Any model that was not able to achieve this value of cross-entropy⁷ was discarded from further analysis. The latter is different from the DEMOGEN dataset (Jiang et al., 2018), where the models are not trained to the same cross-entropy. Putting the stopping criterion on the training loss rather than the number of epochs is crucial, since otherwise one could simply use the cross-entropy loss value to predict generalization. Please see Appendix Section A.2 for a discussion on the choice of stopping criterion.

To construct a pool of trained models with vastly different generalization behaviors while being able to fit the training set, we covered a wide range of hyperparameters for training. Our base model is inspired by the Network-in-Network (Gao et al., 2011). The hyperparameter categories we test on are: weight decay coefficient (weight decay), width of the layer (width), mini-batch size (batch size), learning rate (learning rate), dropout probability (dropout), depth of the architecture (depth) and the choice of the optimization algorithm (optimizer). We select 3 choices for each hyperparameter (i.e. |Θi| = 3). Please refer to Appendix A.3 for the details on the models, and Appendix A.1 for the reasoning behind the design choices.

Figure 2 shows some summarizing statistics of the models in this study. On the left we show the number of models that achieve above 99% training accuracy for every individual hyperparameter choice. Since we have 3^7 = 2187 models in total, the maximum number of models for each hyperparameter type is 3^(7−1) = 729; the majority of the models in our pool were able to reach this threshold. In the middle we show the distribution of the cross-entropy value over the entire training set. While we want the models to be at exactly 0.01 cross-entropy, in practice it is computationally prohibitive to constantly evaluate the loss over the entire training set; further, to enable reasonable temporal granularity, we estimate the training loss with 100 randomly sampled minibatches. These computational compromises result in a long-tailed distribution of training loss centered at 0.01. As shown in Table 1, even such a minuscule range of cross-entropy differences could lead to positive correlation with generalization, highlighting the importance of training loss as a stopping criterion. On the right, we show the distribution of the generalization gap. We see that while all the models’ training accuracy is above 0.99, there is a wide range of generalization gaps, which is ideal for evaluating complexity measures.

(Panels: number of models with training accuracy > 0.99 per hyperparameter choice; distribution of training cross-entropy; distribution of generalization gap.)

Figure 2: Left: Number of models with training accuracy above 0.99 for each hyperparameter type. Middle: Distribution of training cross-entropy; distribution of training error can be found in Fig. 4.
Right: Distribution of generalization gap. 6All the experiments reported in the main text have been repeated for 5 times. The mean (Table 9) is consistent with those presented in the main text and standard deviation (Table 10) is very small compared to the magnitude of the mean for all measures. Further, we also repeat the experiments once on the SVHN dataset (Table 7), whose results are also consistent with the observations made on CIFAR-10. 7In our analysis, less than 5 percent of the models do not reach this threshold. 7 # 4 Performance of Complexity Measures # 4.1 Baseline Complexity Measures The first baseline we consider is performance of a measure against an oracle who observes the noisy generalization gap. Concretely, we rank the models based on the true generalization gap with some additive noise. The resulting ranking correlation indicates how close the performances of all models are. As the scale of the noise approaches 0, the oracle’s prediction tends towards perfect (i.e. 1). This baseline accounts for the potential noise in the training procedure and gives an anchor for gauging the difficulty of each hyperparameter type. Formally, given an arbitrary set of hyper-parameters 0’, we define e-oracle to be the expectation of t or UV where the measure is {g(@) + 4 (0,€7) |@ € O'}. We report the performance of the noisy oracle in Table 1 for € € {0.02,0.05}. For additional choices of € please refer to Appendix A.6. Second, to understand how our hyperparameter choices affect the optimization, we give each hyperparameter type a canonical order which is believed to have correlation with generalization (e.g. larger learning rate generalizes better) and measure their τ . The exact canonical ordering can be found in Appendix A.4. Note that unlike other measures, each canonical ordering can only predict generalization for its own hyperparameter type, since its corresponding hyperparameter remains fixed in any other hyperparameter type; consequently, each column actually represents different measure for the canonical measure row. Assuming that each canonical measure is uninformative of any other canonical measures, the Ψ criterion for each canonical measure is 1 7 of its performance on the corresponding hyperparameter type. We next look at one of the most well-known complexity measures in machine learning; the VC-Dimension. Bartlett et al. (2019) proves bounds on the VC dimension of piece-wise linear networks with potential weight sharing. In Appendix C.1, we extend their result to include pooling layers and multi-class classification. We report two complexity measures based on VC-dimension bounds and parameter counting. These measures could be predictive merely when the architecture changes, which happens only in depth and width hyperparameter types. We observe that, with both types, VC-dimension as well as the number of parameters are negatively correlated with generalization gap which confirms the widely known empirical observation that overparametrization improves generalization in deep learning. Finally, we report the measures that only look at the output of the network. In particular, we look at the cross-entropy loss, margin γ, and the entropy of the output. These three measures are closely related to each other. In fact, the outcomes in Table 1 reflects this similarity. These results confirm the general understanding that larger margin, lower cross-entropy and higher entropy would lead to better generalization. 
Please see Appendix C.1.1 for definitions and more discussions on these measures. r r o C vc dim 19 # params 20 1/γ (22) entropy 23 cross-entropy 21 oracle 0.02 oracle 0.05 canonical ordering batch size 0.000 0.000 0.312 0.346 0.440 0.380 0.172 0.652 dropout 0.000 0.000 -0.593 -0.529 -0.402 0.657 0.375 0.969 learning rate 0.000 0.000 0.234 0.251 0.140 0.536 0.305 0.733 depth -0.909 -0.909 0.758 0.632 0.390 0.717 0.384 0.909 optimizer weight decay 0.000 0.000 -0.211 -0.157 0.232 0.388 0.184 0.735 0.000 0.000 0.223 0.220 0.149 0.374 0.165 -0.055 width -0.171 -0.171 0.125 0.104 0.080 0.360 0.204 0.171 overall τ -0.251 -0.175 0.124 0.148 0.149 0.714 0.438 N/A Ψ -0.154 -0.154 0.121 0.124 0.147 0.487 0.256 N/A |S| = 2 min ∀|S| I M vc dim # param 1/γ entropy cross-entropy oracle 0.02 oracle 0.05 random 0.0422 0.0202 0.0108 0.0120 0.0233 0.4077 0.1475 0.0005 0.0564 0.0278 0.0078 0.0656 0.0850 0.3557 0.1167 0.0002 0.0518 0.0259 0.0133 0.0113 0.0118 0.3929 0.1369 0.0005 0.0039 0.0044 0.0750 0.0086 0.0075 0.3612 0.1241 0.0002 0.0422 0.0208 0.0105 0.0120 0.0159 0.4124 0.1515 0.0003 0.0443 0.0216 0.0119 0.0155 0.0119 0.4057 0.1469 0.0006 0.0627 0.0379 0.0183 0.0125 0.0183 0.4154 0.1535 0.0009 0.00 0.00 0.0051 0.0065 0.0040 0.1637 0.0503 0.0004 0.00 0.00 0.0051 0.0065 0.0040 0.1637 0.0503 0.0001 Table 1: Numerical Results for Baselines and Oracular Complexity Measures # 4.2 Surprising Failure of Some (Norm & Margin)-Based Measures In machine learning, a long standing measure for quantifying the complexity of a function, and therefore generalization, is using some norm of the given function. Indeed, directly optimizing some of the norms can lead to improved generalization. For example, ‘2 regularization on the parameters 8 of a model can be seen as imposing an isotropic Gaussian prior over the parameters in maximum a posteriori estimation. We choose several representative norms (or measures based on norms) and compute our correlation coefficient between the measures and the generalization gap of the model. We study the following measures and their variants (Table 2): spectral bound, Frobenius distance from initialization, ‘2 Frobenius norm of the parameters, Fisher-Rao metric and path norm. r r o C Frob distance 40 Spectral orig 26 Parameter norm 42 Path norm 44 Fisher-Rao 45 oracle 0.02 batch size -0.317 -0.262 0.236 0.252 0.396 0.380 dropout -0.833 -0.762 -0.516 0.270 0.147 0.657 learning rate -0.718 -0.665 0.174 0.049 0.240 0.536 depth 0.526 -0.908 0.330 0.934 -0.553 0.717 optimizer weight decay -0.669 -0.073 0.124 0.338 0.551 0.388 -0.214 -0.131 0.187 0.153 0.120 0.374 width -0.166 -0.240 -0.170 0.178 0.177 0.360 overall τ -0.263 -0.537 0.073 0.373 0.078 0.714 Ψ -0.341 -0.434 0.052 0.311 0.154 0.487 I M Frob distance Spectral orig Parameter norm Path norm Fisher Rao oracle 0.05 0.0462 0.2197 0.0039 0.1027 0.0060 0.1475 0.0530 0.2815 0.0197 0.1230 0.0072 0.1167 0.0196 0.2045 0.0066 0.1308 0.0020 0.1369 0.1559 0.0808 0.0115 0.0315 0.0713 0.1241 0.0502 0.2180 0.0064 0.1056 0.0057 0.1515 0.0379 0.2285 0.0049 0.1028 0.0014 0.1469 0.0506 0.2181 0.0167 0.1160 0.0071 0.1535 |S| = 2 min ∀|S| 0.0128 0.0359 0.0047 0.0240 0.0018 0.0503 0.0128 0.0359 0.0038 0.0240 0.0013 0.0503 Table 2: Numerical Results for Selected (Norm & Margin)-Based Complexity Measures Spectral bound: The most surprising observation here is that the spectral complexity is strongly negatively correlated with generalization, and negatively correlated with changes within every hyper- parameter type. 
Most notably, it has a strong negative correlation with the depth of the network, which may suggest that the largest singular values alone are not sufficient to capture the capacity of the model. To better understand the reason behind this observation, we investigate using different components of the spectral complexity as the measure. An interesting observation is that the Frobenius distance to initialization is negatively correlated with generalization, but the Frobenius norm of the parameters is slightly positively correlated, which contradicts some theories suggesting that solutions closer to initialization should generalize better. A tempting hypothesis is that weight decay favors solutions closer to the origin, but we performed an ablation on only the models trained with zero weight decay and found that the distance from initialization still correlates negatively with generalization. These observations correspond to choosing different reference matrices W_i^0 in the bound: the distance corresponds to using the initialization as the reference matrices, while the Frobenius norm of the parameters corresponds to using the origin as the reference. Since the Frobenius norm of the parameters shows better correlation, we use zero reference matrices in the spectral bound. This improved both τ and Ψ, although both remain negative. In addition, we extensively investigated the effect of the different terms of the spectral bound in isolation; however, the results do not improve. These experiments can be found in Appendix C.2.

Path norm: The path norm is a proper norm in function space, though not in parameter space. We observe that it is positively correlated with generalization for all hyperparameter types and achieves competitive τ (0.373) and Ψ (0.311).

Fisher-Rao metric: The Fisher-Rao metric is a lower bound (Liang et al., 2017) on the path norm that has recently been shown to capture generalization. We observe that it overall shows worse correlation than the path norm; in particular, it is negatively correlated (τ = −0.553) with the depth of the network, in contrast to the path norm, which properly captures the effect of depth on generalization. A more interesting observation is that the Fisher-Rao metric achieves a positive Ψ = 0.154 while its τ = 0.078 is essentially at chance. This may suggest that the metric can capture a single hyperparameter change but is not able to capture the interactions between different hyperparameter types.

Effect of Randomness: Dropout and batch size (the first two columns of Table 2) directly introduce randomness into the training dynamics. For batch size, we observe that the Frobenius displacement and the spectral complexity both correlate negatively with the changes in batch size, while the Frobenius norm of the parameters correlates positively with generalization. On the other hand, when the dropout probability changes, all of the proper norms are negatively correlated with the changes in generalization. Since increasing dropout usually reduces the generalization gap, this implies that increasing the dropout probability may be at least partially responsible for the growth in these norms. This is unexpected, since a larger norm in principle implies higher model capacity, which is usually more prone to overfitting.

The overall picture does not change much going from the ranking correlation to mutual information, with one notable exception: spectral complexity has the highest conditional mutual information among all the measures.
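As an aside, the sketch below shows one way to compute three of the quantities discussed above — the Frobenius distance to initialization, the overall parameter norm, and the path norm — for a PyTorch model. It is only an illustration under our own assumptions: the squaring trick for the path norm presumes a ReLU network whose batch-norm layers have already been fused into the preceding convolutions, and the exact normalizations used in the study may differ.

```python
import copy
import torch

def frobenius_distance_to_init(model, init_model):
    """Square root of the summed squared differences between the trained
    parameters and their values at initialization."""
    sq = sum((p.detach() - p0.detach()).pow(2).sum().item()
             for p, p0 in zip(model.parameters(), init_model.parameters()))
    return sq ** 0.5

def parameter_norm(model):
    """Overall Frobenius norm of all parameters (i.e. distance to the origin)."""
    return sum(p.detach().pow(2).sum().item() for p in model.parameters()) ** 0.5

def path_norm(model, input_shape=(1, 3, 32, 32)):
    """Path norm via the usual trick: square every parameter, push an all-ones
    input through the squared network and take the square root of the summed
    outputs. Assumes a ReLU network with batch norm already fused."""
    squared = copy.deepcopy(model).eval()   # eval() also disables dropout
    with torch.no_grad():
        for p in squared.parameters():
            p.pow_(2)
        ones = torch.ones(input_shape)
        return squared(ones).sum().item() ** 0.5
```

Returning to the comparison above: among all the measures, spectral complexity attains the highest conditional mutual information.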
This is due to the fact that the conditional mutual information is agnostic to the direction of correlation, and in the ranking correlation, spectral com- plexity has the highest absolute correlation. While this view might seem contradictory to classical view as the spectral complexity is a complexity measure which should be small to guarantee good generalization, it is nonetheless informative about the generalization of the model. Further, by in- specting the conditional mutual information for each hyperparameter, we find that the majority of spectral complexity’s predictive power is due to its ability to capture the depth of the network, as the mutual information is significantly lower if depth is already observed. # 4.3 Success of Sharpness-Based Measures A natural category of generalization measures is centered around the concept of “sharpness” of the local minima, capturing the sensitivity of the empirical risk (i.e. the loss over the entire training set) to perturbations in model parameters. Such notion of stability under perturbation is captured elegantly by the PAC-Bayesian framework (McAllester, 1999) which has provided promising insights for studying generalization of deep neural networks (Dziugaite and Roy, 2017; Neyshabur et al., 2017, 2018a). In this sections, we investigate PAC-Bayesian generalization bounds and several of their variants which rely on different priors and different notions of sharpness (Table 3). In order to evaluate a PAC-Bayesian bound, one needs to come up with a prior distribution over the parameters that is chosen in advance before observing the training set. Then, given any posterior distribution on the parameters which could depend on the training set, a PAC-Bayesian bound (Theorem 46) states that the expected generalization error of the parameters generated from the posterior can be bounded by the KL-divergence of the prior and posterior. The posterior distribution can be seen as adding perturbation on final parameters. Dziugaite and Roy (2017) shows contrary to other generalization bounds, it is possible to calculate non-vacuous PAC-Bayesian bounds by optimizing the bound over a large set of Gaussian posteriors. Neyshabur et al. (2017) demonstrates that when prior and posterior are isotropic Gaussian distributions, then PAC-Bayesian bounds are good measure of generalization on small scale experiments; see Eq (47). PAC-Bayesian framework captures sharpness in the expected sense since we add randomly gen- erated perturbations to the parameters. Another possible notion of sharpness is the worst-case sharpness where we search for the direction that changes the loss the most. This is motivated by (Keskar et al., 2016) where they observe that this notion would correlate with generalization in the case of different batch sizes. We can use PAC-Bayesian framework to construct generalization bounds for this worst-case perturbations as well. We refer to this worst case bound as the sharpness bound in Eq (50). The main component in both PAC-Bayes and worst-case sharpness bounds is the ratio of norm of parameters to the magnitude of the perturbation, where the magnitude is chosen to be the largest number such that the training error of the perturbed model is at most 0.1. While mathematically, the sharpness bound should always yield higher complexity than the PAC-Bayes bound, we observed that the former has higher correlation both in terms of τ and Ψ. 
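To make the expected-sharpness notion concrete, the sketch below estimates the training loss of a PyTorch model after adding isotropic Gaussian noise of scale σ to every parameter; the sharpness proxy is then the increase of this quantity over the unperturbed training loss, and the worst-case variant would instead search for an adversarial perturbation within the same ball. The argument names, the sampling budget, and the assumption that `loss_fn` returns a batch-mean loss are ours.

```python
import copy
import torch

def perturbed_train_loss(model, loss_fn, loader, sigma, n_samples=3, device="cpu"):
    """Average training loss after adding N(0, sigma^2) noise to every parameter,
    averaged over several independent perturbations (expected-sharpness proxy)."""
    losses = []
    for _ in range(n_samples):
        noisy = copy.deepcopy(model).to(device)
        with torch.no_grad():
            for p in noisy.parameters():
                p.add_(torch.randn_like(p) * sigma)
            total, count = 0.0, 0
            for x, y in loader:
                x, y = x.to(device), y.to(device)
                # assumes loss_fn returns the mean loss over the batch
                total += loss_fn(noisy(x), y).item() * x.size(0)
                count += x.size(0)
        losses.append(total / count)
    return sum(losses) / len(losses)
```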
In addition, we studied inverse of perturbation magnitude as a measure by removing the norm in the numerator to compare it with the bound. However, we did not observe a significant difference. r r o C sharpness-orig 52 pacbayes-orig 49 1/α0 sharpness mag 62 1/σ0 pacbayes mag 61 oracle 0.02 batch size 0.542 0.526 0.570 0.490 0.380 dropout -0.359 -0.076 0.148 -0.215 0.657 learning rate 0.716 0.705 0.762 0.505 0.536 depth 0.816 0.546 0.824 0.896 0.717 optimizer weight decay 0.591 0.564 0.741 0.147 0.388 0.297 0.341 0.297 0.186 0.374 width 0.185 -0.086 0.269 0.195 0.360 overall τ 0.400 0.293 0.484 0.365 0.714 Ψ 0.398 0.360 0.516 0.315 0.487 I M sharpness-orig pacbayes-orig 1/α0 sharpness mag 1/σ0 pacbayes mag oracle 0.05 0.1117 0.0620 0.1640 0.0884 0.1475 0.2353 0.1071 0.2572 0.1514 0.1167 0.0809 0.0392 0.1228 0.0813 0.1369 0.0658 0.0597 0.1424 0.0399 0.1241 0.1223 0.0645 0.1779 0.1004 0.1515 0.1071 0.0550 0.1562 0.1025 0.1469 0.1254 0.0977 0.1786 0.0986 0.1535 |S| = 2 min ∀|S| 0.0224 0.0225 0.0544 0.0241 0.0503 0.0224 0.0225 0.0544 0.0241 0.0503 Table 3: Numerical results for selected Sharpness-Based Measures; all the measure use the origin as the reference and mag refers to magnitude-aware version of the measure. 10 # 4.3.1 Magnitude-Aware Perturbation Bounds Perturbing the parameters without taking their magnitude into account can cause many of them to switch signs. Therefore, one cannot apply large perturbations to the model without changing the loss significantly. One possible modification to improve the perturbations is to choose the perturbation magnitude based on the magnitude of the parameter. In that case, it is guaranteed that if the magnitude of perturbation is less than the magnitude of the parameter, then the sign of the parameter does not change. Following Keskar et al. (2016), we pick the magnitude of the perturbation with respect to the magnitude of parameters. We formalize this notion of importance based magnitude. Specifically, we derive two alternative generalization bounds for expected sharpness in Eq ( 55) and worst case sharpness in Eq (58) that include the magnitude of the parameters into the prior. Formally, we design α0 and σ0, respectively for sharpness and PAC-Bayes bounds, to be the ratio of parameter magnitude to the perturbation magnitude. While this change did not improve upon the original PAC-Bayesian measures, we observed that simply looking at 1/α0 has surprising predictive power in terms of the generalization which surpasses the performance of oracle 0.02. This measure is very close to what was originally suggested in Keskar et al. (2016). The effectiveness of this measure is further corroborated by the conditional mutual information based metric, where we observed that 1/α0 has the highest mutual information with generalization among all hyperparameters and also overall. # 4.3.2 Finding σ In case of models with extremely small loss, the perturbed loss should roughly increase monotoni- cally with respect to the perturbation scale. Leveraging this observation, we design algorithms for computing the perturbation scale σ such that the first term on the RHS is as close to a fixed value as possible for all models. In our experiments, we choose the deviation to be 0.1 which translates to 10% training error. These search algorithms are paramount to compare measures between different models. We provide the detailed algorithms in the Appendix D. 
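The scale search itself can be conveyed with a simple bisection that relies on the approximate monotonicity noted above. The sketch below uses our own names and defaults and is not the exact procedure of Appendix D; `loss_increase_at` could, for instance, be built from the Gaussian-perturbation sketch in the previous subsection.

```python
def find_sigma(model, loss_increase_at, target=0.1, lo=0.0, hi=1.0,
               tol=1e-3, max_iters=30):
    """Bisection for a perturbation scale sigma whose training-loss increase is
    close to `target` (0.1 in the text, roughly 10% training error).
    `loss_increase_at(model, sigma)` is a user-supplied callable assumed to be
    approximately monotone in sigma for models trained to near-zero loss."""
    for _ in range(max_iters):
        mid = 0.5 * (lo + hi)
        increase = loss_increase_at(model, mid)
        if abs(increase - target) < tol:
            return mid
        if increase > target:
            hi = mid   # perturbation too strong, shrink the scale
        else:
            lo = mid   # perturbation too weak, grow the scale
    return 0.5 * (lo + hi)
```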
To improve upon our algorithms, one could try a computational approach similar to Dziugaite and Roy (2017) to obtain a numerically better bound which may result in stronger correlation. However, due to practical computational constraints, we could not do so for the large number of models we consider. # 4.4 Potential of Optimization-based Measures Optimization is an indispensable component of deep learning. Numerous optimizers have been pro- posed for more stable training and faster convergence. How the optimization scheme and speed of optimization influence generalization of a model has been a topic of contention among the deep learning community (Merity et al., 2017; Hardt et al., 2015). We study 3 representative optimizers Momentum SGD, Adam, and RMSProp with different initial learning rates in our experiments to thor- oughly evaluate this phenomenon. We also consider other optimization related measures that are believed to correlate with generalization. These include (Table 4): 1. Number of iterations required to reach cross-entropy equals 0.1 2. Number of iterations required going from cross-entropy equals 0.1 to cross-entropy equals 0.01 3. Variance of the gradients after only seeing the entire dataset once (1 epoch) 4. Variance of the gradients when the cross-entropy is approximately 0.01 Number of Iterations: The number of iterations roughly characterizes the speed of optimiza- tion, which has been argued to correlate with generalization. For the models considered here, we observed that the initial phase (to reach cross-entropy value of 0.1) of the optimization is negatively correlated with the speed of optimization for both τ and Ψ. This would suggest that the difficulty of optimization during the initial phase of the optimization benefits the final generalization. On the other hand, the speed of optimization going from cross-entropy 0.1 to cross-entropy 0.01 does not seem to be correlated with the generalization of the final solution. Importantly, the speed of optimization is not an explicit capacity measure so either positive or negative correlation could potentially be informative. 11 r r o C step to 0.1 63 step 0.1 to 0.01 64 grad noise 1 epoch 65 grad noise final 66 oracle 0.02 batch size -0.664 -0.151 0.071 0.452 0.380 dropout -0.861 -0.069 0.378 0.119 0.657 learning rate -0.255 -0.014 0.376 0.427 0.536 depth 0.440 0.114 -0.517 0.141 0.717 optimizer weight decay -0.628 -0.046 0.221 0.432 0.388 -0.030 0.072 0.121 0.245 0.374 width 0.043 -0.021 0.037 0.230 0.360 overall τ -0.264 -0.088 0.070 0.311 0.714 Ψ -0.279 -0.016 0.098 0.292 0.487 I M step to 0.1 step 0.1 to 0.01 grad noise 1 epoch grad noise final oracle 0.05 0.0349 0.0125 0.0051 0.0623 0.1475 0.0361 0.0031 0.0016 0.0969 0.1167 0.0397 0.0055 0.0028 0.0473 0.1369 0.1046 0.0093 0.0633 0.0934 0.1241 0.0485 0.0074 0.0113 0.0745 0.1515 0.0380 0.0043 0.0027 0.0577 0.1469 0.0568 0.0070 0.0052 0.0763 0.1535 |S| = 2 min ∀|S| 0.0134 0.0032 0.0013 0.0329 0.0503 0.0134 0.0032 0.0013 0.0329 0.0503 Table 4: Optimization-Based Measures Variance of Gradients: Towards the end of the training, the variance of the gradients also captures a particular type of “flatness” of the local minima. This measure is surprisingly predictive of the generalization both in terms of τ and Ψ, and more importantly, is positively correlated across every type of hyperparameter. To the best of our knowledge, this is the first time this phenomenon has been observed. 
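A minimal sketch of the gradient-variance measures follows. It stores one flattened minibatch gradient per batch and returns the summed per-parameter variance; keeping full gradients in memory is only practical for moderately sized models, and details such as whether dropout is active during this pass are our assumptions rather than part of the study.

```python
import torch

def gradient_variance(model, loss_fn, loader, device="cpu", max_batches=100):
    """Summed per-parameter variance of minibatch gradients, estimated over up
    to `max_batches` minibatches (used after one epoch and at the end of training)."""
    grads = []
    for i, (x, y) in enumerate(loader):
        if i >= max_batches:
            break
        model.zero_grad()
        loss = loss_fn(model(x.to(device)), y.to(device))
        loss.backward()
        g = torch.cat([p.grad.detach().flatten()
                       for p in model.parameters() if p.grad is not None])
        grads.append(g.cpu())
    stacked = torch.stack(grads)                     # [num_batches, num_params]
    return stacked.var(dim=0, unbiased=True).sum().item()
```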
The connection between variance of the gradient and generalization is perhaps natural since much of the recent advancement in deep learning such as residual networks (He et al., 2016) or batch normalization have enabled using larger learning rates to train neural networks. Stability with higher learning rates implies smaller noises in the minibatch gradient. With the mutual information metric, the overall observation is consistent with that of ranking correlation, but the final gradient noise also outperforms gradient noise at 1 epoch of training conditioned on the dropout probability. We hope that our work encourages future works in other possible measures based on optimization and during training. # 5 Conclusion We conducted large scale experiments to test the correlation of different measures with the gen- eralization of deep models and propose a framework to better disentangle the cause of correlation from spurious correlation. We confirmed the effectiveness of the PAC-Bayesian bounds through our experiments and corroborate it as a promising direction for cracking the generalization puzzle. Fur- ther, we provide an extension to existing PAC-Bayesian bounds that consider the importance of each parameter. We also found that several measures related to optimization are surprisingly predictive of generalization and worthy of further investigation. On the other hand, several surprising failures about the norm-based measures were uncovered. In particular, we found that regularization that introduces randomness into the optimization can increase various norm of the models and spectral complexity related norm-based measures are unable to capture generalization – in fact, most of them are negatively correlated. Our experiments demonstrate that the study of generalization measure can be misleading when the number of models studied is small and the metric of quantifying the relationship is not carefully chosen. We hope this work will incentivize more rigorous treatment of generalization measures in future work. To the best of our knowledge, this work is one of the most comprehensive study of generalization to date, but there are a few short-comings. Due to computational constraints, we were only able to study 7 most common hyperparameter types and relatively small architectures, which do not reflect the models used in production. Indeed, if more hyperparameters are considered, one could expect to better capture the causal relationship. We also only studied models trained on two image datasets (CIFAR-10 and SVHN), only classification models and only convolutional networks. We hope that future work would address these limitations. # Acknowledgement We thank our colleagues at Google: Guy Gur-Ari for many insightful discussions that helped with the experiment design, Ethan Dyer, Pierre Foret, Sergey Ioffe for their feedback, and Scott Yak for help with implementation. We are grateful for insightful discussions with Brady Neal of University of Montreal about limitation of correlation analysis. We also thank Daniel Roy of University of Toronto for insightful comments. 12 # References Arora, S., Ge, R., Neyshabur, B., and Zhang, Y. (2018). Stronger generalization bounds for deep nets via a compression approach. arXiv preprint arXiv:1802.05296. Bartlett, P. L., Foster, D. J., and Telgarsky, M. J. (2017). Spectrally-normalized margin bounds for neural networks. In Advances in Neural Information Processing Systems, pages 6240–6249. Bartlett, P. L., Harvey, N., Liaw, C., and Mehrabian, A. (2019). 
Nearly-tight vc-dimension and pseu- dodimension bounds for piecewise linear neural networks. Journal of Machine Learning Research, 20(63):1–17. Bartlett, P. L. and Mendelson, S. (2002). Rademacher and gaussian complexities: Risk bounds and structural results. Journal of Machine Learning Research, 3(Nov):463–482. Chaudhari, P. and Soatto, S. (2018). Stochastic gradient descent performs variational inference, con- verges to limit cycles for deep networks. In 2018 Information Theory and Applications Workshop (ITA), pages 1–10. IEEE. Dinh, L., Pascanu, R., Bengio, S., and Bengio, Y. (2017). Sharp minima can generalize for deep nets. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pages 1019–1028. JMLR. org. Dziugaite, G. K. and Roy, D. M. (2017). Computing nonvacuous generalization bounds for deep (stochastic) neural networks with many more parameters than training data. arXiv preprint arXiv:1703.11008. Elsayed, G., Krishnan, D., Mobahi, H., Regan, K., and Bengio, S. (2018). Large margin deep networks for classification. In Bengio, S., Wallach, H., Larochelle, H., Grauman, K., Cesa-Bianchi, N., and Garnett, R., editors, Advances in Neural Information Processing Systems 31, pages 842– 852. Curran Associates, Inc. Gao, J., Buldyrev, S. V., Havlin, S., and Stanley, H. E. (2011). Robustness of a network of networks. Physical Review Letters, 107(19):195701. Golowich, N., Rakhlin, A., and Shamir, O. (2017). Size-independent sample complexity of neural networks. arXiv preprint arXiv:1712.06541. Hardt, M., Recht, B., and Singer, Y. (2015). Train faster, generalize better: Stability of stochastic gradient descent. arXiv preprint arXiv:1509.01240. He, K., Zhang, X., Ren, S., and Sun, J. (2016). Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770–778. Ioffe, S. and Szegedy, C. (2015). Batch normalization: Accelerating deep network training by reducing internal covariate shift. CoRR, abs/1502.03167. Jiang, Y., Krishnan, D., Mobahi, H., and Bengio, S. (2018). Predicting the generalization gap in deep networks with margin distributions. arXiv preprint arXiv:1810.00113. Kendall, M. G. (1938). A new measure of rank correlation. Biometrika, 30(1/2):81–93. Keskar, N. S., Mudigere, D., Nocedal, J., Smelyanskiy, M., and Tang, P. T. P. (2016). On large-batch training for deep learning: Generalization gap and sharp minima. arXiv preprint arXiv:1609.04836. Kontorovich, A. (2016). Dudley-pollard packing theorem. http://aiweb.techfak.uni-bielefeld. de/content/bworld-robot-control-software/. Krizhevsky, A., Nair, V., and Hinton, G. (2014). The cifar-10 dataset. online: http://www. cs. toronto. edu/kriz/cifar. html, 55. Liang, T., Poggio, T., Rakhlin, A., and Stokes, J. (2017). Fisher-rao metric, geometry, and com- plexity of neural networks. arXiv preprint arXiv:1711.01530. 13 Long, P. M. and Sedghi, H. (2019). Size-free generalization bounds for convolutional neural networks. arXiv preprint arXiv:1905.12600. McAllester, D. A. (1999). Pac-bayesian model averaging. In COLT, volume 99, pages 164–170. Citeseer. Merity, S., Keskar, N. S., and Socher, R. (2017). Regularizing and optimizing lstm language models. arXiv preprint arXiv:1708.02182. Mohri, M., Rostamizadeh, A., and Talwalkar, A. (2012). Foundations of machine learning. adaptive computation and machine learning. MIT Press, 31:32. Nagarajan, V. and Kolter, J. Z. (2019a). 
Deterministic pac-bayesian generalization bounds for deep networks via generalizing noise-resilience. arXiv preprint arXiv:1905.13344. Nagarajan, V. and Kolter, J. Z. (2019b). Generalization in deep networks: The role of distance from initialization. arXiv preprint arXiv:1901.01672. Neal, B. (2019). Over-parametrization in deep rl and causal graphs for deep learning theory. Re- searchGate. Netzer, Y., Wang, T., Coates, A., Bissacco, A., Wu, B., and Ng, A. Y. (2011). Reading digits in natural images with unsupervised feature learning. NIPS Workshop on Deep Learning and Unsupervised Feature Learning. Neyshabur, B., Bhojanapalli, S., McAllester, D., and Srebro, N. (2017). Exploring generalization in deep learning. In Advances in Neural Information Processing Systems, pages 5947–5956. Neyshabur, B., Bhojanapalli, S., and Srebro, N. (2018a). A pac-bayesian approach to spectrally- normalized margin bounds for neural networks. International Conference on Learning Represen- tations. Neyshabur, B., Li, Z., Bhojanapalli, S., LeCun, Y., and Srebro, N. (2018b). Towards under- standing the role of over-parametrization in generalization of neural networks. arXiv preprint arXiv:1805.12076. Neyshabur, B., Salakhutdinov, R. R., and Srebro, N. (2015a). Path-sgd: Path-normalized opti- mization in deep neural networks. In Advances in Neural Information Processing Systems, pages 2422–2430. Neyshabur, B., Tomioka, R., and Srebro, N. (2014). In search of the real inductive bias: On the role of implicit regularization in deep learning. arXiv preprint arXiv:1412.6614. Neyshabur, B., Tomioka, R., and Srebro, N. (2015b). Norm-based capacity control in neural net- works. In Conference on Learning Theory, pages 1376–1401. Novak, R., Bahri, Y., Abolafia, D. A., Pennington, J., and Sohl-Dickstein, J. (2018). Sensitivity and generalization in neural networks: an empirical study. arXiv preprint arXiv:1802.08760. Pereyra, G., Tucker, G., Chorowski, J., Kaiser, Ł., and Hinton, G. (2017). Regularizing neural networks by penalizing confident output distributions. arXiv preprint arXiv:1701.06548. Pitas, K., Davies, M., and Vandergheynst, P. (2017). Pac-bayesian margin bounds for convolutional neural networks. arXiv preprint arXiv:1801.00171. Recht, B., Roelofs, R., Schmidt, L., and Shankar, V. (2019). Do imagenet classifiers generalize to imagenet? arXiv preprint arXiv:1902.10811. Rowling, J. K. (2016). Fantastic beasts and where to find them. In Yates, D., editor, Harry Potter film series. WarnerBros. Sedghi, H., Gupta, V., and Long, P. M. (2018). The singular values of convolutional layers. CoRR, abs/1805.10408. 14 Smith, S. L. and Le, Q. V. (2017). A bayesian perspective on generalization and stochastic gradient descent. arXiv preprint arXiv:1710.06451. Vapnik, V. N. and Chervonenkis, A. Y. (1971). On the uniform convergence of relative frequencies of events to their probabilities. In Theory of probability and its applications, pages 11–30. Springer. Verma, T. and Pearl, J. (1991). Equivalence and synthesis of causal models. In Proceedings of the Sixth Annual Conference on Uncertainty in Artificial Intelligence, UAI ’90, pages 255–270, New York, NY, USA. Elsevier Science Inc. Wei, C. and Ma, T. (2019a). Data-dependent sample complexity of deep neural networks via lipschitz augmentation. arXiv preprint arXiv:1905.03684. Wei, C. and Ma, T. (2019b). Improved sample complexities for deep networks and robust classifica- tion via an all-layer margin. Wilson, A. C., Roelofs, R., Stern, M., Srebro, N., and Recht, B. (2017a). 
The marginal value of adaptive gradient methods in machine learning. In Advances in Neural Information Processing Systems, pages 4148–4158. Wilson, A. C., Roelofs, R., Stern, M., Srebro, N., and Recht, B. (2017b). The marginal value of adaptive gradient methods in machine learning. In Advances in Neural Information Processing Systems, pages 4148–4158. Zhang, C., Bengio, S., Hardt, M., Recht, B., and Vinyals, O. (2016). Understanding deep learning requires rethinking generalization. arXiv preprint arXiv:1611.03530. 15 # A Experiments # A.1 More training details During our experiments, we found that Batch Normalization (Ioffe and Szegedy, 2015) is crucial to reliably reach a low cross-entropy value for all models; since normalization is a indispensable components of modern neural networks, we decide to use batch normalization in all of our models. We remove batch normalization before computing any measure by fusing the γ, β and moving statistics with the convolution operator that precedes the normalization. This is important as Dinh et al. (2017) showed that common generalization measures such as sharpness can be easily manipulated with re-parameterization. We also discovered that the models trained with data augmentation often cannot fit the data (i.e. reach cross-entropy 0.01) completely. Since a model with data augmentation tends to consistently generalize better than the models without data augmentation, measure that reflects the training error (i.e. value of cross-entropy) will easily predict the ranking between two models even though it has only learned that one model uses data augmentation (see the thought experiments from the previous section). While certain hyperparameter configuration can reach cross-entropy of 0.01 even with data augmentation, it greatly limits the space of models that we can study. Hence, we make the design choice to not include data augmentation in the models of this study. Note that from a theoretical perspective, data augmentation is also challenging to analyze since the training samples generated from the procedure are no longer identical and independently distributed. All values for all the measures we computed over these models can be found in Table 5 in Appendix A.6. # A.2 The choice of stopping criterion The choice of stopping criterion is very essential and could completely change the evaluation and the resulting conclusions. In our experiments we noticed that if we pick the stopping criterion based on number of iterations or number of epochs, then since some models optimize faster than others, they end up fitting the training data more and in that case the cross-entropy itself can be very predictive of generalization. To make it harder to distinguish models based on their training performance, it makes more sense to choose the stopping criterion based on the training error or training loss. We noticed that as expected, models with the same cross-entropy usually have very similar training error so that suggests that this choice is not very important. However, during the optimization the training error behavior is noisier than cross-entropy and moreover, after the training error reaches zero, it cannot distinguish models while the cross-entropy is still meaningful after fitting the data. Therefore, we decided to use cross-entropy as the stopping criterion. 
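The resulting stopping rule can be sketched as follows: optimize until the cross-entropy over the full training set drops below a chosen target (values such as 0.1 and 0.01 are used in the text), subject to a step budget. All names below are placeholders for a PyTorch-style training loop.

```python
import torch

def train_cross_entropy(model, loss_fn, loader):
    """Average cross-entropy over the full training set (no gradient tracking)."""
    was_training = model.training
    model.eval()
    total, count = 0.0, 0
    with torch.no_grad():
        for x, y in loader:
            total += loss_fn(model(x), y).item() * x.size(0)
            count += x.size(0)
    if was_training:
        model.train()
    return total / count

def train_to_target(model, optimizer, loss_fn, loader,
                    target_ce=0.01, max_steps=200_000, check_every=1000):
    """Optimize until the training cross-entropy falls below `target_ce`,
    or until `max_steps` parameter updates have been made."""
    step = 0
    model.train()
    while step < max_steps:
        for x, y in loader:
            optimizer.zero_grad()
            loss_fn(model(x), y).backward()
            optimizer.step()
            step += 1
            if (step % check_every == 0
                    and train_cross_entropy(model, loss_fn, loader) <= target_ce):
                return step
            if step >= max_steps:
                break
    return step
```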
# A.3 All Model Specification As mentioned in the main text, the models we use resemble Network-in-Network (Gao et al., 2011) which is a class of more parameter efficient convolution neural networks that achieve reasonably competitive performance on modern image classification benchmarks. The model consists blocks of modules that have 1 3 × 3 convolution with stride 2 followed by 2 1 × 1 convolution with stride 1. We refer to this single module as a NiN-block and construct models of different size by stacking NiN-block. For simplicity, all NiN-block have the same number of output channels cout. Dropout is applied at the end of every NiN-block. At the end of the model, there is a 1 × 1 convolution reducing the channel number to the class number (i.e. 10 for CIFAR-10) followed by a global average pooling to produce the output logits. For width, we choose from cout from 3 options: {2 × 96, 4 × 96, 8 × 96}. For depth, we choose from 3 options: {2 × NiNblock, 4 × NiNblock, 8 × NiNblock} For dropout, we choose from 3 options: {0.0, 0.25, 0.5} For batch size, we choose from: {32, 64, 128} Since each optimizer may require different learning rate and in some cases, different regular- ization, we fine-tuned the hyper-parameters for each optimizer while keeping 3 options for every hyper-parameter choices8. 8While methods with adaptive methods generally require less tuning, in practice researchers have observed perfor- mance gains from tuning the initial learning rate and learning rate decay. 16 Momentum SGD: We choose momentum of 0.9 and choose the initial learning rate η from {0.1, 0.032, 0.01} and regularization coefficient λ from {0.0, 0.0001, 0.0005}. The learning rate decay schedule is ×0.1 at iterations [60000, 90000]. Adam: We choose initial learning rate 7 from {0.001, 3.2e — 4, le — 4}, e = le —3 and regulariza- ion coefficient A from {0.0, 0.0001, 0.0005}. The learning rate decay schedule is x0.1 at iterations 60000, 90000]. RMSProp: We choose initial learning rate η from {0.001, 3.2e − 4, 1e − 4} and regulariza- tion coefficient λ from {0.0, 0.0001, 0.0003}. The learning rate decay schedule is ×0.1 at iterations [60000, 90000]. # A.4 Canonical Measures Based on empirical observations made by the community as a whole, the canonical ordering we give to each of the hyper-parameter categories are as follows: 1. Batchsize: smaller batchsize leads to smaller generalization gap 2. Depth: deeper network leads to smaller generalization gap 3. Width: wider network leads to smaller generalization gap 4. Dropout: The higher the dropout (≤ 0.5) the smaller the generalization gap 5. Weight decay: The higher the weight decay (smaller than the maximum for each optimizer) the smaller the generalization gap 6. Learning rate: The higher the learning rate (smaller than the maximum for each optimizer) the smaller the generalization gap 7. Optimizer: Generalization gap of Momentum SGD < Generalization gap of Adam < Generaliza- tion gap of RMSProp # A.5 Definition of Random Variables Since the measures are results of complicated interactions between the data, the model, and the train- ing procedures, we cannot manipulate it to be any values that we want. Instead, we use the following definition of random variables: suppose S is a subset of all the components of θ (e.g. S = {∅} for |S| = 0, |S| = {learning rate} for |S| = 1 or |S| = {learning rate, dropout} for |S| = 2 ). Specifically we denote Sab as the collective condition {θ(a) 1 = v1, θ(b) |S| = v2|S|}. 
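To make the specification of Appendix A.3 concrete, a minimal PyTorch sketch of the NiN-style model is given below. The 3×3 padding and the placement of ReLU and batch normalization inside each block are our assumptions; the text only fixes the convolution pattern, the per-block dropout, and the final 1×1 convolution with global average pooling.

```python
import torch.nn as nn

def nin_block(c_in, c_out, dropout_p):
    """One NiN-block: a 3x3 stride-2 convolution followed by two 1x1 convolutions,
    with dropout at the end. ReLU/batch-norm placement is our assumption."""
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, kernel_size=3, stride=2, padding=1),
        nn.BatchNorm2d(c_out), nn.ReLU(inplace=True),
        nn.Conv2d(c_out, c_out, kernel_size=1),
        nn.BatchNorm2d(c_out), nn.ReLU(inplace=True),
        nn.Conv2d(c_out, c_out, kernel_size=1),
        nn.BatchNorm2d(c_out), nn.ReLU(inplace=True),
        nn.Dropout(dropout_p),
    )

def build_nin(num_blocks=2, width=2 * 96, dropout_p=0.0,
              num_classes=10, in_channels=3):
    """Stack `num_blocks` NiN-blocks, then a 1x1 convolution down to the class
    count and global average pooling, as described in Appendix A.3."""
    blocks, c_in = [], in_channels
    for _ in range(num_blocks):
        blocks.append(nin_block(c_in, width, dropout_p))
        c_in = width
    return nn.Sequential(
        *blocks,
        nn.Conv2d(width, num_classes, kernel_size=1),
        nn.AdaptiveAvgPool2d(1),
        nn.Flatten(),
    )
```

We now return to the random-variable definitions above.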
We can then define and empirical measure four probability Pr(µ(a) > µ(b), g(a) > g(b) | Sab), Pr(µ(a) > µ(b), g(a) < g(b) | Sab), Pr(µ(a) < µ(b), g(a) > g(b) | Sab) and Pr(µ(a) < µ(b), g(a) < g(b) | Sab). g(a) > g(b) g(a) ≤ g(b) µ(a) > µ(b) p00 p10 µ(a) ≤ µ(b) p01 p11 Figure 3: Joint Probability table for a single Sab Together forms a 2 by 2 table that defines the joint distribution of the Bernoulli random variables Pr(g(a) > g(b) | Sab) and Pr(µ(a) > µ(b) | Sab). For notation convenience, we use Pr(µ, g | Sab) , Pr(g | Sab) and Pr(µ | Sab) to denote the joint and marginal. If there are N = 3 choices for each hyperparameter in S then there will be N |S| such tables for each hyperparameter combination. Since each configuration occurs with equal probability, for that arbitrary θ(a) and θ(b) drawn from Θ conditioned on that the components of S are observed for both models, the joint distribution can be defined as Pr(µ, g | S) = 1 Pr(µ, g | Sab) and likely the marginals can be defined as Pr(µ | S) = 1 Pr(g | Sab). With these notations established, all the relevant quantities can be computed by iterating over all pairs of models. 17 # A.6 All Results Below we present all of the measures we computed and their respective τ and Ψ on more than 10,000 models we trained and additional plots. Unless stated otherwise, convergence is considered when the loss reaches the value of 0.1. 19 vc dim 20 # params 51 sharpness 48 pacbayes 52 sharpness-orig 49 pacbayes-orig 40 frob-distance 25 spectral-init 26 spectral-orig 28 spectral-orig-main 33 fro/spec 32 prod-of-spec 31 prod-of-spec/margin 35 sum-of-spec 34 sum-of-spec/margin 41 spec-dist 37 prod-of-fro 36 prod-of-fro/margin 39 sum-of-fro 38 sum-of-fro/margin 22 1/margin 23 neg-entropy 44 path-norm 43 path-norm/margin 42 param-norm 45 fisher-rao 21 cross-entropy 53 1/σ pacbayes 1/σ sharpness 54 num-step-0.1-to-0.01-loss 64 63 num-step-to-0.1-loss 1/α0 sharpness mag 62 1/σ0 pacbayes mag 61 59 pac-sharpness-mag-init 60 pac-sharpness-mag-orig 56 pacbayes-mag-init 57 pacbayes-mag-orig 66 grad-noise-final grad-noise-epoch-1 65 oracle 0.01 oracle 0.02 oracle 0.05 oracle 0.1 canonical ordering canonical ordering depth ref batchsize 0.000 0.000 0.537 0.372 0.542 0.526 −0.317 −0.330 −0.262 −0.262 0.563 −0.464 −0.308 −0.464 −0.308 −0.458 0.440 0.513 0.440 0.520 −0.312 0.346 0.363 0.363 0.236 0.396 0.440 0.501 0.532 −0.151 −0.664 0.570 0.490 −0.293 0.401 0.425 0.532 0.452 0.071 0.579 0.414 0.123 0.069 −0.652 −0.032 dropout 0.000 0.000 −0.523 −0.457 −0.359 −0.076 −0.833 −0.845 −0.762 −0.762 0.351 −0.724 −0.782 −0.724 −0.782 −0.838 −0.199 −0.291 −0.199 −0.369 0.593 −0.529 −0.190 0.017 −0.516 0.147 −0.402 −0.033 −0.326 −0.069 −0.861 0.148 −0.215 −0.841 −0.514 −0.658 −0.480 0.119 0.378 0.885 0.673 0.350 0.227 0.969 0.001 learning optimizer depth rate 0.000 0.000 −0.909 0.000 0.000 −0.909 0.221 0.826 0.449 0.179 0.644 0.042 0.297 0.816 0.716 0.546 0.705 0.341 −0.718 0.526 −0.214 −0.721 −0.908 −0.208 −0.665 −0.908 −0.131 −0.665 −0.908 −0.131 0.326 −0.722 −0.909 −0.197 −0.702 −0.907 −0.166 0.909 −0.197 −0.722 0.909 −0.166 −0.702 0.738 −0.319 −0.568 0.321 0.538 −0.909 0.364 0.579 −0.907 0.321 0.913 0.538 0.882 0.380 0.598 −0.234 −0.758 −0.223 0.220 0.632 0.251 0.272 0.925 0.216 0.230 0.922 0.148 0.187 0.330 0.174 0.120 0.240 −0.516 0.149 0.390 0.140 0.346 0.200 0.744 0.296 0.776 0.711 0.114 −0.014 0.072 0.440 −0.030 −0.255 0.297 0.824 0.762 0.186 0.896 0.505 −0.698 −0.909 −0.240 0.181 0.321 −0.909 0.099 0.874 −0.035 0.188 0.902 0.508 0.245 0.427 0.141 0.121 0.376 −0.517 0.529 
0.920 0.736 0.346 0.742 0.548 0.132 0.401 0.305 0.223 0.132 0.086 0.733 0.909 −0.055 0.033 −0.909 −0.061 0.744 −0.898 weight decay 0.000 −0.171 −0.251 −0.154 0.000 −0.171 −0.175 −0.154 0.248 0.233 −0.004 0.066 −0.179 −0.142 0.398 0.591 0.185 0.360 0.564 −0.086 −0.669 −0.166 −0.263 −0.341 −0.313 −0.231 −0.576 −0.508 −0.073 −0.240 −0.537 −0.434 −0.073 −0.240 −0.537 −0.434 0.243 −0.142 −0.218 −0.559 −0.482 −0.148 −0.179 −0.570 −0.456 0.102 −0.223 −0.142 −0.218 −0.148 −0.179 0.064 −0.197 −0.182 −0.171 −0.110 −0.257 0.117 0.731 −0.101 −0.297 0.130 0.739 −0.088 −0.295 0.378 0.418 0.731 −0.101 0.738 −0.080 0.381 0.391 0.211 −0.125 −0.124 −0.121 0.124 0.280 0.305 0.052 0.160 0.147 0.346 0.406 −0.046 −0.021 −0.088 −0.016 0.043 −0.264 −0.279 −0.628 0.516 0.484 0.269 0.741 0.315 0.365 0.195 0.147 −0.631 −0.171 −0.225 −0.541 0.281 −0.171 −0.158 −0.059 0.052 0.175 0.069 0.284 0.410 0.186 0.292 0.311 0.230 0.098 0.070 0.037 0.682 0.851 0.502 0.498 0.726 0.316 0.236 0.456 0.142 0.136 0.241 0.093 0.171 0.402 0.005 0.024 −0.363 −0.138 width overall τ Ψ 0.282 0.064 0.400 0.293 0.665 −0.053 −0.008 0.104 −0.157 0.178 0.195 0.280 0.173 0.124 −0.170 0.177 0.551 0.080 0.232 0.056 0.609 0.263 0.592 0.148 0.370 0.374 0.073 0.090 0.149 0.303 0.399 −0.407 0.155 0.432 0.221 0.622 0.447 0.201 0.121 0.735 −0.020 Table 5: Complexity measures (rows), hyperparameters (columns) and the rank-correlation co- efficients with models trained on CIFAR-10. 18 #-param -entropy 1-over-sigma-pacbayes-mag 1-over-sigma-pacbayes 1-over-sigma-sharpness-mag 1-over-sigma-sharpness cross-entropy displacement fisher-rao fro-over-spec frob-distance grad-noise-epoch-1 grad-noise-final input-grad-norm margin oracle-0.01 oracle-0.02 oracle-0.05 pacbayes-mag-init pacbayes-mag-orig pacbayes-orig pacbayes parameter-norm path-norm-over-margin path-norm prod-of-spec-over-margin prod-of-spec random sharpness-mag-init sharpness-mag-orig sharpness-orig sharpness spec-init spec-orig-main spec-orig step-0.1-to-0.01 step-to-0.1 sum-of-fro-over-margin sum-of-fro-over-sum-of-spec sum-of-fro sum-of-spec-over-margin sum-of-spec vc-dim conditional entropy batchsize 0.0202 0.0120 0.0884 0.0661 0.1640 0.1086 0.0233 0.0462 0.0061 0.0019 0.0462 0.0051 0.0623 0.0914 0.0105 0.6133 0.4077 0.1475 0.0216 0.1160 0.0620 0.0053 0.0039 0.0943 0.1027 0.2466 0.2334 0.0005 0.0366 0.0125 0.1117 0.0545 0.2536 0.2266 0.2197 0.0125 0.0349 0.1200 0.0258 0.1292 0.0089 0.0127 0.0422 0.9836 dropout 0.0278 0.0656 0.1514 0.1078 0.2572 0.2223 0.0850 0.0530 0.0072 0.0065 0.0530 0.0016 0.0969 0.1374 0.0750 0.5671 0.3557 0.1167 0.0238 0.2249 0.1071 0.0164 0.0197 0.1493 0.1230 0.3139 0.3198 0.0002 0.0460 0.0143 0.2353 0.1596 0.3161 0.2903 0.2815 0.0031 0.0361 0.2269 0.0392 0.2286 0.0292 0.0324 0.0564 0.8397 learning rate 0.0259 0.0113 0.0813 0.0487 0.1228 0.0792 0.0118 0.0196 0.0020 0.0298 0.0196 0.0028 0.0473 0.1203 0.0078 0.6007 0.3929 0.1369 0.0274 0.1006 0.0392 0.0084 0.0066 0.1173 0.1308 0.2179 0.2070 0.0005 0.0391 0.0195 0.0809 0.0497 0.2295 0.2072 0.2045 0.0055 0.0397 0.1005 0.0055 0.1115 0.0406 0.0466 0.0518 0.9331 num_block 0.0044 0.0086 0.0399 0.0809 0.1424 0.0713 0.0075 0.1559 0.0713 0.0777 0.1559 0.0633 0.0934 0.0749 0.0133 0.5690 0.3612 0.1241 0.0046 0.0426 0.0597 0.0086 0.0115 0.0217 0.0315 0.1145 0.1037 0.0002 0.0191 0.0043 0.0658 0.0156 0.1179 0.0890 0.0808 0.0093 0.1046 0.0440 0.1111 0.0441 0.0951 0.0876 0.0039 0.8308 optimizer 0.0208 0.0120 0.1004 0.0711 0.1779 0.1196 0.0159 0.0502 0.0057 0.0036 0.0502 0.0113 0.0745 0.1084 0.0108 0.6171 0.4124 0.1515 0.0222 0.1305 0.0645 
0.0036 0.0064 0.1025 0.1056 0.2473 0.2376 0.0003 0.0374 0.0120 0.1223 0.0586 0.2532 0.2255 0.2180 0.0074 0.0485 0.1207 0.0312 0.1281 0.0089 0.0117 0.0422 0.9960 weight decay 0.0216 0.0155 0.1025 0.0589 0.1562 0.1041 0.0119 0.0379 0.0014 0.0015 0.0379 0.0027 0.0577 0.0853 0.0183 0.6108 0.4057 0.1469 0.0210 0.1316 0.0550 0.0066 0.0049 0.1054 0.1028 0.2540 0.2470 0.0006 0.0373 0.0134 0.1071 0.0599 0.2584 0.2355 0.2285 0.0043 0.0380 0.1060 0.0194 0.1134 0.0069 0.0096 0.0443 0.9746 width 0.0379 0.0125 0.0986 0.0858 0.1786 0.1171 0.0183 0.0506 0.0071 0.0005 0.0506 0.0052 0.0763 0.1057 0.0119 0.6191 0.4154 0.1535 0.0345 0.1246 0.0977 0.0185 0.0167 0.1090 0.1160 0.2497 0.2394 0.0009 0.0761 0.0142 0.1254 0.0700 0.2540 0.2262 0.2181 0.0070 0.0568 0.1645 0.0355 0.1714 0.0054 0.0080 0.0627 0.9977 |S| = 0 0.0200 0.0117 0.0960 0.0664 0.1741 0.1159 0.0161 0.0504 0.0059 0.0000 0.0504 0.0036 0.0712 0.1042 0.0108 0.6186 0.4130 0.1515 0.0202 0.1252 0.0629 0.0030 0.0039 0.1011 0.1030 0.2481 0.2385 0.0003 0.0368 0.0111 0.1189 0.0583 0.2539 0.2262 0.2188 0.0055 0.0502 0.1227 0.0297 0.1300 0.0051 0.0076 0.0412 N/A |S| = 1 0.0036 0.0072 0.0331 0.0454 0.1145 0.0592 0.0062 0.0183 0.0013 0.0005 0.0183 0.0013 0.0441 0.0623 0.0072 0.4727 0.2987 0.0980 0.0038 0.0354 0.0365 0.0036 0.0038 0.0181 0.0261 0.0951 0.0862 0.0001 0.0159 0.0036 0.0547 0.0130 0.0980 0.0739 0.0671 0.0026 0.0303 0.0366 0.0051 0.0366 0.0054 0.0079 0.0033 N/A |S| = 2 0.0000 0.0065 0.0241 0.0340 0.0544 0.0256 0.0040 0.0128 0.0018 0.0013 0.0128 0.0013 0.0329 0.0426 0.0051 0.2879 0.1637 0.0503 0.0004 0.0221 0.0225 0.0040 0.0047 0.0139 0.0240 0.0483 0.0415 0.0004 0.0134 0.0033 0.0224 0.0123 0.0559 0.0382 0.0359 0.0032 0.0134 0.0110 0.0027 0.0119 0.0072 0.0099 0.0000 N/A Table 6: Complexity measures (rows), hyperparameters (columns) and the mutual information with models trained on CIFAR-10. 700 2 Distribytion,of Training Error 600 - 500 - 400 300 - 200 - 100 - 0- q . 1 -_ 7 rc —0.002 0.000 0.002 0.004 0.006 0.008 0.010 0.012 Training Error Figure 4: Distribution of training error on the trained models. 
19 weight learning decay optimizer rate 0.0000 −0.0478 −0.3074 −0.1497 0.0000 0.0000 −1.0000 vc dim 0.0000 −0.0478 −0.1934 −0.1497 0.0000 0.0000 −1.0000 # params 0.2497 0.1708 0.2444 0.5438 0.1202 0.9752 0.4569 sharpness 0.0499 0.0831 −0.2123 0.3688 0.0034 0.9447 0.0503 pacbayes 0.3654 0.5018 0.2196 0.5175 0.1923 0.9595 0.6329 sharpness 0ref 0.3708 0.3185 0.4583 0.2286 0.8863 0.0655 0.5979 pacbayes 0ref −0.1071 −0.8603 −0.6270 0.1765 −0.2196 0.8874 −0.1677 −0.6319 −0.0302 displacement −0.2854 −0.7928 −0.6423 −0.9989 −0.1063 −0.2913 −0.0799 −0.6284 −0.4567 spectral complexity 0.0671 −0.1096 −0.6163 −0.3290 spectral complexity 0ref −0.1362 −0.6110 −0.4688 −0.9932 −0.0513 0.0671 −0.2797 −0.5870 −0.3490 spectral complexity 0ref last2 −0.1362 −0.6110 −0.4688 −0.9628 −0.0513 0.2501 0.0525 −0.1264 0.6047 0.2317 spectral complexity 0ref last1 −0.2603 −0.5835 −0.6095 −0.9628 −0.1063 −0.0343 −0.2705 −0.5615 −0.4039 spectral product −0.2582 −0.6419 −0.5852 −0.9289 −0.0918 −0.0681 −0.2477 −0.5404 −0.4031 spectral product om 0.4627 −0.1237 −0.2603 −0.5835 −0.6095 spectral product dd/2 0.4421 −0.1287 −0.2582 −0.6419 −0.5852 spectral produce dd/2 om 0.3542 −0.1142 spectral sum −0.2734 −0.7752 −0.3386 0.1238 frob product 0.0126 −0.4983 0.6508 0.1861 0.1070 frob product om 0.0091 −0.5001 0.6375 0.2079 0.4074 frob product dd/2 0.5928 0.0126 0.6508 0.1861 0.0091 frob product dd/2 om 0.3855 0.5638 0.6375 0.2079 0.0216 −0.3829 −0.0554 median margin 0.3211 0.3861 −0.1519 −0.9314 −0.1018 0.9955 0.2166 0.6360 0.0026 0.6277 −0.2289 input grad norm 0.0216 0.0383 0.7999 0.0492 0.3001 0.1360 −0.2460 −0.0106 0.1481 logit entropy −0.0320 −0.4506 0.2936 0.5626 0.3885 0.1018 0.9854 0.0464 path norm 0.0614 0.2150 0.2565 0.1227 0.1383 −0.0398 0.0780 0.1730 parameter norm 0.3747 0.6639 0.3246 −0.4794 0.0227 0.0546 −0.2844 0.3190 0.1008 0.0222 −0.6189 fr norm cross-entropy 0.0500 0.2313 0.0643 0.0546 −0.1168 0.3190 0.1008 0.0222 −0.3277 fr norm logit sum 0.0500 0.2313 0.0643 0.0546 −0.1168 0.3190 0.1008 0.0222 −0.3277 fr norm logit margin 0.0500 0.2313 0.2429 0.5798 0.1504 0.9978 0.1340 path norm/margin 0.0683 0.2098 0.1107 0.0291 0.1602 −0.0445 −0.0034 0.4390 −0.5989 one epoch loss 0.1697 0.5186 0.9729 0.2624 0.1118 −0.0432 −0.0693 −0.0258 0.0811 0.0923 −0.4091 −0.0042 −0.0096 final loss 0.3087 0.1512 0.4985 0.2280 0.6665 0.1867 −0.1862 1/sigma gaussian 0.3723 0.5163 0.2253 0.9363 0.2321 −0.1549 1/sigma sharpness 0.2179 0.0766 0.6633 min(norm distance) 0.1223 0.1391 −0.0405 0.3235 −0.4785 0.0737 −0.0415 −0.0154 −0.0720 −0.0167 0.1556 step between 0.0035 −0.2666 0.8738 −0.1609 −0.6314 −0.1015 step to 0.0944 −0.2524 0.9556 −0.1450 −0.5974 −0.0414 step to 0.1 0.5173 0.5676 0.2680 0.9831 1/param sharpness 1/param gaussian 0.3362 0.5674 0.0871 0.9805 −0.0787 −0.7181 −0.4883 −1.0000 −0.0640 −0.4720 −0.0502 −0.2254 −0.4102 ratio cplx sharpness 0.1648 ratio cplx sharpness 0ref 0.2440 −0.0502 −0.1687 −0.0298 0.3153 −1.0000 0.1625 −0.0429 −0.0484 −0.1309 −0.1116 ratio cplx gaussian 0.2298 −0.9786 0.0542 −0.0484 −0.1682 −0.1709 0.1304 ratio cplx gaussian 0ref 0.2351 −0.9842 0.4040 −0.0434 −0.1580 −0.0034 0.1830 ratio cplx sharpness u1 0.5492 −0.9707 0.0818 0.5463 −0.0422 −0.1364 0.2421 ratio cplx sharpness 0ref u1 0.6476 −0.9650 0.0674 0.5052 0.0302 0.1346 −0.3957 ratio cplx gaussian u1 0.9707 0.3340 0.6390 0.1464 0.2924 0.1812 ratio cplx gaussian 0ref u1 0.9887 0.1305 0.0594 0.3211 0.0343 grad var 0.1149 0.1711 0.0806 0.1222 0.2760 −0.0046 0.0118 −0.0534 grad var 1 epoch 0.5012 0.8070 0.3572 0.3946 0.3478 0.9517 oracle 0.01 0.3432 
0.6854 0.1741 0.2190 0.1886 0.8730 oracle 0.02 0.2010 0.5162 0.0785 0.1057 0.0522 0.6706 oracle 0.05 0.1017 0.3322 0.0512 0.0526 0.4356 oracle 0.1 0.0408 0.0478 canonical ordering 0.3620 0.0123 0.6662 1.0000 −0.1028 0.0262 −0.6241 0.0253 −0.0332 canonical ordering depth Table 7: Complexity measures (rows), hyperparameters (columns) and the rank-correlation co- efficients with models trained on SVHN dataset. 20 weight learning decay rate dropout batchsize 0.0000 −0.0392 −0.1770 −0.1130 0.0000 −0.7520 0.0000 0.0000 vc dim 0.0000 −0.0392 −0.1194 −0.1130 0.0000 −0.7520 0.0000 0.0000 # params 0.0973 0.6358 −0.0532 −0.0127 −0.0317 0.2059 −0.1966 0.1336 sharpness 0.0343 0.5493 −0.0570 −0.2340 −0.0563 0.1480 −0.0488 −0.0611 pacbayes 0.2181 0.1563 −0.0058 0.6262 0.4462 0.0167 0.2271 sharpness 0ref 0.5238 0.2430 0.1318 −0.0174 0.5282 0.1655 0.2587 pacbayes 0ref −0.1814 −0.7677 −0.6504 0.3767 −0.2403 −0.3831 −0.0392 −0.2652 −0.2693 displacement −0.1495 −0.5752 −0.6208 −0.7407 −0.2650 −0.2885 −0.0945 −0.4333 −0.3906 spectral complexity spectral complexity 0ref −0.0837 −0.4196 −0.4747 −0.7379 −0.1776 −0.1468 −0.1085 −0.3860 −0.3070 spectral complexity 0ref last2 −0.0837 −0.4196 −0.4747 −0.7284 −0.1776 −0.1468 −0.1857 −0.3940 −0.3166 0.2210 spectral complexity 0ref last1 −0.2034 −0.5619 −0.6199 −0.7520 −0.2184 −0.1269 −0.0691 −0.4176 −0.3645 spectral product −0.1257 −0.4727 −0.5549 −0.7181 −0.2260 −0.2113 −0.1707 −0.4238 −0.3542 spectral product om 0.0547 −0.1496 −0.2034 −0.5619 −0.6199 spectral product dd/2 0.7520 −0.2184 −0.1269 −0.0691 0.7501 −0.2260 −0.2113 −0.1707 −0.1257 −0.4727 −0.5549 spectral produce dd/2 om 0.0868 −0.1445 0.5832 −0.3751 −0.0899 −0.0392 −0.1517 −0.2184 spectral sum −0.2005 −0.8378 −0.5692 0.1013 frob product 0.0054 −0.2162 0.4656 0.3609 0.4967 −0.7520 0.0592 frob product om 0.0130 −0.2113 0.3729 0.2365 0.4613 −0.7520 0.3180 frob product dd/2 0.3407 0.0054 0.4656 0.3609 0.7652 0.4967 0.2758 frob product dd/2 om 0.0130 0.3356 0.3729 0.2365 0.7643 0.4613 0.0046 median margin 0.2142 −0.1295 0.1263 0.1738 0.3153 −0.0850 −0.5474 −0.1652 0.1498 input grad norm 0.3563 0.0088 0.7379 −0.1871 −0.0009 0.6548 −0.2502 0.0851 0.0699 logit entropy 0.1378 0.1614 −0.2819 −0.2095 0.5584 0.3906 0.2200 −0.3496 0.3451 path norm 0.3892 0.2223 0.2593 0.0420 0.8161 0.2951 0.2549 0.5258 0.0865 0.1569 −0.0458 0.3754 parameter norm 0.1607 0.2716 0.1287 0.2472 −0.0090 0.0246 −0.0245 0.0231 0.0355 0.0162 −0.5314 −0.1595 fr norm cross-entropy 0.3722 0.0727 0.0394 0.1780 0.0231 0.0355 0.0162 −0.0844 −0.1595 fr norm logit sum 0.3722 0.0727 0.0394 0.1780 0.0355 0.0162 −0.0844 −0.1595 fr norm logit margin 0.0231 0.3722 0.0727 0.1206 0.7718 0.3314 path norm/margin 0.2172 0.3580 0.0571 −0.0558 0.2510 0.0441 0.0684 −0.0012 −0.0425 −0.1217 −0.0174 0.0655 0.1843 −0.4509 one epoch loss 0.0544 0.1410 −0.0321 0.3484 −0.2080 −0.1140 −0.2236 0.1452 −0.1095 −0.0630 final loss 0.2272 0.3213 0.1298 0.3698 0.4993 0.1905 0.2525 1/sigma gaussian 0.0660 0.1912 0.3005 0.1191 −0.0073 0.6097 0.3879 0.2120 1/sigma sharpness 0.0008 0.1607 0.1569 −0.0458 0.1287 min(norm distance) 0.0865 0.3754 0.2472 −0.0090 0.1688 −0.0053 −0.0747 −0.0792 step between 0.0124 0.0621 −0.0168 −0.0210 0.3199 −0.1076 −0.4497 −0.0095 −0.2071 −0.2161 −0.3219 −0.5252 −0.4186 step to 0.2859 −0.0699 −0.4231 −0.0062 −0.2350 −0.2331 −0.3219 −0.8336 −0.2626 step to 0.1 0.2555 0.6430 0.4458 1/param sharpness 1/param gaussian 0.1525 0.6820 0.4001 −0.1776 −0.7743 −0.6476 −0.7520 −0.2498 −0.3803 −0.0392 −0.1602 −0.4315 ratio cplx sharpness 0.5033 −0.7520 0.3789 
−0.0109 0.3067 ratio cplx sharpness 0ref 0.0937 0.2688 −0.0392 −0.0867 0.1203 −0.7501 0.1404 −0.2537 0.0446 −0.2183 −0.0392 −0.1123 −0.1366 ratio cplx gaussian 0.2961 −0.7520 0.1309 −0.4026 0.0389 −0.1434 −0.0392 −0.1075 −0.1245 ratio cplx gaussian 0ref 0.1958 −0.7520 −0.0114 0.2091 −0.1873 0.1140 −0.0392 −0.0971 −0.0673 ratio cplx sharpness u1 0.1652 0.5110 −0.7520 0.2615 0.0669 ratio cplx sharpness 0ref u1 0.0666 0.2527 −0.0392 −0.0774 0.0047 −0.3558 −0.1296 0.6690 0.0658 −0.2413 −0.0411 0.1672 −0.0040 ratio cplx gaussian u1 0.0722 −0.0239 −0.0468 0.6954 0.4737 0.2234 −0.0346 0.1942 0.3329 ratio cplx gaussian 0ref u1 0.0250 0.1035 −0.0652 0.3706 0.3514 0.1013 0.2730 0.1656 0.3538 grad var 0.0814 0.1328 0.1349 0.3792 −0.3701 0.4045 0.0801 0.1204 0.1279 grad var 1 epoch 0.5123 0.5464 0.5878 0.8274 0.7507 0.8862 0.5789 0.6700 0.8470 oracle 0.01 0.3927 0.3440 0.3970 0.5804 0.5922 0.7288 0.3588 0.4848 0.7032 oracle 0.02 0.2336 0.1697 0.1473 0.2937 0.3066 0.4149 0.1114 0.1918 0.4267 oracle 0.05 0.1401 0.2423 0.0876 0.0692 0.1225 0.1957 0.1738 0.2281 0.1037 oracle 0.1 0.4628 0.7125 −0.3254 canonical ordering 0.3610 0.0392 −0.0151 0.7520 −0.0598 0.9459 0.0353 −0.0054 −0.2835 −0.1120 0.0105 −0.7520 −0.0152 −0.0238 −0.0337 canonical ordering depth Table 8: Complexity measures (rows), hyperparameters (columns) and the rank-correlation co- efficients with models trained on CIFAR-10 when converged to Loss = 0.1. 21 vc dim # params sharpness pacbayes sharpness-orig pacbayes-orig frob-distance spec-init spec-orig spec-orig-main fro / spec prod-of-spec prod-of-spec/margin sum-of-spec sum-of-spec/margin spec-dist prod-of-fro prod-of-fro/margin sum-of-fro sum-of-fro/margin 1/margin input grad norm neg-entropy path-norm param-norm fisher-rao fr norm logit sum fr norm logit margin path norm/margin one epoch loss cross-entropy 1/sigma pacbayes 1/sigma sharpness min(norm distance) num-step-0.1-to-0.01-loss step to num-step-to-0.1-loss 1/alpha sharpness mag 1/alpha pacbayes mag pac-sharpness-mag-init pac-sharpness-mag-orig pacbayes-mag-init pacbayes-mag-orig ratio cplx sharpness u1 ratio cplx sharpness 0ref u1 ratio cplx gaussian u1 ratio cplx gaussian 0ref u1 grad-noise-final grad-noise-epoch-1 oracle 0.01 oracle 0.02 oracle 0.05 oracle 0.1 canonical ordering canonical ordering depth batchsize dropout optimizer 0 0 0.5492 −0.5155 0.3896 −0.4459 0.5493 −0.3492 0.5399 −0.0847 0 0 0.3661 0.5884 0.658 −0.0219 −0.0086 0.3703 0.7501 −0.9014 0.4659 −0.1885 0.5377 −0.372 0.4659 −0.1885 0.5377 −0.372 0.5283 −0.9072 0.5888 −0.9072 0.9099 0.5283 0.8832 0.5888 −0.3334 0.5235 0.3686 −0.5443 0.2457 0.2414 −0.5194 0.1625 0.4327 0.1625 0.4327 0.1625 0.4327 0.3692 −0.2022 0.3939 −0.4362 0.4443 −0.4015 0.5109 −0.0349 0.536 −0.3169 0.2414 −0.5194 0.6239 0.0544 0.6326 0.2609 0.9296 0.0397 0.1611 0.3346 0.2494 −0.5317 0.2494 −0.094 0.2494 −0.094 0.2159 0.0477 0.1518 0.7551 0.7154 0.1611 −0.1458 −0.0816 −0.0166 −0.6798 −0.5418 −0.4441 −0.8526 −0.2662 −0.68 0.7537 0.5203 0.262 0.9189 0.1573 0.3821 0.2032 0.7529 0.3346 0.1318 0.3493 −0.0578 −0.6909 0.4545 −0.0291 −0.6484 0.7371 0.3163 0.8181 0.1628 0.1907 0.8959 0.5802 0.1381 0.5089 −0.2388 0.852 0.7197 0.4518 0.259 Φ 0.0025 −0.012 −0.0019 −0.9073 Table 9: Complexity measures (rows), hyperparameters (columns) and the average rank- correlation coefficients over 5 runs with models trained on CIFAR-10. The numerical values are consistent of that of Table 5. 
22 vc dim # params sharpness pacbayes sharpness-orig pacbayes-orig frob-distance spec-init spec-orig spec-orig-main fro / spec prod-of-spec prod-of-spec/margin sum-of-spec sum-of-spec/margin spec-dist prod-of-fro prod-of-fro/margin sum-of-fro sum-of-fro/margin 1/margin input grad norm neg-entropy path-norm param-norm fisher-rao fr norm logit sum fr norm logit margin path norm/margin one epoch loss cross-entropy 1/sigma pacbayes 1/sigma sharpness min(norm distance) num-step-0.1-to-0.01-loss step to num-step-to-0.1-loss 1/alpha sharpness mag 1/alpha pacbayes mag pac-sharpness-mag-init pac-sharpness-mag-orig pacbayes-mag-init pacbayes-mag-orig ratio cplx sharpness u1 ratio cplx sharpness 0ref u1 ratio cplx gaussian u1 ratio cplx gaussian 0ref u1 grad-noise-final grad-noise-epoch-1 oracle 0.01 oracle 0.02 oracle 0.05 oracle 0.1 canonical ordering canonical ordering depth batchsize 0 0 0.0124 0.0171 0.0082 0.011 0.0102 0.0061 0.0015 0.0015 0.0164 0.0053 0.0075 0.0053 0.0075 0.012 0.016 0.0112 0.016 0.0112 0.0191 0.0147 0.0163 0.0103 0.0125 0.0192 0.0192 0.0192 0.0095 0.0169 0.0221 0.0095 0.0084 0.0125 0.0049 0.0118 0.0119 0.0108 0.0198 0.0113 0.016 0.022 0.0221 0.0177 0.0124 0.0205 0.0239 0.0447 0.0547 0.0178 0.0133 0.0091 0.0188 0.0111 0.018 dropout 0 0 0.0129 0.0159 0.0106 0.0062 0.0049 0.0029 0.0096 0.0096 0.0105 0.0109 0.0078 0.0109 0.0078 0.0095 0.0096 0.0126 0.0096 0.0126 0.0059 0.0186 0.0169 0.006 0.0061 0.0153 0.0153 0.0153 0.0172 0.0128 0.0128 0.0031 0.009 0.0061 0.0094 0.011 0.0059 0.0224 0.0166 0.0039 0.0061 0.0059 0.0077 0.0134 0.0079 0.0106 0.0126 0.0598 0.0165 0.0078 0.0135 0.0249 0.0333 0.004 0.0226 learning rate 0 0 0.0153 0.0108 0.0062 0.0111 0.0067 0.0072 0.0072 0.0072 0.0034 0.0048 0.0082 0.0048 0.0082 0.0081 0.0117 0.0083 0.0117 0.0083 0.0154 0.019 0.012 0.0079 0.0071 0.0084 0.0084 0.0084 0.0054 0.0146 0.0174 0.0081 0.0077 0.0071 0.0071 0.0162 0.0101 0.0048 0.0084 0.0139 0.0127 0.0171 0.0083 0.0127 0.0052 0.0075 0.0035 0.0628 0.0542 0.0153 0.0081 0.0133 0.0292 0.0073 0.0208 depth 0.0038 0.0038 0.0036 0.0086 0.0073 0.0083 0.0058 0.004 0.004 0.0037 0.0048 0.0037 0.0039 0.0037 0.0035 0.0084 0.0037 0.0037 0.0034 0.0054 0.0068 0.0018 0.0093 0.0034 0.0077 0.0083 0.0169 0.0169 0.0056 0.0066 0.0138 0.0066 0.0126 0.0077 0.0182 0.0169 0.0236 0.0082 0.0037 0.0037 0.0037 0.0037 0.0037 0.0036 0.0039 0.0019 0.0028 0.0337 0.0316 0.0108 0.0138 0.0136 0.0341 0.0038 0.0038 optimizer 0 0 0.0196 0.0074 0.0192 0.0162 0.017 0.0192 0.0166 0.0166 0.0205 0.0237 0.0225 0.0237 0.0225 0.0221 0.0191 0.0224 0.0191 0.0224 0.0221 0.0222 0.022 0.0174 0.0083 0.0311 0.0311 0.0311 0.0157 0.0223 0.0151 0.0173 0.0185 0.0083 0.0147 0.0135 0.0191 0.0262 0.0228 0.0186 0.0188 0.0173 0.0213 0.0261 0.0266 0.0156 0.0173 0.0394 0.082 0.0189 0.0272 0.0171 0.0145 0.0185 0.0198 weight decay 0 0 0.0154 0.0078 0.0151 0.013 0.0102 0.0191 0.0234 0.0234 0.0151 0.0249 0.0232 0.0249 0.0232 0.0122 0.0121 0.0093 0.0121 0.0093 0.0079 0.0161 0.0184 0.0115 0.0051 0.01 0.01 0.01 0.0224 0.0126 0.014 0.0132 0.0119 0.0051 0.0081 0.0101 0.0148 0.0097 0.015 0.0155 0.0139 0.0131 0.0134 0.012 0.0056 0.01 0.0087 0.0243 0.0173 0.0086 0.0167 0.015 0.0185 0.0108 0.0273 width 0.0179 0.0179 0.0181 0.0169 0.0164 0.0173 0.0176 0.0127 0.0136 0.0083 0.0203 0.0101 0.0054 0.0101 0.0054 0.0177 0.0174 0.0141 0.0174 0.0141 0.0224 0.011 0.0204 0.0178 0.0175 0.0158 0.0158 0.0158 0.0192 0.0173 0.0183 0.0162 0.0121 0.0175 0.0222 0.012 0.0152 0.0201 0.0237 0.0179 0.0179 0.0179 0.0179 0.0183 0.0183 0.0218 0.017 0.0363 0.0514 0.026 0.0058 0.0239 0.0321 0.0179 
0.0202 overall τ 0.0006 0.0009 0.0026 0.0008 0.0034 0.0025 0.0035 0.001 0.0009 0.0004 0.0024 0.0008 0.0006 0.0014 0.0015 0.0036 0.0014 0.0014 0.0024 0.002 0.0026 0.0043 0.0025 0.0014 0.0016 0.0069 0.0075 0.0075 0.0019 0.005 0.0023 0.0035 0.0039 0.0016 0.0023 0.002 0.002 0.0031 0.0044 0.0011 0.0008 0.001 0.0009 0.0009 0.0006 0.0031 0.0041 0.0309 0.0478 0.0026 0.0033 0.0076 0.0107 0.0027 0.0046 Ψ 0.0026 0.0026 0.0056 0.0048 0.0048 0.0047 0.0043 0.0045 0.0049 0.0046 0.0055 0.0055 0.0051 0.0055 0.0051 0.0052 0.0052 0.0049 0.0052 0.0049 0.0060 0.0061 0.0064 0.0044 0.0038 0.0065 0.0068 0.0068 0.0056 0.0058 0.0062 0.0044 0.0045 0.0038 0.0051 0.0050 0.0058 0.0062 0.0065 0.0051 0.0052 0.0057 0.0057 0.0061 0.0052 0.0054 0.0054 0.0170 0.0186 0.0061 0.0058 0.0066 0.0102 0.0045 0.0076 Table 10: Complexity measures (rows), hyperparameters (columns) and the standard deviation of each entry measured over 5 runs with models trained on CIFAR-10. The standard deviation for Ψ is computed assuming that each hyperparamters are independent from each other. We see that all standard deviation are quite small, suggesting the results in of Table 5 are statistically significant. 23 # B Extended Notation Given any margin value γ ≥ 0, we define the margin loss Lγ as follows: Ly (fw) = Eexy~ |T(fw(Xly] < + max fw(X)U]) (10) and Ly is defined in an analogous manner on the training set. Further, for any vector v, we denote by ||v||, the 22 norm of v. For any tensor W, let |/W||,, = ||vec(W)||. We also denote ||W]|, as the spectral norm of the tensor W when used with a convolution operator. For convolutional operators, we compute the true singular value with the method proposed by Sedghi et al. (2018) through FFT. We denote a tensor as A, vector as a, and scalar as A or a. For any 1 < j < k, consider a k-th order tensor A and a j-th order tensor B where dimensions of B match the last j dimensions of A. We then define the product operator ®;: (A @; B)i,iy_; = (Ai ,B), (11) where i1, . . . , ik−j are indices. We also assume that the input images have dimension n × n and there are κ classes. Given the number of input channels cin, number of output channels cout, 2D square kernel with side length k, stride s, and padding p, we define the convolutional layer convW,s,p as follows: n+2p—k Fy convw,s,p(X)iri2 = W@spatch,;, 1) 41,5(é.—1)41,4 (Pad,(X)) VI Stine < | | (12) where W ∈ Rcout×cin×k×k is the convolutional parameter tensor, patchi,j,k(Z) is a k × k patch of Z starting from the point (i, j), and padp is the padding operator which adds p zeros to top, bottom, left and right of X: ( padp(X)i1,i2,j = Xi1,i2 0 p < i1, i2 ≤ n + p otherwise . (13) We also define the max-pooling operator poolk,s,p as follows: n+2p—k J (14) POONg, 5 p(X)ar iz, = Max(Patch,(,—1)41,5(é2—-1) 41 (Pad,(%:,.;))) WS iyi < | (14) We denote by fW,s a convolutional network such that Wi ∈ Rci×ci−1×ki×ki is the convolution tensor and si is the convolutional stride at layer i. At Layer i, we assume the sequence of convolution, ReLU and max-pooling where the max pooling has kernel k0 i. Lack of max-pooling in some layers can be achieved by setting k0 i = 1. We consider classification tasks and denote the number of classes by κ. # C Complexity Measures In this section, we look at different complexity measures. 
When a measure µ is based on a general- ization bound, we chose it so that the following is true with probability 0.99 (we choose the failure probability δ to be 0.01): L ≤ ˆL + r µ m (15) We also consider measures which do not provably bound the generalization error and evaluate those. Note that in almost all cases, the canonical ordering given based on some “common" assumptions are positively correlated with the generalization in terms of both τ and Ψ; however, for optimizer, the correlation τ is close to 0. This implies that the choice of optimizer is only essentially uncorrelated with the generalization gap in the range of models we consider. This ordering helps validate many techniques used by the practioners. 24 # C.1 VC-Dimension Based Measures We start by restating the theorem in (Bartlett et al., 2019) which provides an upper bound on the VC-dimension of any piece-wise linear network. Theorem 1 (Bartlett et al. (2019)) Let F be the class of feed-forward networks with a fixed computation graph of depth d and ReLU activations. Let ai and qi be the number of activations and parameters in layer i. Then VC-dimension of F can be bounded as follows: VC(F) ≤ d + d X (d − i + 1)qi ! log2 8e d X iai log2 4e d X jaj i=1 i=1 j=1 Theorem 2 Given a convolutional network f , for any δ > 0, with probability 1 − δ over the the training set: # s r d log2 (6dn)3 Pd i=1 k2 i cici−1 log(1/δ) m L ≤ ˆL + 4000 + m (16) Proof We simplify the bound in Theorem 1 using a d0 to refer to the depth instead of d: VC(F) ≤ d0 + d0 X (d − i + 1)qi log2 8e d0 X iai log2 4e d0 X jaj i=1 i=1 j=1 ≤ d0 + d0 X (d0 − i + 1)qi log2 8e d0 X iai 2 i=1 i=1 ≤ d0 + 2 log2 8e d0 X iai d0 X (d0 − i + 1)qi i=1 i=1 ≤ 3d0 log2 8e d0 X iai d0 X qi i=1 i=1 In order to extend the above bound to a convolutional network, we need to present a pooling layer with ReLU activations. First note that maximum of two inputs can be calculated using two layers with ReLU and linear activations as max(x1, x2) = x1 + ReLU (x2 − x1). Now, since max-pooling at layer i has kernel sizes k0 i)e layers to present that but given that the kernel size of the max-pooling layer is at most size of the image, we have d4 log2(k0 i)e ≤ d4 log2(n2)e ≤ d8 log2(n)e ≤ 9 log2(n) Therefore, we have d0 ≤ 9d log2(n). The number of activations in any of these layers is at most n2ci since there are at most n2 pairs of neighbor pixels in an n × n image with ci channels. We ignore strides when calculating the upper bound since it only reduces number of activations at a few layers and does not change the bound significantly. Using these bounds ond0, ai and qi the equivalent network, we can bound the VC dimension as follows: d VC(F) < 27d logs (n) logy (Se(9d logy (n))?n”) (9 logs (n)) Ss keas(c +1) i=1 d < 729d log,(n)* log, (6dn) Ss kci-1 (ce; +1) i=l d < 729d logy (6dn)? S~ k?e;-1 (ci +1) i=l 25 For binary classifiers, generalization error can be in terms of Rademacher complexity (Mohri et al., 2012) which in turn can be bounded by 72pVC/m (Kontorovich, 2016). Therefore, we can get the following9 generalization bound: # r # r L ≤ ˆL + 144 V C(F) m + log(1/δ) m (17) For multi-class classification, the generalization error can be similarly bounded by Graph dimension which is an extension of VC-dimension. A simple approach get a bound on Graph dimension is to consider all pairs of classes as binary classification problem which bounds the graph dimension by κ2 V C(F). 
There, putting everything together, we get the following generalization bound: # s L ≤ ˆL + 4000κ d log2 (6dn)3 Pd i=1 k2 m i ci−1(ci + 1) + r log(1/δ) m (18) Inspired by Theorem 2, we define the following V C-based measure for generalization: µV C(fw) = 4000κ v u u td log2 (6dn)3 d X i ci−1(ci + 1) + plog(1/δ) k2 2 i=1 (19) Since some of the dependencies in the above measure are probably proof artifacts, we also define another measure that is nothing but the number of parameters of the model: µparam = d X k2 i ci−1(ci + 1) i=1 (20) # C.1.1 Measures on the output of the network While measures that can be calculated only based on the output of the network cannot reveal complexity of the network, they can still be very informative for predicting generalization. Therefore, we define a few measures that can be calculated solely based on the output of the network. We start by looking at the cross-entropy over the output. Even though we used a cross-entropy based stopping criterion, the cross-entropy of the final models is not exactly the same as the stopping criterion and it could be informative. Hence we define the following measure: µcross-entropy = 1 m m X i=1 ‘(fw(Xi), yi) (21) where ‘ is the cross-entropy loss. In all measures that involve margin γ, we set the margin γ to be the 10-th percentile of the margin values on the training set and therefore ensuring ˆLγ ≤ 0.1. Even though margin alone is not a sensible generalization measure and can be artificially increased by scaling up the magnitude of the weights, it could still reveal information about training dynamics and therefore be informative. We report the following measure based on the margin: µ1/margin(fw) = 1 γ2 (22) Finally, entropy of the output is another interesting measure and it has been shown that regular- izing it can improve generalization in deep learning (Pereyra et al., 2017). With a fixed cross-entropy, increasing the entropy corresponds to distribute the uncertainty of the predictions equally among the wrong labels which is connected to label smoothing and increasing the margin. We define the following measure which is the negative entropy of the output of the network: µneg-entropy(fw) = 1 m m X i=1 κ X j=1 pi[j] log(pi[j]) (23) where pi[j] is the predicted probability of the class j for the input data Xi. 9The generalization gap is bounded by two times Rademacher Complexity, hence the constant 144. 26 # C.2 (Norm & Margin)-Based Measures Several generalization bounds have been proved for neural networks using margin and norm notions. In this section, we go over several such measures. For fully connected networks, Bartlett and Mendelson (2002) have shown a bound based on product of ‘1,∞ norm of the layer weights times a 2d factor where ‘1,∞ is the maximum over hidden units of the ‘2 norm of the incoming weights to the hidden unit. Neyshabur et al. (2015b) proved a bound based on product of Frobenius norms of the layer weights times a 2d factor and Golowich et al. (2017) was able to improve the factor to d. Bartlett et al. (2017) proved a bound based on product of spectral norm of the layer weights times sum over layers of ratio of Frobenius norm to spectral norm of the layer weights and Neyshabur et al. (2018a) showed a similar bound can be achieved in a simpler way using PAC-bayesian framework. Spectral Norm Unfortunately, none of the above founds are directly applicable to convolutional networks. Pitas et al. (2017) built on Neyshabur et al. 
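The output-based measures above depend only on the network's training-set predictions. As a concrete illustration, the following NumPy sketch computes the cross-entropy of Equation 21, the 10th-percentile margin and the 1/γ² measure of Equation 22, and the negative entropy of Equation 23 from a matrix of logits; the function name and the small numerical epsilon are our own additions, not part of the paper's code.

```python
import numpy as np

def output_measures(logits, labels, percentile=10):
    """Measures computed solely from the network outputs (Eqs. 21-23).

    logits: (m, kappa) array of pre-softmax outputs on the training set.
    labels: (m,) array of integer class labels.
    """
    m = logits.shape[0]
    # Stabilized softmax probabilities p_i[j].
    shifted = logits - logits.max(axis=1, keepdims=True)
    probs = np.exp(shifted) / np.exp(shifted).sum(axis=1, keepdims=True)

    # Average cross-entropy loss over the training set (Eq. 21).
    cross_entropy = float(-np.log(probs[np.arange(m), labels] + 1e-12).mean())

    # Per-example margin f(X)[y] - max_{j != y} f(X)[j], computed on the logits.
    true_logit = logits[np.arange(m), labels].copy()
    others = logits.copy()
    others[np.arange(m), labels] = -np.inf
    margins = true_logit - others.max(axis=1)
    gamma = np.percentile(margins, percentile)  # 10th percentile, so L_gamma <= 0.1
    inv_margin_sq = 1.0 / gamma ** 2            # Eq. 22

    # Negative entropy of the predicted distribution (Eq. 23).
    neg_entropy = float((probs * np.log(probs + 1e-12)).sum(axis=1).mean())
    return cross_entropy, gamma, inv_margin_sq, neg_entropy
```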
(2018a) and extended the bound on the spectral norm to convolutional networks. The bound is very similar to the one for fully connected networks by Bartlett et al. (2017). We next restate their generalization bound for convolutional networks including the constants. Theorem 3 (Pitas et al. (2017)) Let B an upper bound on the ‘2 norm of any point in the input domain. For any B, γ, δ > 0, the following bound holds with probability 1 − δ over the training set: —______\ 2 we | (SAB DL aver + Vint) TEL, pw, le + mc) L<L,4 24 <i, a (24) Inspired by the above theorem, we define the following spectral measure: ; =)? qa act [WWI rm 2 (BAB hive + VinGn?d)) TL, Will} Dh ae £ + InP) Hspec,init (fw) = pe (25) The generalization bound in Theorem 3 depends on reference tensors W0 i . We chose the initial tensor as the reference in the above measure but another reasonable choice is the origin which gives the following measures: —_ 2 2 (8B DL kiya + VinGn?d) TTL, (Wald Dh Pee +) Hspec-orig fw) 3 (26) Since some of the terms in the generalization bounds might be proof artifacts, we also measure the main terms in the generalization bound: µspec-init-main(fw) = Qd i=1 kWik2 2 Pd kWj −W0 kWj k2 2 jk2 F # j=1 γ2 Pd µspec-orig-main(fw) = Qd i=1 kWik2 2 γ2 j=1 kWj k2 F kWj k2 2 (28) 27 (27) We further look at the main two terms in the bound separately to be able to differentiate their contributions. µspec-init-main(fw) = Qd i=1 kWik2 2 Pd kWj −W0 kWj k2 2 jk2 F # j=1 γ2 Pd µspec-orig-main(fw) = Qd j=1 kWj k2 F kWj k2 2 (30) # i=1 kWik2 2 γ2 i=1 kWik2 γ2 µprod-of-spec/margin(fw) = Qd 2 (31) µprod-of-spec(fw) = d Y kWik2 2 i=1 (32) µfro/spec(fw) = d X i=1 kWik2 F kWik2 2 (33) Finally, since product of spectral norms almost certainly increases with depth, we look at the fol- lowing measure which is equal to the sum over squared spectral norms after rebalancing the layers to have the same spectral norms: µsum-of-spec/margin(fw) = d Qd i=1 kWik2 γ2 2 !1/d (34) 9\ 1/4 Hisum-ot-spee( fw) = ¢ (|| Wil) (35) Frobenius Norm The generalization bound given in Neyshabur et al. (2015b) is not directly applicable to convolutional networks. However, Since for each layer i, we have kWik2 ≤ k2 i kWikF and therefore by Theorem 3, we can get an upper bound on the test error based on product of Frobenius norms. Therefore, we define the following measure based on the product of Frobenius norms: µprod-of-fro/margin(fw) = Qd i=1 kWik2 γ2 F (36) # d Y µprod-of-fro(fw) = kWik2 F i=1 (37) We also look at the following measure with correspond to sum of squared Frobenius norms of the layers after rebalancing them to have the same norm: µsum-of-fro/margin(fw) = d Qd i=1 kWik2 γ2 F !1/d (38) µsum-of-fro(fw) = d d Y kWik2 F !1/d i=1 (39) Finally, given recent evidence on the importance of distance to initialization (Dziugaite and Roy, 2017; Nagarajan and Kolter, 2019b; Neyshabur et al., 2018b), we calculate the following measures: d Htrrobenins-distance (fw) = 9. || Ws — Wl? (40) =1 = [dist-spec-init (fw) = Ss \| Ww; - wil; (41) i=l 28 (29) In case when the reference matrix W0 parameters which also correspond to distance from the origin: i = 0 for all weights, Eq (40) the Frobenius norm of the µparam-norm(fw) = d X kWik2 F i=1 (42) Path-norm Path-norm was introduced in Neyshabur et al. (2015b) as an scale invariant complex- ity measure for generalization and is shown to be a useful geometry for optimization Neyshabur et al. (2015a). 
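Before turning to the path-norm, the following sketch makes a few of the norm-based quantities above concrete (Equations 31, 33, 36, 40 and 42). It is a minimal NumPy illustration under our own naming: in particular, it approximates the spectral norm of a convolution by the spectral norm of the reshaped (c_out × c_in k²) matrix, whereas the text computes the exact singular values through the FFT construction of Sedghi et al. (2018). Products over layers can also overflow for deep networks, where sums of logarithms are preferable in practice.

```python
import numpy as np

def norm_measures(weights, init_weights, margin):
    """Sketch of selected norm-based measures; not the paper's exact code.

    weights, init_weights: lists of layer weight tensors, e.g. conv kernels
    of shape (c_out, c_in, k, k), at convergence and at initialization.
    margin: the 10th-percentile output margin gamma.
    """
    fro_sq, spec_sq, dist_sq = [], [], []
    for w, w0 in zip(weights, init_weights):
        fro_sq.append(float((w ** 2).sum()))
        # Approximate spectral norm: largest singular value of the reshaped
        # matrix (the text uses the exact FFT-based computation instead).
        mat = w.reshape(w.shape[0], -1)
        spec_sq.append(float(np.linalg.norm(mat, ord=2) ** 2))
        dist_sq.append(float(((w - w0) ** 2).sum()))
    return {
        "prod_of_spec_over_margin": np.prod(spec_sq) / margin ** 2,     # Eq. 31
        "fro_over_spec": sum(f / s for f, s in zip(fro_sq, spec_sq)),   # Eq. 33
        "prod_of_fro_over_margin": np.prod(fro_sq) / margin ** 2,       # Eq. 36
        "frobenius_distance": sum(dist_sq),                             # Eq. 40
        "param_norm": sum(fro_sq),                                      # Eq. 42
    }
```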
To calculate path-norm, we square the parameters of the network, do a forward pass on an all-ones input and then take square root of sum of the network outputs. We define the following measures based on the path-norm: # P µpath-norm/margin(fw) = i fw2 (1)[i] γ2 (43) # X µpath-norm(fw) = fw2 (1) i (44) where w2 = w ◦ w is the element-wise square operation on the parameters. Fisher-Rao Norm Fisher-Rao metric was introduced in Liang et al. (2017) as a complexity measure for neural networks. Liang et al. (2017) showed that Fisher-Rao norm is a lower bound on the path-norm and it correlates in some cases. We define a measure based on the Fisher-Rao matric of the network: µFisher-Rao(fw) = (d + 1)2 m m X i=1 hw, ∇w‘(fw(Xi)), yii2 (45) where ‘ is the cross-entropy loss. # C.3 Flatness-based Measures PAC-Bayesian framework (McAllester, 1999) allows us to study flatness of a solution and connect it to generalization. Given a prior P is is chosen before observing the training set and a posterior Q which is a distribution on the solutions of the learning algorithm (and hence depends on the training set), we can bound the expected generalization error of solutions generated from Q with high probability based on the KL divergence of P and Q. The next theorem states a simplified version of PAC-Bayesian bounds. Theorem 4 For any δ > 0, distribution D, prior P , with probability 1 − δ over the training set, for any posterior Q the following bound holds: # s KL(Q||P) + log (4) 2(m — 1) Ey.g [L(v)] $ Ewxe [L(f0)] 4 (46) If P and Q are Gaussian distributions with P = N (µP , ΣP ) amd Q = N (µQ, ΣQ), then the KL-term can be written as follows: 1 _ _ KL(Y (1q,a)||V (up, EP) = 5 [= (2p'Ea) + (Hq — we)" Up! (ug — pe) — b+ In( Setting Q = N (w, σ2I) and P = N (w0, σ2I) similar to Neyshabur et al. (2017), the KL term will kw−w0k2 2σ2 . However, since σ belongs to prior, if we search to find a value for σ, we need be simply to adjust the bound to reflect that. Since we search over less than 20000 predefined values of σ in our experiments, we can use the union bound which changes the logarithmic term to log(20000m/δ) and we get the following bound: 2 # s Eu∼N (u,σ2I) [L(fw+u)] ≤ Eu∼N (u,σ2I) h ˆL(fw+u) i + kw−w0k2 2 4σ2 + log( m m − 1 σ ) + 10 (47) 29 Based on the above bound, we define the following measures using the origin and initialization as reference tensors: Pw cor) Hpac-bayes-init (fw) Ie? t log(~) + 10 (48) µpac-bayes-orig(fw) = kwk2 2 4σ2 + log( m δ ) + 10 (49) where σ is chosen to be the largest number such that Eu∼N (u,σ2I) i h ˆL(fw+u) < 0.1. The above framework captures flatness in the expected sense since we add Gaussian perturbations to the parameters. Another notion of flatness is the worst-case flatness where we search for the direction that changes the loss the most. This is motivated by (Keskar et al., 2016) where they observe that this notion would correlate to generalization in the case of different batch sizes. We can use PAC-Bayesian framework to give generalization bounds for worst-case perturbations as well. The magnitude of a Gaussian variable with with variance σ2 is at most σp2 log(2/δ) with probability 1 − δ/2. Applying a union bound on all parameters, we get that with probability 1 − δ/2 the magnitude of the Gaussian noise is at most α = σp2 log(2ω/δ) where ω is the number of parameters of the model. 
Therefore, we can get the following generalization bound: # s Eu∼N (u,σ2I) [L(fw+u)] ≤ max |ui|≤α ˆL(fw+u) + kw−w0k2 2 log(2ω/δ) 2α2 + log( 2m δ ) + 10 m − 1 (50) Inspired by the above bound, we define the following measures: on w— w"||, log(2w) “lt 2 w||; log(2w l [2lo8( ) + log( µsharpness-init(fw) = + log( m σ ) + 10 (51) # kwk2 µsharpness-orig(fw) = + log( m δ ) + 10 (52) where α is chosen to be the largest number such that max|ui|≤α ˆL(fw+u) ≤ 0.1. To understand the importance of the flatness parameters σ and α, we also define the following measures: 1 σ2 1 α2 µpac-bayes-flatness(fw) = (53) µsharpness-flatness(fw) = (54) where α and σ are computed as explained above. Magnitude-aware Perturbation Bounds The magnitude of perturbation in (IXeskar et al., 2016) was chosen so that for each parameter the ratio of magnitude of perturbation to the magnitude of the parameter is bounded by a constant a/!°, Following a similar approach, we can choose the posterior for parameter i in PAC-Bayesian framework to be -V(w;, 0/?|w;|? +€?). Now, substituting this in the Equation equation C.3 and solving for the prior .”(w°,o%) that minimizes the KL term by setting the gradient with respect to of to zero, KL can be written as follows: 2 w 2KL(Q||P) = w log (7 +t || w - w? ||; + 2) - SF log (0? |\w; — wo? + e”) i=l Ww = ; 2+ (0? 41) ||w— wll /w > °8 2 +0? \w; — we? i=1 Therefore, the generalization bound can be written as follows FE Tog (SHEA) og 8) + 10 Eu [L(furtn)] $ Bu [E(fren)] + | * — (55) m-1 10They actually used a slightly different version which is a combination of the two perturbation bounds we calculated here. Here, for more clarity, we decomposed it into two separate perturbation bounds. 30 where uj ~ 4 (0,0|w;| + €?), € = le — 3 and o” is chosen to be the largest number such that Eu [i(fw+u)| < 0.1. We define the following measures based on the generalization bound: ! ; 2 1< + (0? +1) ||w—w||5 /w m Hpac-bayes-mag-init (fw) = 72 bs ( 0 ‘lh + log( 5 ) +10 (56) i=l 2 +0 |w; — we 1< 2 (0 ||wll5 # i=1 ω X 1< 2 + (0 +1) ||wll5 /w m Hoac-baeenagore fw) = doe etal) tog) + 10 (57) i=1 2 +0? |w; — w?/? We also follow similar arguments are before to get a similar bound on the worst-case sharpness: 12 Jog (SHOP Ht toetea/ nw" I8/) tog (m) +10 4a i=1 Fa fw —w oe Ey [L(fwiu)) << max L(fw4u) 4 ~ |us|<a’|wi|t+e m-1 We look at the following measures based on the above bound: 2 1< € + (a? + 4log(2w/6)) ||w — wll> /w m Hpac-sharpness-mag-init (fw) = 7 Ss log ( l F} IE + log( 5 )+10 i=1 2 +a?|w; — wp ! 1< 2 + (a? + 4log(2w/6)) |lw|2 /w m . Hpac-sharpness-mag-orig (fw) = Zales ( ( B(20/ i Ip/ + log( 3 ) +10 (60) i=1 e + a? |w; — wd Finally, we look at measures that are only based the sharpness values computed above: 1 σ02 1 α02 µpac-bayes-mag-flat(fw) = (61) µsharpness-mag-flat(fw) = (62) where α and σ are computed as explained above. # C.4 Optimization-based Measures There are mixed results about how the optimization speed is relevant to generalization. On one hand we know that adding Batch Normalization or using shortcuts in residual architectures help both optimization and generalization and Hardt et al. (2015) suggests that faster optimization results in better generalization. On the other hand, there are empirical results showing that adaptive optimization methods that are faster, usually generalize worse (Wilson et al., 2017b). 
Here, we put these hypothesis into test by looking at the number of steps to achieve cross-entropy 0.1 and the number of steps needed to go from cross-entropy 0.1 to 0.01: µ#steps-0.1-loss(fw) = #steps from initialization to 0.1 cross-entropy (63) µ#steps-0.1-0.01-loss(fw) = #steps from 0.1 to 0.01 cross-entropy (64) The above measures tell us if the speed of optimization at early or late stages can be informative about generalization. We also define measures that look at the SGD gradient noise after the first epoch and at the end of training at cross-entropy 0.01 to test the gradient noise can be predictive of generalization: (65) # µgrad-noise-epoch1(fw) = Var(X,y) S (∇w‘(fw1(X), y)) µgrad-noise-final(fw) = Var(X,y) S (∇w‘(fw(X), y)) (66) where w1 is the weight vector after the first epoch. 31 (58) (59) # D Algorithms We first lay out some common notations used in the pseudocode: 1. f : the architecture that takes parameter θ and input x and map to f (x; θ) which is the predicted label of x 2. θ: parameters 3. M : Some kind of iteration; M1: binary search depth; M2: Monte Carlo Estimation steps; M3: Iteration for estimating the loss 4. D = {(xi, yi)}n i=0 the dataset the model is trained on; B as a uniformly sampled minibatch from the dataset. Both search algorithm relies on the assumption that the loss increases monotonically with the perturbation magnitude σ around the final weight. This assumption is quite mild and in reality holds across almost all the models in this study. Algorithm 1 EstimateAccuracy 1: Inputs: model f , parameter θ, dataset D, estimate iteration M 2: Initialize Accuracy = 0 3: for episode i = 1 to M do 4: B ∼ sample(D) Accuracy += 1 |B| P i δ(yi = f (Bi; θ)) 5: 6: end for 7: return Accuracy/M Algorithm 2 Find σ for PAC-Bayesian Bound 1: Inputs: f , θ0, model accuracy ‘, target accuracy deviation d, Upper bound σmax, Lower bound σmin, M1, M2, M3 2: Initialize 3: for episode i = 1 to M1 do 4: σnew = (σmax + σmin)/2 ˆ‘ = 0 for step j = 0 to M2 do newI) 5: €=0 5: 6: 7: θ ← θ0 + N (0, σ2 ˆ‘ = ˆ‘ + EstimateAccuracy(f, θnew, D, M3) 8: 9: 10: end for 0=0/My d=\e-4 ifd< €q OF Omax — Omin < €¢ then 9: end for 10: 0=0/My 11: # w # return σnew 13: 14: 15: 16: 17: 18: 13: return Onew end if if ˆd > d then σmax = σnew 14: end if 15: if d>dthen # else 17: else # σmin = σnew 18: Omin = Onew # end if 19: 20: end for Note that for finding the sharpness σ, we use the cross-entropy as the differentiable surrogate object instead of the 1-0 loss which is in general not differentiable. Using gradient ascent brings another additional challenge that is for a converged model, the local gradient signal is usually weak, making gradient ascent extremely inefficient. To speed up thie process, we add a uniform noise with range being [−σnew/Nw, σnew/Nw] to lift the weight off the flat minima where Nw is the number of parameters. This empirical greatly accelerates the search. 
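To make the search procedure concrete, here is a minimal Python sketch of the binary search in Algorithm 2. The `perturbed_accuracy` callable is an assumption standing in for lines 6-10 (drawing Gaussian parameter noise N(0, σ²I), evaluating the perturbed model, and returning its training accuracy); the function and argument names are ours rather than the paper's.

```python
import numpy as np

def find_sigma_pacbayes(perturbed_accuracy, clean_accuracy, target_deviation,
                        sigma_min, sigma_max, max_iters=20,
                        eps_d=1e-2, eps_sigma=1e-3, mc_samples=15):
    """Binary search for the Gaussian perturbation scale sigma (Algorithm 2)."""
    for _ in range(max_iters):                        # M1 in the text
        sigma = 0.5 * (sigma_min + sigma_max)
        # Monte Carlo estimate of the accuracy under parameter noise (M2, M3).
        acc_hat = np.mean([perturbed_accuracy(sigma) for _ in range(mc_samples)])
        deviation = abs(clean_accuracy - acc_hat)     # d_hat = |eps - eps_hat|
        if abs(deviation - target_deviation) < eps_d \
                or (sigma_max - sigma_min) < eps_sigma:
            return sigma
        if deviation > target_deviation:
            sigma_max = sigma                         # too much perturbation
        else:
            sigma_min = sigma                         # too little perturbation
    return 0.5 * (sigma_min + sigma_max)
```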
Algorithm 3 Find σ for Sharpness Bound
1: Inputs: f, θ0, loss function L, model accuracy ε, target accuracy deviation d, upper bound σmax, lower bound σmin, M1, M2, M3, gradient steps M4
2: Initialize
3: for episode i = 1 to M1 do
4:   σnew = (σmax + σmin)/2
5:   ε̂ = ∞
6:   for step j = 0 to M2 do
7:     θ ← θ0 + U(σnew/2)
8:     for step k = 0 to M4 do
9:       B ∼ sample(D)
10:      θ ← θ + η∇θL(f, B, θ)
11:      if |θi − θ0,i| > σnew for any parameter i then
12:        θ ← θ0 + clip(θ − θ0, −σnew, σnew)
13:      end if
14:    end for
15:    ε̂ = min(ε̂, EstimateAccuracy(f, θ, D, M3))
16:  end for
17:  d̂ = |ε − ε̂|
18:  if |d̂ − d| < εd or σmax − σmin < εσ then
19:    return σnew
20:  end if
21:  if d̂ > d then
22:    σmax = σnew
23:  else
24:    σmin = σnew
25:  end if
26: end for

Further, for the magnitude-aware version of the bounds, the overall algorithm stays the same, with the exception that the covariance matrix at line 7 of Algorithm 2 becomes a diagonal matrix containing w_i^2 on the diagonal; similarly, for line 12 of Algorithm 3, the weight clipping of each wi is conditioned on σnew|wi|, i.e. clipped to [−σnew|wi|, σnew|wi|]. Here wi denotes the ith parameter of the flattened w.
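For the worst-case (sharpness) bound, the inner loop of Algorithm 3 performs gradient ascent on the training loss while keeping the perturbation inside a box of radius σ around the converged weights. The PyTorch sketch below is a schematic version under our own names and simplifications (per-parameter clipping, a single pass over the data loader), not the exact implementation; the magnitude-aware variant would instead clip each coordinate to [−σ|w_i|, σ|w_i|].

```python
import torch

def ascend_to_worst_case(model, loss_fn, loader, sigma, init_params,
                         ascent_steps=20, lr=1e-3):
    """Gradient-ascent inner loop of Algorithm 3 (schematic sketch)."""
    # Start from a uniform perturbation of the converged weights (line 7).
    with torch.no_grad():
        for p, p0 in zip(model.parameters(), init_params):
            p.copy_(p0 + (torch.rand_like(p0) - 0.5) * sigma)
    for step, (x, y) in enumerate(loader):
        if step >= ascent_steps:                      # M4 in the text
            break
        model.zero_grad()
        loss_fn(model(x), y).backward()
        with torch.no_grad():
            for p, p0 in zip(model.parameters(), init_params):
                if p.grad is None:
                    continue
                p.add_(lr * p.grad)                   # ascent: increase the loss
                # Keep the perturbation inside the sigma box (lines 11-13).
                p.copy_(p0 + torch.clamp(p - p0, -sigma, sigma))
    return model  # evaluate afterwards with EstimateAccuracy (line 15)
```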
{ "id": "1905.12600" }
1912.01603
Dream to Control: Learning Behaviors by Latent Imagination
Learned world models summarize an agent's experience to facilitate learning complex behaviors. While learning world models from high-dimensional sensory inputs is becoming feasible through deep learning, there are many potential ways for deriving behaviors from them. We present Dreamer, a reinforcement learning agent that solves long-horizon tasks from images purely by latent imagination. We efficiently learn behaviors by propagating analytic gradients of learned state values back through trajectories imagined in the compact state space of a learned world model. On 20 challenging visual control tasks, Dreamer exceeds existing approaches in data-efficiency, computation time, and final performance.
http://arxiv.org/pdf/1912.01603
Danijar Hafner, Timothy Lillicrap, Jimmy Ba, Mohammad Norouzi
cs.LG, cs.AI, cs.RO
9 pages, 12 figures
null
cs.LG
20191203
20200317
0 2 0 2 r a M 7 1 ] G L . s c [ 3 v 3 0 6 1 0 . 2 1 9 1 : v i X r a Published as a conference paper at ICLR 2020 # DREAM TO CONTROL: LEARNING BEHAVIORS BY LATENT IMAGINATION Danijar Hafner ∗ University of Toronto Google Brain Timothy Lillicrap DeepMind Jimmy Ba University of Toronto # Abstract Learned world models summarize an agent’s experience to facilitate learning complex behaviors. While learning world models from high-dimensional sensory inputs is becoming feasible through deep learning, there are many potential ways for deriving behaviors from them. We present Dreamer, a reinforcement learning agent that solves long-horizon tasks from images purely by latent imagination. We efficiently learn behaviors by propagating analytic gradients of learned state values back through trajectories imagined in the compact state space of a learned world model. On 20 challenging visual control tasks, Dreamer exceeds existing approaches in data-efficiency, computation time, and final performance. # INTRODUCTION Intelligent agents can achieve goals in complex environments even though they never encounter the exact same situation twice. This ability requires building representations of the world from past experience that enable generalization to novel situations. World models offer an explicit way to represent an agent’s knowledge about the world in a parametric model that can make predictions about the future. When the sensory inputs are high-dimensional images, latent dynamics models can abstract observations to predict forward in compact state spaces (Watter et al., 2015; Oh et al., 2017; Gregor et al., 2019). Compared to predictions in image space, latent states have a small memory footprint that enables imagining thousands of trajectories in parallel. Learning effective latent dynamics models is becoming feasible through advances in deep learning and latent variable models (Krishnan et al., 2015; Karl et al., 2016; Doerr et al., 2018; Buesing et al., 2018). Behaviors can be derived from dynamics models in many ways. Often, imagined rewards are maximized with a parametric policy (Sutton, 1991; Ha and Schmidhuber, 2018; Zhang et al., 2019) or by online planning (Chua et al., 2018; Hafner et al., 2018). However, considering only rewards within a fixed imagination horizon results in shortsighted behaviors (Wang et al., 2019). Moreover, prior work commonly resorts to derivative-free optimization for robustness to model errors (Ebert et al., 2017; Chua et al., 2018; Parmas et al., 2019), rather than leveraging analytic gradients offered by neural network dynamics (Henaff et al., 2019; Srinivas et al., 2018). We present Dreamer, an agent that learns long-horizon behaviors from images purely by latent imagination. A novel actor critic algorithm accounts for rewards beyond the imagination horizon while making efficient use of the neural network dynamics. For this, we predict state values and actions in the learned latent space as summarized in Figure 1. The values optimize Bellman consistency for imagined rewards and the policy maximizes the values by propagating their analytic gradients back through the dynamics. In comparison to actor critic algorithms that learn online or by experience replay (Lillicrap et al., 2015; Mnih et al., 2016; Schulman et al., 2017; Haarnoja et al., 2018; Lee et al., 2019), world models can interpolate past experience and offer analytic gradients of multi-step returns for efficient policy optimization. 
Dataset of Experience Learned Latent Dynamics aaes Value and Action Learned by Latent Imagination @ ~->@q-> Figure 1: Dreamer learns a world model from past experience and efficiently learns farsighted behaviors in its space by backpropagating value estimates back through imagined trajectories. # ∗Correspondence to: Danijar Hafner <[email protected]>. 1 Published as a conference paper at ICLR 2020 # (a) Cup (b) Acrobot # (c) Hopper (d) Walker (e) Quadruped Figure 2: Image observations for 5 of the 20 visual control tasks used in our experiments. The tasks pose a variety of challenges including contact dynamics, sparse rewards, many degrees of freedom, and 3D environments. Several of these tasks could previously not be solved through world models. The key contributions of this paper are summarized as follows: • Learning long-horizon behaviors by latent imagination Model-based agents can be short- sighted if they use a finite imagination horizon. We approach this limitation by predicting both actions and state values. Training purely by imagination in a latent space lets us efficiently learn the policy by propagating analytic value gradients back through the latent dynamics. • Empirical performance for visual control We pair Dreamer with existing representation learning methods and evaluate it on the DeepMind Control Suite with image inputs, illustrated in Figure 2. Using the same hyper parameters for all tasks, Dreamer exceeds previous model-based and model-free agents in terms of data-efficiency, computation time, and final performance. # 2 CONTROL WITH WORLD MODELS Reinforcement learning We formulate visual control as a partially observable Markov decision process (POMDP) with discrete time step t ∈ [1; T ], continuous vector-valued actions at ∼ p(at | o≤t, a<t) generated by the agent, and high-dimensional observations and scalar rewards ot, rt ∼ p(ot, rt | o<t, a<t) generated by the unknown environment. The goal is to develop an agent that maximizes the expected sum of rewards Ep t=1 rt Agent components The classical components of agents that learn in imagination are dynamics learning, behavior learning, and environment interaction (Sutton, 1991). In the case of Dreamer, the behavior is learned by predicting hypothetical trajectories in the compact latent space of the world model. As outlined in Figure 3 and detailed in Algorithm 1, Dreamer performs the following operations throughout the agent’s life time, either interleaved or in parallel: • Learning the latent dynamics model from the dataset of past experience to predict future re- wards from actions and past observations. Any learning objective for the world model can be incorporated with Dreamer. We review existing methods for learning latent dynamics in Section 4. • Learning action and value models from predicted latent trajectories, as described in Section 3. The value model optimizes Bellman consistency for imagined rewards and the action model is updated by propagating gradients of value estimates back through the neural network dynamics. • Executing the learned action model in the world to collect new experience for growing the dataset. Latent dynamics Dreamer uses a latent dynamics model that consists of three components. The representation model encodes observations and actions to create continuous vector-valued model states st with Markovian transitions (Watter et al., 2015; Zhang et al., 2019; Hafner et al., 2018). 
The transition model predicts future model states without seeing the corresponding observations that will later cause them. The reward model predicts the rewards given the model states,

Representation model: p(st | st−1, at−1, ot)
Transition model: q(st | st−1, at−1)
Reward model: q(rt | st). (1)

We use p for distributions that generate samples in the real environment and q for their approximations that enable latent imagination. Specifically, the transition model lets us predict ahead in the compact latent space without having to observe or imagine the corresponding images. This results in a low memory footprint and fast predictions of thousands of imagined trajectories in parallel. The model mimics a non-linear Kalman filter (Kalman, 1960), latent state space model, or HMM with real-valued states. However, it is conditioned on actions and predicts rewards, allowing the agent to imagine the outcomes of potential action sequences without executing them in the environment.

(a) Learn dynamics from experience (b) Learn behavior in imagination (c) Act in the environment

Figure 3: Components of Dreamer. (a) From the dataset of past experience, the agent learns to encode observations and actions into compact latent states, for example via reconstruction, and predicts environment rewards. (b) In the compact latent space, Dreamer predicts state values and actions that maximize future value predictions by propagating gradients back through imagined trajectories. (c) The agent encodes the history of the episode to compute the current model state and predict the next action to execute in the environment. See Algorithm 1 for pseudo code of the agent.

# 3 LEARNING BEHAVIORS BY LATENT IMAGINATION

Dreamer learns long-horizon behaviors in the compact latent space of a learned world model by efficiently leveraging the neural network latent dynamics. For this, we propagate stochastic gradients of multi-step returns through neural network predictions of actions, states, rewards, and values using reparameterization. This section describes the main contribution of our paper.

Imagination environment The latent dynamics define a Markov decision process (MDP; Sutton, 1991) that is fully observed because the compact model states sτ are Markovian. We denote imagined quantities with τ as the time index. Imagined trajectories start at the true model states st of observation sequences drawn from the agent's past experience. They follow predictions of the transition model sτ ∼ q(sτ | sτ−1, aτ−1), reward model rτ ∼ q(rτ | sτ), and a policy aτ ∼ q(aτ | sτ). The objective is to maximize expected imagined rewards E_q( Σ_{τ=t}^{∞} γ^{τ−t} rτ ) with respect to the policy.

# Algorithm 1: Dreamer

Model components: representation pθ(st | st−1, at−1, ot), transition qθ(st | st−1, at−1), reward qθ(rt | st), action qφ(aτ | sτ), value vψ(sτ).
Hyper parameters: seed episodes S, collect interval C, batch size B, sequence length L, imagination horizon H, learning rate α.

Initialize dataset D with S random seed episodes.
Initialize neural network parameters θ, φ, ψ randomly.
while not converged do
  for update step c = 1..C do
    // Dynamics learning
    Draw B data sequences {(at, ot, rt)}_{t=k}^{k+L} ∼ D.
    Compute model states st ∼ pθ(st | st−1, at−1, ot).
    Update θ using representation learning.
    // Behavior learning
    Imagine trajectories {(sτ, aτ)}_{τ=t}^{t+H} from each st.
    Predict rewards E(qθ(rτ | sτ)) and values vψ(sτ).
    Compute value estimates Vλ(sτ) via Equation 6.
    Update φ ← φ + α∇φ Σ_{τ=t}^{t+H} Vλ(sτ).
    Update ψ ← ψ − α∇ψ Σ_{τ=t}^{t+H} ½‖vψ(sτ) − Vλ(sτ)‖².
  end for
  // Environment interaction
  o1 ← env.reset()
  for time step t = 1..T do
    Compute st ∼ pθ(st | st−1, at−1, ot) from history.
    Compute at ∼ qφ(at | st) with the action model.
    Add exploration noise to action.
    rt, ot+1 ← env.step(at).
  end for
  Add experience to dataset D ← D ∪ {(ot, at, rt)_{t=1}^{T}}.
end while
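As a schematic illustration of the behavior-learning block of Algorithm 1, the following sketch rolls the learned dynamics forward in latent space. The four callables are assumed interfaces rather than the paper's code: `policy` draws a reparameterized action aτ ∼ qφ(a | s), `transition` samples the next latent state from qθ, and `reward_model` / `value_model` return the predicted reward and value for a batch of states.

```python
import torch

def imagine_rollout(start_states, policy, transition, reward_model,
                    value_model, horizon):
    """Roll out imagined trajectories from a batch of model states."""
    states, rewards, values = [], [], []
    s = start_states                      # shape (batch, state_dim)
    for _ in range(horizon):
        a = policy(s)                     # reparameterized sample keeps gradients
        s = transition(s, a)              # predict the next latent state
        states.append(s)
        rewards.append(reward_model(s))
        values.append(value_model(s))
    # Shape (horizon, batch, ...), used to build the value targets of Equation 6.
    return torch.stack(states), torch.stack(rewards), torch.stack(values)
```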
(Figure 4 shows episode return as a function of the imagination horizon on Cartpole Swingup, Cheetah Run, Quadruped Walk, and Walker Walk, with curves for Dreamer (Vλ), no value (VR), and PlaNet.)

Figure 4: Imagination horizons. We compare the final performance of Dreamer, learning an action model without value prediction, and online planning using PlaNet. Learning a state value model to estimate rewards beyond the imagination horizon makes Dreamer more robust to the horizon length. The agents use pixel reconstruction for representation learning and an action repeat of R = 2.

Action and value models Consider imagined trajectories with a finite horizon H. Dreamer uses an actor critic approach to learn behaviors that consider rewards beyond the horizon. We learn an action model and a value model in the latent space of the world model for this. The action model implements the policy and aims to predict actions that solve the imagination environment. The value model estimates the expected imagined rewards that the action model achieves from each state sτ,

Action model: aτ ∼ qφ(aτ | sτ)
Value model: vψ(sτ) ≈ E_{q(·|sτ)} ( Σ_{n=τ}^{t+H} γ^{n−τ} rn ). (2)

The action and value models are trained cooperatively as typical in policy iteration: the action model aims to maximize an estimate of the value, while the value model aims to match an estimate of the value that changes as the action model changes. We use dense neural networks for the action and value models with parameters φ and ψ, respectively. The action model outputs a tanh-transformed Gaussian (Haarnoja et al., 2018) with sufficient statistics predicted by the neural network. This allows for reparameterized sampling (Kingma and Welling, 2013; Rezende et al., 2014) that views sampled actions as deterministically dependent on the neural network output, allowing us to backpropagate analytic gradients through the sampling operation,

aτ = tanh(μφ(sτ) + σφ(sτ) ε), ε ∼ Normal(0, I). (3)

Value estimation To learn the action and value models, we need to estimate the state values of imagined trajectories {sτ, aτ, rτ}_{τ=t}^{t+H}. These trajectories branch off of the model states st of sequence batches drawn from the agent's dataset of experience and predict forward for the imagination horizon H using actions sampled from the action model. State values can be estimated in multiple ways that trade off bias and variance (Sutton and Barto, 2018),

VR(sτ) = E_{qθ,qφ} ( Σ_{n=τ}^{t+H} rn ), (4)

V_N^k(sτ) = E_{qθ,qφ} ( Σ_{n=τ}^{h−1} γ^{n−τ} rn + γ^{h−τ} vψ(sh) ) with h = min(τ + k, t + H), (5)

Vλ(sτ) = (1 − λ) Σ_{n=1}^{H−1} λ^{n−1} V_N^n(sτ) + λ^{H−1} V_N^H(sτ), (6)

where the expectations are estimated under the imagined trajectories. VR simply sums the rewards from τ until the horizon and ignores rewards beyond it. This allows learning the action model without a value model, an ablation we compare to in our experiments. V_N^k estimates rewards beyond k steps with the learned value model. Dreamer uses Vλ, an exponentially-weighted average of the estimates for different k to balance bias and variance.
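A minimal sketch of the value targets and the two behavior-learning objectives follows. The recursion below is an equivalent backward-in-time way of computing the λ-target of Equation 6, bootstrapping with the learned value at the end of the horizon; the names and default hyperparameters (γ = 0.99, λ = 0.95) are our own choices rather than values taken from the text.

```python
import torch

def lambda_returns(rewards, values, bootstrap, discount=0.99, lambda_=0.95):
    """V_lambda targets (Equations 5-6) computed backwards in time.

    rewards, values: (horizon, batch) tensors along the imagined trajectories.
    bootstrap: value estimate for the state after the final imagined step.
    """
    next_values = torch.cat([values[1:], bootstrap[None]], dim=0)
    targets, last = [], bootstrap
    for t in reversed(range(rewards.shape[0])):
        last = rewards[t] + discount * ((1 - lambda_) * next_values[t]
                                        + lambda_ * last)
        targets.append(last)
    return torch.stack(list(reversed(targets)))

# Objectives of Equations 7-8 (optimizer steps omitted in this sketch):
#   actor_loss  = -targets.mean()                                # maximize V_lambda
#   critic_loss = 0.5 * (values - targets.detach()).pow(2).mean()
```

Detaching the targets in the critic loss mirrors the stop-gradient on the value targets, while the actor loss backpropagates through the predicted rewards, values, and the imagined states themselves.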
Figure 4 shows that learning a value model in imagination enables Dreamer to solve long-horizon tasks while being robust to the imagination horizon. The experimental details and results on all tasks are described in Section 6.

(Figure 5 layout: for each of two hold-out sequences, rows show Model and True frames; columns show 5 context frames followed by open-loop predictions at steps 6, 10, 15, 20, 25, 30, 35, 40, 45, and 50.)

Figure 5: Reconstructions of long-term predictions. We apply the representation model to the first 5 images of two hold-out trajectories and predict forward for 45 steps using the latent dynamics, given only the actions. The recurrent state space model (RSSM; Hafner et al., 2018) performs accurate long-term predictions, enabling Dreamer to learn successful behaviors in a compact latent space.

Learning objective To update the action and value models, we first compute the value estimates Vλ(sτ) for all states sτ along the imagined trajectories. The objective for the action model qφ(aτ | sτ) is to predict actions that result in state trajectories with high value estimates. The objective for the value model vψ(sτ), in turn, is to regress the value estimates,

max_φ E_{qθ,qφ} ( Σ_{τ=t}^{t+H} Vλ(sτ) ), (7)

min_ψ E_{qθ,qφ} ( Σ_{τ=t}^{t+H} ½ ‖vψ(sτ) − Vλ(sτ)‖² ). (8)

The value model is updated to regress the targets, around which we stop the gradient as typical (Sutton and Barto, 2018). The action model uses analytic gradients through the learned dynamics to maximize the value estimates. To understand this, we note that the value estimates depend on the reward and value predictions, which depend on the imagined states, which in turn depend on the imagined actions. Since all steps are implemented as neural networks, we analytically compute ∇φ E_{qθ,qφ} ( Σ_{τ=t}^{t+H} Vλ(sτ) ) by stochastic backpropagation (Kingma and Welling, 2013; Rezende et al., 2014). We use reparameterization for continuous actions and latent states and straight-through gradients (Bengio et al., 2013) for discrete actions. The world model is fixed while learning behaviors. In tasks with early termination, the world model also predicts the discount factor from each latent state to weigh the time steps in Equations 7 and 8 by the cumulative product of the predicted discount factors, so terms are weighted down based on how likely the imagined trajectory would have ended.

Comparison to actor critic methods Agents using Reinforce gradients (Williams, 1992), such as A3C and PPO (Mnih et al., 2016; Schulman et al., 2017), employ value baselines to reduce gradient variance, while Dreamer backpropagates through the value model. This is similar to deterministic or reparameterized actor critics (Silver et al., 2014), such as DDPG and SAC (Lillicrap et al., 2015; Haarnoja et al., 2018). However, these do not leverage gradients through transitions and only maximize immediate Q-values. MVE and STEVE (Feinberg et al., 2018; Buckman et al., 2018) extend them to multi-step Q-learning with learned dynamics to provide more accurate Q-value targets. We predict state values, which is sufficient for policy optimization since we backpropagate through the dynamics. Refer to Section 5 for a more detailed comparison to related work.

# 4 LEARNING LATENT DYNAMICS

Learning behaviors in imagination requires a world model that generalizes well. We focus on latent dynamics models that predict forward in a compact latent space, facilitating long-term predictions and allowing the agent to imagine thousands of trajectories in parallel. Several objectives for learning representations for control have been proposed (Watter et al., 2015; Jaderberg et al., 2016; Oord et al., 2018; Eslami et al., 2018). We review three approaches for learning representations to use with Dreamer: reward prediction, image reconstruction, and contrastive estimation.

Reward prediction Latent imagination requires a representation model p(st | st−1, at−1, ot), transition model q(st | st−1, at−1), and reward model q(rt | st), as described in Section 2. In principle, this could be achieved by simply learning to predict future rewards given actions and past observations (Oh et al., 2017; Gelada et al., 2019; Schrittwieser et al., 2019). With a large and diverse dataset, such representations should be sufficient for solving a control task. However, with a finite dataset and especially when rewards are sparse, learning about observations that correlate with rewards is likely to improve the world model (Jaderberg et al., 2016; Gregor et al., 2019).
Several objectives for learning representations for control have been proposed (Watter et al., 2015; Jaderberg et al., 2016; Oord et al., 2018; Eslami et al., 2018). We review three approaches for learning representations to use with Dreamer: reward prediction, image reconstruction, and contrastive estimation. Reward prediction Latent imagination requires a representation model p(st | st−1, at−1, ot), transition model q(st | st−1, at−1, ), and reward model q(rt | st), as described in Section 2. In principle, this could be achieved by simply learning to predict future rewards given actions and past observations (Oh et al., 2017; Gelada et al., 2019; Schrittwieser et al., 2019). With a large and diverse dataset, such representations should be sufficient for solving a control task. However, with a finite dataset and especially when rewards are sparse, learning about observations that correlate with rewards is likely to improve the world model (Jaderberg et al., 2016; Gregor et al., 2019). 5 Published as a conference paper at ICLR 2020 Ml Dreamer (5e6 steps) MM PlaNet (5e6 steps) MMM D4PG(le8 steps) Ml A3C (1e8 steps, proprio) Watts. 1000 MULL UU S Episode Ss Wa mA Run fe 25234 SEs EES RSS SEES SELL ES BEES a4 aS a oe bb & Z8 3 bb Boo Oo FF 20a SS Bs ERM BP Ee PRM or Bs Po ar Sy as 3” 2 3 HFSEeLS SB GFE ZF SE a™ 8 E# 04 54 °°§ a On 7) ) o Bo i= é # Return Figure 6: Performance comparison to existing methods. Dreamer inherits the data-efficiency of PlaNet while exceeding the asymptotic performance of the best model-free agents. After 5 × 106 environment steps, Dreamer reaches an average performance of 823 across tasks, compared to PlaNet at 332 and the top model-free D4PG agent at 786 after 108 steps. Results are averages over 5 seeds. Reconstruction We first describe the world model used by PlaNet (Hafner et al., 2018) that learns latent dynamics by reconstructing images as shown in Figure 3a. The world model consists of the following components, where the observation model is only used to provide a learning signal, Representation model: Observation model: Reward model: Transition model: pθ(st | st−1, at−1, ot) qθ(ot | st) qθ(rt | st) qθ(st | st−1, at−1). The components are optimized jointly to increase the variational lower bound (ELBO; Jordan et al., 1999) or more generally the variational information bottleneck (VIB; Tishby et al., 2000; Alemi et al., 2016). As derived in Appendix B, the bound includes reconstruction terms for observations and rewards and a KL regularizer. The expectation is taken under the dataset and representation model, Tree = B,( (95 + Ti+ Js) teonst — J = Inq(or | 5) «10 t Tg = Ing(r: | 81) Fh = —BKL (p(se | se-1, ae—1, 04) |] a(se | Se—1, ax-1))- We implement the transition model as a recurrent state space model (RSSM; Hafner et al., 2018), the representation model by combining the RSSM with a convolutional neural network (CNN; LeCun et al., 1989) applied to the image observation, the observation model as a transposed CNN, and the reward model as a dense network. The combined parameter vector θ is updated by stochastic backpropagation (Kingma and Welling, 2013; Rezende et al., 2014). Figure 5 shows video predictions of this model. We refer to Appendix A and Hafner et al. (2018) model details. Contrastive estimation Predicting pixels can require high model capacity. We can also encourage mutual information between model states and observations by instead predicting the states from the images (Guo et al., 2018). 
This replaces the observation model with a state model, # State model: qθ(st | ot). (11) While the reconstruction objective used the fact that the observation marginal is a constant, we now face the state marginal. As shown in Appendix B, this can be estimated via noise contrastive estimation (NCE; Gutmann and Hyvarinen, 2010; Oord et al., 2018) by averaging the state model over observations o! of the current sequence batch. Intuitively, ¢(s, | 0,) makes the state predictable from the current image while In 57, q(s; | 0’) keeps it diverse to prevent collapse, INCE = e(> (93 + Ih +)) Jg = Ing(s: | 01) —In (Da | 0). (12) t o We implement the state model as a CNN and again optimize the bound with respect to the combined parameter vector θ using stochastic backpropagation. While avoiding pixel prediction, the amount of information this bound can extract efficiently is limited (McAllester and Statos, 2018). We empirically compare reward, reconstruction, and contrastive objectives in our experiments in Figure 8. 6 (9) Published as a conference paper at ICLR 2020 Acrobot Swingup Cartpole Swingup Sparse Hopper Hop Hopper Stand 1000 800 4 400 600 | 750 400 4 500 250 Episode Return 200 +. Fe ee eee eee } 00 O05 1.0 15 2.0 00 605 1.0 LS 2.0 oOo 05 1.0 15 2.0 00 O05 10 15 2.0 Pendulum Swingup Quadruped Walk Walker Run Walker Walk 1000 800 1000 {— 3 500 4 400 500 30 250 | 200 L 250 f[/-~ 7-7 n nnn ° of 0 0 00 05 10 15 20 OO 05 10 15 20 08 OS 10 15 20 OO OF 10 15 20 Environment Steps 106 Environment Steps 1¢6 Environment Steps 1¢6 Environment Steps 106 Dreamer ——No value —— PlaNet —=— D4PG (le9 steps) -— A3C (1e9 steps, proprio) — Figure 7: Dreamer succeeds at visual control tasks that require long-horizon credit assignment, such as the acrobot and hopper tasks. Optimizing only imagined rewards within the horizon via an action model or by online planning yields shortsighted behaviors that only succeed in reactive tasks, such as in the walker domain. The performance on all 20 tasks is summarized in Figure 6 and training curves are shown in Appendix D. See Tassa et al. (2018) for performance curves of D4PG and A3C. # 5 RELATED WORK Prior works learn latent dynamics for visual control by derivative-free policy learning or online planning, augment model-free agents with multi-step predictions, or use analytic gradients of Q- values or multi-step rewards, often for low-dimensional tasks. In comparison, Dreamer uses analytic gradients to efficiently learn long-horizon behaviors for visual control purely by latent imagination. Control with latent dynamics E2C (Watter et al., 2015) and RCE (Banijamali et al., 2017) embed images to predict forward in a compact space to solve simple tasks. World Models (Ha and Schmid- huber, 2018) learn latent dynamics in a two-stage process to evolve linear controllers in imagination. PlaNet (Hafner et al., 2018) learns them jointly and solves visual locomotion tasks by latent online planning. SOLAR (Zhang et al., 2019) solves robotic tasks via guided policy search in latent space. I2A (Weber et al., 2017) hands imagined trajectories to a model-free policy, while Lee et al. (2019) and Gregor et al. (2019) learn belief representations to accelerate model-free agents. Imagined multi-step returns VPN (Oh et al., 2017), MVE (Feinberg et al., 2018), and STEVE (Buckman et al., 2018) learn dynamics for multi-step Q-learning from a replay buffer. AlphaGo (Silver et al., 2017) combines predictions of actions and state values with planning, assuming access to the true dynamics. 
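Returning to the contrastive objective of Equation 12 in Section 4, noise contrastive estimation is commonly implemented as a softmax cross-entropy over a matrix of state-observation scores. The sketch below is such a generic InfoNCE-style form under our own naming and assumptions about how the scores are produced; it is not the paper's implementation.

```python
import torch
import torch.nn.functional as F

def contrastive_state_loss(state_logits):
    """Sketch of the NCE bound in Equation 12.

    state_logits: (N, N) matrix whose entry (i, j) scores how well observation
    o_j predicts the state s_i, e.g. a dot product between the state and the
    state-model embedding of o_j; the diagonal holds the positive pairs of the
    current sequence batch.
    """
    targets = torch.arange(state_logits.shape[0], device=state_logits.device)
    # ln q(s_t | o_t) - ln sum_o' q(s_t | o') becomes a softmax cross-entropy
    # over the observations of the batch.
    return F.cross_entropy(state_logits, targets)
```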
Also assuming access to the dynamics, POLO (Lowrey et al., 2018) plans to explore by learning a value ensemble. MuZero (Schrittwieser et al., 2019) learns task-specific reward and value models to solve challenging tasks but requires large amounts of experience. PETS (Chua et al., 2018), VisualMPC (Ebert et al., 2017), and PlaNet (Hafner et al., 2018) plan online using derivative-free optimization. POPLIN (Wang and Ba, 2019) improves over online planning by self-imitation. Piergiovanni et al. (2018) learn robot policies by imagination with a latent dynamics model. Planning with neural network gradients was shown on small problems (Schmidhuber, 1990; Henaff et al., 2018) but has been challenging to scale (Parmas et al., 2019). Analytic value gradients DPG (Silver et al., 2014), DDPG (Lillicrap et al., 2015), and SAC (Haarnoja et al., 2018) leverage gradients of learned immediate action values to learn a policy by experience replay. SVG (Heess et al., 2015) reduces the variance of model-free on-policy algorithms by analytic value gradients of one-step model predictions. Concurrent work by Byravan et al. (2019) uses latent imagination with deterministic models for navigation and manipulation tasks. ME-TRPO (Kurutach et al., 2018) accelerates an otherwise model-free agent via gradients of predicted rewards for proprioceptive inputs. DistGBP (Henaff et al., 2017; 2019) uses model gradients for online planning in simple tasks. 7 Published as a conference paper at ICLR 2020 Acrobot Swingup Cheetah Run Cup Catch Finger Spin 1000 {=== 750 500 250 1000 +-------------- 00 05 10° «15 2.0 0.0 60.5 1.0 15 2.0 0.0 60.5 1.0 15 2.0 00 0.5 10° #15 20 Hopper Stand Pendulum Swingup Quadruped Run Walker Stand 1000 1000 e 750 g 750 3 S 500 500 g 250 5. 250 a 0 0 00 05 10° «15 2.0 0.0 60.5 1.0 15 2.0 0.0 60.5 1.0 15 2.0 00 0.5 10° #15 20 Environment Steps 1e6 Environment Steps 1e6 Environment Steps 1e6 Environment Steps 1e6 — Dreamer + Reconstruction —— Dreamer + Contrastive —— Dreamer + Rewardonly == D4PG (le9 steps) —— A3C (le9 steps, proprio) Figure 8: Comparison of representation learning objectives to be used with Dreamer. Pixel recon- struction performs best for the majority of tasks. The contrastive objective solves about half of the tasks, while predicting rewards alone was not sufficient in our experiments. The results suggest that future developments in learning representations are likely to translate into improved task performance for Dreamer. The performance curves for all tasks are included in Appendix E. # 6 EXPERIMENTS We experimentally evaluate Dreamer on a variety of control tasks. We designed the experiments to compare Dreamer to current best methods in the literature, and to evaluate its ability to solve tasks with long horizons, continuous actions, discrete actions, and early termination. We further compare the orthogonal choice of learning objective for the world model. The source code for all our experiments and videos of Dreamer are available at https://danijar.com/dreamer. Control tasks We evaluate Dreamer on 20 visual control tasks of the DeepMind Control Suite (Tassa et al., 2018), illustrated in Figure 2. These tasks pose a variety of challenges, including sparse rewards, contact dynamics, and 3D scenes. We selected the tasks on which Tassa et al. (2018) report non-zero performance from image inputs. Agent observations are images of shape 64 × 64 × 3, actions range from 1 to 12 dimensions, rewards range from 0 to 1, episodes last for 1000 steps and have randomized initial states. 
We use a fixed action repeat of R = 2 across tasks. We further evaluate the applicability of Dreamer to discrete actions and early termination on a subset of Atari games (Bellemare et al., 2013) and DeepMind Lab levels (Beattie et al., 2016) as detailed in Appendix C. Implementation Our implementation uses TensorFlow Probability (Dillon et al., 2017). We use a single Nvidia V100 GPU and 10 CPU cores for each training run. The training time for our Dreamer implementation is about 3 hours per 106 environment steps on the control suite, compared to 11 hours for online planning using PlaNet, and the 24 hours used by D4PG to reach similar performance. We use the same hyper parameters across all continuous tasks, and similarly across all discrete tasks, detailed in Appendix A. The world models are learned via reconstruction unless specified. Baseline methods The highest reported performance on the continuous tasks is achieved by D4PG (Barth-Maron et al., 2018), an improved variant of DDPG (Lillicrap et al., 2015) that uses distributed collection, distributional Q-learning, multi-step returns, and prioritized replay. We include the scores for D4PG with pixel inputs and A3C (Mnih et al., 2016) with state inputs from Tassa et al. (2018). PlaNet (Hafner et al., 2018) learns the same world model as Dreamer and selects actions via online planning without an action model and drastically improves over D4PG and A3C in data efficiency. We re-run PlaNet with R = 2 for a unified experimental setup. For Atari, we show the final performance of SimPLe (Kaiser et al., 2019), DQN (Mnih et al., 2015) and Rainbow (Hessel et al., 2018) reported by Castro et al. (2018), and for DeepMind Lab that of IMPALA (Espeholt et al., 2018) as a guideline. 8 Published as a conference paper at ICLR 2020 Performance To evaluate the performance of Dreamer, we compare it to state-of-the-art reinforce- ment learning agents. The results are summarized in Figure 6. With an average score of 823 across tasks after 5 × 106 environment steps, Dreamer exceeds the performance of the strong model-free D4PG agent that achieves an average of 786 within 108 environment steps. At the same time, Dreamer inherits the data-efficiency of PlaNet, confirming that the learned world model can help to generalize from small amounts of experience. The empirical success of Dreamer shows that learning behaviors by latent imagination with world models can outperform top methods based on experience replay. Long horizons To investigate its ability to learn long-horizon behaviors, we compare Dreamer to alternatives for deriving behaviors from the world model at various horizon lengths. For this, we learn an action model to maximize imagined rewards without a value model and compare to online planning using PlaNet. Figure 4 shows the final performance for different imagination horizons, confirming that the value model makes Dreamer more robust to the horizon and performs well even for short horizons. Performance curves for all 19 tasks with horizon of 20 are shown in Appendix D, where Dreamer outperforms the alternatives on 16 of 20 tasks, with 4 ties. Representation learning Dreamer can be used with any differentiable dynamics model that pre- dicts future rewards given actions and past observations. Since the representation learning objective is orthogonal to our algorithm, we compare three natural choices described in Section 4: pixel recon- struction, contrastive estimation, and pure reward prediction. 
Figure 8 shows clear differences in task performance for different representation learning approaches, with pixel reconstruction outperform- ing contrastive estimation on most tasks. This suggests that future improvements in representation learning are likely to translate to higher task performance with Dreamer. Reward prediction alone was not sufficient in our experiments. Further ablations are included in the appendix of the paper. # 7 CONCLUSION We present Dreamer, an agent that learns long-horizon behaviors purely by latent imagination. For this, we propose an actor critic method that optimizes a parametric policy by propagating analytic gradients of multi-step values back through learned latent dynamics. Dreamer outperforms previous methods in data-efficiency, computation time, and final performance on a variety of challenging continuous control tasks with image inputs. We further show that Dreamer is applicable to tasks with discrete actions and early episode termination. Future research on representation learning can likely scale latent imagination to environments of higher visual complexity. Acknowledgements We thank Simon Kornblith, Benjamin Eysenbach, Ian Fischer, Amy Zhang, Geoffrey Hinton, Shane Gu, Adam Kosiorek, Brandon Amos, Jacob Buckman, Calvin Luo, and Rishabh Agarwal, and our anonymous reviewers for feedback and discussions. We thank Yuval Tassa for adding the quadruped environment to the control suite. 9 Published as a conference paper at ICLR 2020 # REFERENCES A. A. Alemi, I. Fischer, J. V. Dillon, and K. Murphy. Deep variational information bottleneck. arXiv preprint arXiv:1612.00410, 2016. E. Banijamali, R. Shu, M. Ghavamzadeh, H. Bui, and A. Ghodsi. Robust locally-linear controllable embedding. arXiv preprint arXiv:1710.05373, 2017. G. Barth-Maron, M. W. Hoffman, D. Budden, W. Dabney, D. Horgan, A. Muldal, N. Heess, and T. Lil- licrap. Distributed distributional deterministic policy gradients. arXiv preprint arXiv:1804.08617, 2018. C. Beattie, J. Z. Leibo, D. Teplyashin, T. Ward, M. Wainwright, H. Küttler, A. Lefrancq, S. Green, V. Valdés, A. Sadik, et al. Deepmind lab. arXiv preprint arXiv:1612.03801, 2016. M. G. Bellemare, Y. Naddaf, J. Veness, and M. Bowling. The arcade learning environment: An evaluation platform for general agents. Journal of Artificial Intelligence Research, 47:253–279, 2013. Y. Bengio, N. Léonard, and A. Courville. Estimating or propagating gradients through stochastic neurons for conditional computation. arXiv preprint arXiv:1308.3432, 2013. J. Buckman, D. Hafner, G. Tucker, E. Brevdo, and H. Lee. Sample-efficient reinforcement learning with stochastic ensemble value expansion. In Advances in Neural Information Processing Systems, pages 8224–8234, 2018. L. Buesing, T. Weber, S. Racaniere, S. Eslami, D. Rezende, D. P. Reichert, F. Viola, F. Besse, K. Gregor, D. Hassabis, et al. Learning and querying fast generative models for reinforcement learning. arXiv preprint arXiv:1802.03006, 2018. A. Byravan, J. T. Springenberg, A. Abdolmaleki, R. Hafner, M. Neunert, T. Lampe, N. Siegel, N. Heess, and M. Riedmiller. Imagined value gradients: Model-based policy optimization with transferable latent dynamics models. arXiv preprint arXiv:1910.04142, 2019. P. S. Castro, S. Moitra, C. Gelada, S. Kumar, and M. G. Bellemare. Dopamine: A research framework for deep reinforcement learning. arXiv preprint arXiv:1812.06110, 2018. K. Chua, R. Calandra, R. McAllister, and S. Levine. 
Deep reinforcement learning in a handful of trials using probabilistic dynamics models. In Advances in Neural Information Processing Systems, pages 4754–4765, 2018. D.-A. Clevert, T. Unterthiner, and S. Hochreiter. Fast and accurate deep network learning by exponential linear units (elus). arXiv preprint arXiv:1511.07289, 2015. J. V. Dillon, I. Langmore, D. Tran, E. Brevdo, S. Vasudevan, D. Moore, B. Patton, A. Alemi, M. Hoffman, and R. A. Saurous. Tensorflow distributions. arXiv preprint arXiv:1711.10604, 2017. A. Doerr, C. Daniel, M. Schiegg, D. Nguyen-Tuong, S. Schaal, M. Toussaint, and S. Trimpe. Probabilistic recurrent state-space models. arXiv preprint arXiv:1801.10395, 2018. F. Ebert, C. Finn, A. X. Lee, and S. Levine. Self-supervised visual planning with temporal skip connections. arXiv preprint arXiv:1710.05268, 2017. S. A. Eslami, D. J. Rezende, F. Besse, F. Viola, A. S. Morcos, M. Garnelo, A. Ruderman, A. A. Rusu, I. Danihelka, K. Gregor, et al. Neural scene representation and rendering. Science, 360(6394): 1204–1210, 2018. L. Espeholt, H. Soyer, R. Munos, K. Simonyan, V. Mnih, T. Ward, Y. Doron, V. Firoiu, T. Harley, I. Dunning, et al. Impala: Scalable distributed deep-rl with importance weighted actor-learner architectures. arXiv preprint arXiv:1802.01561, 2018. V. Feinberg, A. Wan, I. Stoica, M. I. Jordan, J. E. Gonzalez, and S. Levine. Model-based value estimation for efficient model-free reinforcement learning. arXiv preprint arXiv:1803.00101, 2018. 10 Published as a conference paper at ICLR 2020 C. Gelada, S. Kumar, J. Buckman, O. Nachum, and M. G. Bellemare. Deepmdp: Learning continuous latent space models for representation learning. arXiv preprint arXiv:1906.02736, 2019. K. Gregor, D. J. Rezende, F. Besse, Y. Wu, H. Merzic, and A. v. d. Oord. Shaping belief states with generative environment models for rl. arXiv preprint arXiv:1906.09237, 2019. Z. D. Guo, M. G. Azar, B. Piot, B. A. Pires, T. Pohlen, and R. Munos. Neural predictive belief representations. arXiv preprint arXiv:1811.06407, 2018. M. Gutmann and A. Hyvärinen. Noise-contrastive estimation: A new estimation principle for unnormalized statistical models. In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, pages 297–304, 2010. D. Ha and J. Schmidhuber. World models. arXiv preprint arXiv:1803.10122, 2018. T. Haarnoja, A. Zhou, P. Abbeel, and S. Levine. Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor. arXiv preprint arXiv:1801.01290, 2018. D. Hafner, T. Lillicrap, I. Fischer, R. Villegas, D. Ha, H. Lee, and J. Davidson. Learning latent dynamics for planning from pixels. arXiv preprint arXiv:1811.04551, 2018. N. Heess, G. Wayne, D. Silver, T. Lillicrap, T. Erez, and Y. Tassa. Learning continuous control policies by stochastic value gradients. In Advances in Neural Information Processing Systems, pages 2944–2952, 2015. M. Henaff, W. F. Whitney, and Y. LeCun. Model-based planning in discrete action spaces. CoRR, abs/1705.07177, 2017. M. Henaff, W. F. Whitney, and Y. LeCun. Model-based planning with discrete and continuous actions. arXiv preprint arXiv:1705.07177, 2018. M. Henaff, A. Canziani, and Y. LeCun. Model-predictive policy learning with uncertainty regulariza- tion for driving in dense traffic. arXiv preprint arXiv:1901.02705, 2019. M. Hessel, J. Modayil, H. Van Hasselt, T. Schaul, G. Ostrovski, W. Dabney, D. Horgan, B. Piot, M. Azar, and D. Silver. Rainbow: Combining improvements in deep reinforcement learning. 
In Thirty-Second AAAI Conference on Artificial Intelligence, 2018. M. Jaderberg, V. Mnih, W. M. Czarnecki, T. Schaul, J. Z. Leibo, D. Silver, and K. Kavukcuoglu. Reinforcement learning with unsupervised auxiliary tasks. arXiv preprint arXiv:1611.05397, 2016. M. I. Jordan, Z. Ghahramani, T. S. Jaakkola, and L. K. Saul. An introduction to variational methods for graphical models. Machine learning, 37(2):183–233, 1999. L. Kaiser, M. Babaeizadeh, P. Milos, B. Osinski, R. H. Campbell, K. Czechowski, D. Erhan, C. Finn, P. Kozakowski, S. Levine, et al. Model-based reinforcement learning for atari. arXiv preprint arXiv:1903.00374, 2019. R. E. Kalman. A new approach to linear filtering and prediction problems. Journal of basic Engineering, 82(1):35–45, 1960. M. Karl, M. Soelch, J. Bayer, and P. van der Smagt. Deep variational bayes filters: Unsupervised learning of state space models from raw data. arXiv preprint arXiv:1605.06432, 2016. D. P. Kingma and J. Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014. D. P. Kingma and M. Welling. Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114, 2013. R. G. Krishnan, U. Shalit, and D. Sontag. Deep kalman filters. arXiv preprint arXiv:1511.05121, 2015. T. Kurutach, I. Clavera, Y. Duan, A. Tamar, and P. Abbeel. Model-ensemble trust-region policy optimization. arXiv preprint arXiv:1802.10592, 2018. 11 Published as a conference paper at ICLR 2020 Y. LeCun, B. Boser, J. S. Denker, D. Henderson, R. E. Howard, W. Hubbard, and L. D. Jackel. Backpropagation applied to handwritten zip code recognition. Neural computation, 1(4):541–551, 1989. A. X. Lee, A. Nagabandi, P. Abbeel, and S. Levine. Stochastic latent actor-critic: Deep reinforcement learning with a latent variable model. arXiv preprint arXiv:1907.00953, 2019. T. P. Lillicrap, J. J. Hunt, A. Pritzel, N. Heess, T. Erez, Y. Tassa, D. Silver, and D. Wierstra. Continuous control with deep reinforcement learning. arXiv preprint arXiv:1509.02971, 2015. K. Lowrey, A. Rajeswaran, S. Kakade, E. Todorov, and I. Mordatch. Plan online, learn offline: Efficient learning and exploration via model-based control. arXiv preprint arXiv:1811.01848, 2018. M. C. Machado, M. G. Bellemare, E. Talvitie, J. Veness, M. Hausknecht, and M. Bowling. Revisiting the arcade learning environment: Evaluation protocols and open problems for general agents. Journal of Artificial Intelligence Research, 61:523–562, 2018. D. McAllester and K. Statos. Formal limitations on the measurement of mutual information. arXiv preprint arXiv:1811.04251, 2018. V. Mnih, K. Kavukcuoglu, D. Silver, A. A. Rusu, J. Veness, M. G. Bellemare, A. Graves, M. Ried- miller, A. K. Fidjeland, G. Ostrovski, et al. Human-level control through deep reinforcement learning. Nature, 518(7540):529, 2015. V. Mnih, A. P. Badia, M. Mirza, A. Graves, T. Lillicrap, T. Harley, D. Silver, and K. Kavukcuoglu. Asynchronous methods for deep reinforcement learning. In International Conference on Machine Learning, pages 1928–1937, 2016. J. Oh, S. Singh, and H. Lee. Value prediction network. In Advances in Neural Information Processing Systems, pages 6118–6128, 2017. A. v. d. Oord, Y. Li, and O. Vinyals. Representation learning with contrastive predictive coding. arXiv preprint arXiv:1807.03748, 2018. P. Parmas, C. E. Rasmussen, J. Peters, and K. Doya. Pipps: Flexible model-based policy search robust to the curse of chaos. arXiv preprint arXiv:1902.01240, 2019. A. Piergiovanni, A. Wu, and M. S. Ryoo. 
Learning real-world robot policies by dreaming. arXiv preprint arXiv:1805.07813, 2018. B. Poole, S. Ozair, A. v. d. Oord, A. A. Alemi, and G. Tucker. On variational bounds of mutual information. arXiv preprint arXiv:1905.06922, 2019. D. J. Rezende, S. Mohamed, and D. Wierstra. Stochastic backpropagation and approximate inference in deep generative models. arXiv preprint arXiv:1401.4082, 2014. J. Schmidhuber. Making the world differentiable: On using self-supervised fully recurrent neural networks for dynamic reinforcement learning and planning in non-stationary environments. 1990. J. Schrittwieser, I. Antonoglou, T. Hubert, K. Simonyan, L. Sifre, S. Schmitt, A. Guez, E. Lockhart, D. Hassabis, T. Graepel, et al. Mastering atari, go, chess and shogi by planning with a learned model. arXiv preprint arXiv:1911.08265, 2019. J. Schulman, F. Wolski, P. Dhariwal, A. Radford, and O. Klimov. Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347, 2017. D. Silver, G. Lever, N. Heess, T. Degris, D. Wierstra, and M. Riedmiller. Deterministic policy gradient algorithms. In Proceedings of the 31st International Conference on Machine Learning, 2014. D. Silver, J. Schrittwieser, K. Simonyan, I. Antonoglou, A. Huang, A. Guez, T. Hubert, L. Baker, M. Lai, A. Bolton, et al. Mastering the game of go without human knowledge. Nature, 550(7676): 354, 2017. 12 Published as a conference paper at ICLR 2020 A. Srinivas, A. Jabri, P. Abbeel, S. Levine, and C. Finn. Universal planning networks. arXiv preprint arXiv:1804.00645, 2018. R. S. Sutton. Dyna, an integrated architecture for learning, planning, and reacting. ACM SIGART Bulletin, 2(4):160–163, 1991. R. S. Sutton and A. G. Barto. Reinforcement learning: An introduction. MIT press, 2018. Y. Tassa, Y. Doron, A. Muldal, T. Erez, Y. Li, D. d. L. Casas, D. Budden, A. Abdolmaleki, J. Merel, A. Lefrancq, et al. Deepmind control suite. arXiv preprint arXiv:1801.00690, 2018. N. Tishby, F. C. Pereira, and W. Bialek. The information bottleneck method. arXiv preprint physics/0004057, 2000. T. Wang and J. Ba. Exploring model-based planning with policy networks. arXiv preprint arXiv:1906.08649, 2019. T. Wang, X. Bao, I. Clavera, J. Hoang, Y. Wen, E. Langlois, S. Zhang, G. Zhang, P. Abbeel, and J. Ba. Benchmarking model-based reinforcement learning. CoRR, abs/1907.02057, 2019. M. Watter, J. Springenberg, J. Boedecker, and M. Riedmiller. Embed to control: A locally linear latent dynamics model for control from raw images. In Advances in neural information processing systems, pages 2746–2754, 2015. T. Weber, S. Racanière, D. P. Reichert, L. Buesing, A. Guez, D. J. Rezende, A. P. Badia, O. Vinyals, N. Heess, Y. Li, et al. Imagination-augmented agents for deep reinforcement learning. arXiv preprint arXiv:1707.06203, 2017. R. J. Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine learning, 8(3-4):229–256, 1992. M. Zhang, S. Vikram, L. Smith, P. Abbeel, M. Johnson, and S. Levine. Solar: deep structured representations for model-based reinforcement learning. In International Conference on Machine Learning, 2019. 13 Published as a conference paper at ICLR 2020 # A HYPER PARAMETERS Model components We use the convolutional encoder and decoder networks from Ha and Schmid- huber (2018), the RSSM of Hafner et al. (2018), and implement all other functions as three dense layers of size 300 with ELU activations (Clevert et al., 2015). Distributions in latent space are 30-dimensional diagonal Gaussians. 
The action model outputs a tanh mean scaled by a factor of 5 and a softplus standard deviation for the Normal distribution that is then transformed using tanh (Haarnoja et al., 2018). The scaling factor allows the agent to saturate the action distribution.

Learning updates We draw batches of 50 sequences of length 50 to train the world model, value model, and action model using Adam (Kingma and Ba, 2014) with learning rates 6 × 10^-4, 8 × 10^-5, and 8 × 10^-5, respectively, and scale down gradient norms that exceed 100. We do not scale the KL regularizers (β = 1) but clip them below 3 free nats as in PlaNet. The imagination horizon is H = 15 and the same trajectories are used to update both action and value models. We compute the V_λ targets with γ = 0.99 and λ = 0.95. We did not find latent overshooting for learning the model, an entropy bonus for the action model, or target networks for the value model necessary.

Environment interaction The dataset is initialized with S = 5 episodes collected using random actions. We iterate between 100 training steps and collecting 1 episode by executing the predicted mode action with Normal(0, 0.3) exploration noise. Instead of manually selecting the action repeat for each environment as in Hafner et al. (2018) and Lee et al. (2019), we fix it to 2 for all environments. See Figure 12 for an assessment of the robustness to different action repeat values.

Discrete control For experiments on Atari games and DeepMind Lab levels, the action model predicts the logits of a categorical distribution. We use straight-through gradients for the sampling step during latent imagination. The action noise is epsilon greedy, where ε is linearly scheduled from 0.4 to 0.1 over the first 200,000 gradient steps. To account for the higher complexity of these tasks, we use an imagination horizon of H = 10, scale the KL regularizers by β = 0.1, and bound rewards using tanh. We predict the discount factor from the latent state with a binary classifier that is trained towards the soft labels of 0 and γ.

14 Published as a conference paper at ICLR 2020

# B DERIVATIONS

We define the information bottleneck objective (Tishby et al., 2000) for latent dynamics models,

max I(s_1:T; (o_1:T, r_1:T) | a_1:T) − β I(s_1:T; i_1:T | a_1:T),   (13)

where β is a scalar and the i_t are dataset indices that determine the observations p(o_t | i_t) = δ(o_t − ō_t) as in Alemi et al. (2016). Maximizing the objective leads to model states that can predict the sequence of observations and rewards while limiting the amount of information extracted at each time step. This encourages the model to reconstruct each image by relying on information extracted at preceding time steps to the extent possible, and only accessing additional information from the current image when necessary. As a result, the information regularizer encourages the model to learn long-term dependencies.
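In the implementation (Appendix A), the second term of this objective appears as the KL divergence between the representation model and the transition prior, scaled by β and clipped below the free nats. A minimal sketch of that regularizer, assuming TensorFlow Probability distribution objects for the two models (the function name is ours), could look like:

```python
import tensorflow as tf
import tensorflow_probability as tfp

tfd = tfp.distributions

def kl_regularizer(posterior, prior, free_nats=3.0, beta=1.0):
    # posterior: q(s_t | s_{t-1}, a_{t-1}, o_t), prior: p(s_t | s_{t-1}, a_{t-1}),
    # e.g. diagonal Gaussians. Clipping the KL below `free_nats` removes the
    # penalty (and its gradient) once the divergence is already small, as in PlaNet.
    kl = tfd.kl_divergence(posterior, prior)
    kl = tf.maximum(kl, free_nats)
    return beta * tf.reduce_mean(kl)
```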
For the generative objective, we lower bound the first term using the non-negativity of the KL divergence and drop the marginal data probability as it does not depend on the representation model, Isic; (ovr, Tir) | aur) = Ep(oryrirysir.ar »)( opr, rer | sur, aur) —mplour,rur | a.r)) t const I+ B( Son plours rier | sur,ar)) t 2 B( So mploursrir | sirsa7)) —kKL (Ploursrir 81:7, 4:7) | [1a | se)a(rs | s1)) t t = E(Soina(oe | s1) + Ing(re | s1))- (14) For the contrastive objective, we subtract the constant marginal probability of the data under the variational encoder, apply Bayes rule, and use the InfoNCE mini-batch bound (Poole et al., 2019), E(Ing(o | s¢) + nq(rz | s2)) E(Inq(o | s¢) — Ing(or) + ng(r: | s1)) (In q(sr | o¢) — Ing(s) + Ing(r | 5¢)) (15) Ing(s¢ | 04) — n>? q(se | 0) + Ing(rs | “)): E E IV For the second term, we use the non-negativity of the KL divergence to obtain an upper bound, (surstur | aur) | ie} Dour rt 8uT QT fir) (Somplse | Se—1, @e-1, 44) — In p(s; | st-1,a1-1)) t (Somplse | S:—1, @t-1, Or) — In p(s; | st-1,a1-1)) t | ie! (16) IA td (Somplse | St—-1, Gt-1, 01) _ Ing(sz | si-1,a1-1)) t = B( 0 KL (p(se | 81-1, a1-1, 01) || a(se | st-1,a1-1))). t # t This lower bounds the objective. 15 Published as a conference paper at ICLR 2020 # C DISCRETE CONTROL We evaluate Dreamer on a subset of tasks with discrete actions from the Atari suite (Bellemare et al., 2013) and DeepMind Lab (Beattie et al., 2016). While agents that purely learn through world models are not yet competitive in these domains (Kaiser et al., 2019), the tasks offer a diverse test bed with visual complexity, sparse rewards, and early termination. Agents observe 64 × 64 × 3 images and select one of between 3 and 18 actions. For Atari, we follow the evaluation protocol of Machado et al. (2018) with sticky actions. Refer to Figure 9 for these experiments. Episode Return Episode Return Episode Return 200000 Episode Return Ss 8 3 Ss 150000 100000 50000 Boxing Choppercommand Doubledunk Fishingderby ° 10000 5000 10000 5000 Collect Good Objects Watermaze 15000 10000 5000 10.0 T 40 2 4 6 Environment Steps 1e7 os 10 15 Environment Steps 1¢7 0.25 0.50 0.75 1.00 1.25 1.50 Environment Steps 1e7 123 4 ~°5 Environment Steps 1e7 — Dreamer == SimPLe (1e5 steps) == DOQN (2c8 steps) == Rainbow (2e8 steps) == IMPALA (1cl0 st steps) Random Figure 9: Performance of Dreamer in environments with discrete actions and early termination. Dreamer learns successful behaviors on this subset of Atari games and the object collection level of DMLab. We highlight representation learning for these environments as a direction of future work that could enable competitive performance across all Atari games and DMLab levels using Dreamer. 
16 Published as a conference paper at ICLR 2020 # D BEHAVIOR LEARNING Episode Return Episode Return Episode Return Episode Return Episode Return Acrobot Swingup Cartpole Balance Sparse Cartpole Swingup Cartpole Balance 1000 4 1000 —SSS=SSeSe= SSE] 1000 === 1000 4 750 4 750 750 4 500 | 500 500 4 2504 250 2504 0 0 o4 o 1 2 3 4 o 1 2 3 4 °5 o 1 2 3 4°55 Cartpole Swingup Sparse Cheetah Run Cup Catch 1000 4 1000 1000 = 750 + } 1 2 3 4 Finger Turn Easy } 1 2 3 4 5 Hopper Hop 1000 4 750 4 1000 4 1000 4 Reacher Hard Walker Run Walker Stand } 1 2 3 4 } 1 2 3 4 5 } 1 2 3 4 5 } 1 2 3 4 5 Pendulum Swingup Quadruped Run Quadruped Walk Reacher Easy 1000 1000 4 1000 [=== 750 750 4 750 4 500 500 4 500 4 250 250 4 2504 0 of 04 } 1 2 3 4 } 1 2 3 4 5 } 1 2 3 4 5 } 1 2 3 4 5 1000 —— 1000 750 500 250 o 1 2 3 4 Environment Steps 1e6 } 1 2 3 4 5 } 1 2 3 4 5 Environment Steps 1e6 Environment Steps 1e6 1 2 3 4 °5 Environment Steps 1e6 — Dreamer —No value — PlaNet == D4PG (1e9 steps) == A3C (1e9 steps, proprio) == SLAC (3e6 steps) Figure 10: Comparison of action selection schemes on the continuous control tasks of the DeepMind Control Suite from pixel inputs. The lines show mean scores over environment steps and the shaded areas show the standard deviation across 5 seeds. We compare Dreamer that learns both actions and values in imagination, to only learning actions in imagination, and Planet that selects actions by online planning instead of learning a policy. The baselines include the top model-free algorithm D4PG, the well-known A3C agent, and the hybrid SLAC agent. 17 Published as a conference paper at ICLR 2020 # E REPRESENTATION LEARNING Acrobot Swingup Cartpole Balance Cartpole Balance Sparse Cartpole Swingup 1000 1000 = 1000 -==== 1000 = 750 750 750 4 750 2 g 4 2 500 500 4 500 & fy 250 250 4 250 0 0 0 o 1 2 3 4 o 1 2 3 4 o 1 2 3 4 Cartpole Swingup Sparse Cheetah Run Finger Spin 1000 1000 1000 "=== === E 750 2 g 4 2 500 & fy 250 0 o 1 2 3 4 o 1 2 3 4 o 1 2 3 4 Finger Turn Easy Hopper Hop Hopper Stand 1000 {=-—~==== == 1000 1000 1000 E 750 750 4 750 3 4 3 500 500 & fy 250 250 0 0 0 o 1 2 3 4 o 1 2 3 4 o 1 2 3 4 o 1 2 3 4 Pendulum Swingup Quadruped Run Quadruped Walk Reacher Easy 1000 1000 1000 —-==—— — Fy 750 750 5 3 4 3 500 500 & fy 250 250 0 0 o 1 2 3 4 o 1 2 3 4 o 1 2 3 4 Reacher Hard Walker Run Walker Stand 1000 1000 1000 Fy 750 750 5 3 4 3 500 500 & fy 250 250 0 0 T T T 0 T T T o 1 2 3 4 o 1 2 3 4 Environment Steps 1e6 Environment Steps 1e6 Environment Steps 1e6 Environment Steps 1e6 Figure 11: Comparison of representation learning methods for Dreamer. The lines show mean scores and the shaded areas show the standard deviation across 5 seeds. We compare generating both images and rewards, generating rewards and using a contrastive loss to learn about the images, and only predicting rewards. Image reconstruction provides the best learning signal across most of the tasks, followed by the contrastive objective. Learning purely from rewards was not sufficient in our experiments and might require larger amounts of experience. 18 Published as a conference paper at ICLR 2020 # F ACTION REPEAT Acrobot Swingup Cartpole Balance Cartpole Balance Sparse Cartpole Swingup 1000 OE =} 100 1000 = 8004 | 2 6004 a 3 4004 A 200 4 0 0 of 00 02 04 06 08 10 00 02 04 06 08 10 00 02 04 06 08 10 00 02 04 06 08 10 Cartpole Swingup Sparse Cheetah Run Cup Catch Finger Spin 1000 1000 1000 = = 8004 800 800 4 Ey Z 6004 600 600 4 3 L —_ 3 4004 400 400 4 A 200+. 
---f- f------- 200 200 4 0 ——+ 0-11 0 11 —t 0 —T i T 00 02 04 06 08 10 00 02 04 06 08 10 00 02 04 06 08 10 00 02 04 06 08 10 Finger Tum Easy Finger Turn Hard Hopper Hop Hopper Stand 1000 = 800 3 600 3 3 400 5 a 200 ote 00 02 04 06 08 10 00 02 04 06 08 10 00 02 04 06 08 10 00 02 04 06 08 10 Pendulum Swingup Quadruped Run Quadruped Walk Reacher Easy 1000 1000 1000 1000 == = = 800 800 800 800 7- g ZB 6004 600 600 600 4 3 3 4004 400 400 400 4 & =" 200 4 200 200 200 4 0 ——1——1—r 0 11 —t 0 11 —r 01—.—_—__— 00 02 04 06 08 10 00 02 04 06 08 10 00 02 04 06 08 10 00 02 04 06 08 10 Reacher Hard Walker Run Walker Stand Walker Walk 1000 —<———_______———— 1000 1000 = 8004 800 800 Ey Z 6004 600 600 3 3 400 400 400 i 200 200 200 0 r 0-— 0 11 —t 0+—_—+ T T 00 02 04 06 08 10 00 02 04 06 08 10 00 02 04 06 08 10 00 02 04 06 08 10 Environment Steps 1¢6 Environment Steps 1¢6 Environment Steps 106 Environment Steps 106 — Repeat 1 — Repeat2 — Repeat4 -— A3C (1e9 steps, proprio) —- D4PG (1e9 steps) -— PlaNet (1e6 steps) —— SLAC (e6 steps) Figure 12: Robustness of Dreamer to different control frequencies. Reinforcement learning methods can be sensitive to this hyper parameter, which could be amplified when learning dynamics models at the control frequency of the environment. For this experiment, we train Dreamer with different amounts of action repeat. The areas show one standard deviation across 2 seeds. We used a previous hyper parameter setting for this experiment. We find that a value of R = 2 works best across tasks. 19 Published as a conference paper at ICLR 2020 # G CONTINUOUS CONTROL SCORES A3C D4PG Input modality Environment steps proprio 108 pixels 108 pixels 5 × 106 Acrobot Swingup Cartpole Balance Cartpole Balance Sparse Cartpole Swingup Cartpole Swingup Sparse Cheetah Run Cup Catch Finger Spin Finger Turn Easy Finger Turn Hard Hopper Hop Hopper Stand Pendulum Swingup Quadruped Run Quadruped Walk Reacher Easy Reacher Hard Walker Run Walker Stand Walker Walk 41.90 951.60 857.40 558.40 179.80 213.90 104.70 129.40 167.30 88.70 0.50 27.90 48.60 − − 95.60 39.70 191.80 378.40 311.00 91.70 992.80 1000.00 862.00 482.00 523.80 980.50 985.70 971.40 966.00 242.00 929.90 680.90 − − 967.40 957.10 567.20 985.20 968.30 3.21 452.56 164.74 312.56 0.64 496.12 455.98 495.25 451.22 312.55 0.37 5.96 3.27 280.45 238.90 468.50 187.02 626.25 759.19 944.70 Average 243.70 786.32 332.97 Dreamer pixels 5 × 106 365.26 979.56 941.84 833.66 812.22 894.56 962.48 498.88 825.86 891.38 368.97 923.72 833.00 888.39 931.61 935.08 817.05 824.67 977.99 961.67 823.39 1We re-run PlaNet with fixed action repeat of R = 2 to not tune the this value for each of the 20 tasks. As a result, the scores differ from Hafner et al. (2018). 20
{ "id": "1811.01848" }
1912.01412
Deep Learning for Symbolic Mathematics
Neural networks have a reputation for being better at solving statistical or approximate problems than at performing calculations or working with symbolic data. In this paper, we show that they can be surprisingly good at more elaborated tasks in mathematics, such as symbolic integration and solving differential equations. We propose a syntax for representing mathematical problems, and methods for generating large datasets that can be used to train sequence-to-sequence models. We achieve results that outperform commercial Computer Algebra Systems such as Matlab or Mathematica.
http://arxiv.org/pdf/1912.01412
Guillaume Lample, François Charton
cs.SC, cs.LG
null
null
cs.SC
20191202
20191202
9 1 0 2 c e D 2 ] C S . s c [ 1 v 2 1 4 1 0 . 2 1 9 1 : v i X r a # DEEP LEARNING FOR SYMBOLIC MATHEMATICS Guillaume Lample∗ Facebook AI Research [email protected] Franc¸ois Charton∗ Facebook AI Research [email protected] # ABSTRACT Neural networks have a reputation for being better at solving statistical or approxi- mate problems than at performing calculations or working with symbolic data. In this paper, we show that they can be surprisingly good at more elaborated tasks in mathematics, such as symbolic integration and solving differential equations. We propose a syntax for representing mathematical problems, and methods for generating large datasets that can be used to train sequence-to-sequence models. We achieve results that outperform commercial Computer Algebra Systems such as Matlab or Mathematica. # INTRODUCTION A longstanding tradition in machine learning opposes rule-based inference to statistical learning (Rumelhart et al., 1986), and neural networks clearly stand on the statistical side. They have proven to be extremely effective in statistical pattern recognition and now achieve state-of-the-art performance on a wide range of problems in computer vision, speech recognition, natural language processing (NLP), etc. However, the success of neural networks in symbolic computation is still extremely limited: combining symbolic reasoning with continuous representations is now one of the challenges of machine learning. Only a few studies investigated the capacity of neural network to deal with mathematical objects, and apart from a small number of exceptions (Zaremba et al., 2014; Loos et al., 2017; Allamanis et al., 2017; Arabshahi et al., 2018b), the majority of these works focus on arithmetic tasks like integer addition and multiplication (Zaremba & Sutskever, 2014; Kaiser & Sutskever, 2015; Trask et al., 2018). On these tasks, neural approaches tend to perform poorly, and require the introduction of components biased towards the task at hand (Kaiser & Sutskever, 2015; Trask et al., 2018). In this paper, we consider mathematics, and particularly symbolic calculations, as a target for NLP models. More precisely, we use sequence-to-sequence models (seq2seq) on two problems of symbolic mathematics: function integration and ordinary differential equations (ODEs). Both are difficult, for trained humans and computer software. For integration, humans are taught a set of rules (integration by parts, change of variable, etc.), that are not guaranteed to succeed, and Computer Algebra Systems use complex algorithms (Geddes et al., 1992) that explore a large number of specific cases. For instance, the complete description of the Risch algorithm (Risch, 1970) for function integration is more than 100 pages long. Yet, function integration is actually an example where pattern recognition should be useful: detecting that an expression is of the form yy’ (y” + 1)~!/? suggests that its primitive will contain \/y? + 1. Detecting this pattern may be easy for small expressions y, but becomes more difficult as the number of operators in y increases. However, to the best of our knowledge, no study has investigated the ability of neural networks to detect patterns in mathematical expressions. We first propose a representation of mathematical expressions and problems that can be used by seq2seq models, and discuss the size and structure of the resulting problem space. Then, we show how to generate datasets for supervised learning of integration and first and second order differential equations. 
Finally, we apply seq2seq models to these datasets, and show that they achieve a better performance than state-of-the-art computer algebra programs, namely Matlab and Mathematica.

# ∗ Equal contribution.

# 2 MATHEMATICS AS A NATURAL LANGUAGE

2.1 EXPRESSIONS AS TREES

Mathematical expressions can be represented as trees, with operators and functions as internal nodes, operands as children, and numbers, constants and variables as leaves. The following trees represent the expressions 2 + 3 × (5 + 2), 3x^2 + cos(2x) − 1, and ∂^2ψ/∂x^2 − (1/ν^2) ∂^2ψ/∂t^2.

[Three expression-tree diagrams, one per expression, with operators and functions at the internal nodes and numbers, variables and constants at the leaves.]

Trees disambiguate the order of operations, take care of precedence and associativity and eliminate the need for parentheses. Up to the addition of meaningless symbols like spaces, punctuation or redundant parentheses, different expressions result in different trees. With a few assumptions, discussed in Section A of the appendix, there is a one-to-one mapping between expressions and trees.

We consider expressions as sequences of mathematical symbols. 2 + 3 and 3 + 2 are different expressions, as are √4x and 2√x, and they will be represented by different trees. Most expressions represent meaningful mathematical objects. x / 0, √−2 or log(0) are also legitimate expressions, even though they do not necessarily make mathematical sense.

Since there is a one-to-one correspondence between trees and expressions, equality between expressions will be reflected over their associated trees, as an equivalence: since 2 + 3 = 5 = 12 − 7 = 1 × 5, the four trees corresponding to these expressions are equivalent.

Many problems of formal mathematics can be reframed as operations over expressions, or trees. For instance, expression simplification amounts to finding a shorter equivalent representation of a tree. In this paper, we consider two problems: symbolic integration and differential equations. Both boil down to transforming an expression into another, e.g. mapping the tree of an equation to the tree of its solution. We regard this as a particular instance of machine translation.

2.2 TREES AS SEQUENCES

Machine translation systems typically operate on sequences (Sutskever et al., 2014; Bahdanau et al., 2015). Alternative approaches have been proposed to generate trees, such as Tree-LSTM (Tai et al., 2015) or Recurrent Neural Network Grammars (RNNG) (Dyer et al., 2016; Eriguchi et al., 2017). However, tree-to-tree models are more involved and much slower than their seq2seq counterparts, both at training and at inference. For the sake of simplicity, we use seq2seq models, which were shown to be effective at generating trees, e.g. in the context of constituency parsing (Vinyals et al., 2015), where the task is to predict a syntactic parse tree of input sentences.

Using seq2seq models to generate trees requires mapping trees to sequences. To this effect, we use prefix notation (also known as normal Polish notation), writing each node before its children, listed from left to right. For instance, the arithmetic expression 2 + 3 ∗ (5 + 2) is represented as the sequence [+ 2 ∗ 3 + 5 2]. In contrast to the more common infix notation 2 + 3 ∗ (5 + 2), prefix sequences need no parentheses and are therefore shorter. Inside sequences, operators, functions or variables are represented by specific tokens, and integers by sequences of digits preceded by a sign. As in the case between expressions and trees, there exists a one-to-one mapping between trees and prefix sequences.
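As an illustration of this mapping, a short sketch of the tree-to-prefix conversion is shown below (the minimal tree type is our own, not the paper's code):

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class Node:
    value: str                              # operator, function, variable or integer token
    children: List["Node"] = field(default_factory=list)  # empty for leaves


def to_prefix(node: Node) -> List[str]:
    # Write each node before its children, listed from left to right.
    tokens = [node.value]
    for child in node.children:
        tokens.extend(to_prefix(child))
    return tokens


# 2 + 3 * (5 + 2)  ->  ['+', '2', '*', '3', '+', '5', '2']
expr = Node("+", [Node("2"), Node("*", [Node("3"), Node("+", [Node("5"), Node("2")])])])
print(to_prefix(expr))
```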
# 2.3 GENERATING RANDOM EXPRESSIONS

To create training data, we need to generate sets of random mathematical expressions. However, sampling uniformly expressions with n internal nodes is not a simple task. Naive algorithms (such as recursive methods or techniques using fixed probabilities for nodes to be leaves, unary, or binary) tend to favour deep trees over broad trees, or left-leaning over right-leaning trees. Here are examples of different trees that we want to generate with the same probability.

[Four example expression trees with the same number of internal nodes, built from operators and functions such as +, ×, pow, sqrt, sin and cos over integer leaves and the variable x.]

In Section C of the appendix, we present an algorithm to generate random trees and expressions, where the four expression trees above are all generated with the same probability.

2.4 COUNTING EXPRESSIONS

We now investigate the number of possible expressions. Expressions are created from a finite set of variables (i.e. literals), constants, integers, and a list of operators that can be simple functions (e.g. cos or exp) or more involved operators (e.g. differentiation or integration). More precisely, we define our problem space as:

• trees with up to n internal nodes
• a set of p1 unary operators (e.g. cos, sin, exp, log)
• a set of p2 binary operators (e.g. +, −, ×, pow)
• a set of L leaf values containing variables (e.g. x, y, z), constants (e.g. e, π), integers (e.g. {−10, . . . , 10})

If p1 = 0, expressions are represented by binary trees. The number of binary trees with n internal nodes is given by the n-th Catalan number Cn (Sloane, 1996). A binary tree with n internal nodes has exactly n + 1 leaves. Each node and leaf can take respectively p2 and L different values. As a result, the number of expressions with n binary operators can be expressed by:

E_n = C_n p_2^n L^{n+1} ≈ (4^n / (n^{3/2} √π)) p_2^n L^{n+1},   with   C_n = (1/(n+1)) (2n choose n)   (1)

If p1 > 0, expressions are unary-binary trees, and the number of trees with n internal nodes is the n-th large Schroeder number Sn (Sloane, 1996). It can be computed by recurrence using the following equation:

(n + 1) S_n = 3(2n − 1) S_{n−1} − (n − 2) S_{n−2}

Finally, the number En of expressions with n internal nodes, p1 unary operators, p2 binary operators and L possible leaves is recursively computed as

(n + 1) E_n = (p1 + 2Lp2)(2n − 1) E_{n−1} − p1(n − 2) E_{n−2}   (2)

If p1 = p2 = L = 1, Equation 2 boils down to Equation 1. If p2 = L = 1, p1 = 0, we have (n + 1) E_n = 2(2n − 1) E_{n−1}, which is the recurrence relation satisfied by Catalan numbers. The derivations and properties of all these formulas are provided in Section B of the appendix.

In Figure 1, we represent the number of binary trees (Cn) and unary-binary trees (Sn) for different numbers of internal nodes. We also represent the number of possible expressions (En) for different sets of operators and leaves.

[Figure 1: log-scale plot of the number of trees and expressions against the number of internal nodes (0 to 28), for several settings of L, p1 and p2, ranging from binary and unary-binary trees (L = 1) to binary and unary-binary expressions with L = 11, p1 = 15, p2 = 4.]

Figure 1: Number of trees and expressions for different numbers of operators and leaves. p1 and p2 correspond to the number of unary and binary operators respectively, and L to the number of possible leaves. The bottom two curves correspond to the number of binary and unary-binary trees (enumerated by Catalan and Schroeder numbers respectively). The top two curves represent the associated number of expressions. We observe that adding leaves and binary operators significantly increases the size of the problem space.
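The recurrence in Equation 2 can be evaluated directly with exact integer arithmetic; the sketch below is our own code, with base cases E_0 = L (a single leaf) and E_1 = p1 L + p2 L^2 (a single operator over leaves) chosen to match the counts of the smallest expressions, and can be used to compute the quantities plotted in Figure 1.

```python
def count_expressions(n_max, p1, p2, L):
    # Number of expressions with n internal nodes, following
    # (n + 1) E_n = (p1 + 2 L p2)(2n - 1) E_{n-1} - p1 (n - 2) E_{n-2}.
    E = [L, p1 * L + p2 * L * L]  # E_0 and E_1
    for n in range(2, n_max + 1):
        numerator = (p1 + 2 * L * p2) * (2 * n - 1) * E[n - 1] - p1 * (n - 2) * E[n - 2]
        E.append(numerator // (n + 1))  # the recurrence yields exact integers
    return E

# Unary-binary expressions with L = 11 leaves, p1 = 15 unary and p2 = 4 binary operators.
print(count_expressions(5, p1=15, p2=4, L=11))
```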
# 3 GENERATING DATASETS

Having defined a syntax for mathematical problems and techniques to randomly generate expressions, we are now in a position to build the datasets our models will use. In the rest of the paper, we focus on two problems of symbolic mathematics: function integration and solving ordinary differential equations (ODE) of the first and second order.

To train our networks, we need datasets of problems and solutions. Ideally, we want to generate representative samples of the problem space, i.e. randomly generate functions to be integrated and differential equations to be solved. Unfortunately, solutions of random problems sometimes do not exist (e.g. the integrals of f(x) = exp(x^2) or f(x) = log(log(x)) cannot be expressed with usual functions), or cannot be easily derived. In this section, we propose techniques to generate large training sets for integration and first and second order differential equations.

3.1 INTEGRATION

We propose three approaches to generate functions with their associated integrals.

Forward generation (FWD). A straightforward approach is to generate random functions with up to n operators (using methods from Section 2) and calculate their integrals with a computer algebra system. Functions that the system cannot integrate are discarded. This generates a representative sample of the subset of the problem space that can be successfully solved by an external symbolic mathematical framework.

Backward generation (BWD). An issue with the forward approach is that the dataset only contains functions that symbolic frameworks can solve (they sometimes fail to compute the integral of integrable functions). Also, integrating large expressions is time-consuming, which makes the overall method particularly slow. Instead, the backward approach generates a random function f, computes its derivative f′, and adds the pair (f′, f) to the training set. Unlike integration, differentiation is always possible and extremely fast even for very large expressions. As opposed to the forward approach, this method does not depend on an external symbolic integration system.

Backward generation with integration by parts (IBP). An issue with the backward approach is that it is very unlikely to generate the integral of simple functions like f(x) = x^3 sin(x). Its integral, F(x) = −x^3 cos(x) + 3x^2 sin(x) + 6x cos(x) − 6 sin(x), a function with 15 operators, has a very low probability of being generated randomly. Besides, the backward approach tends to generate examples where the integral (the solution) is shorter than the derivative (the problem), while forward generation favors the opposite (see Figure 2 in Section E of the appendix). To address this issue, we leverage integration by parts: given two randomly generated functions F and G, we compute their respective derivatives f and g. If fG already belongs to the training set, we know its integral, and we can compute the integral of Fg as:

∫ F g = F G − ∫ f G

Similarly, if Fg is in the training set, we can infer the integral of fG. Whenever we discover the integral of a new function, we add it to the training set. If none of fG or Fg are in the training set, we simply generate new functions F and G. With this approach, we can generate the integrals of functions like x^10 sin(x) without resorting to an external symbolic integration system.

Comparing different generation methods.
Table 1 in Section 4.1 summarizes the differences between the three generation methods. The FWD method tends to generate short problems with long solutions (that computer algebra systems can solve). The BWD approach, on the other hand, generates long problems with short solutions. IBP generates datasets comparable to FWD (short problems and long solutions), without an external computer algebra system. A mixture of BWD and IBP generated data should therefore provide a better representation of problem space, without resorting to external tools. Examples of functions / integrals for the three approaches are given in Table 9 of the Appendix.

3.2 FIRST ORDER DIFFERENTIAL EQUATION (ODE 1)

We now present a method to generate first order differential equations with their solutions. We start from a bivariate function F(x, y) such that the equation F(x, y) = c (where c is a constant) can be analytically solved in y. In other words, there exists a bivariate function f that satisfies ∀(x, c), F(x, f(x, c)) = c. By differentiation with respect to x, we have that ∀x, c:

∂F(x, f_c(x)) / ∂x + f_c′(x) ∂F(x, f_c(x)) / ∂y = 0

where f_c = x ↦ f(x, c). As a result, for any constant c, f_c is a solution of the first order differential equation:

∂F(x, y)/∂x + y′ ∂F(x, y)/∂y = 0   (3)

With this approach, we can use the method described in Section C of the appendix to generate arbitrary functions F(x, y) analytically solvable in y, and create a dataset of differential equations with their solutions.

Instead of generating a random function F, we can generate a solution f(x, c), and determine a differential equation that it satisfies. If f(x, c) is solvable in c, we compute F such that F(x, f(x, c)) = c. Using the above approach, we show that for any constant c, x ↦ f(x, c) is a solution of differential Equation 3. Finally, the resulting differential equation is factorized, and we remove all positive factors from the equation.

A necessary condition for this approach to work is that the generated functions f(x, c) can be solved in c. For instance, the function f(x, c) = c × log(x + c) cannot be analytically solved in c, i.e. the function F that satisfies F(x, f(x, c)) = c cannot be written with usual functions. Since all the operators and functions we use are invertible, a simple condition to ensure the solvability in c is to guarantee that c only appears once in the leaves of the tree representation of f(x, c). A straightforward way to generate a suitable f(x, c) is to sample a random function f(x) by the methods described in Section C of the appendix, and to replace one of the leaves in its tree representation by c. Below is an example of the whole process:

Generate a random function    f(x) = x log(c / x)
Solve in c                    c = x e^{f(x)/x} = F(x, f(x))
Differentiate in x            e^{f(x)/x} (1 + f′(x) − f(x)/x) = 0
Simplify                      x y′ − y + x = 0

3.3 SECOND ORDER DIFFERENTIAL EQUATION (ODE 2)

Our method for generating first order equations can be extended to the second order, by considering functions of three variables f(x, c1, c2) that can be solved in c2. As before, we derive a function of three variables F such that F(x, f(x, c1, c2), c1) = c2. Differentiation with respect to x yields a first order differential equation:

∂F(x, y, c1)/∂x + f′_{c1,c2}(x) ∂F(x, y, c1)/∂y = 0,   evaluated at y = f_{c1,c2}(x)

where f_{c1,c2} = x ↦ f(x, c1, c2). If this equation can be solved in c1, we can infer another three-variable function G satisfying ∀x, G(x, f_{c1,c2}(x), f′_{c1,c2}(x)) = c1.
Differentiating with respect to x a second time yields the following equation: OG (x,y, z) OG(x,y,2) Ly cH OG(a,y, 2) BE) fn) POEM 4 (ay PSE =0 Oz Y=fey.e9 (©) 2= Fy 09 (#) Therefore, for any constants c1 and c2, fc1,c2 is solution of the second order differential equation: OG(x,y,y') _ ,OG(a,y,y') 1, OG(x,y,y') Ox 5 Oy ry Oz 0 Using this approach, we can create pairs of second order differential equations and solutions, provided we can generate f (x, c1, c2) is solvable in c2, and that the corresponding first order differential equation is solvable in c1. To ensure the solvability in c2, we can use the same approach as for first order differential equation, e.g. we create fc1,c2 so that c2 has exactly one leaf in its tree representation. For c1, we employ a simple approach where we simply skip the current equation if we cannot solve it in c1. Although naive, we found that the differentiation equation can be solved in c1 about 50% the time. As an example: Generate a random function f(x) =c1e* +e" Solve in c2 co = f (a)e” — ce?” = F(x, f(x), ¢1) Differentiate in x e*(f'(x) + f(x)) — 2c1e?” =0 1 _. . ; 5 Solve in cy a= 3° (F'@) + f(a)) = G(a, f(x), f"(x)) 1 _, Differentiate in x 0= xe (f"(x) — f(x)) Simplify y”-y=0 3.4 DATASET CLEANING Equation simplification In practice, we simplify generated expressions to reduce the number of unique possible equations in the training set, and to reduce the length of sequences. Also, we do not want to train our model to predict x + 1 + 1+ 1+ 1+ 1 when it can simply predict 2 + 5. As a result, sequences [+ 2 + «x 3] and [+3 + 2 2] will both be simplified to [+ x 5] as they both represent the expression x + 5. Similarly, the expression log(e***) will be simplified to x + 3, the expression cos?(a) + sin?(zx) will be simplified to 1, etc. On the other hand, \/(a — 1)? will not be simplified to x — 1 as we do not make any assumption on the sign of x — 1. Coefficients simplification In the case of first order differential equations, we modify generated expressions by equivalent expressions up to a change of variable. For instance, x + x tan(3) + cx + 1 will be simplified to cx + 1, as a particular choice of the constant c makes these two expressions identical. Similarly, log(x2) + c log(x) becomes c log(x). 6 We apply a similar technique for second order differential equations, although simplification is sometimes a bit more involved because there are two constants c1 and c2. For instance, c1 − c2x/5 + c2 + 1 is simplified to c1x + c2, while c2ec1 ec1xe−1 can be expressed with c2ec1x, etc. We also perform transformations that are not strictly equivalent, as long as they hold under specific assumptions. For instance, we simplify tan( c2x) + cosh(c1 + 1) + 4 to c1 + tan(c2x), although the constant term can be negative in the second expression, but not the first one. Similarly e3ec1xec1 log(c2) is transformed to c2ec1x. Invalid expressions Finally, we also remove invalid expressions from our dataset. For instance, expressions like log(0) or −2. To detect them, we compute in the expression tree the values of subtrees that do not depend on x. If a subtree does not evaluate to a finite real number (e.g. −∞, +∞ or a complex number), we discard the expression. # 4 EXPERIMENTS 4.1 DATASET For all considered tasks, we generate datasets using the method presented in Section 3, with: expressions with up to n = 15 internal nodes L = 11 leaf values in {x} ∪ {−5, . . . 
, 5} \ {0} • p2 = 4 binary operators: +, −, ×, / • p1 = 15 unary operators: exp, log, sqrt, sin, cos, tan, sin-1, cos-1, tan-1, sinh, cosh, tanh, sinh-1, cosh-1, tanh-1 Statistics about our datasets are presented in Table 1. As discussed in Section 3.1, we observe that the backward approach generates derivatives (i.e. inputs) significantly longer than the forward generator. We discuss this in more detail in Section E of the appendix. Forward Backward Integration by parts ODE 1 ODE 2 Training set size 20M 40M 20M 40M 40M Input length Output length Length ratio Input max length Output max length 18.9±6.9 49.6±48.3 2.7 69 508 70.2±47.8 21.3±8.3 0.4 450 75 17.5±9.1 26.4±11.3 2.0 226 206 123.6±115.7 23.0±15.2 0.4 508 474 149.1±130.2 24.3±14.9 0.1 508 335 Table 1: Training set sizes and length of expressions (in tokens) for different datasets. FWD and IBP tend to generate examples with outputs much longer than the inputs, while the BWD approach generates shorter outputs. Like in the BWD case, ODE generators tend to produce solutions much shorter than their equations. 4.2 MODEL For all our experiments, we train a seq2seq model to predict the solutions of given problems, i.e. to predict a primitive given a function, or predict a solution given a differential equation. We use a transformer model (Vaswani et al., 2017) with 8 attention heads, 6 layers, and a dimensionality of 512. In our experiences, using larger models did not improve the performance. We train our models with the Adam optimizer (Kingma & Ba, 2014), with a learning rate of 10−4. We remove expressions with more than 512 tokens, and train our model with 256 equations per batch. At inference, expressions are generated by a beam search (Koehn, 2004; Sutskever et al., 2014), with early stopping. We normalize the log-likelihood scores of hypotheses in the beam by their sequence length. We report results with beam widths of 1 (i.e. greedy decoding), 10 and 50. During decoding, nothing prevents the model from generating an invalid prefix expression, e.g. [+ 2 ∗ 3 ]. To address this issue, Dyer et al. (2016) use constraints during decoding, to ensure 7 that generated sequences can always be converted to valid expression trees. In our case, we found that model generations are almost always valid and we do not use any constraint. When an invalid expression is generated, we simply consider it as an incorrect solution and ignore it. # 4.3 EVALUATION At the end of each epoch, we evaluate the ability of the model to predict the solutions of given equations. In machine translation, hypotheses given by the model are compared to references written by human translators, typically with metrics like the BLEU score (Papineni et al.||2002) that measure the overlap between hypotheses and references. Evaluating the quality of translations is a very difficult problem, and many studies showed that a better BLEU score does not necessarily correlate with a better performance according to human evaluation. Here, however, we can easily verify the correctness of our model by simply comparing generated expressions to their reference solutions. For instance, for the given differential equation xy’ — y + x = 0 with a reference solution x log(c / x) (where c is a constant), our model may generate x log(c) — xlog(x). 
We can check that these two solutions are equal, although they are written differently, using a symbolic framework like SymPy 2 However, our model may also generate wc — x log(x) which is also a valid solution, that is actually equivalent to the previous one for a different choice of constant c. In that case, we replace y in the differential equation by the model hypothesis. If xy’ — y + x = 0, we conclude that the hypothesis is a valid solution. In the case of integral computation, we can simply differentiate the model hypothesis, and compare it with the function to integrate. For the three problems, we measure the accuracy of our model on equations from the test set. Since we can easily verify the correctness of generated expressions, we consider all hypotheses in the beam, and not only the one with the highest score. We verify the correctness of each hypothesis, and consider that the model successfully solved the input equation if one of them is correct. As a result, results with “Beam size 10” indicate that at least one of the 10 hypotheses in the beam was correct. 4.4 RESULTS Table 2 reports the accuracy of our model for function integration and differential equations. For integration, the model achieves close to 100% performance on a held-out test set, even with greedy decoding (beam size 1). This performance is consistent over the three integration datasets (FWD, BWD, and IBP). Greedy decoding (beam size 1) does not work as well for differential equations. In particular, we observe an improvement in accuracy of almost 40% when using a large beam size of 50 for second order differential equations. Unlike in machine translation, where increasing the beam size does not necessarily increase the performance (Ott et al., 2018), we always observe significant improvements with wider beams. Typically, using a beam size of 50 provides an improvement of 8% accuracy compared to a beam size of 10. This makes sense, as increasing the beam size will provide more hypotheses, although a wider beam may displace a valid hypothesis to consider invalid ones with better log-probabilities. Integration (FWD) Integration (BWD) Integration (IBP) ODE (order 1) ODE (order 2) Beam size 1 Beam size 10 Beam size 50 93.6 95.6 96.2 98.4 99.4 99.7 96.8 99.2 99.5 77.6 90.5 94.0 43.0 73.0 81.2 Table 2: Accuracy of our models on integration and differential equation solving. Results are reported on a held out test set of 5000 equations. For differential equations, using beam search decoding significantly improves the accuracy of the model. # 4.5 COMPARISON WITH MATHEMATICAL FRAMEWORKS We compare our model with three popular mathematical frameworks: Mathematica (Wolfram- Research, 2019), Maple and Matlab (MathWorks, 2019)1. Prefix sequences in our test set are 1All experiments were run with Mathematica 12.0.0.0, Maple 2019 and Matlab R2019a. 8 converted back to their infix representations, and given as input to the computer algebra. For a specific input, the computer algebra either returns a solution, provides no solution (or a solution including integrals or special functions), or, in the case of Mathematica, times out after a preset delay. When Mathematica times out, we conclude that it is not able to compute a solution (although it might have found a solution given more time). For integration, we evaluate on the BWD test set. By construction, the FWD data only consists of integrals generated by computer algebra systems, which makes comparison uninteresting. 
In Table 3, we present accuracy for our model with different beam sizes, and for Mathematica with a timeout delay of 30 seconds. Table 8 in the appendix provides detailed results for different values of timeout, and explains our choice of 30 seconds. In particular, we find that with 30 seconds, only 20% of failures are due to timeouts, and only 10% when the timeout is set to 3 minutes. Even with timeout limits, evaluation would take too long on our 5000 test equations, so we only evaluate on a smaller test subset of 500 equations, on which we also re-evaluate our model. Integration (BWD) ODE (order 1) ODE (order 2) Mathematica (30s) Matlab Maple 84.0 65.2 67.4 77.2 - - 61.6 - - Beam size 1 Beam size 10 Beam size 50 98.4 99.6 99.6 81.2 94.0 97.0 40.8 73.2 81.0 Table 3: Comparison of our model with Mathematica, Maple and Matlab on a test set of 500 equations. For Mathematica we report results by setting a timeout of 30 seconds per equation. On a given equation, our model typically finds the solution in less than a second. On all tasks, we observe that our model significantly outperforms Mathematica. On function integration, our model obtains close to 100% accuracy, while Mathematica barely reaches 85%. On first order differential equations, Mathematica is on par with our model when it uses a beam size of 1, i.e. with greedy decoding. However, using a beam search of size 50 our model accuracy goes from 81.2% to 97.0%, largely surpassing Mathematica. Similar observations can be made for second order differential equations, where beam search is even more critical since the number of equivalent solutions is larger. On average, Matlab and Maple have slightly lower performance than Mathematica on the problems we tested. Table 4 shows examples of functions that our model was able to solve, on which Mathematica and Matlab did not find a solution. The denominator of the function to integrate, −16x8 + 112x7 − 204x6 + 28x5 − x4 + 1, can be rewritten as 1 − (4x4 − 14x3 + x2)2. With the simplified input: 16x3 — 42a? + 2a (1 — (44 — 1423 + x?)?) 1/2 integration becomes easier and Mathematica is able to find the solution. Equation | Solution A 16a* — 42x? + 2x sly 3 2 YS C1608 $T12a? — 20445 $ 2825 — at FI)? y= sin da" ~ Ma" +2") 3xy cos(x) — \/9x? sin(x)? + ly’ + 3ysin(«) = 0 y = cexp (sinh™! (3a sin(x))) c1 + 3a + 3 log (x) x (co + 4a) 300 yy" —804y? —82° yy’ — 32°" y 827 y° —627y!—3x7y" —9xry' —3y =0)9= 300 Ag yy" —804y? —82° yy’ — 32°" y 827 y° —627y!—3x7y" —9xry' —3y Table 4: Examples of problems that our model is able to solve, on which Mathematica and Matlab were not able to find a solution. For each equation, our model finds a valid solution with greedy decoding. 9 # 4.6 EQUIVALENT SOLUTIONS An interesting property of our model is that it is able to generate solutions that are exactly equivalent, but written in different ways. For instance, we consider the following first order differential equation, along with one of its solutions: √ 1 1 Ie log (x) Ve+ 2x 162x log(x)y’ + 2y* log(a)? — 81ylog(a) + 8ly = 0 y In Table 5, we report the top 10 hypotheses returned by our model for this equation. We observe that all generations are actually valid solutions, although they are expressed very differently. They are however not all equal: merging the square roots within the first and third equations would give the same expression except that the third one would contain a factor 2 in front of the constant c, but up to a change of variable, these two solutions are actually equivalent. 
The ability of the model to recover equivalent expressions, without having been trained to do so, is very intriguing. Hypothesis Score Hypothesis Score Vtlmm ‘ ee 0.047 > 0.124 Vet 2x [clog (2) + Qlog (x) 9/z . 9/z ——— —0.056 ———— —0.139 Ve + 22y/log (x) > clog (a) + 2x log (a) Cc —0.115 9 —0.144 = + 2y/log (a) 1 1 9V/x,| ——_.—_.—____ -0.117 9, /————___ —0.205 ae (x) + 22 log (x) clos (@) + 2log (x) ; 9V2/e 1 ——— —0.124 | 9Vx —0.232 aot alone) ve ag (@) + 2elog (a) + Tog (z) Table 5: Top 10 generations of our model for the first order differential equation 162a log(x)y’ + 2y? log(x)? — 81y log(«) + 81y = 0, generated with a beam search. All hypotheses are valid solutions, and are equivalent up to a change of the variable c. Scores are log-probabilities normalized by sequence lengths. # 4.7 GENERALIZATION ACROSS GENERATORS Models for integration achieve close to 100% performance on held-out test samples generated with the same method as their training data. In Table 6, we compare the accuracy on the FWD, BWD and IBP test sets for 4 models trained using different combinations of training data. When the test set is generated with the same generator as the training set, the model performs extremely well. For instance, the three models trained either on BWD, BWD + IBP or BWD + IBP + FWD achieve 99.7% accuracy on the BWD test set with a beam size of 50. On the other hand, even with a beam size of 50, a FWD-trained model only achieves 17.2% accuracy on the BWD test set, and a BWD-trained model achieves 27.5% on the FWD test set. This results from the very different structure of the FWD and BWD data sets (cf. Table 1 and the discussion in Section E of the appendix). Overall, a model trained on BWD samples learns that integration tends to shorten expressions, a property that does not hold for FWD samples. Adding diversity to the training set improves the results. For instance, adding IBP-generated examples to the BWD-trained model raises the FWD test accuracy from 27.5% to 56.1%, and with additional FWD training data the model reaches 94.3% accuracy. Generalization is further discussed in Section E of the appendix. 10 Forward (FWD) Backward (BWD) Integration by parts (IBP) Training data Beam 1 Beam 10 Beam 50 Beam 1 Beam 10 Beam 50 Beam 1 Beam 10 Beam 50 FWD BWD BWD + IBP BWD + IBP + FWD 93.6 18.9 41.6 89.1 95.6 24.6 54.9 93.4 96.2 27.5 56.1 94.3 10.9 98.4 98.2 98.1 13.9 99.4 99.4 99.3 17.2 99.7 99.7 99.7 85.6 42.9 96.8 97.2 86.8 54.6 99.2 99.4 88.9 59.2 99.5 99.7 Table 6: Accuracy of our models on function integration. We report the accuracy of our model on the three integration datasets: forward (FWD), backward (BWD), and integration by parts (IBP), for four models trained with different combinations of training data. We observe that a FWD-trained model performs poorly when it tries to integrate functions from the BWD dataset. Similarly, a BWD-trained model only obtain 27.5% accuracy on the FWD dataset, as it fails to integrate simple functions like x5 sin(x). On the other hand, training on both the BWD + IBP datasets allows the model to reach up to 56.1% accuracy on FWD. Training on all datasets allows the model to perform well on the three distributions. # 4.8 GENERALIZATION BEYOND THE GENERATOR - SYMPY Our forward generator, FWD, generates a set of pairs (f, F ) of functions with their integrals. It relies on an external symbolic framework, SymPy (Meurer et al., 2017), to compute the integral of randomly generated functions. 
SymPy is not perfect, and fails to compute the integral of many integrable functions. In particular, we found that the accuracy of SymPy on the BWD test set is only 30%. Our FWD-trained model only obtains an accuracy of 17.2% on BWD. However, we observed that the FWD-trained model is sometimes able to compute the integral of functions that SymPy cannot compute. This means that by only training on functions that SymPy can integrate, the model was able to generalize to functions that SymPy cannot integrate. Table 7 presents examples of such functions with their integrals. x” (tan? (x) + 1) + 2xtan (x) +1 x’ tan (x) +a 2cos (22) F . a x + asinh (sin (22)) \/sin? (2x) +1 x tan (x) + log (x cos (#)) — 1 x log (a cos (x))? 2a cos (asin? («)) asin (x) ; 1 x V1—2? sin? (asin? (x)) sin (asin? (x)) sin (asin? (x) 2x 1 . . ve+e(o +14 2) + a + asinh (x?) a (Vx +2 + asinh (x”)) 3(—3a? sin (#*)+54+— —3- C00 xn) ava) 3 (x + log (x + cos (23)))? x + log (\/x + cos (x)) —2 tan? (log (log (x))) — 2 | 2 Qa log (x) tan? (log (log (a))) " tan (log (log (x))) tan (log (log (x))) Table 7: Examples of functions / integrals that the FWD-trained model can integrate, but not SymPy. Although the FWD model was only trained on a subset of functions that SymPy can integrate, it learned to generalize to functions that SymPy cannot integrate. 11 # 5 RELATED WORK Computers were used for symbolic mathematics since the late 1960s (Moses, 1974). Computer algebra systems (CAS), such as Matlab, Mathematica, Maple, PARI and SAGE, are used for a variety of mathematical tasks (Gathen & Gerhard, 2013). Modern methods for symbolic integration are based on Risch algorithm (Risch, 1970). Implementations can be found in Bronstein (2005) and Geddes et al. (1992). However, the complete description of the Risch algorithm takes more than 100 pages, and is not fully implemented in current mathematical framework. Deep learning networks have been used to simplify treelike expressions. Zaremba et al. (2014) use recursive neural networks to simplify complex symbolic expressions. They use tree represen- tations for expressions, but provide the model with problem related information: possible rules for simplification. The neural network is trained to select the best rule. Allamanis et al. (2017) propose a framework called neural equivalence networks to learn semantic representations of alge- braic expressions. Typically, a model is trained to map different but equivalent expressions (like the 10 expressions proposed in Table 5) to the same representation. However, they only consider Boolean and polynomial expressions. More recently, Arabshahi et al. (2018a;b) used tree-structured neural networks to verify the correctness of given symbolic entities, and to predict missing entries in incomplete mathematical equations. They also showed that these networks could be used to predict whether an expression is a valid solution of a given differential equation. Most attempts to use deep networks for mathematics have focused on arithmetic over integers (sometimes over polynomials with integer coefficients). For instance, Kaiser & Sutskever (2015) proposed the Neural-GPU architecture, and train networks to perform additions and multiplications of numbers given in their binary representations. They show that a model trained on numbers with up-to 20 bits can be applied to much larger numbers at test time, while preserving a perfect accuracy. 
Freivalds & Liepins (2017) proposed an improved version of the Neural-GPU by using hard non-linear activation functions, and a diagonal gating mechanism. Saxton et al. (2019) use LSTMs (Hochreiter & Schmidhuber, 1997) and transformers on a wide range of problems, from arithmetic to simplification of formal expressions. However, they only consider polynomial functions, and the task of differentiation, which is significantly easier than integration. Trask et al. (2018) propose the Neural arithmetic logic units, a new module designed to learn systematic numerical computation, and that can be used within any neural network. Like Kaiser & Sutskever (2015), they show that at inference their model can extrapolate on numbers orders of magnitude larger than the ones seen during training. # 6 CONCLUSION In this paper, we show that standard seq2seq models can be applied to difficult tasks like function integration, or solving differential equations. We propose an approach to generate arbitrarily large datasets of equations, with their associated solutions. We show that a simple transformer model trained on these datasets can perform extremely well both at computing function integrals, and solving differential equations, outperforming state-of-the-art mathematical frameworks like Matlab or Mathematica that rely on a large number of algorithms and heuristics, and a complex implementation (Risch, 1970). Results also show that the model is able to write identical expressions in very different ways. These results are surprising given the difficulty of neural models to perform simpler tasks like integer addition or multiplication. However, proposed hypotheses are sometimes incorrect, and considering multiple beam hypotheses is often necessary to obtain a valid solution. The validity of a solution itself is not provided by the model, but by an external symbolic framework (Meurer et al., 2017). These results suggest that in the future, standard mathematical frameworks may benefit from integrating neural components in their solvers. 12 # REFERENCES Miltiadis Allamanis, Pankajan Chanthirasegaran, Pushmeet Kohli, and Charles Sutton. Learning con- tinuous semantic representations of symbolic expressions. In Proceedings of the 34th International Conference on Machine Learning - Volume 70, ICML’17, pp. 80–88. JMLR.org, 2017. Forough Arabshahi, Sameer Singh, and Animashree Anandkumar. Combining symbolic expressions and black-box function evaluations for training neural programs. In International Conference on Learning Representations, 2018a. Forough Arabshahi, Sameer Singh, and Animashree Anandkumar. Towards solving differential equations through neural programming. 2018b. D. Bahdanau, K. Cho, and Y. Bengio. Neural machine translation by jointly learning to align and translate. In International Conference on Learning Representations (ICLR), 2015. M. Bronstein. Symbolic Integration I: Transcendental Functions. Algorithms and combinatorics. Springer, 2005. ISBN 978-3-540-21493-9. Chris Dyer, Adhiguna Kuncoro, Miguel Ballesteros, and Noah A Smith. Recurrent neural network grammars. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pp. 199–209, 2016. Akiko Eriguchi, Yoshimasa Tsuruoka, and Kyunghyun Cho. Learning to parse and translate improves neural machine translation. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pp. 72–78, 2017. 
Philippe Flajolet and Andrew M. Odlyzko. Singularity analysis of generating functions. SIAM J. Discrete Math., 3(2):216–240, 1990. Philippe Flajolet and Robert Sedgewick. Analytic Combinatorics. Cambridge University Press, New York, NY, USA, 1 edition, 2009. ISBN 0521898064, 9780521898065. Karlis Freivalds and Renars Liepins. Improving the neural gpu architecture for algorithm learning. ArXiv, abs/1702.08727, 2017. Joachim von zur Gathen and Jurgen Gerhard. Modern Computer Algebra. Cambridge University Press, New York, NY, USA, 3rd edition, 2013. ISBN 1107039037, 9781107039032. Keith O. Geddes, Stephen R. Czapor, and George Labahn. Algorithms for Computer Algebra. Kluwer Academic Publishers, Norwell, MA, USA, 1992. ISBN 0-7923-9259-0. Sepp Hochreiter and J¨urgen Schmidhuber. Long short-term memory. Neural computation, 9(8): 1735–1780, 1997. Lukasz Kaiser and Ilya Sutskever. Neural gpus learn algorithms. CoRR, abs/1511.08228, 2015. Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014. Donald E. Knuth. The Art of Computer Programming, Volume 1 (3rd Ed.): Fundamental Algorithms. Addison Wesley Longman Publishing Co., Inc., Redwood City, CA, USA, 1997. ISBN 0-201- 89683-4. Philipp Koehn. Pharaoh: a beam search decoder for phrase-based statistical machine translation models. In Conference of the Association for Machine Translation in the Americas, pp. 115–124. Springer, 2004. Sarah Loos, Geoffrey Irving, Christian Szegedy, and Cezary Kaliszyk. Deep network guided proof search. arXiv preprint arXiv:1701.06972, 2017. MathWorks. Matlab optimization toolbox (r2019a), 2019. The MathWorks, Natick, MA, USA. 13 Aaron Meurer, Christopher P. Smith, Mateusz Paprocki, Ondˇrej ˇCert´ık, Sergey B. Kirpichev, Matthew Rocklin, AMiT Kumar, Sergiu Ivanov, Jason K. Moore, Sartaj Singh, Thilina Rathnayake, Sean Vig, Brian E. Granger, Richard P. Muller, Francesco Bonazzi, Harsh Gupta, Shivam Vats, Fredrik Johansson, Fabian Pedregosa, Matthew J. Curry, Andy R. Terrel, ˇStˇep´an Rouˇcka, Ashutosh Saboo, Isuru Fernando, Sumith Kulal, Robert Cimrman, and Anthony Scopatz. Sympy: symbolic computing in python. PeerJ Computer Science, 3:e103, January 2017. ISSN 2376-5992. doi: 10.7717/peerj-cs.103. URL https://doi.org/10.7717/peerj-cs.103. Joel Moses. Macsyma - the fifth year. SIGSAM Bull., 8(3):105–110, August 1974. ISSN 0163-5824. Myle Ott, Michael Auli, David Grangier, et al. Analyzing uncertainty in neural machine translation. In International Conference on Machine Learning, pp. 3953–3962, 2018. Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting on association for computational linguistics, pp. 311–318. Association for Computational Linguistics, 2002. Robert H. Risch. The solution of the problem of integration in finite terms. Bull. Amer. Math. Soc., 76(3):605–608, 05 1970. David E. Rumelhart, James L. McClelland, and CORPORATE PDP Research Group (eds.). Parallel Distributed Processing: Explorations in the Microstructure of Cognition, Vol. 1: Foundations. MIT Press, Cambridge, MA, USA, 1986. ISBN 0-262-68053-X. David Saxton, Edward Grefenstette, Felix Hill, and Pushmeet Kohli. Analysing mathematical reasoning abilities of neural models. In International Conference on Learning Representations, 2019. N. J. A. Sloane. The encyclopedia of integer sequences, 1996. Richard P. Stanley. Enumerative Combinatorics: Volume 1. 
Cambridge University Press, New York, NY, USA, 2nd edition, 2011. ISBN 1107602629, 9781107602625. Ilya Sutskever, Oriol Vinyals, and Quoc V Le. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems, pp. 3104–3112, 2014. Kai Sheng Tai, Richard Socher, and Christopher D Manning. Improved semantic representations from tree-structured long short-term memory networks. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 1556–1566, 2015. Andrew Trask, Felix Hill, Scott E Reed, Jack Rae, Chris Dyer, and Phil Blunsom. Neural arithmetic logic units. In Advances in Neural Information Processing Systems, pp. 8035–8044, 2018. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. In Advances in Neural Information Processing Systems, pp. 6000–6010, 2017. Oriol Vinyals, Łukasz Kaiser, Terry Koo, Slav Petrov, Ilya Sutskever, and Geoffrey Hinton. Grammar as a foreign language. In Advances in neural information processing systems, pp. 2773–2781, 2015. H.S. Wilf. generatingfunctionology: Third Edition. CRC Press, 2005. ISBN 978-1-4398-6439-5. URL https://www.math.upenn.edu/˜wilf/gfologyLinked2.pdf. Wolfram-Research. Mathematica, version 12.0, 2019. Champaign, IL, 2019. Wojciech Zaremba and Ilya Sutskever. Learning to execute. arXiv preprint arXiv:1410.4615, 2014. Wojciech Zaremba, Karol Kurach, and Rob Fergus. Learning to discover efficient mathematical identities. In Proceedings of the 27th International Conference on Neural Information Processing Systems - Volume 1, NIPS’14, pp. 1278–1286, Cambridge, MA, USA, 2014. MIT Press. 14 # A A SYNTAX FOR MATHEMATICAL EXPRESSIONS We represent mathematical expressions as trees with operators as internal nodes, and numbers, constants or variables, as leaves. By enumerating nodes in prefix order, we transform trees into sequences suitable for seq2seq architectures. For this representation to be efficient, we want expressions, trees and sequences to be in a one-to-one correspondence. Different expressions will always result in different trees and sequences, but for the reverse to hold, we need to take care of a few special cases. First, expressions like sums and products may correspond to several trees. For instance, the expression 2 + 3 + 5 can be represented as any one of those trees: + + + 2 3 5 + 5 2 + 2 3 3 5 We will assume that all operators have at most two operands, and that, in case of doubt, they are associative to the right. 2 + 3 + 5 would then correspond to the rightmost tree. Second, the distinction between internal nodes (operators) and leaves (mathematical primitive objects) is somewhat arbitrary. For instance, the number −2 could be represented as a basic object, or as a 5, unary minus operator applied to the number 2. Similarly, there are several ways to represent 42x5, or the function log10. For simplicity, we only consider numbers, constants and variables as possible leaves, and avoid using a unary minus. In particular, expressions like −x are represented as −1 × x. Here are the trees for −2, −2 sqrt × 5 42 pow x 5 × # m— −1 # x Integers are represented in positional notation, as a sign followed by a sequence of digits (from 0 to 9 in base 10). For instance, 2354 and −34 are represented as +2 3 5 4 and − 3 4. For zero, a unique representation is chosen (+0 or −0). 
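A minimal sketch of this encoding (our own illustration, not the authors' exact tokenizer) is given below: expressions are written as nested tuples, traversed in prefix order, and integers are expanded into a sign token followed by digit tokens.

```python
def encode_int(n):
    # Sign token followed by base-10 digit tokens, e.g. -34 -> ['-', '3', '4']
    sign = "+" if n >= 0 else "-"
    return [sign] + list(str(abs(n)))

def to_prefix(expr):
    """expr is a nested tuple (op, child1[, child2]) or a leaf:
    an int, or a string such as 'x' or 'pi'."""
    if isinstance(expr, int):
        return encode_int(expr)
    if isinstance(expr, str):
        return [expr]
    op, *children = expr
    return [op] + [tok for child in children for tok in to_prefix(child)]

# 2 + 3*x  ->  ['+', '+', '2', '*', '+', '3', 'x']
print(to_prefix(("+", 2, ("*", 3, "x"))))
```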
# B MATHEMATICAL DERIVATIONS OF THE PROBLEM SPACE SIZE In this section, we investigate the size of the problem space by computing the number of expressions with n internal nodes. We first deal with the simpler case where we only have binary operators (p1 = 0), then consider trees and expressions composed of unary and binary operators. In each case, we calculate a generating function (Flajolet & Sedgewick, 2009; Wilf, 2005) from which we derive a closed formula or recurrence on the number of expressions, and an asymptotic expansion. # B.1 BINARY TREES AND EXPRESSIONS The main part of this derivation follows (Knuth, 1997) (pages 388-389). Generating function Let bn be the number of binary trees with n internal nodes. We have b0 = 1 and b1 = 1. Any binary tree with n internal nodes can be generated by concatenating a left and a right subtree with k and n − 1 − k internal nodes respectively. By summing over all possible values of k, we have that: bn = b0bn−1 + b1bn−2 + · · · + bn−2b1 + bn−1b0 Let B(z) be the generating function of bn, B(z) = b0 + b1z + b2z2 + b3z3 + . . . 15 B(z)2 = b0 2 + (b0b1 + b1b0)z + (b0b2 + b1b1 + b2b0)z2 + . . . = b1 + b2z + b3z2 + . . . = B(z) − b0 z So, zB(z)2 − B(z) + 1 = 0. Solving for B(z) gives: √ 1 ± B(z) = 1 − 4z 2z and since B(0) = b0 = 1, we derive the generating function for sequence bn √ B(z) = 1 − 1 − 4z 2z We now derive a closed formula for bn. By the binomial theorem, Bie) = = (: ~ > (0) (198) ll wo Yl Me wo >= ie a are an) nN_ y be i iM BX w my | am Therefore 1 2n (2n)! on wil.) (n+ 1)!n! These are the Catalan numbers, a closed formula for the number of binary trees with n internal nodes. We now observe that a binary tree with n internal nodes has exactly n + 1 leaves. Since each node in a binary tree can represent p2 operators, and each leaf can take L values, we have that a tree with n 2 Ln+1 possible combinations of operators and leaves. As a result, the number of nodes can take pn binary expressions with n operators is given by: En = (2n)! (n + 1)!n! 2 Ln+1 pn Asymptotic estimate To derive an asymptotic approximation of b,,, we apply the Stirling formula: n 2n 4” 4” n 2n 4” 4” nl = an (“) so (lee and bn © ae Finally, we have the following formulas for the number of expressions with n internal nodes: En ≈ n 1 √ πn (4p2)nLn+1 16 B.2 UNARY-BINARY TREES Generating function Let sn be the number of unary-binary trees (i.e. trees where internal nodes can have one or two children) with n internal nodes. We have s0 = 1 and s1 = 2 (the only internal node is either unary or binary). Any tree with n internal nodes is obtained either by adding a unary internal node at the root of a tree with n − 1 internal nodes, or by concatenating with a binary operator a left and a right subtree with k and n − 1 − k internal nodes respectively. Summing up as before, we have: sn = sn−1 + s0sn−1 + s1sn−2 + · · · + sn−1s0 Let S(z) be the generating function of the sn. The above formula translates into S(z) − s0 z zS(z)2 + (z − 1)S(z) + 1 = 0 solving and taking into account the fact that S(0) = 1, we obtain the generating function of the sn √ 1 − 6z + z2 2z 1 − z − S(z) = The numbers sn generated by S(z) are known as the Schroeder numbers (OEIS A006318) (Sloane, 1996). They appear in different combinatorial problems (Stanley, 2011). Notably, they correspond to the number of paths from (0, 0) to (n, n) of a n × n grid, moving north, east, or northeast, and never rising above the diagonal. 
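The convolution recurrence above translates directly into code. The short sketch below (an illustration using that recurrence, not the generating function) reproduces the first Schroeder numbers; the O(n) linear recurrence derived in the next paragraph computes the same sequence more efficiently.

```python
def schroeder(n_max):
    # s_n = s_{n-1} + sum_k s_k * s_{n-1-k}, with s_0 = 1
    s = [1]
    for n in range(1, n_max + 1):
        s.append(s[n - 1] + sum(s[k] * s[n - 1 - k] for k in range(n)))
    return s

print(schroeder(6))  # [1, 2, 6, 22, 90, 394, 1806], OEIS A006318
```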
Calculation Schroeder numbers do not have a simple closed formula, but a recurrence allowing for their calculation can be derived from their generating function. Rewriting S(z) as 2zS(z) + z − 1 = − 1 − 6z + z2 and differentiating, we have 3-2 _ 3-2 Vl—-62+2 1-6z+2 7 32-2? 3-—z)1l-<z 225"(2) +2512) (1 7) ee —32z 2+2z 6z+22 1-624 22 2(1 — 62 + 2°)S’(z) + (1—32)S(z) =1 +z 22S'(z) + 2S(z)+1= (1 — z— 2zS(z)) 228'(z) + 28(2)z 4 Replacing S(z) and S’(z) with their n-th coefficient yields, for n > 1 nsn − 6(n − 1)sn−1 + (n − 2)sn−2 + sn − 3sn−1 = 0 (n + 1)sn = 3(2n − 1)sn−1 − (n − 2)sn−2 Together with s0 = 1 and s1 = 2, this allows for fast (O(n)) calculation of Schroeder numbers. Asymptotic estimate To derive an asymptotic formula of sn, we develop the generating function around its smallest singularity (Flajolet & Odlyzko, 1990), i.e. the radius of convergence of the power series. Since √ √ 1 − 6z + z2 = 1 − (3 − 8)z 1 − (3 + 8)z The smallest singular value is 1 √ (3 + r1 = The smallest singular value is 8) and the asymptotic formula will have the exponential term √ √ r−n 1 = (3 + 8)n = (1 + 2)2n 17 In a neighborhood of r1, the generating function can be rewritten as Since √ √ √ S(z) ≈ (1 + 2) 1 − 21/4 1 − (3 + 8)z + O(1 − (3 + 8)z)3/2 [zn] √ 1 − az ≈ − √ an 4πn3 where [zn]F (z) denotes the n-th coefficient in the formal series of F, we have √ √ √ 2)2n+1 √ πn3 8)n (1 + 2)(3 + √ πn3 (1 + sn ≈ = 23/4 23/4 Comparing with the number of binary trees, we have sn ≈ 1.44(1.46)nbn B.3 UNARY-BINARY EXPRESSIONS In the binary case, the number of expressions can be derived from the number of trees. This cannot be done in the unary-binary case, as the number of leaves in a tree with n internal nodes depends on the number of binary operators (n2 + 1). Generating function The number of trees with n internal nodes and n2 binary operators can be derived from the following observation: any unary-binary tree with n2 binary internal nodes can be generated from a binary tree by adding unary internal nodes. Each node in the binary tree can receive one or several unary parents. Since the binary tree has 2n2 + 1 nodes and the number of unary internal nodes to be added is n — no, the number of unary-binary trees that can be created from a specific binary tree is the number of multisets with 2n2 + 1 elements on n — nz symbols, that is n+ng\_ (n+n2 n-—Nng ~ 2n2 If bg denotes the q-th Catalan number, the number of trees with nz binary operators among 1 is n+ng (52) bo Since such trees have n2 + 1 leaves, with L leaves, p2 binary and p1 unary operators to choose from, the number of expressions is E(n,n2) = (" + ™) baph2pn-m pret} 2n2 Summing over all values of nz (from 0 to 7) yields the number of different expressions n n+ Ng = 1 En = Ss ( Ons ) onvt napnatlyn n2=0 n2=0 Let E(z) be the corresponding generating function. oo E(z) Enz” n=0 ee n+n > > ( L *) Dn pa2pn M2 Lmett yn 2n2 n=0n2=0 cu ip _ 2 2 n,n = LY (Hubs (FE) ate n=0n2=0 SS (n+ ng Lp2"* =D Can) (GE) mer n=0n2=0 18 # n+n2 since ( ns ) = 0 when n > no n+n2 since ( ns ) = 0 when n > no Il Me satis iM (BYE Caron CRY ECS wr =0 00 n+ 2ne nna (L p22) "> 2ng ) (r2)" n=0 Ons Ons L ies L applying the binomial formula 1 n )=EY bal Lp2z)" pep 2 Iopz , " \d =z)? applying the generating function for binary trees _ Lp2z Be) = 2 Let 4 — Ly 1 pie 2a = tome (yy 4 ee 2poz (1—piz) l—piz — (= p12)? — 4L poz 2p2z Reducing, we have L—piz — V/1 = 2(pi + 2Lp2k)z + piz? 
E(z) Dyas Calculation As before, there is no closed simple formula for En, but we can derive a recurrence formula by differentiating the generating function, rewritten as QpezE(z) + piz — 1 = —V/1 — 2(p1 + 2poL)z + piz? QpezE(z) + piz — 1 = —V/1 — 2(p1 + 2poL)z + piz? pi + 2pol — piz Al — 2(py + 2poL)z + pz? (pi + 2poL — piz)(1 — piz — 2p2zE(z)) 1 = 2(pi + 2peL)z + pz 2(p1 + 2peLl — piz) (pi + 2p2L — piz)(1 — piz) 1—2(pi + 2peL)z + 1) 1 — 2(pi + 2poL)z + py 2? 1 = (pi + 2poL)z ) 2poL(1 + piz) + pi(pr — 1)z 1— 2(p1 + 2peL)z + piz? 1— 2(p1 + 2peL)z + pz? 2pozE'(z) + 2poE(z) + pi 2p2zE'(z) + 2poE(z) + pi 2p2zE"(z) + 2p2E(z) (1 + —pi 2pozE"(z) 4 2mB(2)( QpozE'(z)(1 — 2(p1 + 2poL)z + pz”) + 2p2E(z)(1 — (pi + 2poL)z) = (2poL(1 + riz) + Pili — 1)z) replacing £(z) and E’(z) with their coefficients 2p2(nE, — 2(p1 + 2p2L)(n — 1)En—1 + pi(n — 2)E(n — 2)) + 2p2(En — (pi + 2p2L)En-1) (n+ 1)E, — (pi + 2peL)(2n — 1)E,-1 + pi(n — 2)E, 2 =0 =0 (n + 1)En = (p1 + 2p2L)(2n − 1)En−1 − p1(n − 2)En−2 19 which together with E0 = L E1 = (p1 + p2L)L provides a formula for calculating En. Asymptotic estimate As before, approximations of En for large n can be found by developing E(z) in the neighbourhood of the root with the smallest module of 1 − 2(p1 + 2p2L)z + p1z2 The roots are PL r= pi + 2poL — v/pi + 4p3L? + Apopil — pr ro = PL 2 pi + 2poL + y/pt + 4p3L? + 4popiL — pi both are positive and the smallest one is r2 To alleviate notation, let δ = 1 + 4p2 p2 2L2 + 4p2p1L − p1 r2 = p1 p1 + 2p2L + δ developing E(z) near r2, 1— pire — yt - ro(P aE), /1 — 5 Zz E(z) ® bO(1— 3/2 (2) ora ( 7 2p2r2 √ √ 1 − z r2 p1 + 2p2L + δ − p2 p1 + 2p2L + δ 2δ 1 − z r2 + O(1 − E(z) ≈ 2p2p1 )3/2 and therefore 1 Ew Vvory” ? Vi (pr + 2p. +6)"+2 ” 2po 2m, rn WpoV270n3 prt √ # C GENERATING RANDOM EXPRESSIONS In this section we present algorithms to generate random expressions with n internal nodes. We achieve this by generating random trees, and selecting randomly their nodes and leaves. We begin with the simpler binary case (p1 = 0). C.1 BINARY TREES To generate a random binary tree with n internal nodes, we use the following one-pass procedure. Starting with an empty root node, we determine at each step the position of the next internal nodes among the empty nodes, and repeat until all internal nodes are allocated. Start with an empty node, set e = 1; while n > 0 do Sample a position k from K(e, n); Sample the k next empty nodes as leaves; Sample an operator, create two empty children; Set e = e − k + 1 and n = n − 1; # end Algorithm 1: Generate a random binary tree 20 We denote by e the number of empty nodes, by n > 0 the number of operators yet to be generated, and by K(e, n) the probability distribution of the position (0-indexed) of the next internal node to allocate. To calculate K(e, n), let us define D(e, n), the number of different binary subtrees that can be generated from e empty elements, with n internal nodes to generate. We have D(0, n) = 0 D(e, 0) = 1 D(e, n) = D(e − 1, n) + D(e + 1, n − 1) The first equation states that no tree can be generated with zero empty node and n > 0 operators. The second equation says that if no operator is to be allocated, empty nodes must all be leaves and there is only one possible tree. The last equation states that if we have e > 0 empty nodes, the first one is either a leaf (and there are D(e − 1, n) such trees) or an internal node (D(e + 1, n − 1) trees). This allows us to compute D(e, n) for all e and n. 
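These recurrences are cheap to tabulate; one possible memoised implementation (our sketch) is shown below. As a sanity check, D(1, n) counts all binary trees with n internal nodes, i.e. the Catalan numbers of Section B.1.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def D(e, n):
    # Number of binary subtrees with n internal nodes still to generate,
    # given e empty nodes (recurrence above).
    if n == 0:
        return 1
    if e == 0:
        return 0
    return D(e - 1, n) + D(e + 1, n - 1)

print([D(1, n) for n in range(6)])  # Catalan numbers: [1, 1, 2, 5, 14, 42]
```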
To calculate distribution K(e, n), observe that among the D(e, n) trees with e empty nodes and n operators, D(e + 1, n − 1) have a binary node in their first position. Therefore P (K(e, n) = 0) = D(e + 1, n − 1) D(e, n) Of the remaining D(e − 1, n) trees, D(e, n − 1) have a binary node in their first position (same argument for e − 1), that is P (K(e, n) = 1) = D(e, n − 1) D(e, n) By induction over k, we have the general formula D(e—k+1,n-1) P(K(e, n) k) D(e,n) C.2 UNARY-BINARY TREES In the general case, internal nodes can be of two types: unary or binary. We adapt the previous algorithm by considering the two-dimensional probability distribution L(e, n) of position (0-indexed) and arity of the next internal node (i.e. P (L(e, n) = (k, a) is the probability that the next internal node is in position k and has arity a). Start with an empty node, set e = 1; while n > 0 do Sample a position k and arity a from L(e, n) (if a = 1 the next internal node is unary); Sample the k next empty nodes as leaves; if a = 1 then Sample a unary operator; Create one empty child; Set e = e − k; end else Sample a binary operator; Create two empty children; Set e = e − k + 1; end Set n = n − 1; end Algorithm 2: Generate a random unary-binary tree 21 To compute L(e, n), we derive D(e, n), the number of subtrees with n internal nodes that can be generated from e empty nodes. We have, for all n > 0 and e: D(0, n) = 0 D(e, 0) = 1 D(e, n) = D(e − 1, n) + D(e, n − 1) + D(e + 1, n − 1) The first equation states that no tree can be generated with zero empty node and n > 0 operators. The second says that if no operator is to be allocated, empty nodes must all be leaves and there is only one possible tree. The third equation states that with e > 0 empty nodes, the first one will either be a leaf (D(e − 1, n) possible trees), a unary operator (D(e, n − 1) trees), or a binary operator (D(e + 1, n − 1) trees). To derive L(e, n), we observe that among the D(e, n) subtrees with e empty nodes and n internal nodes to be generated, D(e, n − 1) have a unary operator in position zero, and D(e + 1, n − 1) have a binary operator in position zero. As a result, we have D(e,n— 1) D(e,n) D(e+1,n—1) D(e,n) P(L(e,n) = (0,1)) = and P(L(e,n) = (0,2)) As in the binary case, we can generalize these probabilities to all positions k in {0 . . . e − 1} D(e—k,n—1) D(e,n) D(e—k+1,n—-1) D(e,n) P(L(e,n) = (k, 1)) and P(L(e,n) = (k, 2) C.3 SAMPLING EXPRESSIONS To generate expressions, we sample random trees (binary, or unary binary), that we “decorate” by randomly selecting their internal nodes and leaves from a list of possible operators or mathematical entities (integers, variables, constants). Nodes and leaves can be selected uniformly, or according to a prior probability. For instance, integers between −a and a could be sampled so that small absolute values are more frequent than large ones. For operators, addition and multiplication could be more common than substraction and division. If all L leaves, p1 and p2 operators are equiprobable, an alternative approach to generation can be defined by computing D(e, n) as D(0, n) = 0 D(e, 0) = Le D(e, n) = LD(e − 1, n) + p1D(e, n − 1) + p2D(e + 1, n − 1) and normalizing the probabilities P (L(e, n)) as L°D(e—k,n—1) D(e,n) L°D(e—k+1,n—1) D(e,n) P(L(e,n) = (k,1)) and P(L(e,n) = (k,2)) Samples then become dependent on the number of possible leaves and operators. 
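For concreteness, here is one possible Python rendering of Algorithm 1 for the binary case. It is our own sketch: it assumes the empty nodes are ordered left to right, reuses the memoised D(e, n) from the previous sketch, and emits the sampled tree directly as a prefix sequence of 'OP'/'LEAF' placeholders that would then be decorated with operators and leaves as described above.

```python
import random

def sample_binary_skeleton(n, rng=random):
    prefix, e = [], 1                     # e: number of empty nodes
    while n > 0:
        # P(K(e, n) = k) is proportional to D(e - k + 1, n - 1)
        weights = [D(e - k + 1, n - 1) for k in range(e)]
        k = rng.choices(range(e), weights=weights)[0]
        prefix += ["LEAF"] * k + ["OP"]   # k leaves, then one binary node
        e = e - k + 1                     # its two children become new empty nodes
        n -= 1
    return prefix + ["LEAF"] * e          # remaining empty nodes are leaves

print(sample_binary_skeleton(3))
```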
# D IMPACT OF TIMEOUT ON MATHEMATICA In the case of Mathematica, we use function DSolve to solve differential equations, and function Integrate to integrate functions. Since computations can take a long time, we set a finite timeout to limit the time spent on each equation. Table 8 shows the impact of the timeout value on the accuracy with Mathematica. Increasing the timeout delay naturally improves the accuracy. With a timeout of 30 seconds, Mathematica times out on 20% of unsolved equations. With a limit of 3 minutes, timeouts represent about 10% of failed equations. This indicates that even in the ideal scenario where Mathematica would succeed on all equations where it times out, the accuracy would not exceed 86.2%. 22 Timeout (s) Success Failure Timeout 5 10 30 60 180 77.8 82.2 84.0 84.4 84.6 9.8 11.6 12.8 13.4 13.8 12.4 6.2 3.2 2.2 1.6 Table 8: Accuracy of Mathematica on 500 functions to integrate, for different timeout values. As the timeout delay increases, the percentage of failures due to timeouts decreases. With a limit of 3 minutes, timeouts only represent 10% of failures. As a result, the accuracy without timeout would not exceed 86.2%. # E GENERALIZATION ACROSS GENERATORS On the integration problem, we achieve (c.f. Table 6) near perfect performance when the training and test data are generated by the same method (either FWD, BWD, or IBP). Given the relatively small size of the training set (4.107 examples), the model cannot overfit to the entire problem space (1034 possible expressions). This shows that: Our model generalizes well to functions created by the training generator. • This property holds for the three considered generators, FWD, BWD, and IBP. Table 6 also measures the ability of our model to generalize across generators. A FWD-trained model achieves a low performance (17.2% with beam 50) on a BWD-generated test set. A BWD-trained model does a little better on the FWD test set (27.5%), but accuracy remains low. On the other hand, FWD-trained models achieve very good accuracy over an IBP-generated test set (88.9%), and BWD-trained models stand in the middle (59.2%). Figure 2 provides an explanation for these results. The input/output pairs produced by FWD and BWD have very different distributions: integration tends to shorten BWD generated expressions, and to expand FWD generated expressions. As a result, a model trained on BWD generated data will learn this shortening feature of integration, which will prove wrong on a FWD test set. Similar problems will happen on a FWD trained model with a BWD test set. Since IBP keeps average expression lengths unchanged, BWD and FWD-trained models will generalize better to IBP test sets (and be more accurate on FWD-trained models, since their input length distributions are closer). Length of derivatives Length of integrals — Forward — Forward 0.06 — Backward 0.04 — Backward —— Integration by parts —— Integration by parts > 0.03 a G 3 0.02 0.01 0.00 i?) 20 40 60 80 100 i?) 20 40 60 80 100 Number of tokens Number of tokens Figure 2: Distribution of input and output lengths for different integration datasets. The FWD generator produces short problems with long solutions. Conversely, the BWD generator creates long problems, with short solutions. The IBP approach stands in the middle, and generates short problems with short solutions. This suggests that what looks at first glance like a generalization problem (bad accuracy of BWD- trained models on FWD generated sets, and the converse) is in fact a consequence of data generation. 
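The length statistics behind Figure 2 amount to counting prefix tokens on each side of the training pairs; a small sketch (our illustration, with hypothetical variable names such as fwd_pairs) is given below.

```python
from statistics import mean

def mean_lengths(pairs):
    # pairs: list of (problem_tokens, solution_tokens) for one generator
    return (mean(len(p) for p, _ in pairs),
            mean(len(s) for _, s in pairs))

# e.g. mean_lengths(fwd_pairs) is expected to show short problems with long
# solutions, and mean_lengths(bwd_pairs) the opposite, as in Figure 2.
```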
BWD and FWD methods generate training sets with specific properties that our model will learn. But this can be addressed by adding IBP or FWD data to the BWD dataset, as shown in the two last lines of Table 6. In practice, a better approach could be implemented with self-supervised learning, where new training examples are generated by the model itself.

Functions and their primitives generated with the forward approach (FWD)
Functions and their primitives generated with the backward approach (BWD)
Functions and their primitives generated with the integration by parts approach (IBP)

Table 9: Examples of functions with their integrals, generated by our FWD, BWD and IBP approaches. We observe that the FWD and IBP approaches tend to generate short functions, with long integrals, while the BWD approach generates short functions with long derivatives.
{ "id": "1701.06972" }
1911.12237
SAMSum Corpus: A Human-annotated Dialogue Dataset for Abstractive Summarization
This paper introduces the SAMSum Corpus, a new dataset with abstractive dialogue summaries. We investigate the challenges it poses for automated summarization by testing several models and comparing their results with those obtained on a corpus of news articles. We show that model-generated summaries of dialogues achieve higher ROUGE scores than the model-generated summaries of news -- in contrast with human evaluators' judgement. This suggests that a challenging task of abstractive dialogue summarization requires dedicated models and non-standard quality measures. To our knowledge, our study is the first attempt to introduce a high-quality chat-dialogues corpus, manually annotated with abstractive summarizations, which can be used by the research community for further studies.
http://arxiv.org/pdf/1911.12237
Bogdan Gliwa, Iwona Mochol, Maciej Biesek, Aleksander Wawer
cs.CL
Attachment contains the described dataset archived in 7z format. Please see the attached readme and licence. Update of the previous version: changed formats of train/val/test files in corpus.7z
Proceedings of the 2nd Workshop on New Frontiers in Summarization, Association for Computational Linguistics. November 2019
cs.CL
20191127
20191129
2019 9 1 0 2 v o N 9 2 ] L C . s c [ 2 v 7 3 2 2 1 . 1 1 9 1 : v i X r a # SAMSum Corpus: A Human-annotated Dialogue Dataset for Abstractive Summarization # Bogdan Gliwa, Iwona Mochol, Maciej Biesek, Aleksander Wawer # Samsung R&D Institute Poland {b.gliwa, i.mochol, m.biesek, a.wawer}@samsung.com ( # Abstract This paper introduces the SAMSum Corpus, a new dataset with abstractive dialogue sum- maries. We investigate the challenges it poses for automated summarization by testing sev- eral models and comparing their results with those obtained on a corpus of news articles. We show that model-generated summaries of dialogues achieve higher ROUGE scores than the model-generated summaries of news – in contrast with human evaluators’ judgement. This suggests that a challenging task of ab- stractive dialogue summarization requires ded- icated models and non-standard quality mea- sures. To our knowledge, our study is the first attempt to introduce a high-quality chat- dialogues corpus, manually annotated with ab- stractive summarizations, which can be used by the research community for further studies. # 1 Introduction and related work The goal of the summarization task is condensing a piece of text into a shorter version that covers the main points succinctly. In the abstractive approach important pieces of information are presented using words and phrases not necessarily appearing in the source text. This requires natural language generation techniques with high level of semantic understanding (Chopra et al., 2016; Rush et al., 2015; Khandelwal et al., 2019; Zhang et al., 2019; See et al., 2017; Chen and Bansal, 2018; Gehrmann et al., 2018). Major research efforts have focused so far on summarization of single-speaker documents like news (e.g., Nallapati et al. (2016)) or sci- entific publications (e.g., Nikolov et al. (2018)). One of the reasons is the availability of large, high-quality news datasets with annotated sum- maries, e.g., CNN/Daily Mail (Hermann et al., 2015; Nallapati et al., 2016). Such a comprehen- sive dataset for dialogues is lacking. The challenges posed by the abstractive dia- logue summarization task have been discussed in the literature with regard to AMI meeting cor- pus (McCowan et al., 2005), e.g. Banerjee et al. (2014), Goo and Chen (2015), Mehdad et al. (2018). Since the corpus has a low number of sum- maries (for 141 dialogues), Goo and Chen (2018) proposed to use assigned topic descriptions as gold references. These are short, label-like goals of the meeting, e.g., costing evaluation of project pro- cess; components, materials and energy sources; chitchat. Such descriptions, however, are very general, lacking the messenger-like structure and any information about the speakers. benefit corpora, news (2019) built a dialogue Ganesh and Dingliwal summarization model that first converts a conver- sation into a structured text document and later applies an attention-based pointer network to cre- ate an abstractive summary. Their model, trained on structured text documents of CNN/Daily Mail dataset, was evaluated on the Argumentative Dialogue Summary Corpus (Misra et al., 2015), which, however, contains only 45 dialogues. In the present paper, we further investigate the problem of abstractive dialogue summarization. With the growing popularity of online conver- sations via applications like Messenger, What- sApp and WeChat, summarization of chats be- tween a few participants is a new interesting direc- tion of summarization research. 
For this purpose we have created the SAMSum Corpus1 which contains over 16k chat dialogues with manually annotated summaries. The dataset is freely avail- able for the research community2. 1The name is a shortcut 1The name is a shortcut for Samsung Abstractive Messenger Summarization # 2The dataset is shared on terms of the Attribution- NonCommercial-NoDerivatives 4.0 International (CC BY- NC-ND 4.0) license. It accompanies this paper on arXiv. Dataset Train CNN/DM 287 227 SAMSum 14 732 Validation 13 368 818 Test 11 490 819 Table 1: Datasets sizes The paper is structured as follows: in Section 2 we present details about the new corpus and de- scribe how it was created, validated and cleaned. Brief description of baselines used in the summa- rization task can be found in Section 3. In Sec- tion 4, we describe our experimental setup and pa- rameters of models. Both evaluations of summa- rization models, the automatic with ROUGE met- ric and the linguistic one, are reported in Section 5 and Section 6, respectively. Examples of models’ outputs and some errors they make are described in Section 7. Finally, discussion, conclusions and ideas for further research are presented in sections 8 and 9. # 2 SAMSum Corpus Initial approach. Since there was no available corpus of messenger conversations, we consid- ered two approaches to build it: (1) using existing datasets of documents, which have a form similar to chat conversations, (2) creating such a dataset by linguists. In the first approach, we reviewed datasets from the following categories: chatbot dialogues, SMS corpora, IRC/chat data, movie dialogues, tweets, comments data (conversations formed by replies to comments), transcription of meetings, written discussions, phone dialogues and daily commu- nication data. Unfortunately, they all differed in some respect from the conversations that are typ- ically written in messenger apps, e.g. they were too technical (IRC data), too long (comments data, transcription of meetings), lacked context (movie dialogues) or they were more of a spoken type, such as a dialogue between a petrol station assis- tant and a client buying petrol. As a consequence, we decided to create a chat dialogue dataset by constructing such conversa- tions that would epitomize the style of a messenger app. Process of building the dataset. Our di- alogue summarization dataset contains natural messenger-like conversations created and written down by linguists fluent in English. The style and register of conversations are diversified – di- alogues could be informal, semi-formal or formal, they may contain slang phrases, emoticons and ty- pos. We asked linguists to create conversations similar to those they write on a daily basis, re- flecting the proportion of topics of their real-life messenger conversations. It includes chit-chats, gossiping about friends, arranging meetings, dis- cussing politics, consulting university assignments with colleagues, etc. Therefore, this dataset does not contain any sensitive data or fragments of other corpora. Each dialogue was created by one person. After collecting all of the conversations, we asked lan- guage experts to annotate them with summaries, assuming that they should (1) be rather short, (2) extract important pieces of information, (3) in- clude names of interlocutors, (4) be written in the third person. Each dialogue contains only one ref- erence summary. Validation. 
Since the SAMSum corpus con- tains dialogues created by linguists, the question arises whether such conversations are really simi- lar to those typically written via messenger apps. To find the answer, we performed a validation task. We asked two linguists to doubly annotate 50 con- versations in order to verify whether the dialogues could appear in a messenger app and could be summarized (i.e. a dialogue is not too general or unintelligible) or not (e.g. a dialogue between two people in a shop). The results revealed that 94% of examined dialogues were classified by both anno- tators as good i.e. they do look like conversations from a messenger app and could be condensed in a reasonable way. In a similar validation task, con- ducted for the existing dialogue-type datasets (de- scribed in the Initial approach section), the annota- tors agreed that only 28% of the dialogues resem- bled conversations from a messenger app. Cleaning data. After preparing the dataset, we conducted a process of cleaning it in a semi- automatic way. Beforehand, we specified a for- mat for written dialogues with summaries: a colon should separate an author of utterance from its content, each utterance is expected to be in a sep- arate line. Therefore, we could easily find all de- viations from the agreed structure – some of them could be automatically fixed (e.g. when instead of a colon, someone used a semicolon right af- ter the interlocutor’s name at the beginning of an utterance), others were passed for verification to linguists. We also tried to correct typos in inter- locutors’ names (if one person has several utter- ances, it happens that, before one of them, there is a typo in his/her name) – we used the Levenshtein distance to find very similar names (possibly with typos e.g. ’George’ and ’Goerge’) in a single con- versation, and those cases with very similar names were passed to linguists for verification. Description. The created dataset is made of 16369 conversations distributed uniformly into 4 groups based on the number of utterances in con- versations: 3-6, 7-12, 13-18 and 19-30. Each ut- terance contains the name of the speaker. Most conversations consist of dialogues between two in- terlocutors (about 75% of all conversations), the rest is between three or more people. Table 1 presents the size of the dataset split used in our experiments. The example of a dialogue from this corpus is shown in Table 2. Dialogue Blair: Remember we are seeing the wedding planner after work Chuck: Sure, where are we meeting her? Blair: At Nonna Rita’s Chuck: Can I order their seafood tagliatelle or are we just having coffee with her? I’ve been dreaming about it since we went there last month Blair: Haha sure why not Chuck: Well we both remmber the spaghetti pomodoro disaster from our last meeting with Diane Blair: Omg hahaha it was all over her white blouse Chuck: :D Blair: :P Summary Blair and Chuck are going to meet the wedding planner after work at Nonna Rita’s. The tagliatelle served at Nonna Rita’s are very good. Table 2: Example of a dialogue from the collected cor- pus # 3 Dialogues baselines The baseline commonly used in the news summa- rization task is Lead-3 (See et al., 2017), which takes three leading sentences of the document as the summary. 
The underlying assumption is that the beginning of the article contains the most significant information. Inspired by the Lead-n model, we propose a few different simple models:

• MIDDLE-n, which takes n utterances from the middle of the dialogue,

• LONGEST-n, treating only n longest utterances in order of length as a summary,

• LONGER-THAN-n, taking only utterances longer than n characters in order of length (if there is no such long utterance in the dialogue, takes the longest one),

• MOST-ACTIVE-PERSON, which treats all utterances of the most active person in the dialogue as a summary.

Model               n    R-1    R-2    R-L
LEAD                3    31.40  8.68   29.42
LEAD                4    31.87  8.93   29.91
LEAD                5    32.02  9.53   30.07
MIDDLE              3    28.04  6.57   26.13
MIDDLE              4    30.08  7.96   28.10
MIDDLE              5    29.91  8.12   27.97
LONGEST             3    32.46  10.27  29.92
LONGEST             4    32.19  10.35  29.91
LONGEST             5    31.61  10.21  29.55
LONGER-THAN         10   28.31  9.69   26.72
LONGER-THAN         20   29.36  10.23  27.59
LONGER-THAN         30   29.61  10.28  27.71
MOST-ACTIVE-PERSON  n/a  26.54  8.55   24.57

Table 3: Baselines for the dialogues summarization

Results of the evaluation of the above models are reported in Table 3. There is no obvious baseline for the task of dialogues summarization. We expected rather low results for Lead-3, as the beginnings of the conversations usually contain greetings, not the main part of the discourse. However, it seems that in our dataset greetings are frequently combined with question-asking or information passing (sometimes they are even omitted) and such a baseline works even better than the MIDDLE baseline (taking utterances from the middle of a dialogue). Nevertheless, the best dialogue baseline turns out to be the LONGEST-3 model.

# 4 Experimental setup

This section contains a description of the settings used in the experiments carried out.

# 4.1 Data preparation

In order to build a dialogue summarization model, we adopt the following strategies: (1) each candidate architecture is trained and evaluated on the dialogue dataset; (2) each architecture is trained on the train set of CNN/Daily Mail joined together with the train set of the dialogue data, and evaluated on the dialogue test set.

In addition, we prepare a version of the dialogue data in which utterances are separated with a special token called the separator (an artificially added token, e.g. '<EOU>' for models using word embeddings, '|' for models using subword embeddings). In all our experiments, news and dialogues are truncated to 400 tokens, and summaries – to 100 tokens. The maximum length of generated summaries was not limited.
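As a rough illustration of this preprocessing (our own sketch, not the authors' exact pipeline; whitespace tokenisation is an assumption), a dialogue can be flattened into a single separator-delimited input as follows:

```python
SEP = "<EOU>"

def flatten_dialogue(turns, max_tokens=400):
    # turns: list of (speaker, utterance) pairs
    text = f" {SEP} ".join(f"{speaker}: {utterance}" for speaker, utterance in turns)
    return " ".join(text.split()[:max_tokens])   # truncate the input to 400 tokens

print(flatten_dialogue([("Blair", "Remember we are seeing the wedding planner after work"),
                        ("Chuck", "Sure, where are we meeting her?")]))
```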
# 4.2 Models We carry out experiments with the following sum- marization models (for all architectures we set the beam size for beam search decoding to 5): • Pointer generator network (See et al., In the case of Pointer Generator, 2017). we use a default configuration3, changing only the minimum length of the generated summary from 35 (used in news) to 15 (used in dialogues). • LightConv and DynamicConv (Wu et al., The implementation is available 2019). in fairseq7 (Ott et al., 2019). We train lightweight convolution models in two man- ners: (1) learning token representations from scratch; in this case we apply BPE tokeniza- tion with the vocabulary of 30K types, using fastBPE implementation8 (Sennrich et al., 2015); (2) initializing token embeddings with pre-trained language model representations; as a language model we choose GPT-2 small (Radford et al., 2019). • Transformer (Vaswani et al., 2017). The model is trained using OpenNMT library4. We use the same parameters for training both on news and on dialogues5, changing only the minimum length of the generated summary – 35 for news and 15 for dialogues. • Fast Abs RL (Chen and Bansal, 2018). It is trained using its default parameters6. For di- alogues, we change the convolutional word- level sentence encoder (used in extractor part) to only use kernel with size equal 3 in- It is caused by the fact stead of 3-5 range. that some of utterances are very short and the default setting is unable to handle that. # 4.3 Evaluation metrics the standard We reporting the ROUGE metric F1 for ROUGE-1, scores ROUGE-2 and ROUGE-L following previous works (Chen and Bansal, 2018; See et al., 2017). We obtain scores using the py-rouge package9. # 5 Results The results for the news summarization task are shown in Table 4 and for the dialogue summariza- tion – in Table 5. In both domains, the best mod- els’ ROUGE-1 exceeds 39, ROUGE-2 – 17 and ROUGE-L – 36. Note that the strong baseline for # 3https://github.com/abisee/pointer-generator 4https://github.com/OpenNMT/OpenNMT-py 5http://opennmt.net/OpenNMT-py/Summarization.html 6https://github.com/ChenRocks/fast_abs_rl 7https://github.com/pytorch/fairseq 8https://github.com/glample/fastBPE 9https://pypi.org/project/py-rouge/ news (Lead-3) is outperformed in all three met- rics only by one model. In the case of dialogues, all tested models perform better than the baseline (LONGEST-3). In general, the Transformer-based architec- tures benefit from training on the joint dataset: news+dialogues, even though the news and the di- alogue documents have very different structures. Interestingly, this does not seem to be the case for the Pointer Generator or Fast Abs RL model. The inclusion of a separation token between di- alogue utterances is advantageous for most models – presumably because it improves the discourse structure. The improvement is most visible when training is performed on the joint dataset. Having compared two variants of the Fast Abs RL model – with original utterances and with en- hanced ones (see Section 4.2), we conclude that enhancing utterances with information about the other interlocutors helps achieve higher ROUGE values. The largest improvement of the model perfor- mance is observed for LightConv and Dynamic- Conv models when they are complemented with pretrained embeddings from the language model GPT-2, trained on enormous corpora. 
is also worth noting that some models (Pointer Generator, Fast Abs RL), trained only on the dialogues corpus (which has 16k dialogues), reach similar level (or better) in terms of ROUGE metrics than models trained on the CNN/DM news dataset (which has more than 300k arti- cles). Adding pretrained embeddings and train- ing on the joined dataset helps in achieving signifi- cantly higher values of ROUGE for dialogues than the best models achieve on the CNN/DM news dataset. the best per- forming model is DynamicConv with GPT-2 em- beddings, trained on joined news and dialogue data with an utterance separation token. # 6 Linguistic verification of summaries ROUGE is a standard way of evaluating the qual- ity of machine generated summaries by compar- ing them with reference ones. The metric based on n-gram overlapping, however, may not be very informative for abstractive summarization, where in producing high- paraphrasing is a keypoint quality sentences. To quantify this conjecture, we manually evaluated summaries generated by the R-1 40.24 38.72 40.99 38.72 39.44 39.46 R-2 17.44 16.67 17.72 16.89 17.20 17.33 Model Lead-3 baseline Pointer Generator Fast Abs RL Transformer LightConv DynamicConv LightConv R-L 34.90 35.59 38.30 35.74 36.20 36.29 + GPT2 emb 39.52 17.31 36.15 DynamicConv + GPT2 emb 39.94 17.56 36.51 Table 4: Model evaluation on the news corpus test set models for 150 news and 100 dialogues. We asked two linguists to mark the quality of every sum- mary on the scale of −1, 0, 1, where −1 means that a summarization is poor, extracts irrelevant information or does not make sense at all, 1 – it is understandable and gives a brief overview of the text, and 0 stands for a summarization that extracts only a part of relevant information, or makes some mistakes in the produced summary. We noticed a few annotations (7 for news and 4 for dialogues) with opposite marks (i.e. one an- notator judgement was −1, whereas the second one was 1) and decided to have them annotated once again by another annotator who had to re- solve conflicts. For the rest, we calculated the lin- ear weighted Cohen’s kappa coefficient (McHugh, 2012) between annotators’ scores. For news ex- amples, we obtained agreement on the level of 0.371 and for dialogues – 0.506. The annotators’ agreement is higher on dialogues than on news, probably because of structures of those data – arti- cles are often long and it is difficult to decide what the key-point of the text is; dialogues, on the con- trary, are rather short and focused mainly on one topic. For manually evaluated samples, we calculated ROUGE metrics and the mean of two human rat- ings; the prepared statistics is presented in Ta- ble 6. As we can see, models generating dialogue summaries can obtain high ROUGE results, but their outputs are marked as poor by human anno- tators. Our conclusion is that the ROUGE met- ric corresponds with the quality of generated sum- maries for news much better than for dialogues, confirmed by Pearson’s correlation between hu- man evaluation and the ROUGE metric, shown in Table 7. 
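Both statistics used here are standard and available off the shelf. The snippet below (with made-up scores, purely for illustration) shows how the linearly weighted Cohen's kappa and the Pearson correlation could be computed with scikit-learn and SciPy.

```python
from sklearn.metrics import cohen_kappa_score
from scipy.stats import pearsonr

# Hypothetical annotator marks on the -1 / 0 / 1 scale described above.
annotator_a = [1, 0, -1, 1, 0, 1, -1, 0]
annotator_b = [1, 0,  0, 1, 1, 1, -1, 0]
kappa = cohen_kappa_score(annotator_a, annotator_b, weights="linear")

# Hypothetical mean human ratings and ROUGE-1 scores for the same summaries.
human_mean = [1.0, 0.0, -0.5, 1.0, 0.5, 1.0, -1.0, 0.0]
rouge_1 = [0.52, 0.31, 0.18, 0.61, 0.44, 0.58, 0.12, 0.35]
corr, p_value = pearsonr(human_mean, rouge_1)

print(kappa, corr, p_value)
```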
Model Train data Separator | R-1 R-2 R-L LONGEST-3 baseline 32.46 10.27 29.92 Pointer Generator dialogues no 38.55 14.14 34.85 Pointer Generator dialogues yes 40.08 15.28 36.63 Fast Abs RL dialogues no 40.96 17.18 39.05 Fast Abs RL Enhanced dialogues no 41.95 18.06 39.23 Transformer dialogues no 36.62 11.18 33.06 Transformer dialogues yes 37.27 10.76 32.73 LightConv dialogues no 33.19 11.14 30.34 DynamicConv dialogues no 33.79 11.19 30.41 DynamicConv dialogues yes 33.69 10.88 30.93 LightConv + GPT-2 emb. dialogues no 41.81 16.34 37.63 DynamicConv + GPT-2 emb. | dialogues no 41.79 16.44 37.54 DynamicConv + GPT-2 emb. | dialogues yes 41.54 16.29 37.07 Pointer Generator news + dialogues no 35.04 13.25 32.42 Pointer Generator news + dialogues yes 37.27 14.42 34.36 Fast Abs RL news + dialogues no 41.03 16.93 39.05 Fast Abs RL Enhanced news + dialogues no 41.87 17.47 39.53 Transformer news + dialogues no 41.91 18.25 38.77 Transformer news + dialogues yes 42.37 18.44 39.27 LightConv news + dialogues no 40.29 17.28 36.81 DynamicConv news + dialogues no 40.66 17.41 37.20 DynamicConv news + dialogues yes 41.07 17.11 37.27 LightConv + GPT-2 emb. news + dialogues no 44.47 19.75 40.07 DynamicConv + GPT-2 emb. | news + dialogues no 44.69 20.28 40.76 DynamicConv + GPT-2 emb. | news + dialogues yes 45.41 20.65 41.45 Table 5: Model evaluation on the dialogues corpus test set R-2 16.55 18.28 14.81 19.94 19.28 16.59 R-1 39.76 42.33 37.19 43.53 42.16 39.79 #examples mean median 0.18 0.33 0.03 -0.503 -0.55 -0.63 100 50 50 150 50 50 overall Fast Abs RL DynamicConv overall Fast Abs RL Fast Abs RL Enhanced DynamicConv 0.5 0.5 0.25 -0.5 -0.75 -1.0 48.63 -0.5 50 -0.33 23.95 + GPT-2 emb. R-L 36.23 38.82 33.64 40.66 40.37 37.05 44.57 Table 6: Statistics of human evaluation of summaries’ quality and ROUGE evaluation of those summaries # 7 Difficulties in dialogue summarization In a structured text, such as a news article, the in- formation flow is very clear. However, in a dia- logue, which contains discussions (e.g. when peo- ple try to agree on a date of a meeting), questions (one person asks about something and the answer may appear a few utterances later) and greetings, most important pieces of information are scattered across the utterances of different speakers. What is more, articles are written in the third-person point of view, but in a chat everyone talks about them- selves, using a variety of pronouns, which fur- ther complicates the structure. Additionally, peo- ple talking on messengers often are in a hurry, so they shorten words, use the slang phrases (e.g. ’u r gr8’ means ’you are great’) and make typos. These phenomena increase the difficulty of performing dialogue summarization. Table 8 and 9 show a few selected dialogues, ROUGE-1 corr p-value NEWS 0.47 DIALOGUES 0.32 ROUGE-L corr p-value 0.48 0.32 ROUGE-2 corr p-value 0.44 0.30 1e-6 7.7e-5 1e-6 8.1e-5 6e-6 1.84e-4 Table 7: Pearson’s correlations between human judgement and ROUGE metric together with summaries produced by the best tested models: them separately. This leads to the narrowing of the context and loosing important pieces of informa- tion. • DynamicConv + GPT-2 embeddings with a separator (trained on news + dialogues), • DynamicConv + GPT-2 embeddings (trained on news + dialogues), • Fast Abs RL (trained on dialogues), • Fast Abs RL Enhanced (trained on dia- logues), • Transformer (trained on news + dialogues). One can easily notice problematic issues. 
Firstly, the models frequently have difficulties in associating names with actions, often repeating the same name, e.g., for Dialogue 1 in Table 8, Fast Abs RL generates the following summary: 'lilly and lilly are going to eat salmon'. To help the model deal with names, the utterances are enhanced by adding information about the other interlocutors – the Fast Abs RL Enhanced variant described in Section 4.2. In this case, after enhancement, the model generates a summary containing both interlocutors' names: 'lily and gabriel are going to pasta...'. Sometimes models correctly choose speakers' names when generating a summary, but make a mistake in deciding who performs the action (the subject) and who receives the action (the object), e.g. for Dialogue 4 the DynamicConv + GPT-2 emb. w/o sep. model generates the summary 'randolph will buy some earplugs for maya', while the correct form is 'maya will buy some earplugs for randolph'.

A closely related problem is capturing the context and extracting information about the arrangements after the discussion. For instance, for Dialogue 4, the Fast Abs RL model draws a wrong conclusion from the agreed arrangement. This issue is quite frequently visible in summaries generated by Fast Abs RL, which may be a consequence of the way it is constructed; it first chooses important utterances, and then summarizes each of them separately. This leads to the narrowing of the context and losing important pieces of information.

One more aspect of summary generation is deciding which information in the dialogue content is important. For instance, for Dialogue 3 DynamicConv + GPT-2 emb. with sep. generates a correct summary, but focuses on a piece of information different than the one included in the reference summary. In contrast, some other models – like Fast Abs RL Enhanced – select both of the pieces of information appearing in the discussion. On the other hand, when summarizing Dialogue 5, the models seem to focus too much on the phrase 'it's the best place', intuitively not the most important one to summarize.

# 8 Discussion

This paper is a step towards abstractive summarization of dialogues by (1) introducing a new dataset, created for this task, and (2) comparing it with news summarization by means of automated (ROUGE) and human evaluation.

Most of the tools and the metrics measuring the quality of text summarization have been developed for single-speaker documents, such as news; as such, they are not necessarily the best choice for conversations with several speakers.

We test a few general-purpose summarization models. In terms of human evaluation, the results of dialogue summarization are worse than the results of news summarization. This is connected with the fact that the dialogue structure is more complex – information is spread across multiple utterances, discussions and questions, and more typos and slang words appear there, posing new challenges for summarization. On the other hand, dialogues are divided into utterances, and each utterance has an assigned author. We demonstrate in experiments that the models benefit from the introduction of separators, which mark utterances for each person. This suggests that dedicated models with some architectural changes, taking into account the assignation of a person to an utterance in

Dialogue 1
1. lilly: sorry, i'm gonna be late
2. lilly: don't wait for me and order the food
3. gabriel: no problem, shall we also order something for you?
4. gabriel: so that you get it as soon as you get to us?
5. lilly: good idea
6.
lilly: pasta with salmon and basil is always very tasty here REF: lilly will be late. gabriel will order pasta with salmon and basil for her. Dialogue 2 1. randolph: honey 2. randolph: are you still in the pharmacy? 3. maya: yes 4. randolph: buy me some earplugs please 5. maya: how many pairs? 6. randolph: 4 or 5 packs 7. maya: i’ll get you 5 8. randolph: thanks darling REF: maya will buy 5 packs of earplugs for randolph at the pharmacy. L3: 6, 3, 4 [38/17/38] DS: lilly and gabriel are going to order pasta with salmon and basil [62/42/62] D: lilly and gabriel are going to order pasta with salmon and basil [62/42/62] F: lilly will be late . she will order the food . lilly F: maya is in the pharmacy . maya will get 5 . and lilly are going to eat salmon and basil [55/39/55] FE: lilly will be late . lilly and gabriel are going to pasta with salmon and basil is always tasty . [63/47/63] T: lilly will order the food as soon as she gets to gabriel [31/17/23] L3: 2, 4, 8 [36/8/36] DS: randolph and maya are going to buy some earplugs for randolph. [43/19/43] D: randolph will buy some earplugs for maya. [63/24/42] [48/21/48] FE: randolph is in the pharmacy . randolph will buy some earplugs for randolph . maya will get 5 . [64/38/64] T: randolph will buy some earplugs for randolph . maya will get 5 pairs . [58/36/42] Table 8: Examples of dialogues (Part 1). REF – reference summary, L3 – LONGEST-3 baseline, DS – Dynamic- Conv + GPT-2 emb. with sep., D – DynamicConv + GPT-2 emb., F – Fast Abs RL, FE – Fast Abs RL Enhanced, T – Transformer. For L3, three longest utterances are listed. Rounded ROUGE values [R-1/R-2/R-L] are given in square brackets. a systematic manner, could improve the quality of dialogue summarization. We show that the most popular summarization metric ROUGE does not reflect the quality of a summary. Looking at the ROUGE scores, one concludes that the dialogue summarization models perform better than the ones for news summariza- tion. In fact, this hypothesis is not true – we per- formed an independent, manual analysis of sum- maries and we demonstrated that high ROUGE results, obtained for automatically-generated di- alogue summaries, correspond with lower eval- uation marks given by human annotators. An interesting example of the misleading behavior of the ROUGE metrics is presented in Table 9 for Dialogue 4, where a wrong summary – ’paul and cindy don’t like red roses.’ – obtained all ROUGE values higher than a correct summary – ’paul asks cindy what color flowers should buy.’. Despite lower ROUGE values, news summaries were scored higher by human evaluators. We con- clude that when measuring the quality of model- the ROUGE metrics are generated summaries, more indicative for news than for dialogues, and a new metric should be designed to measure the quality of abstractive dialogue summaries. # 9 Conclusions In our paper we have studied the challenges of ab- stractive dialogue summarization. We have ad- dressed a major factor that prevents researchers from engaging into this problem: the lack of a proper dataset. To the best of our knowledge, this is the first attempt to create a comprehen- sive resource of this type which can be used in future research. The next step could be creating an even more challenging dataset with longer dia- logues that not only cover one topic, but span over numerous different ones. Dialogue 3 1. ashleigh: looks like we’re going to the cinema!! 2. ashleigh: <file_gif> 3. peter: you got the job?? 4. ashleigh: i got hte job! :d 5. peter: <file_gif> 6. 
ashleigh: <file_gif> Dialogue 4 1. paul: what color flowers should i get 2. cindy: any just not yellow 3. paul: ok, pink? 4. cindy: no maybe red 5. paul: just tell me what color and what type ok? 6. cindy: ugh, red roses! REF: paul will buy red roses following cindy’s advice. REF: ashleigh got the job. L3: 1, 4, 3 [33/18/33] DS: ashleigh and peter are going to the cinema. [33/0/33] D: ashleigh got hte job. [75/33/75] L3: 5, 1, 2 [13/0/13] DS: paul and cindy don’t like red roses. [47/13/35] D: paul asks cindy what color flowers should buy. [35/0/24] F: cindy is going to buy red roses [50/29/38] FE: cindy is buying red roses . cindy will buy red . [56/38/44] T: cindy does n’t know what color should get. cindy does not know what to do [8/0/8] F: ashleigh and ashleigh are going to the cinema. peter got the job . [50/29/50] FE: ashley and peter are going to the cinema together . ashleigh got the job . [47/40/47] T: ashleigh got the job at the cinema . peter and ashleigh are going there . [47/40/47] Dialogue 5 1. eve: where are we meeting? 2. charlie: at the entrance 3. nicole: yes, it’s the best place. we would’t find each other inside, it’ll be too crowded 4. eve: ok! REF: eve, charlie and nicole are meeting at the entrance. L3: 3, 1, 2 [43/11/43] DS: eve, charlie and nicole are meeting at the entrance. [100/100/100] D: eve, charlie and nicole are meeting at the entrance. [100/100/100] F: charlie is at the entrance . it ’s the best place . [42/24/42] FE: charlie is at the entrance . nicole and charlie are going to find each other inside . [58/18/42] T: eve and nicole are meeting at the entrance . it ’s the best place to meet . [67/55/67] Table 9: Examples of dialogues (Part 2). REF – reference summary, L3 – LONGEST-3 baseline, DS – Dynamic- Conv + GPT-2 emb. with sep., D – DynamicConv + GPT-2 emb., F – Fast Abs RL, FE – Fast Abs RL Enhanced, T – Transformer. For L3, three longest utterances are listed. Rounded ROUGE values [R-1/R-2/R-L] are given in square brackets. As shown, summarization of dialogues is much more challenging than of news. In order to per- form well, it may require designing dedicated tools, but also new, non-standard measures to cap- ture the quality of abstractive dialogue summaries in a relevant way. We hope to tackle these issues in future work. # Acknowledgments We would like to express our sincere thanks to Tu- nia Błachno, Oliwia Ebebenge, Monika J˛edras and Małgorzata Krawentek for their huge contribution to the corpus collection – without their ideas, man- agement of the linguistic task and verification of examples we would not be able to create this pa- per. We are also grateful for the reviewers’ helpful comments and suggestions. # References Siddhartha Banerjee, Prasenjit Mitra, and Kazunari Sugiyama. 2015. Abstractive meeting summariza- tion using dependency graph fusion. In Proceedings of the 24th International Conference on World Wide Web, pages 5–6. Yen-Chun Chen and Mohit Bansal. 2018. Fast abstrac- tive summarization with reinforce-selected sentence rewriting. In Proceedings of the 56th Annual Meet- ing of the Association for Computational Linguis- tics, pages 675–686. Sumit Chopra, Michael Auli, and Alexander M. Rush. 2016. Abstractive sentence summarization with at- tentive recurrent neural networks. In Proceedings of the 2016 Conference of the North American Chap- ter of the Association for Computational Linguistics: Human Language Technologies, pages 93–â ˘A ¸S98. Prakhar Ganesh and Saket Dingliwal. 2019. 
Abstrac- tive summarization of spoken and written conversa- tion. arXiv:1902.01615. Sebastian Gehrmann, Yuntian Deng, and Alexander Rush. 2018. Bottom-up abstractive summarization. In Proceedings of the 2018 Conference on Empiri- cal Methods in Natural Language Processing, pages 4098–4109. Chih-Wen Goo and Yun-Nung Chen. 2018. Abstrac- tive dialogue summarization with sentence-gated modeling optimized by dialogue acts. 2018 IEEE Spoken Language Technology Workshop (SLT), pages 735–742. Karl M. Hermann, Tomà ˛as Kociská, Edward Grefen- stette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. CoRR, abs/1506.03340. Urvashi Khandelwal, Kevin Clark, Dan Jurafsky, and Lukasz Kaiser. 2019. Sample efficient text sum- marization using a single pre-trained transformer. CoRR, abs/1905.08836. Chin-Yew Lin. 2004. ROUGE: A package for auto- matic evaluation of summaries. In Text Summariza- tion Branches Out, pages 74–81, Barcelona, Spain. Association for Computational Linguistics. I. McCowan, J. Carletta, W. Kraaij, S. Ashby, S. Bour- ban, M. Flynn, M. Guillemot, T. Hain, J. Kadlec, V. Karaiskos, M. Kronenthal, G. Lathoud, M. Lin- coln, A. Lisowska, W. Post, Dennis Reidsma, and P. Wellner. 2005. The ami meeting corpus. In Pro- ceedings of Measuring Behavior 2005, 5th Interna- tional Conference on Methods and Techniques in Be- havioral Research, pages 137–140. the kappa statistic. Biochemia medica, 22(3):276–282. Yashar Mehdad, Giuseppe Carenini, and Raymond T. Ng. 2014. Abstractive summarization of spoken and written conversations based on phrasal queries. In Proceedings of the 52nd Annual Meeting of the As- sociation for Computational Linguistics, volume 1, pages 1220–1230. Amita Misra, Pranav Anand, Jean Fox Tree, and Mar- ilyn Walker. 2015. Using summarization to dis- cover argument facets in online idealogical dialog. In The North American Chapter of the Association for Computational Linguistics (NAACL). Ramesh Nallapati, Bowen Zhou, Cicero Nogueira dos Santos, Caglar Gulcehre, and Bing Xiang. 2016. Abstractive text summarization using sequence-to- sequence rnns and beyond. In Computational Natu- ral Language Learning. Nikola Nikolov, Michael Pfeiffer, and Richard Hahn- loser. 2018. Data-driven summarization of scien- tific articles. In Proceedings of the Eleventh Interna- tional Conference on Language Resources and Eval- uation (LREC 2018). Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, and fairseq: A fast, extensible Michael Auli. 2019. In Proceedings of toolkit for sequence modeling. the 2019 Conference of the North American Chap- ter of the Association for Computational Linguistics (Demonstrations), pages 48–53. Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. Alexander M. Rush, Sumit Chopra, and Jason Weston. 2015. A neural attention model for abstractive sen- In Proceedings of the 2015 tence summarization. Conference on Empirical Methods in Natural Lan- guage Processing, pages 379–389. Abigail See, Peter J. Liu, and Christopher D. Manning. 2017. Get to the point: Summarization with pointer- generator networks. In Proceedings of the 55th An- nual Meeting of the Association for Computational Linguistics, volume 1, pages 1073–1083. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2015. Neural machine translation of rare words with subword units. CoRR, abs/1508.07909. 
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Pro- cessing Systems 30, pages 5998–6008. Felix Wu, Angela Fan, Alexei Baevski, Yann Dauphin, and Michael Auli. 2019. Pay less attention with lightweight and dynamic convolutions. In Interna- tional Conference on Learning Representations. Jianjun Xu, and Ji Wang. 2019. Pretraining-based natural language generation for text summarization. CoRR, abs/1902.09243.
{ "id": "1902.01615" }
1911.11641
PIQA: Reasoning about Physical Commonsense in Natural Language
To apply eyeshadow without a brush, should I use a cotton swab or a toothpick? Questions requiring this kind of physical commonsense pose a challenge to today's natural language understanding systems. While recent pretrained models (such as BERT) have made progress on question answering over more abstract domains - such as news articles and encyclopedia entries, where text is plentiful - in more physical domains, text is inherently limited due to reporting bias. Can AI systems learn to reliably answer physical common-sense questions without experiencing the physical world? In this paper, we introduce the task of physical commonsense reasoning and a corresponding benchmark dataset Physical Interaction: Question Answering or PIQA. Though humans find the dataset easy (95% accuracy), large pretrained models struggle (77%). We provide analysis about the dimensions of knowledge that existing models lack, which offers significant opportunities for future research.
http://arxiv.org/pdf/1911.11641
Yonatan Bisk, Rowan Zellers, Ronan Le Bras, Jianfeng Gao, Yejin Choi
cs.CL, cs.AI, cs.LG
AAAI 2020
null
cs.CL
20191126
20191126
9 1 0 2 v o N 6 2 ] L C . s c [ 1 v 1 4 6 1 1 . 1 1 9 1 : v i X r a # PIQA: Reasoning about Physical Commonsense in Natural Language # Yonatan Bisk1,2,3,4 # Jianfeng Gao2 Yejin Choi1,4 # Rowan Zellers1,4 1Allen Institute for Artificial Intelligence # Ronan Le Bras1 2Microsoft Research AI # 3Carnegie Mellon University 4Paul G. Allen School for Computer Science and Engineering, University of Washington http://yonatanbisk.com/piqa # Abstract To apply eyeshadow without a brush, should I use a cotton swab or a toothpick? Questions requiring this kind of phys- ical commonsense pose a challenge to today’s natural lan- guage understanding systems. While recent pretrained mod- els (such as BERT) have made progress on question answer- ing over more abstract domains – such as news articles and encyclopedia entries, where text is plentiful – in more physi- cal domains, text is inherently limited due to reporting bias. Can AI systems learn to reliably answer physical common- sense questions without experiencing the physical world? In this paper, we introduce the task of physical commonsense reasoning and a corresponding benchmark dataset Physical Interaction: Question Answering or PIQA . Though hu- mans find the dataset easy (95% accuracy), large pretrained models struggle (∼77%). We provide analysis about the di- mensions of knowledge that existing models lack, which of- fers significant opportunities for future research. To separate egg whites from the yolk wz using a water bottle, you should... b. Place the water bottle and press it against the yolk. Keep pushing, which creates suction and lifts the yolk. a. Squeeze the water bottle and press it against the yolk. Release, which creates suction and lifts the yolk. oa Introduction Before children learn language, they already start forming categories and concepts based on the physical properties of objects around them (Hespos and Spelke 2004). This model of the world grows richer as they learn to speak, but al- ready captures physical commonsense knowledge about ev- eryday objects: their physical properties, affordances, and how they can be manipulated. This knowledge is critical for day-to-day human life, including tasks such as problem solving (what can I use as a pillow when camping?) and expressing needs and desires (bring me a harder pillow). Likewise, we hypothesize that modeling physical common- sense knowledge is a major challenge on the road to true AI- completeness, including robots that interact with the world and understand natural language. Much of physical commonsense can be expressed in lan- guage, as the versatility of everyday objects and common concepts eludes other label schemes. However, due to is- sues of reporting bias, these commonsense properties - facts like ‘it is a bad idea to apply eyeshadow with a toothpick’ are rarely directly reported. Although much recent progress Copyright © 2020, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. Figure 1: PIQA : Given a physical goal expressed in nat- ural language, like ‘to separate egg whites...,’ a model must choose the most sensible solution. Our dataset tests the abil- ity of natural language understanding models to link text to a robust intuitive-physics model of the world. Here, humans easily pick answer a) because separating the egg requires pulling the yolk out, while machines are easily fooled. 
has been made in Natural Language Processing through a shift towards large-scale pretrained representations from unlabeled text (Radford et al. 2018; Devlin et al. 2019; Liu et al. 2019), the bulk of the success of this paradigm has been on core abstract tasks and domains. State-of-the- art models can reliably answer questions given an encyclo- pedia article (Rajpurkar et al. 2016) or recognize named en- tities (Tjong Kim Sang and De Meulder 2003), but it is not clear whether they can robustly answer questions that re- quire physical commonsense knowledge. To study this question and begin bridging the represen- tational gap, we introduce Physical Interaction: Question Answering, or PIQA to evaluate language represen- tations on their knowledge of physical commonsense. We focus on everyday situations with a preference for atypi- cal solutions. Our dataset is inspired by instructables.com, which provides users with instructions on how to build, craft, A # a. Shape, Material, and Purpose # b. Commonsense Convenience [Goal] Make an outdoor pillow [Sol1] Blow into a tin can and tie with rubber band [Sol2] Blow into a trash bag and tie with rubber band # x v [Goal] How to make sure all the clocks in the house are set accurately? [Goal] To make a hard shelled taco, [Sol1] put seasoned beef, cheese, and lettuce onto the hard shell. [Sol2] put seasoned beef, cheese, and lettuce into the hard shell. ¥ # W [Sol1] Get a solar clock for a reference and place it just outside a window that gets lots of sun. Use a system of call and response once a month, having one person stationed at the solar clock who yells out the correct time and have another person move to each of the indoor clocks to check if they are showing the right time. Adjust as nec- essary. [Goal] How do I find something I lost on the carpet? [Sol1] Put a solid seal on the end of your vacuum and turn it on. ¥ [Sol2] Replace all wind-ups with digital clocks. That way, you set them once, and that’s it. Check the batteries once a year or if you notice anything looks a little off. {Sol2] Put a hair net on the end of your vacuum and turn it on. W Figure 2: PIQA covers a broad array of phenomena. Above are two categories of example QA pairs. Left are examples that require knowledge of basic properties of the objects (flexibility, curvature, and being porous), while on the Right both answers may be technically correct but one is more convenient and preferable. bake, or manipulate objects using everyday materials. We asked annotators to provide semantic perturbations or al- ternative approaches which are otherwise syntactically and topically similar to ensure physical knowledge is targeted. The dataset is further cleaned of basic artifacts using the AFLite algorithm introduced in (Sakaguchi et al. 2020; Sap et al. 2019) which is an improvement on adversarial fil- tering (Zellers et al. 2018; Zellers et al. 2019b). Throughout this work we first detail the construction of our new benchmark for physical commonsense. Second, we show that popular approaches to large-scale language pre- training, while highly successful on many abstract tasks, fall short when a physical model of the world is required. Fi- nally, our goal is to elicit further research into building lan- guage representations that capture details of the real world. To these ends, we perform error and corpora analyses to pro- vide insights for future work. # Dataset We introduce a new dataset, PIQA , for benchmarking progress in physical commonsense understanding. 
The un- derlying task is multiple choice question answering: given a question q and two possible solutions s1, s2, a model or a human must choose the most appropriate solution, of which exactly one is correct. We collect data with how-to instruc- tions as a scaffold, and use state-of-the-art approaches for handling spurious biases, which we will discuss below. Instructables as a source of physical commonsense Our goal is to construct a resource that requires concrete physical reasoning. To achieve this, we provide a prompt to the annotators derived from instructables.com. The in- structables website is a crowdsourced collection of instruc- tions for doing everything from cooking to car repair. In most cases, users provide images or videos detailing each step and a list of tools that will be required. Most goals are simultaneously rare and unsurprising. While an annotator is unlikely to have built a UV-Flourescent steampunk lamp or made a backpack out of duct tape, it is not surprising that someone interested in home crafting would create these, nor will the tools and materials be unfamiliar to the average per- son. Using these examples as the seed for their annotation, helps remind annotators about the less prototypical uses of everyday objects. Second, and equally important, is that in- structions build on one another. This means that any QA pair inspired by an instructable is more likely to explicitly state assumptions about what preconditions need to be met to start the task and what postconditions define success. # Collecting data through goal-solution pairs Unlike traditional QA tasks, we define our dataset in terms of Goal and Solution pairs (see Figure 2 for example Goal- Solution pairs and types of physical reasoning). The Goal in most cases can be viewed as indicating a post-condition and the solutions indicate the procedure for accomplishing this. The more detailed the goal, the easier it is for annotators to write both correct and incorrect solutions. As noted above, the second component of our annotation design is reminding people to think creatively. We initially experimented with asking annotators for (task, tool) pairs via unconstrained prompts, but found that reporting bias swamped the dataset. In particular, when thinking about how to achieve a goal, people most often are drawn to prototypical solutions and look for tools in the kitchen (e.g. forks and knives) or the garage (e.g. hammers and drills). They rarely considered the literal hundreds of other everyday objects that might be in their own homes (e.g. sidewalk chalk, shower curtains, etc). To address this, and flatten the distribution of referenced objects (see Figure 5), we prompt the annotations with links to instructables. Specifically, annotators were asked to glance at the instructions of an instructable and pull out or have it inspire them to construct two component tasks. They would then articulate the goal (often centered on atypical materials) and how to achieve it. In addition, we asked them to provide a permutation to their own solution which makes it invalid, often subtly (Figure 3). To further assist diversity # W Instructions ‘Quickly glance ath 180 sable for inspration: tpnawuinstuctabh ck s- Produc! ‘Tip! Dont lke this one? feel free to write about another instructable. We onty provide a link to help spark your creativity Stops 1. Goal: What are two tasks ths makes you think of (Do not iy to summarize the instructable) 2 Solution: What would you tell someone to help them solve these problems? 
Figure 3: In the HIT design the instructable provides inspiration to think out-of-the-box (1 Sock, 3 Products) and annotators are asked for 1. a physical goal, 2. a valid solution, and 3. a trick. The trick should sound reasonable, but be wrong, often due to a subtle misunderstanding of preconditions or physics. Additional HITs (not shown) were run for qualification prior to this stage and validation afterwards.2

we seed annotators with instructables drawn from six categories (costume, outside, craft, home, food, and workshop). We asked that two examples be drawn per instructable to encourage one of them to come later in the process and require precise articulation of pre-conditions.

During validation, examples with low agreement were removed from the data. This often meant that correct examples were removed that required expert-level knowledge of a domain (e.g. special woodworking terminology), which should not fall under the umbrella of "commonsense." Because we focus on human-generated tricks, annotators were free to come up with clever ways to hide deception. Often, this meant making very subtle changes to the solution to render it incorrect. In these cases, the two solutions may differ by as little as one word. We found that annotators used both simple linguistic tricks (e.g. negation and numerical changes) and often swapped a key action or item for another that was topically similar but not helpful for completing the given goal. For this reason, our interface also includes a diff button which highlights where the solutions differ. This improved annotator accuracy and speed substantially. Annotator pay averaged > $15/hr according to both self-reporting on turkerview.com and our timing calculations.

2 In addition to this design, we also include a qualification HIT which contained well constructed and underspecified (goal, solution) pairs. Annotators had to successfully (>80%) identify which were well formed to participate in the main HIT. Data was collected in batches of several thousand triples and validated by other annotators for correctness. Users with low agreement were de-qualified.

Figure 4: Sentence length distributions for both correct solutions and tricks are nearly identical across the training set.

# Statistics

In total our dataset is comprised of over 16,000 training QA pairs with an additional ∼2K and ∼3K held out for development and testing, respectively. Our goals, as tokenized by Spacy,3 average 7.8 words, and both correct and incorrect solutions average 21.3 words. In total, this leads to over 3.7 million lexical tokens in the training data.

Figure 4 shows a plot of the correct and incorrect sequence lengths (as tokenized by the GPT BPE tokenizer), with the longest 1% of the data removed. While there are minor differences, the two distributions are nearly identical.
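Statistics of this kind can be gathered with a short script. The sketch below is illustrative only (the variable names and sample strings are placeholders, not our data pipeline): it measures average token lengths and tallies part-of-speech counts with spaCy, in the same spirit as the analysis reported here.

```python
from collections import Counter
import spacy

nlp = spacy.load("en_core_web_sm")  # any English spaCy pipeline suffices for tokenization/POS

goals = ["Make an outdoor pillow", "How do I find something I lost on the carpet?"]       # placeholders
solutions = ["Blow into a trash bag and tie with rubber band",
             "Put a hair net on the end of your vacuum and turn it on."]                  # placeholders

def avg_length(texts):
    lengths = [len(nlp(t)) for t in texts]
    return sum(lengths) / len(lengths)

pos_counts = Counter()
for doc in nlp.pipe(goals + solutions):
    pos_counts.update(tok.pos_ for tok in doc if tok.pos_ in {"NOUN", "VERB", "ADJ", "ADV"})

print(f"avg goal length: {avg_length(goals):.1f} tokens")
print(f"avg solution length: {avg_length(solutions):.1f} tokens")
print(pos_counts.most_common())
```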
We also analyzed the overlap in the vocabulary and find that in all cases (noun, verb, adjective, and adverb) we see at least an 85% overlap between words used in correct and incorrect solutions. In total we have 6,881 unique nouns, 2,493 verbs, 2,263 adjectives, and 604 adverbs in the training data. The most common of each are plotted in Figure 5 alongside their cumulative distributions. Again, this helps verify that the dataset revolves very heavily around physical phenomena, properties, and manipulations. For example, the top adjectives include state (dry, clean, hot) and shape (small, sharp, flat); adverbs include temporal conditions (then, when) and manner (quickly, carefully, completely). These properties often differentiate correct from incorrect answers, as shown in examples throughout the paper. We also color words according to their concreteness score (Brysbaert, Warriner, and Kuperman 2014), though many “abstract” words have concrete realizations in our dataset.

3 https://spacy.io – all data was collected in English.

Figure 5: Here we show the frequency distributions for the top seventy-five words tagged by Spacy as noun, verb, adverb or adjective. We see that the vast majority of concepts focus on physical properties (e.g. small, hot, plastic, wooden) and how objects can be manipulated (e.g. cut, cover, soak, push). Additionally, we see strongly zipfian behavior in all tags but the adverbs. Words are colored by the average concreteness scores presented by (Brysbaert, Warriner, and Kuperman 2014).

# Removing Annotation Artifacts

As noted previously, we use AFLite (Sakaguchi et al. 2020) to remove stylistic artifacts and trivial examples from the data, which have been shown to artificially inflate model performance on previous NLI benchmarks (Poliak et al. 2018; Gururangan et al. 2018). The AFLite algorithm performs a systematic data bias reduction: it discards instances whose given feature representations are collectively highly indicative of the target label. In practice, we use 5,000 examples from the original dataset to fine-tune BERT-Large for this task and compute the corresponding embeddings of all remaining instances. AFLite uses an ensemble of linear classifiers trained on random subsets of the data to determine whether these pre-computed embeddings are strong indicators of the correct answer option. Instead of having to specifically identify the possible sources of biases, this approach enables unsupervised data bias reduction by relying on state-of-the-art methods to uncover undesirable annotation artifacts. For more information about AFLite, please refer to (Sakaguchi et al. 2020).
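To make the filtering idea concrete, here is a small sketch of an AFLite-style filter. It is a simplified, one-shot illustration of the principle described above (linear probes trained on random subsets of precomputed embeddings, discarding instances that the probes classify correctly too often), not the reference implementation, which operates iteratively; the embeddings, labels, and threshold below are placeholders.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def aflite_style_filter(embeddings, labels, n_probes=64, train_frac=0.5,
                        predictability_threshold=0.75, seed=0):
    """Flag instances whose embedding makes the label too easy for linear probes to predict."""
    rng = np.random.default_rng(seed)
    n = len(labels)
    correct = np.zeros(n)   # times each held-out instance was classified correctly
    counted = np.zeros(n)   # times each instance was held out

    for _ in range(n_probes):
        train_idx = rng.choice(n, size=int(train_frac * n), replace=False)
        held_out = np.setdiff1d(np.arange(n), train_idx)
        probe = LogisticRegression(max_iter=1000).fit(embeddings[train_idx], labels[train_idx])
        preds = probe.predict(embeddings[held_out])
        correct[held_out] += (preds == labels[held_out])
        counted[held_out] += 1

    predictability = np.divide(correct, counted, out=np.zeros(n), where=counted > 0)
    return predictability < predictability_threshold   # True = keep the instance

# usage sketch with random placeholder data
emb = np.random.randn(200, 32)
lab = np.random.randint(0, 2, size=200)
print(aflite_style_filter(emb, lab).sum(), "instances kept")
```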
# Experiments

In this section, we test the performance of state-of-the-art natural language understanding models on our dataset, PIQA. In particular, we consider the following three large-scale pretrained transformer models:

a. GPT (Radford et al. 2018) is a model that processes text left-to-right, and was pretrained using a language modeling objective. We use the original 124M parameter GPT model.

b. BERT (Devlin et al. 2019) is a model that processes text bidirectionally, and thus was pretrained using a special masked language modeling objective. We use BERT-Large with 340M parameters.

c. RoBERTa (Liu et al. 2019) is a version of the BERT model that was made to be significantly more robust through pretraining on more data and careful validation of the pretraining hyperparameters. We use RoBERTa-Large, which has 355M parameters.

We follow standard best practices in adapting these models for two-way classification. We consider the two solution choices independently: for each choice, the model is provided the goal, the solution choice, and a special [CLS] token. At the final layer of the transformer, we extract the hidden states corresponding to the positions of each [CLS] token. We apply a linear transformation to each hidden state and apply a softmax over the two options: this approximates the probability that the correct solution is option A or B. During finetuning, we train the model using a cross-entropy loss over the two options. For GPT, we follow the original implementation and include an additional language modeling loss, which improved training stability.

Generally, we found that finetuning was often unstable, with some hyperparameter configurations leading to validation performance around chance, particularly for BERT. We follow best practices in using a grid search over learning rates, batch sizes, and the number of training epochs for each model, and report the best-scoring configuration as found on the validation set. For all models and experiments, we used the transformers library and truncated examples at 150 tokens, which affects 1% of the data.

                            Accuracy (%)
Model             Size     Validation   Test
Random Chance     –        50.0         50.0
Majority Class    –        50.5         50.4
OpenAI GPT        124M     70.9         69.2
Google BERT       340M     67.1         66.8
FAIR RoBERTa      355M     79.2         77.1
Human             –        94.9         –

Table 1: Results of state-of-the-art natural language understanding models on PIQA, compared with human performance. The results show a significant gap between model and human performance, of roughly 20 absolute points.

Manual inspection of the development errors shows that some “mistakes” are actually correct but required a web search to verify. Human performance was calculated by a majority vote. Annotators were chosen to participate if they achieved ≥90% on the qualification HIT from before. It is therefore completely reasonable that automated methods trained on large web crawls may eventually surpass human performance here. Human evaluation was performed on development data, and the train, development, and test folds were automatically produced by AFLite.
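As an illustration of the two-way classification setup just described, the sketch below scores each (goal, solution) pair independently with a pretrained encoder, reads off the hidden state at the first ([CLS]-style) position, maps it to a scalar, and applies a softmax over the two candidates. It is a minimal sketch using the Hugging Face transformers library, not the exact training code; the class name, model choice, truncation length, and example strings are illustrative.

```python
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class TwoWayChoiceScorer(nn.Module):
    """Score each (goal, solution) pair independently, then softmax over the two options."""
    def __init__(self, model_name="roberta-large"):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(model_name)
        self.to_score = nn.Linear(self.encoder.config.hidden_size, 1)

    def forward(self, input_ids, attention_mask):
        # input_ids has shape (batch_size * 2, seq_len): both candidates are flattened together.
        hidden = self.encoder(input_ids=input_ids, attention_mask=attention_mask).last_hidden_state
        cls_state = hidden[:, 0]                       # hidden state at the [CLS]/<s> position
        logits = self.to_score(cls_state).view(-1, 2)  # (batch_size, 2): one score per candidate
        return logits

tokenizer = AutoTokenizer.from_pretrained("roberta-large")
goal = "To separate egg whites from the yolk using a water bottle, you should"
solutions = [
    "Squeeze the water bottle and press it against the yolk. Release, which creates suction and lifts the yolk.",
    "Place the water bottle and press it against the yolk. Keep pushing, which creates suction and lifts the yolk.",
]
batch = tokenizer([goal, goal], solutions, padding=True, truncation=True,
                  max_length=150, return_tensors="pt")

model = TwoWayChoiceScorer()
logits = model(batch["input_ids"], batch["attention_mask"])      # shape (1, 2)
loss = nn.functional.cross_entropy(logits, torch.tensor([0]))    # label 0: the first solution is correct
```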
# Results

We present our results in Table 1. As the dataset was constructed to be adversarial to BERT, it is not surprising that it performs the worst of the three models, despite generally outperforming GPT on most other benchmarks. Comparing GPT and RoBERTa, we see that despite more training data, a larger vocabulary, twice the number of parameters and careful construction of robust training, there is only an 8-point performance gain, and RoBERTa still falls roughly 18 points short of human performance on this task. As noted throughout, exploring this gap is precisely why PIQA exists, and which facets of the dataset fool RoBERTa is the focus of the remainder of this paper.

# Analysis

In this section, we unpack the results of state-of-the-art models on PIQA. In particular, we take a look at the errors made by the top-performing model RoBERTa, as a view towards the physical commonsense knowledge that can be learned through language alone.

# PIQA as a diagnostic for physical understanding

The setup of PIQA allows us to use it to probe the inner workings of deep pretrained language models, and to determine the extent of their physical knowledge. In this way, our dataset can augment prior work on studying to what extent models such as BERT understand syntax (Goldberg 2019). However, while syntax is a well studied problem within linguistics, physical commonsense does not have as rich a literature to borrow from, making its dimensions challenging to pin down.

Figure 6: Breaking down PIQA by edit distance between solution choices. Top: Cumulative histogram of examples in the validation and training sets, in terms of minimum edit distance d between the two solution choices. The majority of the dataset consists of small tweaks between the two solution pairs; nevertheless, this is enough to confuse state-of-the-art NLP models. Bottom: RoBERTa accuracy over validation examples with a minimum edit distance of d. Dataset difficulty increases somewhat as the two solution pairs are allowed to drift further apart.

Simple concepts. Understanding the physical world requires a deep understanding of simple concepts, such as “water” or “ketchup,” and their affordances and interactions with respect to other concepts. Though our dataset covers interactions between and with common objects, we can analyze the space of concepts in the dataset by performing a string alignment between solution pairs. Two solution choices that differ by editing a single phrase must by definition test the commonsense understanding of that phrase.

In Figure 6 we show the distribution of the edit distance between solution choices. We compute edit distance over tokenized and lowercased strings with punctuation removed. We use a cost of 1 for edits, insertions, and deletions. Most of the dataset covers simple edits between the two solution choices: roughly 60% of the dataset in both validation and training involves a 1–2 word edit between solutions.
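The edit-distance measure described above is simple to reproduce. The snippet below is an illustrative implementation (not the authors' script): it lowercases, strips punctuation, tokenizes on whitespace, and computes a word-level Levenshtein distance with unit cost for substitutions, insertions, and deletions.

```python
import string

def normalize(text):
    """Lowercase, strip punctuation, and split on whitespace."""
    table = str.maketrans("", "", string.punctuation)
    return text.lower().translate(table).split()

def edit_distance(a, b):
    """Word-level Levenshtein distance; substitutions, insertions, and deletions all cost 1."""
    prev = list(range(len(b) + 1))
    for i, tok_a in enumerate(a, start=1):
        curr = [i]
        for j, tok_b in enumerate(b, start=1):
            curr.append(min(prev[j] + 1,                       # delete tok_a
                            curr[j - 1] + 1,                   # insert tok_b
                            prev[j - 1] + (tok_a != tok_b)))   # substitute (free if equal)
        prev = curr
    return prev[-1]

sol1 = "Squeeze the water bottle and press it against the yolk."
sol2 = "Place the water bottle and press it against the yolk."
print(edit_distance(normalize(sol1), normalize(sol2)))  # -> 1 (a single-word substitution)
```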
In the bottom of Figure 6, we show that the dataset complexity milk = mam Validation spoon _ cold — — — oe — before _— after — bottom — top —— oe |_ — a 0 25 50 75 100 Validation accuracy over examples that differ by the single word w ° 100 200 300 # of dataset examples that differ by the given single word w Figure 7: Common concepts as a window to RoBERTa’s un- derstanding of the physical world. We consider validation examples (q, s1, s2) wherein s1 and s2 differ from each other by a given word w. Left, we show the validation ac- curacy for common words w, while the number of dataset examples are shown right. Though certain concepts such as water occur quite frequently, RoBERTa nevertheless finds those concepts difficult, with 75% accuracy. Additionally, on common relations such as ‘cold’, ‘on’, ‘before’, and ‘af- ter’ RoBERTa performs roughly at chance. generally increases with the edit distance between the so- lution pairs. Nevertheless, the head of the distribution repre- sents a space that is simple to study. Single-word edits. In Figure 7, we plot the accuracy of RoBERTa among dataset examples that differ by a sin- gle word. More formally, we consider examples (q, s1, s2) whereby moving from s1 to s2, or vice versa, requires edit- ing a given word w.4 We show examples of words w that occur frequently in both the training and validation splits of the dataset, which allows RoBERTa to refine representations of these concepts during training and gives us a large enough sample size to reliably estimate model performance. As shown, RoBERTa struggles to understand certain highly flexible relations. In particular, Figure 7 highlights the difficulty of correctly answering questions that differ by the words ‘before,’ ‘after’, ‘top‘, and ‘bottom’: RoBERTa performs nearly at chance when encountering these. Interestingly, the concepts shown in Figure 7 suggest that RoBERTa also struggles to understand many common, more versatile, physical concepts. Though there are 300 training examples wherein the solution choices s1, s2 differ by the word ‘water.’ RoBERTa performs worse than average on these replacements. On the other hand, RoBERTa does much better at certain nouns, such as ‘spoon.’ # Common replacements in PIQA. We dig into this 4We additionally allow for an additional insertion; this helps to capture simple phrases like going from ‘water’ to ‘olive oil.’ Nevertheless, these multiword expressions tend to be less common, which is why we omit them in Figure 7. Most common replacements for... water spoon freeze Ji i foi nite soda Broothpick Bseatuta [four | whisk |otive oil bow! foun [alcohol | scalpel Meat vinegar [butter | shovel rettigerate lair | screwdriver heat up 0 100 200 0 20 40 0 5 10 Count Count Count Figure 8: The most common replacements for three selected words: ‘water,’ ‘spoon,’ and ‘freeze.’ These cover several key dimensions: ‘water’ is a broad noun with many proper- ties and affordances, whereas ‘spoons’ are much narrower in scope. Perhaps as a result, RoBERTa performs much butter at examples where ‘spoon’ is the pivot word (90%) versus ‘water’ (75%). Freeze has an accuracy of 66% on the vali- dation set, and shows that verbs are challenging as well. further in Figure 8, where we showcase the most com- mon replacements for three examples: ‘water,’ ‘spoon,’ and ‘freeze.’ While ‘water’ is prevalent in the training set, it is also highly versatile. One can try to substitute it with a vari- ety of different household items, such as ‘milk’ or ‘alcohol,’ often to disastrous effects. 
However, ‘spoons’ have fewer challenging properties. A spoon cannot generally be substi- tuted with a utensil that is sharp or has prongs, such as a fork, a knife, or a toothpick. RoBERTa obtains high accuracy on ‘spoon’ examples, which suggests that it might understand this simple affordance, but does not capture the long tail of affordances associated with ‘water.’ # Qualitative results Our analysis thus far has been on simple-to-analyze sin- gle word expressions, where we have shown that the state- of-the-art language model, RoBERTa, struggles at a nu- anced understanding of key commonsense concepts, such as relations. To further probe the knowledge gap of these strong models, we present qualitative examples in Figure 9. The examples are broadly representative of larger patterns: RoBERTa can recognize clearly ridiculous generations (Fig- ure 9, top left) and understands differences between some commonsense concepts (bottom left). It’s important to note, that in both cases the correct answer is prototypical and something we might expect the models to have seen before. However, it struggles to tell the difference between sub- # Correct examples # Incorrect examples [Goal] Best way to pierce ears. [Sol1] It is best to go to a professional to get your ear pierced to avoid medical problems later. [Sol2] The best way to pierce your ears would be to insert a nee- dle half inch thick into the spot you want pierced. [Goal] How do you reduce wear and tear on the nonstick finish of muffin pans? ¥ # X [Goal] How can I quickly and easily remove strawberry stems? [Sol1] Take a straw and from the top of the strawberry push the straw through the center of the strawberry until the stem pops off. [Sol2] Take a straw and from the bottom of the strawberry push the straw through the center of the strawberry until the stem pops off. [Sol1] Make sure you use paper liners to protect the nonstick finish when baking muffins and cupcakes in muffin pans. [Sol2] Make sure you use grease and flour to protect the non- stick finish when baking muffins and cupcakes in muffin pans. “ # X [Goal] how to add feet to a coaster. [Sol1] cut four slices from a glue stick, and attatch to the coaster with glue. {Sol2] place a board under the coaster, and secure with zip ties X and a glue gun. Figure 9: Qualitative analysis of RoBERTa’s predictions with. Left: Two examples that RoBERTa gets right. Right: two exam- ples that RoBERTa gets incorrect. Short phrases that differ between solution 1 and solution 2 are shown in bold and italics. tle relations such as top and bottom (top right of Figure 9). Moreover, it struggles with identifying non-prototypical sit- uations (bottom right). Though using a gluestick as feet for a coaster is uncommon, to a human familiar with these con- cepts we can visualize the action and its result to verify that the goal has been achieved. Overall, these examples suggest that physical understanding – particularly involving novel combinations of common objects – challenges models that were pretrained on text only. # Related Work Physical understanding is broad domain that touches on ev- erything from scientific knowledge (Schoenick et al. 2016) to the interactive acquisition of knowledge by embodied agents (Thomason et al. 2016). To this end, work related to the goals of our benchmark span the NLP, Computer Vision and Robotics communities. has studied intuitive physics (Wu et al. 2017), cause-effect relationships (Mottaghi et al. 
2016), and what can be reason- ably inferred beyond a single image (Zellers et al. 2019a). Robotics. Learning from interaction and intuitive physics (Agrawal et al. 2016) can also be encoded as priors when exploring the world (Byravan et al. 2018) and internal mod- els of physics, shape, and material strength enable advances in tool usage (Toussaint et al. 2018) or construction (Nair, Balloch, and Chernova 2019). Key to our research aims in this work is helping to build language tools which capture enough physical knowledge to speed up the bootstrapping of robotic-language applications. Language tools should pro- vide strong initial priors for learning (Tellex et al. 2011; Matuszek 2018) that are then refined through interaction and dialogue (Gao et al. 2016). Language. Within NLP, in addition to large scale mod- els, there has also been progress on reasoning about cause and effect effects/implications within these models (Bosse- lut et al. 2019), extracting knowledge from them (Petroni et al. 2019), and investigating where large scale language models fail to capture knowledge of tools and elided proce- dural knowledge in recipes (Bisk et al. 2019). The notion of procedural knowledge and instruction following is a more general related task within vision and robotics. From text alone, work has shown that much can be understood about the implied physical situations of verb usage (Forbes and Choi 2017) and relative sizes of objects (Elazar et al. 2019). Vision. Physical knowledge can be discovered and eval- uated within the visual world. Research has studied pre- dicting visual relationships in images (Krishna et al. 2016) and as well as actions and their dependent objects (Yatskar, the recent Zettlemoyer, and Farhadi 2016). Relatedly, HAKE dataset (Li et al. 2019) specifically annotates which object/body-parts are essential to completing or defining an action. Image data also allows for studying the concrete- ness of nouns and provides a natural path forward for fur- ther investigation (Hessel, Mimno, and Lee 2018). Related to physical commonsense, research in visual commonsense # Conclusion We have evaluated against large-scale pretrained models as they are in vogue as the de facto standard of progress within NLP, but are primarily interested in their performance and failings as a mechanism for advancing the position that learning about the world from language alone, is limiting. Future research, may “match” humans on our dataset by finding a large source of in-domain data and fine-tuning heavily, but this is very much not the point. Philosophi- cally, knowledge should be learned from interaction with the world to eventually be communicated with language. In this work we introduce the Physical Interaction: Question Answering or PIQA benchmark for evaluating and studying physical commonsense understanding in natu- ral language models. We find the best available pretrained models lack an understanding of some of the most basic physical properties of the world around us. Our goal with PIQA is to provide insight and a benchmark for progress to- wards language representations that capture knowledge tra- ditionally only seen or experienced, to enable the construc- tion of language models useful beyond the NLP community. 4 # X # W # W Acknowledgements We thank the anonymous reviewers for their insightful sug- gestions. 
This research was supported in part by NSF (IIS- 1524371, IIS-1714566), DARPA under the CwC program through the ARO (W911NF-15-1-0543), DARPA under the MCS program through NIWC Pacific (N66001-19-2-4031), and the NSF-GRFP No. DGE-1256082. Computations on beaker.org were supported in part by Google Cloud. References Agrawal, P.; Nair, A.; Abbeel, P.; Malik, J.; and Levine, S. 2016. Learning to poke by poking: Experiential learning of intuitive physics. In NeurIPS. Bisk, Y.; Buys, J.; Pichotta, K.; and Choi, Y. 2019. Bench- marking hierarchical script knowledge. In NAACL-HLT. Bosselut, A.; Rashkin, H.; Sap, M.; Malaviya, C.; Celikyil- maz, A.; and Choi, Y. 2019. COMET: Commonsense Trans- formers for Automatic Knowledge Graph Construction. In ACL. Brysbaert, M.; Warriner, A. B.; and Kuperman, V. 2014. Concreteness ratings for 40 thousand generally known en- glish word lemmas. Behavior Research Methods (46):904– 911. Byravan, A.; Leeb, F.; Meier, F.; and Fox, D. 2018. Se3- pose-nets: Structured deep dynamics models for visuomotor planning and control. In ICRA. Devlin, J.; Chang, M.-W.; Lee, K.; and Toutanova, K. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In NAACL-HLT. Elazar, Y.; Mahabal, A.; Ramachandran, D.; Bedrax-Weiss, T.; and Roth, D. 2019. How large are lions? inducing distri- butions over quantitative attributes. In ACL. Forbes, M., and Choi, Y. 2017. Verb physics: Relative phys- ical knowledge of actions and objects. In ACL. Gao, Q.; Doering, M.; Yang, S.; and Chai, J. 2016. Physical causality of action verbs in grounded language understand- ing. In ACL, 1814–1824. Goldberg, Y. 2019. Assessing BERT’s Syntactic Abilities. arXiv:1901.05287. Gururangan, S.; Swayamdipta, S.; Levy, O.; Schwartz, R.; Bowman, S.; and Smith, N. A. 2018. Annotation artifacts in natural language inference data. In NAACL-HLT, 107–112. Hespos, S. J., and Spelke, E. S. 2004. Conceptual precursors to language. Nature 430:453–456. Hessel, J.; Mimno, D.; and Lee, L. 2018. Quantifying the visual concreteness of words and topics in multimodal datasets. In NAACL-HLT, 2194–2205. Krishna, R.; Zhu, Y.; Groth, O.; Johnson, J.; Hata, K.; Kravitz, J.; Chen, S.; Kalantidis, Y.; Li, L.-J.; Shamma, D. A.; Bernstein, M.; and Fei-Fei, L. 2016. Visual genome: Connecting language and vision using crowdsourced dense image annotations. In arXiv:1602.07332. Li, Y.-L.; Xu, L.; Huang, X.; Liu, X.; Ma, Z.; Chen, M.; Wang, S.; Fang, H.-S.; and Lu, C. 2019. Hake: Human ac- tivity knowledge engine. arXiv preprint arXiv:1904.06539. Liu, Y.; Ott, M.; Goyal, N.; Du, J.; Joshi, M.; Chen, D.; Levy, O.; Lewis, M.; Zettlemoyer, L.; and Stoyanov, V. 2019. RoBERTa: A Robustly Optimized BERT Pretraining Ap- proach. arXiv:1907.11692. Matuszek, C. 2018. Grounded Language Learning: Where Robotics and NLP Meet. In IJCAI, 5687 – 5691. Mottaghi, R.; Rastegari, M.; Gupta, A.; and Farhadi, A. 2016. “what happens if...” learning to predict the effect of forces in images. In Leibe, B.; Matas, J.; Sebe, N.; and Welling, M., eds., ECCV, 269–285. Nair, L.; Balloch, J.; and Chernova, S. 2019. Tool Mac- gyvering: Tool Construction Using Geometric Reasoning. In ICRA. Petroni, F.; Rocktschel, T.; Lewis, P.; Bakhtin, A.; Wu, Y.; Miller, A. H.; and Riedel, S. 2019. Language models as knowledge bases? In EMNLP. Poliak, A.; Naradowsky, J.; Haldar, A.; Rudinger, R.; and Van Durme, B. 2018. Hypothesis Only Baselines in Natural In Joint Conference on Lexical and Language Inference. Computational Semantics (StarSem). 
Radford, A.; Narasimhan, K.; Salimans, T.; and Sutskever, I. 2018. Improving language understanding by generative pre-training. Rajpurkar, P.; Zhang, J.; Lopyrev, K.; and Liang, P. 2016. Squad: 100,000+ questions for machine comprehension of text. In EMNLP, 2383–2392. Sakaguchi, K.; Le Bras, R.; Bhagavatula, C.; and Choi, Y. 2020. Winogrande: An adversarial winograd schema chal- lenge at scale. In AAAI. Sap, M.; Rashkin, H.; Chen, D.; Le Bras, R.; and Choi, Y. 2019. Socialiqa: Commonsense reasoning about social in- teractions. In EMNLP. Schoenick, C.; Clark, P.; Tafjord, O.; Turney, P.; and Etzioni, O. 2016. Moving beyond the turing test with the allen ai science challenge. Communications of the ACM. Tellex, S.; Kollar, T.; Dickerson, S.; Walter, M. R.; Banerjee, A. G.; Teller, S.; and Roy, N. 2011. Understanding natural language commands for robotic navigation and mobile ma- In Proceedings of the National Conference on nipulation. Artificial Intelligence. Thomason, J.; Sinapov, J.; Svetlik, M.; Stone, P.; and Mooney, R. J. 2016. Learning Multi-Modal Grounded Lin- guistic Semantics by Playing ”I Spy”. In IJCAI, 3477–3483. Tjong Kim Sang, E. F., and De Meulder, F. 2003. Introduc- tion to the CoNLL-2003 shared task: Language-independent named entity recognition. In NAACL, 142–147. Toussaint, M.; Allen, K. R.; Smith, K. A.; and Tenenbaum, J. B. 2018. Differentiable physics and stable modes for tool- use and manipulation planning. In RSS. Wu, J.; Lu, E.; Kohli, P.; Freeman, B.; and Tenenbaum, J. 2017. Learning to see physics via visual de-animation. In Guyon, I.; Luxburg, U. V.; Bengio, S.; Wallach, H.; Fergus, R.; Vishwanathan, S.; and Garnett, R., eds., NeurIPS. Yatskar, M.; Zettlemoyer, L.; and Farhadi, A. 2016. Sit- uation recognition: Visual semantic role labeling for image understanding. In CVPR. 2018. Zellers, R.; Bisk, Y.; Schwartz, R.; and Choi, Y. SWAG: A Large-Scale Adversarial Dataset for Grounded Commonsense Inference. In EMNLP. Zellers, R.; Bisk, Y.; Farhadi, A.; and Choi, Y. 2019a. From recognition to cognition: Visual commonsense reasoning. In CVPR. Zellers, R.; Holtzman, A.; Bisk, Y.; Farhadi, A.; and Choi, Y. 2019b. HellaSwag: Can a Machine Really Finish Your Sentence? In ACL.
{ "id": "1907.11692" }
1911.07176
Quick and (not so) Dirty: Unsupervised Selection of Justification Sentences for Multi-hop Question Answering
We propose an unsupervised strategy for the selection of justification sentences for multi-hop question answering (QA) that (a) maximizes the relevance of the selected sentences, (b) minimizes the overlap between the selected facts, and (c) maximizes the coverage of both question and answer. This unsupervised sentence selection method can be coupled with any supervised QA approach. We show that the sentences selected by our method improve the performance of a state-of-the-art supervised QA model on two multi-hop QA datasets: AI2's Reasoning Challenge (ARC) and Multi-Sentence Reading Comprehension (MultiRC). We obtain new state-of-the-art performance on both datasets among approaches that do not use external resources for training the QA system: 56.82% F1 on ARC (41.24% on Challenge and 64.49% on Easy) and 26.1% EM0 on MultiRC. Our justification sentences have higher quality than the justifications selected by a strong information retrieval baseline, e.g., by 5.4% F1 in MultiRC. We also show that our unsupervised selection of justification sentences is more stable across domains than a state-of-the-art supervised sentence selection method.
http://arxiv.org/pdf/1911.07176
Vikas Yadav, Steven Bethard, Mihai Surdeanu
cs.CL
Published at EMNLP-IJCNLP 2019 as a long conference paper. Corrected the name reference for Speer et al., 2017
EMNLP-IJCNLP, 2578--2589 (2019)
cs.CL
20191117
20200503
0 2 0 2 y a M 3 ] L C . s c [ 2 v 6 7 1 7 0 . 1 1 9 1 : v i X r a # Quick and (not so) Dirty: Unsupervised Selection of Justification Sentences for Multi-hop Question Answering Vikas Yadav, Steven Bethard, Mihai Surdeanu University of Arizona, Tucson, AZ, USA {vikasy, bethard, msurdeanu}@email.arizona.edu # Abstract We propose an unsupervised strategy for the selection of justification sentences for multi- hop question answering (QA) that (a) maxi- mizes the relevance of the selected sentences, (b) minimizes the overlap between the selected facts, and (c) maximizes the coverage of both question and answer. This unsupervised sen- tence selection method can be coupled with any supervised QA approach. We show that the sentences selected by our method im- prove the performance of a state-of-the-art supervised QA model on two multi-hop QA datasets: AI2’s Reasoning Challenge (ARC) and Multi-Sentence Reading Comprehension (MultiRC). We obtain new state-of-the-art per- formance on both datasets among approaches that do not use external resources for training the QA system: 56.82% F1 on ARC (41.24% on Challenge and 64.49% on Easy) and 26.1% EM0 on MultiRC. Our justification sentences have higher quality than the justifications se- lected by a strong information retrieval base- line, e.g., by 5.4% F1 in MultiRC. We also show that our unsupervised selection of justifi- cation sentences is more stable across domains than a state-of-the-art supervised sentence se- lection method. # Introduction Interpretable machine learning (ML) models, where the end user can understand how a deci- sion was reached, are a critical requirement for the wide adoption of ML solutions in many fields such as healthcare, finance, and law (Samek et al., 2017; Alvarez-Melis and Jaakkola, 2017; Arras et al., 2017; Gilpin et al., 2018; Biran and Cotton, 2017) For complex natural language processing (NLP) such as question answering (QA), human readable explanations of the inference process have been proposed as a way to interpret QA models (Zhou et al., 2018). To which organ system do the esophagus, liver, pancreas, small intestine, and colon belong? (A) reproductive system (B) excretory system (C) digestive system (D) endocrine system ROCC-selected justification sentences: 1. vertebrate digestive system has oral cavity, teeth and pharynx, esophagus and stomach, small intestine, pan- creas, liver and the large intestine 2. digestive system consists liver, stomach, large intestine, small intestine, colon, rectum and anus BM25-selected justification sentences: 1. 2. their digestive system consists of a stomach, liver, pan- creas, small intestine, and a large intestine the liver pancreas and gallbladder are the solid organ of the digestive system Figure 1: A multiple-choice question from the ARC dataset with the correct answer in bold, followed by justification sen- tences selected by our approach (ROCC) vs. sentences se- lected by a strong IR baseline (BM25). ROCC justification sentences fully cover the five key terms in the question (shown in italic), whereas BM25 misses two: esophagus and colon. Further, the second BM25 sentence is largely redundant with the first, not covering other query terms. 
Recently, multiple datasets have been proposed for multi-hop QA, in which questions can only be answered when considering information from multiple sentences and/or documents (Clark et al., 2018; Khashabi et al., 2018a; Yang et al., 2018; Welbl et al., 2018; Mihaylov et al., 2018; Bauer et al., 2018; Dunn et al., 2017; Dhingra et al., 2017; Lai et al., 2017; Rajpurkar et al., 2018; Sun et al., 2019). The task of selecting justification sentences is complex for multi-hop QA, because of the ad- ditional knowledge aggregation requirement (ex- amples of such questions and answers are shown in Figures 1 and 2). Although various neural QA methods have achieved high performance on some of these datasets (Sun et al., 2018; Trivedi et al., 2019; Tymoshenko et al., 2017; Seo et al., 2016; Wang and Jiang, 2016; De Cao et al., 2018; Back et al., 2018), we argue that more effort must be dedicated to explaining their inference process. In this work we propose an unsupervised al- gorithm for the selection of multi-hop justifica- tions from unstructured knowledge bases (KB). Un- like other supervised selection methods (Dehghani et al., 2019; Bao et al., 2016; Lin et al., 2018; Wang et al., 2018b,a; Tran and Niedere´ee, 2018; Trivedi et al., 2019), our approach does not require any training data for justification selection. Unlike ap- proaches that rely on structured KBs, which are ex- pensive to create, (Khashabi et al., 2016; Khot et al., 2017; Zhang et al., 2018; Khashabi et al., 2018b; Cui et al., 2017; Bao et al., 2016), our method op- erates over KBs of only unstructured texts. We demonstrate that our approach has a bigger impact on downstream QA approaches that use these justi- fication sentences as additional signal than a strong baseline that relies on information retrieval (IR). In particular, the contributions of this work are: (1) We propose an unsupervised, non-parametric strategy for the selection of justification sentences for multi-hop question answering (QA) that (a) maximizes the Relevance of the selected sen- tences; (b) minimizes the lexical Overlap between the selected facts; and (c) maximizes the lexical Coverage of both question and answer. We call our approach ROCC. ROCC operates by first cre- ating (2) justification sets from the top n sen- tences selected by the BM25 information retrieval model (Robertson et al., 2009), where k ranges from 2 to n, and then ranking them all by a for- mula that combines the three criteria above. The set with the top score becomes the set of justifi- cations output by ROCC for a given question and candidate answer. As shown in Figure 1, the justifi- cation sentences selected by ROCC perform more meaningful knowledge aggregation than a strong IR baseline (BM25), which does not account for overlap (or complementarity) and coverage. (2) ROCC can be coupled with any supervised QA approach that can use the selected justification sen- tences as additional signal. To demonstrate its ef- fectiveness, we combine ROCC with a state-of-the- art QA method that relies on BERT (Devlin et al., 2018) to classify correct answers, using the text of the question, the answer, and (now) the justification sentences as input. On the Multi-Sentence Reading Comprehension (MultiRC) dataset (Khashabi et al., 2018a), we achieved a gain of 8.3% EM0 with ROCC justifications when compared to the case where the complete comprehension passage was provided to the BERT classifier. 
On AI2’s Reason- ing Challenge (ARC) dataset (Clark et al., 2018), the QA approach enhanced with ROCC justifica- tions outperforms the QA method without justifi- cations by 9.15% accuracy, and the approach that uses top sentences provided by BM25 by 2.88%. Further, we show that the justification sentences se- lected by ROCC are considerably more correct on their own than justifications selected by BM25 (e.g., the justification score in MultiRC was increased by 11.58% when compared to the best performing BM25 justifications), which indicates that the in- terpretability of the overall QA system was also increased. (3) Lastly, our analysis indicates that ROCC is more stable across the different domains in the MultiRC dataset than a supervised strategy for the selection of justification sentences that relies on a dedicated BERT-based classifier, with a difference of over 10% F1 score in some configurations. The ROCC system and the codes for generat- ing all the analysis are provided here - https: //github.com/vikas95/AutoROCC. # 2 Related Work The body of QA work that addresses the selec- tion of justification sentences can be classified into roughly four categories: (a) supervised approaches that require training data to learn how to select justification sentences (i.e., questions and answers coupled with correct justifications); (b) methods that treat justifications as latent variables and learn jointly how to answer questions and how to select justifications from questions and answers alone; (c) approaches that rely on information retrieval to se- lect justification sentences; and, lastly, (d) methods that do not use justification sentences at all. previous works category, (e.g., (Trivedi et al., 2019)) have used entail- ment resources including labeled trained datasets such as SNLI (Bowman et al., 2015) and MultiNLI (Williams et al., 2017) to train components for selecting justification sentences for QA. Other works have explicitly focused on training sentence selection components for QA models (Min et al., In 2018; Lin et al., 2018; Wang et al., 2019). datasets where gold justification sentences are not provided, researchers have trained such compo- nents by retrieving justifications from structured KBs (Cui et al., 2017; Bao et al., 2016; Zhang et al., 2016; Hao et al., 2017) such as ConceptNet (Speer et al., 2017), or from IR systems coupled with denoising components (Wang et al., 2019). While these works offer exciting directions, they all rely on training data for justifications, which is expensive to generate and may not be available in real-world use cases. The second group of methods tend to rely on reinforcement learning (Choi et al., 2017; Lai et al., 2018; Geva and Berant, 2018) or PageRank (Sur- deanu et al., 2008) to learn how to select justifica- tion sentences without explicit training data. Other works have used end-to-end (mostly RNNs with attention mechanisms) QA architectures for learn- ing to pay more attention on better justification sentences (Min et al., 2018; Seo et al., 2016; Yu et al., 2014; Gravina et al., 2018). While these approaches do not require annotated justifications, they need large amounts of question/answer pairs during training so they can discover the latent jus- tifications. In contrast to these two directions, our approach requires no training data at all for the justification selection process. 
The third category of methods utilize IR tech- niques to retrieve justifications from both unstruc- tured (Yadav et al., 2019) and structured (Khashabi et al., 2016) KBs. Our approach is closer in spirit to this direction, but it is adjusted to account for more intentional knowledge aggregation. As we show in Section 4, this is important for both the quality of the justification sentences and the performance of the downstream QA system. The last group of QA approaches learn how to classify answers without any justification sen- tences (Mihaylov et al., 2018; Sun et al., 2018; Devlin et al., 2018). While this has been shown to obtain good performance for answer classification, we do not focus on it in this work because these methods cannot easily explain their inference. Note that some of the works discussed here trans- fer knowledge from external datasets into the QA task they address (Chung et al., 2017; Sun et al., 2018; Pan et al., 2019; Min et al., 2017; Qiu et al., 2018; Chen et al., 2017). In this work, we focus solely on the resources provided in the task itself because such compatible external resources may not be available in real-world applications of QA. # 3 Approach ROCC, coupled with a QA system, operates in the following steps (illustrated in Figure 2): (1) Retrieval of candidate justification sen- tences: For datasets that rely on huge supporting KBs (e.g., ARC), we retrieve the top n sentences1 from this KB using an IR query that concatenates the question and the candidate answer, similar to Clark et al. (2018); Yadav et al. (2019). We im- plemented this using the BM25 IR model with the default parameters in Lucene2. For reading com- prehension datasets where the question is associ- ated with a text passage (e.g., MultiRC), all the sentences in this passage become candidates. (2) Generation of candidate justification sets: Since its focus is on knowledge aggregation, ROCC ranks sets of justification sentences (see below) rather than individual sentences. In this step we create candidate justification sets by generating (2) groups of sentences from the previous n sentences, using multiple values of k. (3) Ranking of candidate justification sets: For every candidate justification set, we calculate its ROCC score (see Section 3.1), which estimates the likelihood that this group of justifications explains the given answer. We then rank the justification sets in descending order of ROCC score, and choose the top set as the group of justifications that is the output of ROCC for the given question and answer. In MultiRC, we rearrange the justification sentences according to their original indexes in the given passage to bring coherence in the selected sequence of sentences. (4) Answer classification: ROCC can be coupled with any supervised QA component for answer classification. In this work, we feed in the question, answer, and justification texts into a state-of-the- art classifier that relies on BERT (see Section 3.2). Because the justification sentences in the reading comprehension use case (e.g., MultiRC) come from the same passage and their sequence is likely to be coherent, we concatenate them into a single pas- sage, and use a single BERT instance for classifica- tion. This approach is shown on the left side of the answer classification component in Figure 2. On the other hand, the justification sentences retrieved from an external KB (e.g., ARC) may not form a coherent passage when aggregated. 
For this reason, in the ARC use case, we classify each justification sentence separately (together with the question and candidate answer), and then average all these scores to produce a single score for the candidate answer (right-hand side of the figure).

1 In this work we used n = 20 as in Yadav et al. (2019). 2 https://lucene.apache.org

[Figure 2 here: a MultiRC passage about Albert Camus (SENT 0–9), a question asking which novel Camus wrote about his childhood, and the candidate answer "The First Man", followed by the ROCC stages: candidate justifications retrieval, generation of candidate justification sets, ranking of candidate justification sets, and BERT-based answer classification producing a candidate score.]

Figure 2: An example of the ROCC process for a question from the MultiRC dataset. Here, ROCC correctly extracts the two justification sentences necessary to explain the correct answer.

# 3.1 Ranking of Candidate Justification Sets

Each set of justifications is ranked based on its ROCC score, which: (a) maximizes the Relevance of the selected sentences; (b) minimizes the lexical Overlap between the selected facts; and (c) maximizes the lexical Coverage of both question and answer (Cques, Cans). The overall score for a given justification set Pi is calculated as:

S(Pi) = R(Pi) · (ε + C(A)) · (ε + C(Q)) / (ε + O(Pi))   (1)

To avoid zeros, we add a small constant (ε = 1 here) to each component that can have a value of 0.3 We detail the components of this formula below.

Relevance (R) We use the Lucene implementation4 of the BM25 IR model (Robertson et al., 2009) to estimate the relevance of each justification sentence to a given question and candidate answer. In particular, we form a query that concatenates the question and candidate answer, and use as underlying document collection (necessary to compute document statistics such as inverse document frequencies (IDF)) either: sentences in the entire KB (for ARC), or all sentences in the corresponding passage in the case of reading comprehension (MultiRC). The arithmetic mean of BM25 scores over all sentences in a given justification set gives the value of R for the entire set.
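As a concrete illustration of steps (1)–(3) and of the relevance component R, the following Python sketch enumerates the (n choose k) candidate justification sets over the top-n retrieved sentences and scores each set's relevance as the arithmetic mean of its BM25 scores. This is not the authors' implementation; the `top_sentences` list and the precomputed `bm25_scores` are assumed inputs for illustration.

```python
from itertools import combinations

def candidate_sets(top_sentences, bm25_scores, k_values):
    """Enumerate candidate justification sets of size k from the top-n sentences,
    paired with their relevance R (the arithmetic mean of their BM25 scores).

    top_sentences: list of n sentences retrieved by BM25 for the question + answer.
    bm25_scores:   list of the corresponding BM25 scores.
    k_values:      iterable of set sizes k (e.g., range(2, n + 1)).
    """
    n = len(top_sentences)
    for k in k_values:
        for idx in combinations(range(n), k):                 # all (n choose k) subsets
            sentences = [top_sentences[i] for i in idx]
            relevance = sum(bm25_scores[i] for i in idx) / k  # R: mean BM25 score of the set
            yield sentences, relevance
```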
Overlap (O) To ensure diversity and complementarity between justification sentences, we compute the overlap between all sentence pairs in a given group. Thus, minimizing this score reduces redundancy and encourages the aggregated sentences to address different parts of the question and answer:

O(S) = [ Σ_{s_i ∈ S} Σ_{s_j ∈ S − s_i} |t(s_i) ∩ t(s_j)| / max(|t(s_i)|, |t(s_j)|) ] / (|S| choose 2)   (2)

where S is the given set of justification sentences; s_i is the i-th sentence in S; and t(s_i) denotes the set of unique terms in sentence s_i. Note that we divide by (|S| choose 2) to normalize across different sizes of justification sets.

Coverage (C) Complementing the overlap score, this component measures the lexical coverage of the question and the answer texts by the given set of justifications S. This coverage is weighted by the IDF of question and answer terms. Thus, maximizing this value encourages the justifications to address more of the meaningful content mentioned in the question (X = Q) and the answer (X = A):

Ct(X) = ∪_{s_i ∈ S} ( t(X) ∩ t(s_i) )   (3)

C(X) = ( Σ_{t=1}^{|Ct(X)|} IDF[Ct(X)[t]] ) / |t(X)|   (4)

where t(X) denotes the unique terms in X, and Ct(X) represents the set of all unique terms in X that are present in any of the sentences of the given justification set. C(X) gives the IDF weighted average of Ct(X) terms.

3 Our R score relies on BM25, which is larger than 0 on the top n sentences. 4 https://lucene.apache.org/core/7_0_1/core/org/apache/lucene/search/similarities/BM25Similarity.html

# 3.2 Answer Classification

As indicated earlier, we propose two flavors for the answer classification component: if the sentences in a justification group come from the same passage and, thus, are likely to be coherent, they are concatenated into a single text before classification, and handled by a single answer classifier. If the sentences come from different texts, they are handled by separate instances of the answer classifier. In the latter case, all scores are averaged to produce a single score for a candidate answer. In all situations we used BERT (Devlin et al., 2018) for answer classification. In particular, we employed BERT as a binary classifier operating over two texts. The first text consists of the concatenated question and answer, and the second text consists of the justification text. The classifier operates over the hidden states of the two texts, i.e., the state corresponding to the [CLS] token (Devlin et al., 2018).5

We observed empirically that pre-training the BERT classifier on all n sentences retrieved by BM25, and then fine-tuning on the ROCC justifications improves performance on all datasets we experimented with. This resembles the transfer learning discussed by Howard and Ruder (2018), where the source domain would be the BM25 sentences, and the target domain the ROCC justifications. However, one important distinction is that, in our case, all this knowledge comes solely from the resources provided within each dataset, and is retrieved using an unsupervised method (BM25). We conjecture that this helped mainly because the pre-training step exposed BERT to more data which, even if imperfect, is topically related to the corresponding question and answer.

5 We used the following hyper parameters with BERT Large: learning rate of 1e-5, maximum sequence length of 128, batch size = 16, number of epochs = 6.
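To make the scoring of Section 3.1 concrete, here is a minimal Python sketch of eqs. (1)–(4). It assumes a plain whitespace tokenizer and a precomputed `idf` dictionary, which are simplifications (the paper computes term statistics with Lucene), and it reads the overlap of eq. (2) as the average pairwise term overlap over the sentences of the set.

```python
from itertools import combinations

EPS = 1.0  # epsilon = 1, the constant added to the components that can be zero

def terms(text):
    """Unique terms of a text; a simple whitespace tokenizer is assumed here."""
    return set(text.lower().split())

def overlap(justifications):
    """O(S): average pairwise term overlap between the selected sentences (eq. 2)."""
    term_sets = [terms(s) for s in justifications]
    pairs = list(combinations(range(len(term_sets)), 2))
    if not pairs:
        return 0.0
    total = sum(
        len(term_sets[i] & term_sets[j]) / max(len(term_sets[i]), len(term_sets[j]))
        for i, j in pairs
    )
    return total / len(pairs)

def coverage(text, justifications, idf):
    """C(X): IDF-weighted coverage of the terms of X by the justification set (eqs. 3-4)."""
    x_terms = terms(text)
    covered = set()
    for s in justifications:
        covered |= x_terms & terms(s)          # Ct(X): terms of X covered by some sentence
    weighted = sum(idf.get(t, 0.0) for t in covered)
    return weighted / max(len(x_terms), 1)

def rocc_score(relevance, question, answer, justifications, idf):
    """S(P) = R(P) * (eps + C(A)) * (eps + C(Q)) / (eps + O(P))  -- eq. (1)."""
    c_q = coverage(question, justifications, idf)
    c_a = coverage(answer, justifications, idf)
    return relevance * (EPS + c_a) * (EPS + c_q) / (EPS + overlap(justifications))
```

In the non-parametric AutoROCC setting, the candidate set with the highest S(P) across all values of k is returned as the justification set for the given question and candidate answer.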
Question + answer text Animal cells obtain energy by || absorbing nutrients Justification set 1) obtain water and nutrient by absorbing them directly into plant cell 2) the animal obtain nourish- ment by absorbing nutrient released by symbiotic bacteria Table 1: Example of a justification set in ARC which was scored by annotator with a precision of 1 2 because the first jus- tification sentence is not relevant, and a coverage of 1 2 because the link between nourishment and energy is not covered. # 4 Empirical Evaluation We evaluated ROCC coupled with the proposed QA approach on two QA datasets. We use the standard train/development/test partitions for each dataset, as well as the standard evaluation measures: accuracy for ARC (Clark et al., 2018), and F1m (macro-F1 score), F1a (micro-F1 score), and EM0 (exact match) for MultiRC (Khashabi et al., 2018a). comprehension Multi-sentence (MultiRC): this reading comprehen- sion dataset implemented as multiple-choice QA (Khashabi et al., 2018a). Each question is accompanied by a supporting passage, which contains the correct answer. We use all sentences from such paragraphs as candidate justifications for the corresponding questions. AI2’s Reasoning Challenge (ARC): this is a multiple-choice question dataset, containing ques- tions from science exams from grade 3 to grade 9 (Clark et al., 2018). The dataset is split in two partitions: Easy and Challenge, where the latter partition contains the more difficult questions that require reasoning. Most of the questions have 4 an- swer choices, with <1% of all the questions having either 3 or 5 answer choices. Importantly, ARC in- cludes a supporting KB of 14.3M unstructured text passages. We use BM25 over this entire KB to re- trieve candidate justification sentences for ROCC. # Justification Results To demonstrate that ROCC has the capacity to se- lect better justification sentences, we also report the quality of the extracted justification sentences. For MultiRC, we report precision/recall/F1 justification scores, computed against the gold justification sen- tences provided by the dataset.6 For ARC, where gold justifications are not provided, we used an 6We use these gold justifications only for evaluation, not for training, since ROCC is an unsupervised algorithm. # External Supervised Method Fly, Fla EMO Justification resource? selection of P R FL justifications? DEVELOPMENT DATASET Baselines 0 No No Predict 1 (Khashabi et al., 2018a) 61.0 59.9 0.8 - 1 No No IR(paragraphs) (Khashabi et al., 2018a) 643 60.0 1.4 - 2 No No SurfaceLR (Khashabi et al., 2018a) 66.5 63.2 118 —- 3 No No Entailment baseline (Trivedi et al., 2019) 51.3 504 - - Previous work 4 Yes Yes EERppxy + FT Wang et al. 
(2019) 70.5 67.8 13.3 - 5 Yes Yes Multee (GloVe) (Trivedi et al., 2019) 71.3 68.3 179 —- 6 No Yes Multee (ELMo) (Trivedi et al., 2019) 70.3 67.3 22.8 —- 7 Yes Yes Multee (ELMo) (Trivedi et al., 2019) 73.0 69.6 22.8 — 8 No Yes RS (Sun et al., 2018) 69.7 67.9 169 — 9 Yes Yes RS (Sun et al., 2018) 73.1* 70.5* 21.8 —- BERT + IR baselines 10 No No BERT + entire passage 65.7 62.7 17.0 174 100.0 29.6 11 No No BERTI 1 66.2 62.8 17.9 61.0 27.1 37.5 12 No No BERTI 68.1 64.8 21.0 51.6 45.6 48.4 13 No No BERT 69.1 65.7 21.6 42.6 56.1 484 14 No No BERT + BM25 (k = 4 sentences) 70.05 66.7 22.3 36.9 64.6 47.0 15 No No BERT + BM25 (k = 5 sentences) 71.2 67.7 23.4 32.7 71.1 448 BERT + parametric ROCC 16 No No BERT + ROCC (k = 2 sentences) 69.8 66.8 22.7 54.7 48.5 51.4 17 No No BERT + ROCC (k sentences) 72.7 69.7 25.2 48.0 63.5 54.7 18 No No BERT + ROCC (k = 4 sentences) 72.2 69.0 25.0 40.6 71.0 51.6 19 No No BERT + ROCC (k = 5 sentences) 71.6 68.7 22.7 35.0 765 48.1 BERT + non-parametric ROCC 20 No No BERT + AutoROCC (k € {2,¢ 72.0 69.0 21.9 48.9 66.5 56.3 21 No No BERT + AutoROCC (k € {2 72.0 68.8 23.5 48.3 67.7 56.4 22 No No BERT + AutoROCC (k € {2 72.1 69.2 25.3 48.2 68.2 56.4 23 No No BERT + BM25 (k from best 71.1 674 23.1 43.8 61.2 51.0 24 No No BERT + AutoROCC (k € {2,3, 4, 5,6}, pre-trained) 72.9 69.6 24.7 48.2 68.2 56.4 Ceiling systems with gold justifications 25 Yes Yes EER, + FT (Wang et al., 2019) 72.3 70.1 19.2 -- 26 No Yes BERT + Gold knowledge 79.1 75.4 37.6 100.0 100.0 100.0 27 - - Human 86.4 83.8 566 — TEST DATASET 28 No No SurfaceLR (Khashabi et al., 2018a) 66.9 63.5 12.8 29 Yes Yes Multee (ELMo) (Trivedi et al., 2019) 73.8 70.4 24.5 - 30 No No BERT + AutoROCC (k € {2,3, 4, 5,6}, pre-trained) 73.8 70.6 26.1 Table 2: Performance on the MultiRC dataset, under various configurations. k indicates the size(s) of the sets of justification Table 2: Performance on the MultiRC dataset, under various configurations. k indicates the size(s) of the sets of justification sentences. In parametric ROCC, k is a hyper parameter; in AutoROCC, k is selected automatically. The pre-trained ROCC configurations pre-train BERT on the entire passage corresponding to the question, before fine tuning it on the ROCC sentences. Bold values with * indicate state-of-the-art results that used external labeled resources or other supervised methods for the selection of justification sentences. Italicized bold values show state-of-the-art results from experiments that do not use any external labeled resources. external annotator to annotate the justifications for a random stratified sample of 70 questions, with 10 questions selected from each grade (3 – 9).The annotator reported two scores: precision, and cov- erage. Precision was defined as the fraction of justi- fication sentences that are relevant for the inference necessary to connect the corresponding question and candidate answer. Coverage was defined as 1 if the justification set completely covers the inference process for the given question and answer, 1/2 if the set of justifications partially addresses the in- ference, and 0 if the justification set is completely irrelevant. Table 1 illustrates these scores with an actual output from ARC. 
# 4.2 Question answering results In addition to comparing ROCC with previously reported results, we include multiple baselines: (a) the BERT answer classifier trained on the entire passage of the given question (MultiRC), to demon- strate that ROCC has the capacity to filter out ir- relevant content from these paragraphs; (b) BERT trained without any justification sentences (ARC), to show that ROCC has the capacity to aggregate useful information from large unstructured KBs, and (c) BERT trained on sentences retrieved using BM25, to demonstrate that ROCC performs better than other unsupervised approaches. Note that the # External Supervised = Method Challenge Easy All Justification resources selection of P, Coverage used? justifications? Baselines 0 No No AI2 IR Solver (Clark et al., 2018) 59.99 23.98 >0 No No Sanity Check (Yadav et al., 2018) 58.36 26.56 >0 2 Yes No Tuple-Inf (Clark et al., 2018) 60.71 23.83 >0 3 Yes No DGEM (Clark et al., 2018) 58.97 27.11 >0 Previous work 4 Yes = Bi-LSTM max-out (Mihaylov et al., 2018) 33.87 34.26 =0 8 No No AHE (Yadav et al., 2019) 33.28 63.22 53.31 9 No - Reading Strategies (Sun et al., 2018) 35.40 63.10 53.94 =0 0 Yes - Reading Strategies (Sun et al., 2018) 42.30* 68.90* 60.19% =0 BERT + IR baselines I No = BERT 35.11 52.75 46.94 2 No No BERT + BM25 (k = 1 sentence) 33.87 56.23 48.85 3 No No BERT + BM25 (k = 2 sentences) 38.65 60.50 53.29 4 No No BERT + BM25 (k 41.04 63.19 55.89 5 No No BERT + BM25 (k 37.9 63.49 53.90 6 No No BERT + BM25 (k = 5 sentences) 38.01 61.28 53.60 BERT + parametric ROCC 7 No No BERT + ROCC (k = 2 sentences) 36.65 60.59 52.69 8 No No BERT + ROCC (k = 3 sentences) 39.29 62.97 55.16 9 No No BERT + ROCC (k = 4 sentences) 40.39 61.13 54.29 20 No No BERT + ROCC (k sentences) 40.62 59.96 53.58 BERT + non-parametric ROCC 21 No No BERT + AutoROCC (k € {2, 3, ...20}) 40.73 63.64 56.09 48.04, 62.50 22 No No BERT + BM25 (k from best AutoROCC) 39.24 61.01 53.83 42.55, 55.88 23 No No BERT + AutoROCC (k € {2, 3, ...20}, pre-trained) 41.24 64.49 56.82 48.04, 62.50 63.64 56.09 48.04, 62.50 61.01 53.83 42.55, 55.88 64.49 56.82 48.04, 62.50 Table 3: Performance on the ARC dataset, under various configurations. Notations are the same as in Table 2. BM25 baseline has an additional hyper parameter: the number of sentences to be considered (k). Table 2 reports comprehensive results on MultiRC, including both overall QA performance, measured using F1m, F1a, and EM0, as well as justification quality, measured using standard preci- sion (P), recall (R), and F1. Note that the bulk of the results are reported on the development partition. The last row in the table reports results on the test partition, computed using the official submission portal which can be accessed only once per model (including its variants). To understand ROCC’s be- havior, the table includes both the parametric form of ROCC, where the size of the justification sets (k) is manually tuned as well as the non-parametric ROCC, where k is automatically selected in the third step of the ROCC algorithm (see Figure 2) by sorting across all sizes of justification sets together, instead of sorting within each value of k. Table 3 lists equivalent results on ARC. forms the previous best result in MultiRC by 2.5 EM0 points on the development partition (row 24 vs. row 6), and 1.6 EM0 points on test (row 30 vs. row 29). In ARC, ROCC outperforms the previous best approach by 5.8% accuracy on the Challenge partition, and 2.9% overall (row 23 vs. row 9). 
(2) On both datasets, the non-parametric form of ROCC (AutoROCC) slightly outperforms the para- metric variant. Importantly, it always achieves higher justification scores compared to the paramet- ric ROCC. In MultiRC, AutoROCC outperforms our baseline of BERT + entire passage (row 10 vs 22) by 8.3% EM0, indicating that AutoROCC can filter out irrelevant content. In ARC, AutoROCC outperforms the baseline with no justification sen- tences by 9.1% (row 21 vs row 11), demonstrating that ROCC aggregates useful knowledge. We draw several observations from these tables: (1) Despite its simplicity, ROCC combined with the BERT classifier obtains new state-of-the-art per- formance on both MultiRC and ARC for the class of approaches that do not use external resources to either train the justification sentence selection or the answer classifier. For example, ROCC outper- (3) The results of the parametric forms of ROCC (rows 16 – 19 in Table 2 and rows 17 – 20 in Table 3) indicate that performance continues to increase until k = 4 in MultiRC and k = 3 in ARC. This indicates that: (a) knowledge aggrega- tion is beneficial for these tasks; (b) ROCC can robustly handle non-trivial cases of aggregation with larger values of k; and (c) similar to other QA methods (Chen and Durrett, 2019), performance train/test AutoROCC BERT+All passages BERT+Science textbook BERT+Fiction BERT+News GPT-2 (Wang et al., 2019) Science textbook 54.57 55.15 55.67 45.16 44.11 - Fiction News Wiki Society, wikiMovie articles Summaries Law and Justice 61.06 58.79 48.84 50.94 58.30 - 53.88 55.46 41.01 57.60 50.77 - 54.32 68.77 51.45 63.05 68.82 - 60.49 65.14 50.06 63.13 65.45 - 57.10 57.39 54.96 59.98 57.01 - All 56.44 60.90 50.79 58.31 59.30 60.7 Table 4: Domain robustness of the non-parametric ROCC vs. a supervised sentence selection model, evaluated on the gold justification sentences from MultiRC. Each column represents a section of the MultiRC development set. Each row after AutoROCC represents a justification sentence selection component trained only on the specified section of MultiRC (these sections are listed in descending order of the number of passages in the training data). decreases for large values of k, suggesting that knowledge aggregation remains an open research challenge. (4) The justification scores in both datasets are con- siderably higher than the equivalent configuration that uses BM25 instead of ROCC (i.e., row 24 vs. row 23 in Table 2, and row 23 vs. row 22 in Ta- ble 3). This confirms that the joint scoring of sets of justifications that ROCC performs is better than the individual ranking of justification sentences per- formed by standard IR models such as BM25. # 4.3 Domain Robustness Analysis To understand ROCC’s domain robustness, we compared it against a supervised BERT-based clas- sifier for the selection of justification sentences, as well as against GPT-2 (Wang et al., 2019). For this experiment, we used MultiRC, where gold justifi- cations are provided. We used this data to train a classifier for the selection of justification sentences on various domain-specific sections of MultiRC. The results of this experiment are shown in Table 4. Unsurprisingly, training and testing in the same do- main (e.g., Fiction) leads to the best performance on sentence selection. However, ROCC is more stable across domains than the supervised sentence selection component, with a difference of over 10 F1 points in some configurations. 
This suggests that ROCC is a better solution for real-world use cases where the distribution of the test data may be very different from the training data.

Compared to BERT, the unsupervised AutoROCC achieves almost the same or better performance in the majority of the domains except Wiki articles and News. We conjecture this happens because the BERT language model was trained on a large text corpus that comes from these two domains. However, importantly, AutoROCC is more robust across domains that are different from these two, since it is an unsupervised approach that is not tuned for any specific domain.

The ARC dataset does not provide justification sentences, so we instead ask how well our question-answering models do on a related inference task, the SciTail entailment dataset (Khot et al., 2018). We trained three QA classifiers on the ARC dataset: BERT with no justification, BERT with BM25 (k = 4) justifications, and BERT with AutoROCC justifications. We tested these on SciTail, and achieved 64.49%, 69.70%, and 73.46% accuracy, respectively, indicating that AutoROCC's knowledge aggregation is a valid proxy for entailment.

Ablation | ARC | MultiRC EM0 | MultiRC Justification F1
0 Full AutoROCC | 56.09 | 25.29 | 56.44
1 −IDF | 54.11 | 24.65 | 54.19
2 −C(A) | 54.90 | 21.82 | 52.93
3 −C(Q) | 54.66 | 23.61 | 52.09
4 −O | 55.88 | 24.03 | 55.97
5 R* | 53.90 | 23.40 | 44.81

Table 5: Ablation study, removing different components of ROCC. The scores are reported on the ARC test set and MultiRC dev set. R* denotes the best approach that relies just on the R score. The hyper parameter k in R* was tuned on the development partition of the respective dataset.

# 4.4 Ablation Analysis

Table 5 shows an ablation of the different components of ROCC. Row 0 reports the score from the full AutoROCC model. In row 1, we remove IDF weights from coverage calculations (see eq. (4))
The latter approach relaxes the re- quirement for lexical match, i.e., two tokens are considered to be matched when the cosine similar- ity of their embedding vectors is larger than 0.95.7 As shown in Table 7, the alignment-based ROCC indeed performs better than the ROCC that relies on lexical match. However, the improvements are not large, e.g., the maximum improvement is 1.6% (when k = 4), which indicates that ROCC is robust to a certain extent to lexical variation. 7This threshold was tuned on the MultiRC development set. We used 100-dimensional GloVe embeddings for this experiment, which performed similarly to larger embedding vectors (300), but allowed for faster experiments. ROCC (k sentences) ROCC (k = 2 sentences) ROCC (k = 3 sentences) ROCC (k = 4 sentences) ROCC (k = 5 sentences) Lexical Align. ROCC ROCC 51.4 51.4 55.5 54.7 53.2 51.6 49.2 48.1 Table 7: Justification selection performance of the ROCC configuration that uses lexical match (BM25) to retrieve candidate justifications (Lexical ROCC), com- pared against a ROCC variant that uses the semantic alignment approach of Yadav et al. (2018) to retrieve candidates (Align. ROCC). This experiment used the MultiRC development dataset. # 5 Conclusion We introduced ROCC, a simple unsupervised ap- proach for selecting justification sentences for ques- tion answering, which balances relevance, overlap of selected sentences, and coverage of the ques- tion and answer. We coupled this method with a state-of-the-art BERT-based supervised question answering system, and achieved a new state-of- the-art on the MultiRC and ARC datasets among approaches that do not use external resources dur- ing training. We showed that ROCC-based QA approaches are more robust across domains, and generalize better to other related tasks like entail- ment. In the future, we envision that ROCC scores can be used as distant supervision signal to train supervised justification selection methods. # Acknowledgments This work was supported by the Defense Ad- vanced Research Projects Agency (DARPA) un- der the World Modelers program, grant number W911NF1810014, and by the National Science Foundation (NSF) under grant IIS-1815948. Mihai Surdeanu declares a financial interest in lum.ai. This interest has been properly disclosed to the University of Arizona Institutional Review Committee and is managed in accordance with its conflict of interest policies. # References David Alvarez-Melis and Tommi S Jaakkola. 2017. A causal framework for explaining the predictions of black-box sequence-to-sequence models. arXiv preprint arXiv:1707.01943. Leila Arras, Franziska Horn, Gr´egoire Montavon, Klaus-Robert M¨uller, and Wojciech Samek. 2017. ” what is relevant in a text document?”: An in- terpretable machine learning approach. PloS one, 12(8):e0181142. Seohyun Back, Seunghak Yu, Sathish Reddy Indurthi, Jihie Kim, and Jaegul Choo. 2018. Memoreader: Large-scale reading comprehension through neural memory controller. In Proceedings of the 2018 Con- ference on Empirical Methods in Natural Language Processing, pages 2131–2140. Junwei Bao, Nan Duan, Zhao Yan, Ming Zhou, and Tiejun Zhao. 2016. Constraint-based question an- In Proceedings of swering with knowledge graph. COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, pages 2503–2514. Lisa Bauer, Yicheng Wang, and Mohit Bansal. 2018. Commonsense for generative multi-hop question an- swering tasks. arXiv preprint arXiv:1809.06309. Or Biran and Courtenay Cotton. 2017. 
Explanation and justification in machine learning: A survey. In IJCAI-17 workshop on explainable AI (XAI), vol- ume 8. Samuel R Bowman, Gabor Angeli, Christopher Potts, and Christopher D Manning. 2015. A large anno- tated corpus for learning natural language inference. arXiv preprint arXiv:1508.05326. Danqi Chen, Adam Fisch, Jason Weston, and An- Reading wikipedia to an- arXiv preprint toine Bordes. 2017. swer open-domain questions. arXiv:1704.00051. Jifan Chen and Greg Durrett. 2019. Understand- ing dataset design choices for multi-hop reasoning. arXiv preprint arXiv:1904.12106. Eunsol Choi, Daniel Hewlett, Jakob Uszkoreit, Illia Polosukhin, Alexandre Lacoste, and Jonathan Be- rant. 2017. Coarse-to-fine question answering for long documents. In Proceedings of the 55th Annual Meeting of the Association for Computational Lin- guistics (Volume 1: Long Papers), pages 209–220. Yu-An Chung, Hung-Yi Lee, and James Glass. Supervised and unsupervised transfer arXiv preprint 2017. learning for question answering. arXiv:1711.05345. Peter Clark, Isaac Cowhey, Oren Etzioni, Tushar Khot, Ashish Sabharwal, Carissa Schoenick, and Oyvind Tafjord. 2018. Think you have solved question an- swering? try arc, the ai2 reasoning challenge. arXiv preprint arXiv:1803.05457. Wanyun Cui, Yanghua Xiao, Haixun Wang, Yangqiu Song, Seung-won Hwang, and Wei Wang. 2017. Kbqa: learning question answering over qa corpora and knowledge bases. Proceedings of the VLDB En- dowment, 10(5):565–576. Nicola De Cao, Wilker Aziz, and Ivan Titov. 2018. Question answering by reasoning across documents with graph convolutional networks. arXiv preprint arXiv:1808.09920. Mostafa Dehghani, Hosein Azarbonyad, Jaap Kamps, and Maarten de Rijke. 2019. Learning to transform, combine, and reason in open-domain question an- swering. In Proceedings of the Twelfth ACM Inter- national Conference on Web Search and Data Min- ing, pages 681–689. ACM. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understand- ing. arXiv preprint arXiv:1810.04805. Bhuwan Dhingra, Kathryn Mazaitis, and William W Cohen. 2017. Quasar: Datasets for question an- arXiv preprint swering by search and reading. arXiv:1707.03904. Matthew Dunn, Levent Sagun, Mike Higgins, V Ugur Guney, Volkan Cirik, and Kyunghyun Cho. 2017. Searchqa: A new q&a dataset augmented with arXiv preprint context from a search engine. arXiv:1704.05179. Mor Geva and Jonathan Berant. 2018. Learning to search in long documents using document structure. arXiv preprint arXiv:1806.03529. Leilani H Gilpin, David Bau, Ben Z Yuan, Ayesha Ba- jwa, Michael Specter, and Lalana Kagal. 2018. Ex- plaining explanations: An overview of interpretabil- In 2018 IEEE 5th Inter- ity of machine learning. national Conference on Data Science and Advanced Analytics (DSAA), pages 80–89. IEEE. Alessio Gravina, Federico Rossetto, Silvia Severini, and Giuseppe Attardi. 2018. Cross attention for In 2nd Work- selection-based question answering. shop on Natural Language for Artificial Intelligence. Aachen: R. Piskac. Yanchao Hao, Yuanzhe Zhang, Kang Liu, Shizhu He, Zhanyi Liu, Hua Wu, and Jun Zhao. 2017. An end- to-end model for question answering over knowl- edge base with cross-attention combining global knowledge. In Proceedings of the 55th Annual Meet- ing of the Association for Computational Linguistics (Volume 1: Long Papers), pages 221–231. Jeremy Howard and Sebastian Ruder. 2018. 
Univer- sal language model fine-tuning for text classification. arXiv preprint arXiv:1801.06146. Daniel Khashabi, Snigdha Chaturvedi, Michael Roth, Shyam Upadhyay, and Dan Roth. 2018a. Looking beyond the surface: A challenge set for reading com- prehension over multiple sentences. In Proceedings of the 2018 Conference of the North American Chap- ter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Pa- pers), pages 252–262. Daniel Khashabi, Tushar Khot, Ashish Sabhar- wal, Peter Clark, Oren Etzioni, and Dan Roth. 2016. Question answering via integer programming arXiv preprint over semi-structured knowledge. arXiv:1604.06076. Daniel Khashabi, Tushar Khot, Ashish Sabharwal, and Dan Roth. 2018b. Question answering as global rea- soning over semantic abstractions. In Thirty-Second AAAI Conference on Artificial Intelligence. Tushar Khot, Ashish Sabharwal, and Peter Clark. 2017. Answering complex questions using open informa- tion extraction. arXiv preprint arXiv:1704.05572. Tushar Khot, Ashish Sabharwal, and Peter Clark. 2018. Scitail: A textual entailment dataset from science question answering. In Thirty-Second AAAI Confer- ence on Artificial Intelligence. Guokun Lai, Qizhe Xie, Hanxiao Liu, Yiming Yang, and Eduard Hovy. 2017. Race: Large-scale reading comprehension dataset from examinations. arXiv preprint arXiv:1704.04683. Tuan Manh Lai, Trung Bui, and Sheng Li. 2018. A review on deep learning techniques applied to an- In Proceedings of the 27th Inter- swer selection. national Conference on Computational Linguistics, pages 2132–2144. Yankai Lin, Haozhe Ji, Zhiyuan Liu, and Maosong Sun. 2018. Denoising distantly supervised open-domain question answering. In Proceedings of the 56th An- nual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1736– 1745. Todor Mihaylov, Peter Clark, Tushar Khot, and Ashish Sabharwal. 2018. Can a suit of armor conduct elec- tricity? a new dataset for open book question answer- ing. arXiv preprint arXiv:1809.02789. Sewon Min, Minjoon Seo, and Hannaneh Hajishirzi. 2017. Question answering through transfer learn- ing from large fine-grained supervision data. arXiv preprint arXiv:1702.02171. Sewon Min, Victor Zhong, Richard Socher, and Caim- ing Xiong. 2018. Efficient and robust question answering from minimal context over documents. arXiv preprint arXiv:1805.08092. Xiaoman Pan, Kai Sun, Dian Yu, Heng Ji, and Dong Yu. 2019. Improving question answering with external knowledge. arXiv preprint arXiv:1902.00993. Minghui Qiu, Liu Yang, Feng Ji, Weipeng Zhao, Wei Zhou, Jun Huang, Haiqing Chen, W Bruce Croft, and Wei Lin. 2018. Transfer learning for context- aware question matching in information-seeking arXiv preprint conversations in e-commerce. arXiv:1806.05434. Pranav Rajpurkar, Robin Jia, and Percy Liang. 2018. Know what you don’t know: Unanswerable ques- tions for squad. arXiv preprint arXiv:1806.03822. Stephen Robertson, Hugo Zaragoza, et al. 2009. The probabilistic relevance framework: Bm25 and be- yond. Foundations and Trends®) in Information Re- trieval, 3(4):333-389. Wojciech Samek, Thomas Wiegand, and Klaus-Robert M¨uller. 2017. Explainable artificial intelligence: Understanding, visualizing and interpreting deep learning models. arXiv preprint arXiv:1708.08296. Minjoon Seo, Aniruddha Kembhavi, Ali Farhadi, and Hannaneh Hajishirzi. 2016. Bidirectional attention flow for machine comprehension. arXiv preprint arXiv:1611.01603. Robyn Speer, Joshua Chin, and Catherine Havasi. 2017. 
Conceptnet 5.5: An open multilingual graph of gen- eral knowledge. In AAAI, pages 4444–4451. Kai Sun, Dian Yu, Jianshu Chen, Dong Yu, Yejin Choi, and Claire Cardie. 2019. Dream: A challenge dataset and models for dialogue-based reading com- prehension. arXiv preprint arXiv:1902.00164. Kai Sun, Dian Yu, Dong Yu, and Claire Cardie. Improving machine reading comprehension arXiv preprint 2018. with general reading strategies. arXiv:1810.13441. Mihai Surdeanu, Massimiliano Ciaramita, and Hugo Zaragoza. 2008. Learning to rank answers on large In Proceedings of ACL-08: online qa collections. HLT, pages 719–727. Nam Khanh Tran and Claudia Niedere´ee. 2018. Mul- tihop attention networks for question answer match- ing. In The 41st International ACM SIGIR Confer- ence on Research & Development in Information Re- trieval, pages 325–334. ACM. Harsh Trivedi, Heeyoung Kwon, Tushar Khot, Ashish Sabharwal, and Niranjan Balasubramanian. 2019. Repurposing entailment for multi-hop question an- swering tasks. arXiv preprint arXiv:1904.09380. and Alessandro Moschitti. 2017. Ranking kernels for structures and embeddings: A hybrid preference and classification model. In EMNLP. Hai Wang, Dian Yu, Kai Sun, Jianshu Chen, Dong Yu, Dan Roth, and David McAllester. 2019. Evidence sentence extraction for machine reading comprehen- sion. arXiv preprint arXiv:1902.08852. Shuohang Wang and Jing Jiang. 2016. A compare- aggregate model for matching text sequences. arXiv preprint arXiv:1611.01747. Shuohang Wang, Mo YU, Jing JIANG, Wei ZHANG, Xiaoxiao GUO, Shiyu CHANG, Zhiguo WANG, Tim KLINGER, Gerald TESAURO, and Murray CAMPBELL. 2018a. Evidence aggregation for an- swer re-ranking in open-domain question answering. Yizhong Wang, Kai Liu, Jing Liu, Wei He, Yajuan Lyu, Hua Wu, Sujian Li, and Haifeng Wang. 2018b. Multi-passage machine reading comprehension with cross-passage answer verification. arXiv preprint arXiv:1805.02220. Johannes Welbl, Pontus Stenetorp, and Sebastian Riedel. 2018. Constructing datasets for multi-hop reading comprehension across documents. Transac- tions of the Association of Computational Linguis- tics, 6:287–302. Adina Williams, Nikita Nangia, and Samuel R Bow- man. 2017. A broad-coverage challenge corpus for arXiv sentence understanding through inference. preprint arXiv:1704.05426. Vikas Yadav, Steven Bethard, and Mihai Surdeanu. 2019. Alignment over heterogeneous embeddings for question answering. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, (Long Papers), Minneapo- lis, USA. Association for Computational Linguis- tics. Vikas Yadav, Rebecca Sharp, and Mihai Surdeanu. 2018. Sanity check: A strong alignment and infor- mation retrieval baseline for question answering. In The 41st International ACM SIGIR Conference on Research & Development in Information Retrieval, pages 1217–1220. ACM. Zhilin Yang, Peng Qi, Saizheng Zhang, Yoshua Ben- gio, William W Cohen, Ruslan Salakhutdinov, and Christopher D Manning. 2018. Hotpotqa: A dataset for diverse, explainable multi-hop question answer- ing. arXiv preprint arXiv:1809.09600. Lei Yu, Karl Moritz Hermann, Phil Blunsom, and Stephen Pulman. 2014. Deep learning for answer sentence selection. arXiv preprint arXiv:1412.1632. Yuanzhe Zhang, Shizhu He, Kang Liu, and Jun Zhao. 2016. A joint model for question answering over multiple knowledge bases. In Thirtieth AAAI Con- ference on Artificial Intelligence. Yuyu Zhang, Hanjun Dai, Kamil Toraman, and Le Song. 2018. 
Kgˆ2: Learning to reason science exam questions with contextual knowledge graph embeddings. CoRR, abs/1805.12393. Mantong Zhou, Minlie Huang, and Xiaoyan Zhu. 2018. An interpretable reasoning network for multi-relation question answering. arXiv preprint arXiv:1801.04726.
{ "id": "1704.05572" }
1911.05722
Momentum Contrast for Unsupervised Visual Representation Learning
We present Momentum Contrast (MoCo) for unsupervised visual representation learning. From a perspective on contrastive learning as dictionary look-up, we build a dynamic dictionary with a queue and a moving-averaged encoder. This enables building a large and consistent dictionary on-the-fly that facilitates contrastive unsupervised learning. MoCo provides competitive results under the common linear protocol on ImageNet classification. More importantly, the representations learned by MoCo transfer well to downstream tasks. MoCo can outperform its supervised pre-training counterpart in 7 detection/segmentation tasks on PASCAL VOC, COCO, and other datasets, sometimes surpassing it by large margins. This suggests that the gap between unsupervised and supervised representation learning has been largely closed in many vision tasks.
http://arxiv.org/pdf/1911.05722
Kaiming He, Haoqi Fan, Yuxin Wu, Saining Xie, Ross Girshick
cs.CV
CVPR 2020 camera-ready. Code: https://github.com/facebookresearch/moco
null
cs.CV
20191113
20200323
0 2 0 2 r a M 3 2 ] V C . s c [ 3 v 2 2 7 5 0 . 1 1 9 1 : v i X r a # Momentum Contrast for Unsupervised Visual Representation Learning Kaiming He Haoqi Fan Yuxin Wu Saining Xie Ross Girshick # Facebook AI Research (FAIR) Code: https://github.com/facebookresearch/moco # Abstract We present Momentum Contrast (MoCo) for unsuper- vised visual representation learning. From a perspective on contrastive learning [29] as dictionary look-up, we build a dynamic dictionary with a queue and a moving-averaged encoder. This enables building a large and consistent dic- tionary on-the-fly that facilitates contrastive unsupervised learning. MoCo provides competitive results under the common linear protocol on ImageNet classification. More importantly, the representations learned by MoCo transfer well to downstream tasks. MoCo can outperform its super- vised pre-training counterpart in 7 detection/segmentation tasks on PASCAL VOC, COCO, and other datasets, some- times surpassing it by large margins. This suggests that the gap between unsupervised and supervised representa- tion learning has been largely closed in many vision tasks. # 1. Introduction Unsupervised representation learning is highly success- ful in natural language processing, e.g., as shown by GPT [50, 51] and BERT [12]. But supervised pre-training is still dominant in computer vision, where unsupervised meth- ods generally lag behind. The reason may stem from dif- ferences in their respective signal spaces. Language tasks have discrete signal spaces (words, sub-word units, etc.) for building tokenized dictionaries, on which unsupervised learning can be based. Computer vision, in contrast, further concerns dictionary building [54, 9, 5], as the raw signal is in a continuous, high-dimensional space and is not struc- tured for human communication (e.g., unlike words). contrastive loss similarity q ko ky kp... queue encoder momentum encoder query key .key _ key x Up Uy" Vy” ... Figure 1. Momentum Contrast (MoCo) trains a visual represen- tation encoder by matching an encoded query q to a dictionary of encoded keys using a contrastive loss. The dictionary keys {k0, k1, k2, ...} are defined on-the-fly by a set of data samples. The dictionary is built as a queue, with the current mini-batch en- queued and the oldest mini-batch dequeued, decoupling it from the mini-batch size. The keys are encoded by a slowly progressing encoder, driven by a momentum update with the query encoder. This method enables a large and consistent dictionary for learning visual representations. From this perspective, we hypothesize that it is desirable to build dictionaries that are: (i) large and (ii) consistent as they evolve during training. Intuitively, a larger dictio- nary may better sample the underlying continuous, high- dimensional visual space, while the keys in the dictionary should be represented by the same or similar encoder so that their comparisons to the query are consistent. However, ex- isting methods that use contrastive losses can be limited in one of these two aspects (discussed later in context). Several recent studies [61, 46, 36, 66, 35, 56, 2] present promising results on unsupervised visual representation learning using approaches related to the contrastive loss [29]. Though driven by various motivations, these methods can be thought of as building dynamic dictionaries. The “keys” (tokens) in the dictionary are sampled from data (e.g., images or patches) and are represented by an encoder network. 
Unsupervised learning trains encoders to perform dictionary look-up: an encoded “query” should be similar to its matching key and dissimilar to others. Learning is formulated as minimizing a contrastive loss [29]. We present Momentum Contrast (MoCo) as a way of building large and consistent dictionaries for unsupervised learning with a contrastive loss (Figure 1). We maintain the dictionary as a queue of data samples: the encoded repre- sentations of the current mini-batch are enqueued, and the oldest are dequeued. The queue decouples the dictionary size from the mini-batch size, allowing it to be large. More- over, as the dictionary keys come from the preceding sev- eral mini-batches, a slowly progressing key encoder, imple- mented as a momentum-based moving average of the query encoder, is proposed to maintain consistency. 1 MoCo is a mechanism for building dynamic dictionar- ies for contrastive learning, and can be used with various pretext tasks. In this paper, we follow a simple instance discrimination task [61, 63, 2]: a query matches a key if they are encoded views (e.g., different crops) of the same image. Using this pretext task, MoCo shows competitive results under the common protocol of linear classification in the ImageNet dataset [11]. A main purpose of unsupervised learning is to pre-train representations (i.e., features) that can be transferred to downstream tasks by fine-tuning. We show that in 7 down- stream tasks related to detection or segmentation, MoCo unsupervised pre-training can surpass its ImageNet super- vised counterpart, in some cases by nontrivial margins. In these experiments, we explore MoCo pre-trained on Ima- geNet or on a one-billion Instagram image set, demonstrat- ing that MoCo can work well in a more real-world, billion- image scale, and relatively uncurated scenario. These re- sults show that MoCo largely closes the gap between un- supervised and supervised representation learning in many computer vision tasks, and can serve as an alternative to Im- ageNet supervised pre-training in several applications. # 2. Related Work Unsupervised/self-supervised1 learning methods gener- ally involve two aspects: pretext tasks and loss functions. The term “pretext” implies that the task being solved is not of genuine interest, but is solved only for the true purpose of learning a good data representation. Loss functions can often be investigated independently of pretext tasks. MoCo focuses on the loss function aspect. Next we discuss related studies with respect to these two aspects. Loss functions. A common way of defining a loss function is to measure the difference between a model’s prediction and a fixed target, such as reconstructing the input pixels (e.g., auto-encoders) by L1 or L2 losses, or classifying the input into pre-defined categories (e.g., eight positions [13], color bins [64]) by cross-entropy or margin-based losses. Other alternatives, as described next, are also possible. Contrastive losses [29] measure the similarities of sam- ple pairs in a representation space. Instead of matching an input to a fixed target, in contrastive loss formulations the target can vary on-the-fly during training and can be defined in terms of the data representation computed by a network [29]. Contrastive learning is at the core of several recent works on unsupervised learning [61, 46, 36, 66, 35, 56, 2], which we elaborate on later in context (Sec. 3.1). Adversarial losses [24] measure the difference between probability distributions. 
It is a widely successful technique for unsupervised data generation. Adversarial methods for representation learning are explored in [15, 16]. There are relations (see [24]) between generative adversarial networks and noise-contrastive estimation (NCE) [28].

1 Self-supervised learning is a form of unsupervised learning. Their distinction is informal in the existing literature. In this paper, we use the more classical term of "unsupervised learning", in the sense of "not supervised by human-annotated labels".

Pretext tasks. A wide range of pretext tasks have been proposed. Examples include recovering the input under some corruption, e.g., denoising auto-encoders [58], context auto-encoders [48], or cross-channel auto-encoders (colorization) [64, 65]. Some pretext tasks form pseudo-labels by, e.g., transformations of a single ("exemplar") image [17], patch orderings [13, 45], tracking [59] or segmenting objects [47] in videos, or clustering features [3, 4].

Contrastive learning vs. pretext tasks. Various pretext tasks can be based on some form of contrastive loss functions. The instance discrimination method [61] is related to the exemplar-based task [17] and NCE [28]. The pretext task in contrastive predictive coding (CPC) [46] is a form of context auto-encoding [48], and in contrastive multiview coding (CMC) [56] it is related to colorization [64].

# 3. Method

# 3.1. Contrastive Learning as Dictionary Look-up

Contrastive learning [29], and its recent developments, can be thought of as training an encoder for a dictionary look-up task, as described next.

Consider an encoded query q and a set of encoded samples {k0, k1, k2, ...} that are the keys of a dictionary. Assume that there is a single key (denoted as k+) in the dictionary that q matches. A contrastive loss [29] is a function whose value is low when q is similar to its positive key k+ and dissimilar to all other keys (considered negative keys for q). With similarity measured by dot product, a form of a contrastive loss function, called InfoNCE [46], is considered in this paper:

L_q = − log ( exp(q·k+ / τ) / Σ_{i=0}^{K} exp(q·k_i / τ) )      (1)

where τ is a temperature hyper-parameter per [61]. The sum is over one positive and K negative samples. Intuitively, this loss is the log loss of a (K+1)-way softmax-based classifier that tries to classify q as k+. Contrastive loss functions can also be based on other forms [29, 59, 61, 36], such as margin-based losses and variants of NCE losses.

The contrastive loss serves as an unsupervised objective function for training the encoder networks that represent the queries and keys [29]. In general, the query representation is q = fq(xq) where fq is an encoder network and xq is a query sample (likewise, k = fk(xk)). Their instantiations depend on the specific pretext task. The input xq and xk can be images [29, 61, 63], patches [46], or context consisting of a set of patches [46]. The networks fq and fk can be identical [29, 59, 63], partially shared [46, 36, 2], or different [56].

Figure 2. Conceptual comparison of three contrastive loss mechanisms (empirical comparisons are in Figure 3 and Table 3). Here we illustrate one pair of query and key. The three mechanisms differ in how the keys are maintained and how the key encoder is updated.
(a): The encoders for computing the query and key representations are updated end-to-end by back-propagation (the two encoders can be different). (b): The key representations are sampled from a memory bank [61]. (c): MoCo encodes the new keys on-the-fly by a momentum-updated encoder, and maintains a queue (not illustrated in this figure) of keys. # 3.2. Momentum Contrast From the above perspective, contrastive learning is a way of building a discrete dictionary on high-dimensional con- tinuous inputs such as images. The dictionary is dynamic in the sense that the keys are randomly sampled, and that the key encoder evolves during training. Our hypothesis is that good features can be learned by a large dictionary that cov- ers a rich set of negative samples, while the encoder for the dictionary keys is kept as consistent as possible despite its evolution. Based on this motivation, we present Momentum Contrast as described next. Dictionary as a queue. At the core of our approach is maintaining the dictionary as a queue of data samples. This allows us to reuse the encoded keys from the immediate pre- ceding mini-batches. The introduction of a queue decouples the dictionary size from the mini-batch size. Our dictionary size can be much larger than a typical mini-batch size, and can be flexibly and independently set as a hyper-parameter. The samples in the dictionary are progressively replaced. The current mini-batch is enqueued to the dictionary, and the oldest mini-batch in the queue is removed. The dictio- nary always represents a sampled subset of all data, while the extra computation of maintaining this dictionary is man- ageable. Moreover, removing the oldest mini-batch can be beneficial, because its encoded keys are the most outdated and thus the least consistent with the newest ones. Momentum update. Using a queue can make the dictio- nary large, but it also makes it intractable to update the key encoder by back-propagation (the gradient should propa- gate to all samples in the queue). A na¨ıve solution is to copy the key encoder fk from the query encoder fq, ignor- ing this gradient. But this solution yields poor results in experiments (Sec. 4.1). We hypothesize that such failure is caused by the rapidly changing encoder that reduces the key representations’ consistency. We propose a momentum update to address this issue. Formally, denoting the parameters of fk as θk and those of fq as θq, we update θk by: θk ← mθk + (1 − m)θq. (2) Here m [0, 1) is a momentum coefficient. Only the pa- rameters θq are updated by back-propagation. The momen- tum update in Eqn.(2) makes θk evolve more smoothly than θq. As a result, though the keys in the queue are encoded by different encoders (in different mini-batches), the dif- ference among these encoders can be made small. In ex- periments, a relatively large momentum (e.g., m = 0.999, our default) works much better than a smaller value (e.g., m = 0.9), suggesting that a slowly evolving key encoder is a core to making use of a queue. Relations to previous mechanisms. MoCo is a general mechanism for using contrastive losses. We compare it with two existing general mechanisms in Figure 2. They exhibit different properties on the dictionary size and consistency. The end-to-end update by back-propagation is a natural mechanism (e.g., [29, 46, 36, 63, 2, 35], Figure 2a). It uses samples in the current mini-batch as the dictionary, so the keys are consistently encoded (by the same set of encoder parameters). 
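Before continuing the comparison of mechanisms, the queue and the momentum update of Eqn.(2) described above can be made concrete with a minimal PyTorch-style sketch. The class, buffer, and argument names here are illustrative assumptions and not taken from the released implementation; the two encoders are assumed to have identical architectures.

import torch
import torch.nn as nn


class MomentumQueue(nn.Module):
    def __init__(self, encoder_q, encoder_k, dim=128, K=65536, m=0.999):
        super().__init__()
        self.m = m
        self.encoder_q = encoder_q
        self.encoder_k = encoder_k
        # initialize the key encoder from the query encoder; keys get no gradient
        for p_q, p_k in zip(encoder_q.parameters(), encoder_k.parameters()):
            p_k.data.copy_(p_q.data)
            p_k.requires_grad = False
        # the dictionary: a queue of K keys, plus a pointer for circular replacement
        self.register_buffer("queue", torch.randn(dim, K))
        self.queue = nn.functional.normalize(self.queue, dim=0)
        self.register_buffer("ptr", torch.zeros(1, dtype=torch.long))

    @torch.no_grad()
    def momentum_update(self):
        # theta_k <- m * theta_k + (1 - m) * theta_q   (Eqn. 2)
        for p_q, p_k in zip(self.encoder_q.parameters(),
                            self.encoder_k.parameters()):
            p_k.data.mul_(self.m).add_(p_q.data, alpha=1.0 - self.m)

    @torch.no_grad()
    def dequeue_and_enqueue(self, keys):
        # replace the oldest mini-batch of keys; assumes K is divisible by the batch size
        n = keys.shape[0]
        ptr = int(self.ptr)
        self.queue[:, ptr:ptr + n] = keys.T
        self.ptr[0] = (ptr + n) % self.queue.shape[1]

Because the queue is a fixed-size circular buffer, the dictionary size K is set independently of the mini-batch size, which is exactly the decoupling discussed above.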
But the dictionary size is coupled with the mini-batch size, limited by the GPU memory size. It is also challenged by large mini-batch optimization [25]. Some re- cent methods [46, 36, 2] are based on pretext tasks driven by local positions, where the dictionary size can be made larger by multiple positions. But these pretext tasks may require special network designs such as patchifying the input [46] or customizing the receptive field size [2], which may com- plicate the transfer of these networks to downstream tasks. Another mechanism is the memory bank approach pro- posed by [61] (Figure 2b). A memory bank consists of the representations of all samples in the dataset. The dictionary for each mini-batch is randomly sampled from the memory bank with no back-propagation, so it can support a large dictionary size. However, the representation of a sample in # Algorithm 1 Pseudocode of MoCo in a PyTorch-like style. # f_q, f_k: encoder networks for query and key # queue: dictionary as a queue of K keys (CxK) # m: momentum # t: temperature f_k.params = f_q.params # initialize for x in loader: # load a minibatch x with N samples x_q = aug(x) # a randomly augmented version x_k = aug(x) # another randomly augmented version q = f_q.forward(x_q) # queries: NxC k = f_k.forward(x_k) # keys: NxC k = k.detach() # no gradient to keys # positive logits: Nx1 l_pos = bmm(q.view(N,1,C), k.view(N,C,1)) # negative logits: NxK l_neg = mm(q.view(N,C), queue.view(C,K)) # logits: Nx(1+K) logits = cat([l_pos, l_neg], dim=1) # contrastive loss, Eqn.(1) labels = zeros(N) # positives are the 0-th loss = CrossEntropyLoss(logits/t, labels) # SGD update: query network loss.backward() update(f_q.params) # momentum update: key network f_k.params = m*f_k.params+(1-m)*f_q.params # update dictionary enqueue(queue, k) # enqueue the current minibatch dequeue(queue) # dequeue the earliest minibatch bmm: batch matrix multiplication; mm: matrix multiplication; cat: concatenation. the memory bank was updated when it was last seen, so the sampled keys are essentially about the encoders at multiple different steps all over the past epoch and thus are less con- sistent. A momentum update is adopted on the memory bank in [61]. Its momentum update is on the representa- tions of the same sample, not the encoder. This momentum update is irrelevant to our method, because MoCo does not keep track of every sample. Moreover, our method is more memory-efficient and can be trained on billion-scale data, which can be intractable for a memory bank. Sec. 4 empirically compares these three mechanisms. # 3.3. Pretext Task Contrastive learning can drive a variety of pretext tasks. As the focus of this paper is not on designing a new pretext task, we use a simple one mainly following the instance discrimination task in [61], to which some recent works [63, 2] are related. Following [61], we consider a query and a key as a pos- itive pair if they originate from the same image, and other- wise as a negative sample pair. Following [63, 2], we take two random “views” of the same image under random data augmentation to form a positive pair. The queries and keys are respectively encoded by their encoders, fq and fk. The encoder can be any convolutional neural network [39]. Algorithm 1 provides the pseudo-code of MoCo for this pretext task. For the current mini-batch, we encode the queries and their corresponding keys, which form the posi- tive sample pairs. The negative samples are from the queue. Technical details. 
We adopt a ResNet [33] as the encoder, whose last fully-connected layer (after global average pool- ing) has a fixed-dimensional output (128-D [61]). This out- put vector is normalized by its L2-norm [61]. This is the representation of the query or key. The temperature τ in Eqn.(1) is set as 0.07 [61]. The data augmentation setting follows [61]: a 224 224-pixel crop is taken from a ran- domly resized image, and then undergoes random color jit- tering, random horizontal flip, and random grayscale con- version, all available in PyTorch’s torchvision package. Shuffling BN. Our encoders fq and fk both have Batch Normalization (BN) [37] as in the standard ResNet [33]. In experiments, we found that using BN prevents the model from learning good representations, as similarly reported in [35] (which avoids using BN). The model appears to “cheat” the pretext task and easily finds a low-loss solu- tion. This is possibly because the intra-batch communica- tion among samples (caused by BN) leaks information. We resolve this problem by shuffling BN. We train with multiple GPUs and perform BN on the samples indepen- dently for each GPU (as done in common practice). For the key encoder fk, we shuffle the sample order in the current mini-batch before distributing it among GPUs (and shuffle back after encoding); the sample order of the mini-batch for the query encoder fq is not altered. This ensures the batch statistics used to compute a query and its positive key come from two different subsets. This effectively tackles the cheating issue and allows training to benefit from BN. We use shuffled BN in both our method and its end-to- end ablation counterpart (Figure 2a). It is irrelevant to the memory bank counterpart (Figure 2b), which does not suf- fer from this issue because the positive keys are from differ- ent mini-batches in the past. # 4. Experiments We study unsupervised training performed in: ImageNet-1M (IN-1M): This is the ImageNet [11] train- ing set that has ∼1.28 million images in 1000 classes (often called ImageNet-1K; we count the image number instead, as classes are not exploited by unsupervised learning). This dataset is well-balanced in its class distribution, and its im- ages generally contain iconic view of objects. Instagram-1B (IG-1B): Following [44], this is a dataset of ∼1 billion (940M) public images from Instagram. The images are from ∼1500 hashtags [44] that are related to the ImageNet categories. This dataset is relatively uncurated comparing to IN-1M, and has a long-tailed, unbalanced distribution of real-world data. This dataset contains both iconic objects and scene-level images. Training. We use SGD as our optimizer. The SGD weight decay is 0.0001 and the SGD momentum is 0.9. For IN-1M, we use a mini-batch size of 256 (N in Algorithm 1) in 8 GPUs, and an initial learning rate of 0.03. We train for 200 epochs with the learning rate multiplied by 0.1 at 120 and 160 epochs [61], taking ∼53 hours training ResNet-50. For IG-1B, we use a mini-batch size of 1024 in 64 GPUs, and a learning rate of 0.12 which is exponentially decayed by 0.9 after every 62.5k iterations (64M images). We train for 1.25M iterations (∼1.4 epochs of IG-1B), taking ∼6 days for ResNet-50. # 4.1. Linear Classification Protocol We first verify our method by linear classification on frozen features, following a common protocol. In this sub- section we perform unsupervised pre-training on IN-1M. Then we freeze the features and train a supervised linear classifier (a fully-connected layer followed by softmax). 
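As an aside to the shuffling BN procedure described above, the idea can be illustrated with a single-process sketch that simulates per-GPU statistics by encoding the shuffled key batch in independent chunks. The function name and the chunk-based simulation are assumptions for illustration only; the actual implementation shuffles the mini-batch across GPUs.

import torch

@torch.no_grad()
def encode_keys_with_shuffled_bn(encoder_k, x_k, num_chunks=8):
    """Simulate shuffling BN in a single process.

    BN statistics are computed independently per chunk (standing in for
    per-GPU statistics). Shuffling the sample order before chunking means a
    query and its positive key are normalized with different sub-batch
    statistics; the original order is restored afterwards.
    """
    n = x_k.shape[0]
    idx_shuffle = torch.randperm(n)             # random permutation of the batch
    idx_unshuffle = torch.argsort(idx_shuffle)  # inverse permutation
    shuffled = x_k[idx_shuffle]
    keys = torch.cat([encoder_k(chunk) for chunk in shuffled.chunk(num_chunks)],
                     dim=0)
    return keys[idx_unshuffle]

Encoding each chunk separately stands in for per-GPU BN, so the batch statistics used for a query and its positive key come from different subsets, which is what removes the leakage described above.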
We train this classifier on the global average pooling features of a ResNet, for 100 epochs. We report 1-crop, top-1 classifi- cation accuracy on the ImageNet validation set. For this classifier, we perform a grid search and find the optimal initial learning rate is 30 and weight decay is 0 (similarly reported in [56]). These hyper-parameters per- form consistently well for all ablation entries presented in this subsection. These hyper-parameter values imply that the feature distributions (e.g., magnitudes) can be substan- tially different from those of ImageNet supervised training, an issue we will revisit in Sec. 4.2. Ablation: contrastive loss mechanisms. We compare the three mechanisms that are illustrated in Figure 2. To focus on the effect of contrastive loss mechanisms, we implement all of them in the same pretext task as described in Sec. 3.3. We also use the same form of InfoNCE as the contrastive loss function, Eqn.(1). As such, the comparison is solely on the three mechanisms. The results are in Figure 3. Overall, all three mecha- nisms benefit from a larger K. A similar trend has been observed in [61, 56] under the memory bank mechanism, while here we show that this trend is more general and can be seen in all mechanisms. These results support our moti- vation of building a large dictionary. The end-to-end mechanism performs similarly to MoCo when K is small. However, the dictionary size is limited by the mini-batch size due to the end-to-end requirement. Here the largest mini-batch a high-end machine (8 Volta 32GB GPUs) can afford is 1024. More essentially, large mini-batch training is an open problem [25]: we found it necessary to use the linear learning rate scaling rule [25] here, without which the accuracy drops (by ∼2% with a 1024 mini-batch). But optimizing with a larger mini-batch is harder [25], and it is questionable whether the trend can be extrapolated into a larger K even if memory is sufficient. oo4 QS 60 ee 590 --~ “~*~ 58.0 - 578 58 S75 = — Seba & Z 56.5. e 5642-573 565. a56 ay ra s |s _ 3 5% 54.1-- g 54 * 520°" —* end-to-end 52 rad —#*-- memory bank a —4- MoCo 5Q.0~ 50 OSs f f f r r 256 512 1024 4096 16384 65536 K (log-scale) Figure 3. Comparison of three contrastive loss mechanisms un- der the ImageNet linear classification protocol. We adopt the same pretext task (Sec. 3.3) and only vary the contrastive loss mecha- nism (Figure 2). The number of negatives is K in memory bank and MoCo, and is K−1 in end-to-end (offset by one because the positive key is in the same mini-batch). The network is ResNet-50. The memory bank [61] mechanism can support a larger dictionary size. But it is 2.6% worse than MoCo. This is inline with our hypothesis: the keys in the memory bank are from very different encoders all over the past epoch and they are not consistent. Note the memory bank result of 58.0% reflects our improved implementation of [61].2 Ablation: momentum. The table below shows ResNet-50 accuracy with different MoCo momentum values (m in Eqn.(2)) used in pre-training (K = 4096 here) : momentum m accuracy (%) 0 fail 0.9 55.2 0.99 57.8 0.999 59.0 0.9999 58.9 It performs reasonably well when m is in 0.99 ∼ 0.9999, showing that a slowly progressing (i.e., relatively large mo- mentum) key encoder is beneficial. When m is too small (e.g., 0.9), the accuracy drops considerably; at the extreme of no momentum (m is 0), the training loss oscillates and fails to converge. These results support our motivation of building a consistent dictionary. 
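For concreteness, the linear evaluation setup used for the ablations above can be sketched as follows. This is a minimal sketch around a torchvision-style ResNet; the helper name is hypothetical and the SGD momentum value is an assumption, while the initial learning rate of 30 and zero weight decay follow the grid search reported above.

import torch
import torch.nn as nn

def build_linear_probe(pretrained_resnet, num_classes=1000):
    # freeze all pre-trained parameters and keep BN in eval mode so the
    # frozen features are fixed during linear classification
    for p in pretrained_resnet.parameters():
        p.requires_grad = False
    pretrained_resnet.eval()
    # replace the final fully-connected layer with a trainable classifier on
    # the global-average-pooled features (softmax is applied by the loss)
    feat_dim = pretrained_resnet.fc.in_features
    pretrained_resnet.fc = nn.Linear(feat_dim, num_classes)
    optimizer = torch.optim.SGD(pretrained_resnet.fc.parameters(),
                                lr=30.0,            # from the grid search above
                                momentum=0.9,       # assumption, not stated for the probe
                                weight_decay=0.0)   # from the grid search above
    return pretrained_resnet, optimizer

The classifier is then trained with a cross-entropy loss on ImageNet labels while the backbone stays frozen.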
Comparison with previous results. Previous unsuper- vised learning methods can differ substantially in model sizes. For a fair and comprehensive comparison, we report accuracy vs. #parameters3 trade-offs. Besides ResNet-50 (R50) [33], we also report its variants that are 2 × wider (more channels), following [38].4 We set K = 65536 and m = 0.999. Table 1 is the comparison. MoCo with R50 performs competitively and achieves 60.6% accuracy, better than all competitors of similar model sizes (∼24M). MoCo benefits from larger models and achieves 68.6% accuracy with R50w4 × Notably, we achieve competitive results using a standard ResNet-50 and require no specific architecture designs, e.g., 2Here 58.0% is with InfoNCE and K=65536. We reproduce 54.3% when using NCE and K=4096 (the same as [61]), close to 54.0% in [61]. 3Parameters are of the feature extractor: e.g., we do not count the pa- rameters of convx if convx is not included in linear classification. 4Our w2× and w4× models correspond to the “×8” and “×16” cases in [38], because the standard-sized ResNet is referred to as “×4” in [38]. architecture method R50w3× Exemplar [17] RelativePosition [13] R50w2× R50w2× Jigsaw [45] Rv50w4× Rotation [19] R101∗ Colorization [64] VGG [53] DeepCluster [3] R50 BigBiGAN [16] Rv50w4× #params (M) 211 94 94 86 28 15 24 86 accuracy (%) 46.0 [38] 51.4 [38] 44.6 [38] 55.4 [38] 39.6 [14] 48.4 [4] 56.6 61.3 methods based on contrastive learning follow: InstDisc [61] LocalAgg [66] CPC v1 [46] CPC v2 [35] CMC [56] R50 R50 R101∗ R170∗ R50L+ab R50w2×L+ab AMDIMsmall AMDIMlarge R50 RX50 R50w2× R50w4× wider AMDIM [2] MoCo 24 24 28 303 47 188 194 626 24 46 94 375 54.0 58.8 48.7 65.9 64.1† 68.4† 63.5† 68.1† 60.6 63.9 65.4 68.6 = & S S| Table 1. Comparison under the linear classification protocol on ImageNet. The figure visualizes the table. All are reported as unsupervised pre-training on the ImageNet-1M training set, fol- lowed by supervised linear classification trained on frozen fea- tures, evaluated on the validation set. The parameter counts are those of the feature extractors. We compare with improved re- implementations if available (referenced after the numbers). Notations: R101∗/R170∗ is ResNet-101/170 with the last residual stage removed [14, 46, 35], and R170 is made wider [35]; Rv50 is a reversible net [23], RX50 is ResNeXt-50-32×8d [62]. †: Pre-training uses FastAutoAugment [40] that is supervised by ImageNet labels. patchified inputs [46, 35], carefully tailored receptive fields [2], or combining two networks [56]. By using an architec- ture that is not customized for the pretext task, it is easier to transfer features to a variety of visual tasks and make com- parisons, studied in the next subsection. This paper’s focus is on a mechanism for general con- trastive learning; we do not explore orthogonal factors (such as specific pretext tasks) that may further improve accuracy. As an example, “MoCo v2” [8], an extension of a prelim- inary version of this manuscript, achieves 71.1% accuracy with R50 (up from 60.6%), given small changes on the data augmentation and output projection head [7]. We believe that this additional result shows the generality and robust- ness of the MoCo framework. pre-train random init. super. IN-1M MoCo IN-1M MoCo IG-1B AP50 64.4 81.4 81.1 (−0.3) 81.6 (+0.2) AP 37.9 54.0 54.6 (+0.6) 55.5 (+1.5) AP75 38.6 59.1 59.9 (+0.8) 61.2 (+2.1) (a) Faster R-CNN, R50-dilated-C5 pre-train random init. super. 
IN-1M MoCo IN-1M MoCo IG-1B AP50 60.2 81.3 81.5 (+0.2) 82.2 (+0.9) AP 33.8 53.5 55.9 (+2.4) 57.2 (+3.7) AP75 33.1 58.8 62.6 (+3.8) 63.7 (+4.9) (b) Faster R-CNN, R50-C4 Table 2. Object detection fine-tuned on PASCAL VOC trainval07+12. Evaluation is on test2007: AP50 (default VOC metric), AP (COCO-style), and AP75, averaged over 5 trials. All are fine-tuned for 24k iterations (∼23 epochs). In the brackets are the gaps to the ImageNet supervised pre-training counterpart. In green are the gaps of at least +0.5 point. R50-dilated-C5 AP 52.0 52.9 54.6 AP50 79.2 79.8 81.1 AP75 56.6 57.9 59.9 AP50 80.4 80.6 81.5 R50-C4 AP 54.6 54.9 55.9 AP75 60.3 60.6 62.6 Table 3. Comparison of three contrastive loss mechanisms on PASCAL VOC object detection, fine-tuned on trainval07+12 and evaluated on test2007 (averages over 5 trials). All models are implemented by us (Figure 3), pre-trained on IN-1M, and fine- tuned using the same settings as in Table 2. # 4.2. Transferring Features A main goal of unsupervised learning is to learn features that are transferrable. ImageNet supervised pre-training is most influential when serving as the initialization for fine- tuning in downstream tasks (e.g., [21, 20, 43, 52]). Next we compare MoCo with ImageNet supervised pre-training, transferred to various tasks including PASCAL VOC [18], COCO [42], etc. As prerequisites, we discuss two important issues involved [31]: normalization and schedules. Normalization. As noted in Sec. 4.1, features produced by unsupervised pre-training can have different distributions compared with ImageNet supervised pre-training. But a system for a downstream task often has hyper-parameters (e.g., learning rates) selected for supervised pre-training. To relieve this problem, we adopt feature normalization during fine-tuning: we fine-tune with BN that is trained (and syn- chronized across GPUs [49]), instead of freezing it by an affine layer [33]. We also use BN in the newly initialized layers (e.g., FPN [41]), which helps calibrate magnitudes. We perform normalization when fine-tuning supervised and unsupervised pre-training models. MoCo uses the same hyper-parameters as the ImageNet supervised counterpart. Schedules. If the fine-tuning schedule is long enough, training detectors from random initialization can be strong baselines, and can match the ImageNet supervised counter- part on COCO [31]. Our goal is to investigate transferabil- AP75 AP MoCo AP50 Multi-task [14] MoCo Jigsaw, by [26] LocalAgg [66] RelPos, by [14] Multi-task [14] 70.5 61.4 (−9.1) 69.2 (−1.3) 66.6 (−3.9) 74.2 70.5 (−3.7) 74.6 69.1 (−5.5) 74.4 74.9 (+0.5) 75.2 (+0.8) 74.7 (+0.3) 75.6 (+1.2) 42.4 46.6 (+4.2) 46.9 (+4.5) 45.9 (+3.5) 47.6 (+5.2) 74.2 66.8 (−7.4) 44.3 43.9 (−0.4) - - - - - - - - - - MoCo pre-train 42.7 super. IN-1M 50.1 (+7.4) unsup. IN-1M 50.2 (+7.5) - unsup. IN-14M 49.0 (+6.3) - unsup. YFCC-100M 51.7 (+9.0) - unsup. IG-1B Table 4. Comparison with previous methods on object detection fine-tuned on PASCAL VOC trainval2007. Evaluation is on test2007. The ImageNet supervised counterparts are from the respective papers, and are reported as having the same structure as the respective unsupervised pre-training counterparts. All entries are based on the C4 backbone. The models in [14] are R101 v2 [34], and others are R50. The RelPos (relative position) [13] result is the best single-task case in the Multi-task paper [14]. The Jigsaw [45] result is from the ResNet-based implementation in [26]. Our results are with 9k-iteration fine-tuning, averaged over 5 trials. 
In the brackets are the gaps to the ImageNet supervised pre-training counterpart. In green are the gaps of at least +0.5 point. ity of features, so our experiments are on controlled sched- ules, e.g., the 1 schedules [22] for COCO, in contrast to 6 in [31]. On smaller datasets × like VOC, training longer may not catch up [31]. in contrastive learning, we fine-tune the models pre-trained with the end-to-end or memory bank mechanism, both im- plemented by us (i.e., the best ones in Figure 3), using the same fine-tuning setting as MoCo. Nonetheless, in our fine-tuning, MoCo uses the same schedule as the ImageNet supervised counterpart, and ran- dom initialization results are provided as references. Put together, our fine-tuning uses the same setting as the supervised pre-training counterpart. This may place MoCo at a disadvantage. Even so, MoCo is competitive. Doing so also makes it feasible to present comparisons on multiple datasets/tasks, without extra hyper-parameter search. # 4.2.1 PASCAL VOC Object Detection These competitors perform decently (Table 3). Their AP and AP75 with the C4 backbone are also higher than the ImageNet supervised counterpart’s, c.f . Table 2b, but other metrics are lower. They are worse than MoCo in all metrics. This shows the benefits of MoCo. In addition, how to train these competitors in larger-scale data is an open question, and they may not benefit from IG-1B. Comparison with previous results. Following the com- petitors, we fine-tune on trainval2007 (∼5k images) using the C4 backbone. The comparison is in Table 4. Setup. The detector is Faster R-CNN [52] with a backbone of R50-dilated-C5 or R50-C4 [32] (details in appendix), with BN tuned, implemented in [60]. We fine-tune all lay- ers end-to-end. The image scale is [480, 800] pixels during training and 800 at inference. The same setup is used for all entries, including the supervised pre-training baseline. We evaluate the default VOC metric of AP50 (i.e., IoU threshold is 50%) and the more stringent metrics of COCO-style AP and AP75. Evaluation is on the VOC test2007 set. For the AP50 metric, no previous method can catch up with its respective supervised pre-training counterpart. MoCo pre-trained on any of IN-1M, IN-14M (full Ima- geNet), YFCC-100M [55], and IG-1B can outperform the supervised baseline. Large gains are seen in the more strin- gent metrics: up to +5.2 AP and +9.0 AP75. These gains are larger than the gains seen in trainval07+12 (Table 2b). # 4.2.2 COCO Object Detection and Segmentation Ablation: backbones. Table 2 shows the results fine-tuned on trainval07+12 (∼16.5k images). For R50-dilated- C5 (Table 2a), MoCo pre-trained on IN-1M is comparable to the supervised pre-training counterpart, and MoCo pre- trained on IG-1B surpasses it. For R50-C4 (Table 2b), MoCo with IN-1M or IG-1B is better than the supervised counterpart: up to +0.9 AP50, +3.7 AP, and +4.9 AP75. Interestingly, the transferring accuracy depends on the detector structure. For the C4 backbone, by default used in existing ResNet-based results [14, 61, 26, 66], the ad- vantage of unsupervised pre-training is larger. The relation between pre-training vs. detector structures has been veiled in the past, and should be a factor under consideration. Setup. The model is Mask R-CNN [32] with the FPN [41] or C4 backbone, with BN tuned, implemented in [60]. The image scale is in [640, 800] pixels during training and is 800 at inference. We fine-tune all layers end-to-end. 
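As a minimal illustration of the normalization choice above (BN tuned rather than frozen), the sketch below loads an unsupervised checkpoint into a plain PyTorch ResNet-50 and converts its BN layers to synchronized BN. It is a generic PyTorch sketch under assumed checkpoint key names, not the detector configuration used in [60].

import torch
import torchvision

def build_finetune_backbone(checkpoint_path):
    backbone = torchvision.models.resnet50()
    state = torch.load(checkpoint_path, map_location="cpu")
    # strict=False because an unsupervised checkpoint typically lacks a
    # classifier head matching the supervised model's fc layer; key names
    # in the checkpoint are an assumption here
    backbone.load_state_dict(state, strict=False)
    # train (rather than freeze) BN, synchronizing statistics across GPUs;
    # this assumes a distributed process group is initialized at training time
    backbone = torch.nn.SyncBatchNorm.convert_sync_batchnorm(backbone)
    return backbone

Newly initialized layers (e.g., FPN) are given BN in the same way, which, as noted above, helps calibrate magnitudes.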
We fine- tune on the train2017 set (∼118k images) and evaluate on val2017. The schedule is the default 1 in [22]. × × Results. Table 5 shows the results on COCO with the FPN (Table 5a, b) and C4 (Table 5c, d) backbones. With the 1 schedule, all models (including the ImageNet super- × vised counterparts) are heavily under-trained, as indicated schedule cases. With the by the ∼2 points gaps to the 2 × schedule, MoCo is better than its ImageNet supervised 2 × counterpart in all metrics in both backbones. Ablation: contrastive loss mechanisms. We point out that these results are partially because we establish solid detec- tion baselines for contrastive learning. To pin-point the gain that is solely contributed by using the MoCo mechanism # 4.2.3 More Downstream Tasks Table 6 shows more downstream tasks (implementation de- tails in appendix). Overall, MoCo performs competitively APbb APbb 50 APbb 75 APmk APmk 50 APmk 75 49.5 59.6 28.5 35.4 46.8 56.5 30.4 38.1 APbb APbb 50 APbb 75 APmk APmk 50 APmk 75 56.7 61.3 40.0 44.4 33.7 36.8 53.8 58.1 35.9 39.5 pre-train random init. 33.2 31.0 super. IN-1M 38.9 42.7 MoCo IN-1M 38.5 (−0.4) 58.9 (−0.7) 42.0 (−0.7) 35.1 (−0.3) 55.9 (−0.6) 37.7 (−0.4) MoCo IG-1B 38.9 (+0.0) 59.4 (−0.2) 42.3 (−0.4) 35.4 (+0.0) 56.5 (+0.0) 37.9 (−0.2) 36.7 40.6 40.8 (+0.2) 61.6 (+0.3) 44.7 (+0.3) 36.9 (+0.1) 58.4 (+0.3) 39.7 (+0.2) 41.1 (+0.5) 61.8 (+0.5) 45.1 (+0.7) 37.4 (+0.6) 59.1 (+1.0) 40.2 (+0.7) MoCo IN-IM | 38.5 (—04) 58.9(—0.7) 42.0(—0.7)] 35.1 (—0.3) 55.9(—0.6) 37.7(-04) 40.8 (+0.2) 61.6(+0.3) 44.7 (+0.3)| 36.9 (+0.1) 58.4 (+0.3) 39.7 (+0.2) MoCo IG-1B | 38.9( 0.0) 59.4(—0.2) 42.3 (—04)| 35.4( 0.0) 56.5( 0.0) 37.9(-02) 41.1 (40.5) 61.8 (40.5) 45.1 (+0.7)| 37.4(40.6) 59.1 (41.0) 40.2 (40.7) # (a) Mask R-CNN, R50-FPN, 1× schedule # (b) Mask R-CNN, R50-FPN, 2× schedule APbb APbb 50 APbb 75 APmk APmk 50 APmk 75 pre-train random init. 27.8 26.4 super. IN-1M 38.2 41.2 MoCo IN-1M 38.5 (+0.3) 58.3 (+0.1) 41.6 (+0.4) 33.6 (+0.3) 54.8 (+0.1) 35.6 (+0.4) MoCo IG-1B 39.1 (+0.9) 58.7 (+0.5) 42.2 (+1.0) 34.1 (+0.8) 55.4 (+0.7) 36.4 (+1.2) 44.0 58.2 29.3 33.3 46.9 54.7 30.8 35.2 APbb APbb 50 APbb 75 APmk APmk 50 APmk 75 35.6 40.0 40.7 (+0.7) 60.5 (+0.6) 44.1 (+1.0) 35.4 (+0.7) 57.3 (+0.8) 37.6 (+0.7) 41.1 (+1.1) 60.7 (+0.8) 44.8 (+1.7) 35.6 (+0.9) 57.4 (+0.9) 38.1 (+1.2) 54.6 59.9 38.2 43.1 31.4 34.7 51.5 56.5 33.5 36.9 (c) Mask R-CNN, R50-C4, 1× schedule # (d) Mask R-CNN, R50-C4, 2× schedule Table 5. Object detection and instance segmentation fine-tuned on COCO: bounding-box AP (APbb) and mask AP (APmk) evaluated on val2017. In the brackets are the gaps to the ImageNet supervised pre-training counterpart. In green are the gaps of at least +0.5 point. pre-train random init. super. IN-1M MoCo IN-1M MoCo IG-1B APkp 65.9 65.8 66.8 (+1.0) 66.9 (+1.1) 86.5 86.9 87.4 (+0.5) 87.8 (+0.9) APkp 75 71.7 71.9 72.5 (+0.6) 73.0 (+1.1) pre-train random init. super. IN-1M MoCo IN-1M MoCo IG-1B COCO dense pose estimation APdp 50 APdp APdp 75 39.4 48.3 50.1 (+1.8) 50.6 (+2.3) 78.5 85.6 86.8 (+1.2) 87.0 (+1.4) 35.1 50.6 53.9 (+3.3) 54.3 (+3.7) pre-train random init. super. IN-1M† MoCo IN-1M MoCo IG-1B LVIS v0.5 instance segmentation APmk 50 APmk APmk 75 22.5 24.4 24.1 (−0.3) 24.9 (+0.5) 34.8 37.8 37.4 (−0.4) 38.2 (+0.4) 23.8 25.8 25.5 (−0.3) 26.4 (+0.6) Cityscapes instance seg. Semantic seg. (mIoU) Cityscapes VOC 65.3 74.6 APmk APmk 50 51.1 59.6 39.5 74.4 pre-train 25.4 random init. super. 
IN-1M 32.9 MoCo IN-1M 32.3 (−0.6) 59.3 (−0.3) 75.3 (+0.7) 72.5 (−1.9) 32.9 (+0.0) 60.3 (+0.7) 75.5 (+0.9) 73.6 (−0.8) MoCo IG-1B Table 6. MoCo vs. ImageNet supervised pre-training, fine- tuned on various tasks. For each task, the same architecture and schedule are used for all entries (see appendix). In the brackets are the gaps to the ImageNet supervised pre-training counterpart. In green are the gaps of at least +0.5 point. †: this entry is with BN frozen, which improves results; see main text. with ImageNet supervised pre-training: COCO keypoint detection: supervised pre-training has no clear advantage over random initialization, whereas MoCo outperforms in all metrics. COCO dense pose estimation [1]: MoCo substantially outperforms supervised pre-training, e.g., by 3.7 points in APdp 75, in this highly localization-sensitive task. LVIS v0.5 instance segmentation [27]: this task has ∼1000 long-tailed distributed categories. Specifically in LVIS for the ImageNet supervised baseline, we find fine- tuning with frozen BN (24.4 APmk) is better than tunable BN (details in appendix). So we compare MoCo with the better supervised pre-training variant in this task. MoCo with IG-1B surpasses it in all metrics. Cityscapes instance segmentation [10]: MoCo with IG-1B is on par with its supervised pre-training counterpart in APmk, and is higher in APmk 50 . Semantic segmentation: On Cityscapes [10], MoCo out- performs its supervised pre-training counterpart by up to 0.9 point. But on VOC semantic segmentation, MoCo is worse by at least 0.8 point, a negative case we have observed. Summary. In sum, MoCo can outperform its ImageNet supervised pre-training counterpart in 7 detection or seg- mentation tasks.5 Besides, MoCo is on par on Cityscapes instance segmentation, and lags behind on VOC semantic segmentation; we show another comparable case on iNatu- ralist [57] in appendix. Overall, MoCo has largely closed the gap between unsupervised and supervised representa- tion learning in multiple vision tasks. Remarkably, in all these tasks, MoCo pre-trained on IG-1B is consistently better than MoCo pre-trained on IN-1M. This shows that MoCo can perform well on this large-scale, relatively uncurated dataset. This represents a scenario towards real-world unsupervised learning. # 5. Discussion and Conclusion Our method has shown positive results of unsupervised learning in a variety of computer vision tasks and datasets. A few open questions are worth discussing. MoCo’s im- provement from IN-1M to IG-1B is consistently noticeable but relatively small, suggesting that the larger-scale data may not be fully exploited. We hope an advanced pretext task will improve this. Beyond the simple instance discrim- ination task [61], it is possible to adopt MoCo for pretext tasks like masked auto-encoding, e.g., in language [12] and in vision [46]. We hope MoCo will be useful with other pretext tasks that involve contrastive learning. 5Namely, object detection on VOC/COCO, instance segmentation on COCO/LVIS, keypoint detection on COCO, dense pose on COCO, and semantic segmentation on Cityscapes. # A. Appendix # A.1. Implementation: Object detection backbones The R50-dilated-C5 and R50-C4 backbones are similar to those available in Detectron2 [60]: (i) R50-dilated- C5: the backbone includes the ResNet conv5 stage with a dilation of 2 and stride 1, followed by a 3 3 convolution (with BN) that reduces dimension to 512. The box predic- tion head consists of two hidden fully-connected layers. 
(ii) R50-C4: the backbone ends with the conv4 stage, and the box prediction head consists of the conv5 stage (including global pooling) followed by a BN layer. # A.2. Implementation: COCO keypoint detection We use Mask R-CNN (keypoint version) with R50-FPN, implemented in [60], fine-tuned on COCO train2017 and evaluated on val2017. The schedule is 2 × # A.3. Implementation: COCO dense pose estimation We use DensePose R-CNN [1] with R50-FPN, imple- mented in [60], fine-tuned on COCO train2017 and evaluated on val2017. The schedule is “s1 × # A.4. Implementation: LVIS instance segmentation We use Mask R-CNN with R50-FPN, fine-tuned in LVIS [27] train v0.5 and evaluated in val v0.5. We follow the baseline in [27] (arXiv v3 Appendix B). LVIS is a new dataset and model designs on it are to be explored. The following table includes the relevant abla- tions (all are averages of 5 trials): 1× schedule 2× schedule pre-train BN super. IN-1M frozen super. IN-1M tuned MoCo IN-1M tuned MoCo IG-1B tuned APmk APmk 37.3 24.1 36.6 23.5 36.0 23.2 37.4 24.3 50 APmk 25.4 24.8 24.7 25.9 75 APmk APmk 37.8 36.0 37.4 38.2 50 APmk 75 25.8 24.4 25.5 26.4 24.4 23.2 24.1 24.9 A supervised pre-training baseline, end-to-end tuned but with BN frozen, has 24.4 APmk. But tuning BN in this baseline leads to worse results and overfitting (this is unlike on COCO/VOC where tuning BN gives better or compara- ble accuracy). MoCo has 24.1 APmk with IN-1M and 24.9 APmk with IG-1B, both outperforming the supervised pre- training counterpart under the same tunable BN setting. Un- der the best individual settings, MoCo can still outperform the supervised pre-training case (24.9 vs. 24.4, as reported in Table 6 in Sec 4.2). # A.5. Implementation: Semantic segmentation We use an FCN-based [43] structure. The backbone con- 3 con- sists of the convolutional layers in R50, and the 3 volutions in conv5 blocks have dilation 2 and stride 1. This 3 convolutions of 256 channels, is followed by two extra 3 × 1 convolution for per- with BN and ReLU, and then a 1 pixel classification. The total stride is 16 (FCN-16s [43]). We set dilation = 6 in the two extra 3 3 convolutions, fol- lowing the large field-of-view design in [6]. Training is with random scaling (by a ratio in [0.5, 2.0]), cropping, and horizontal flipping. The crop size is 513 on VOC and 769 on Cityscapes [6]. Inference is performed on the original image size. We train with mini-batch size 16 and weight decay 0.0001. Learning rate is 0.003 on VOC and is 0.01 on Cityscapes (multiplied by 0.1 at 70- th and 90-th percentile of training). For VOC, we train on the train aug2012 set (augmented by [30], 10582 im- ages) for 30k iterations, and evaluate on val2012. For Cityscapes, we train on the train fine set (2975 images) for 90k iterations, and evaluate on the val set. Results are reported as averages over 5 trials. # A.6. iNaturalist fine-grained classification In addition to the detection/segmentation experiments in the main paper, we study fine-grained classification on the iNaturalist 2018 dataset [57]. We fine-tune the pre- trained models end-to-end on the train set (∼437k im- ages, 8142 classes) and evaluate on the val set. Training follows the typical ResNet implementation in PyTorch with 100 epochs. Fine-tuning has a learning rate of 0.025 (vs. 0.1 from scratch) decreased by 10 at the 70-th and 90-th percentile of training. The following is the R50 result: pre-train accuracy (%) rand init. 
61.8 super.IN-1M MoCoIN-1M MoCoIG-1B 65.6 66.1 65.8 MoCo is ∼4% better than training from random initializa- tion, and is closely comparable with its ImageNet super- vised counterpart. This again shows that MoCo unsuper- vised pre-training is competitive. # A.7. Fine-tuning in ImageNet Linear classification on frozen features (Sec. 4.1) is a common protocol of evaluating unsupervised pre-training methods. However, in practice, it is more common to fine- tune the features end-to-end in a downstream task. For completeness, the following table reports end-to-end fine- tuning results for the 1000-class ImageNet classification, compared with training from scratch (fine-tuning uses an initial learning rate of 0.03, vs. 0.1 from scratch): pre-train accuracy (%) random init. 76.5 MoCoIG-1B 77.3 As here ImageNet is the downstream task, the case of MoCo pre-trained on IN-1M does not represent a real scenario (for reference, we report that its accuracy is 77.0% after fine-tuning). But unsupervised pre-training in the separate, unlabeled dataset of IG-1B represents a typical scenario: in this case, MoCo improves by 0.8%. APbb APbb 50 APbb 75 APmk APmk 50 APmk 75 56.7 61.3 33.7 36.8 53.8 58.1 35.9 39.5 APbb APbb 50 APbb 75 APmk APmk 50 APmk 75 61.9 62.5 45.1 45.6 37.6 38.0 59.1 59.6 40.3 40.8 pre-train random init. 40.0 36.7 super. IN-1M 40.6 44.4 MoCo IN-1M 40.8 (+0.2) 61.6 (+0.3) 44.7 (+0.3) 36.9 (+0.1) 58.4 (+0.3) 39.7 (+0.2) MoCo IG-1B 41.1 (+0.5) 61.8 (+0.5) 45.1 (+0.7) 37.4 (+0.6) 59.1 (+1.0) 40.2 (+0.7) 41.4 41.9 42.3 (+0.4) 62.7 (+0.2) 46.2 (+0.6) 38.3 (+0.3) 60.1 (+0.5) 41.2 (+0.4) 42.8 (+0.9) 63.2 (+0.7) 47.0 (+1.4) 38.7 (+0.7) 60.5 (+0.9) 41.3 (+0.5) MoCo IN-IM | 40.8 (+0.2) 61.6(+0.3) 44.7 (+0.3)| 36.9(+0.1) 58.4(+03) 39.7 (+0.2) 41.1 (40.5) 61.8 (+05) 45.1 (+0.7)| 37.4 (+06) 59.1 (41.0) 40.2 (40.7) MoCo IG-1B 42.3 (40.4) 62.7 (+0.2) 46.2 (+0.6)| 38.3 (+03) 60.1 (40.5) 41.2 (40.4) 42.8 (40.9) 63.2 (40.7) 47.0(+1.4)] 38.7 (40.7) 60.5 (40.9) 41.3 (40.5) (a) Mask R-CNN, R50-FPN, 2× schedule (b) Mask R-CNN, R50-FPN, 6× schedule Table A.1. Object detection and instance segmentation fine-tuned on COCO: 2× vs. 6× schedule. In the brackets are the gaps to the ImageNet supervised pre-training counterpart. In green are the gaps of at least +0.5 point. # A.8. COCO longer fine-tuning # References (∼12 epochs) and 2 schedules on COCO. These schedules were inher- ited from the original Mask R-CNN paper [32], which could be suboptimal given later advance in the field. In Table A.1, we supplement the results of a 6 schedule (∼72 epochs) [31] and compare with those of the 2 [1] Rıza Alp G¨uler, Natalia Neverova, and Iasonas Kokkinos. DensePose: Dense human pose estimation in the wild. In CVPR, 2018. [2] Philip Bachman, R Devon Hjelm, and William Buchwalter. Learning representations by maximizing mutual information across views. arXiv:1906.00910, 2019. × We observe: (i) fine-tuning with ImageNet-supervised pre-training still has improvements (41.9 APbb); (ii) train- ing from scratch largely catches up (41.4 APbb); (iii) the MoCo counterparts improve further (e.g., to 42.8 APbb) and have larger gaps (e.g., +0.9 APbb with 6 , vs. +0.5 APbb ). Table A.1 and Table 5 suggest that the MoCo with 2 pre-trained features can have larger advantages than the ImageNet-supervised features when fine-tuning longer. [3] Mathilde Caron, Piotr Bojanowski, Armand Joulin, and Matthijs Douze. Deep clustering for unsupervised learning of visual features. In ECCV, 2018. 
[4] Mathilde Caron, Piotr Bojanowski, Julien Mairal, and Ar- mand Joulin. Unsupervised pre-training of image features on non-curated data. In ICCV, 2019. [5] Ken Chatfield, Victor Lempitsky, Andrea Vedaldi, and An- drew Zisserman. The devil is in the details: an evaluation of recent feature encoding methods. In BMVC, 2011. # A.9. Ablation on Shuffling BN Figure A.1 provides the training curves of MoCo with or without shuffling BN: removing shuffling BN shows ob- vious overfitting to the pretext task: training accuracy of the pretext task (dash curve) quickly increases to >99.9%, and the kNN-based validation classification accuracy (solid curve) drops soon. This is observed for both the MoCo and end-to-end variants; the memory bank variant implicitly has different statistics for q and k, so avoids this issue. [6] Liang-Chieh Chen, George Papandreou, Iasonas Kokkinos, Kevin Murphy, and Alan L Yuille. DeepLab: Semantic im- age segmentation with deep convolutional nets, atrous con- volution, and fully connected CRFs. TPAMI, 2017. [7] Ting Chen, Simon Kornblith, Mohammad Norouzi, and Ge- offrey Hinton. A simple framework for contrastive learning of visual representations. arXiv:2002.05709, 2020. [8] Xinlei Chen, Haoqi Fan, Ross Girshick, and Kaiming He. Improved baselines with momentum contrastive learning. arXiv:2003.04297, 2020. These experiments suggest that without shuffling BN, the sub-batch statistics can serve as a “signature” to tell which sub-batch the positive key is in. Shuffling BN can remove this signature and avoid such cheating. 100 — a accuracy (2) '—— MoCo w/ ShuffleBN D ‘ ' ' wl! ora ‘ ' ' ' 1 ' MoCo w/o ShuffleBN ° L L n fy 20 0 Cy 2 epochs Figure A.1. Ablation of Shuffling BN. Dash: training curve of the pretext task, plotted as the accuracy of (K+1)-way dictionary lookup. Solid: validation curve of a kNN-based monitor [61] (not a linear classifier) on ImageNet classification accuracy. This plot shows the first 80 epochs of training: training longer without shuf- fling BN overfits more. [9] Adam Coates and Andrew Ng. The importance of encoding versus training with sparse coding and vector quantization. In ICML, 2011. [10] Marius Cordts, Mohamed Omran, Sebastian Ramos, Timo Rehfeld, Markus Enzweiler, Rodrigo Benenson, Uwe Franke, Stefan Roth, and Bernt Schiele. The Cityscapes dataset for semantic urban scene understanding. In CVPR, 2016. [11] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. ImageNet: A large-scale hierarchical image database. In CVPR, 2009. [12] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional trans- formers for language understanding. In NAACL, 2019. [13] Carl Doersch, Abhinav Gupta, and Alexei A Efros. Unsuper- vised visual representation learning by context prediction. In ICCV, 2015. [14] Carl Doersch and Andrew Zisserman. Multi-task self- supervised visual learning. In ICCV, 2017. [15] Jeff Donahue, Philipp Kr¨ahenb¨uhl, and Trevor Darrell. Ad- versarial feature learning. In ICLR, 2017. [16] Jeff Donahue and Karen Simonyan. Large scale adversarial representation learning. arXiv:1907.02544, 2019. [17] Alexey Dosovitskiy, Jost Tobias Springenberg, Martin Ried- miller, and Thomas Brox. Discriminative unsupervised feature learning with convolutional neural networks. In NeurIPS, 2014. [18] Mark Everingham, Luc Van Gool, Christopher KI Williams, John Winn, and Andrew Zisserman. The Pascal Visual Ob- ject Classes (VOC) Challenge. IJCV, 2010. 
[19] Spyros Gidaris, Praveer Singh, and Nikos Komodakis. Un- supervised representation learning by predicting image rota- tions. In ICLR, 2018. [20] Ross Girshick. Fast R-CNN. In ICCV, 2015. [21] Ross Girshick, Jeff Donahue, Trevor Darrell, and Jitendra Malik. Rich feature hierarchies for accurate object detection and semantic segmentation. In CVPR, 2014. [22] Ross Girshick, Ilija Radosavovic, Georgia Gkioxari, Piotr Doll´ar, and Kaiming He. Detectron, 2018. [23] Aidan N Gomez, Mengye Ren, Raquel Urtasun, and Roger B Grosse. The reversible residual network: Backpropagation without storing activations. In NeurIPS, 2017. [24] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and In NeurIPS, Yoshua Bengio. Generative adversarial nets. 2014. [25] Priya Goyal, Piotr Doll´ar, Ross Girshick, Pieter Noord- huis, Lukasz Wesolowski, Aapo Kyrola, Andrew Tulloch, Yangqing Jia, and Kaiming He. Accurate, large minibatch SGD: Training ImageNet in 1 hour. arXiv:1706.02677, 2017. [26] Priya Goyal, Dhruv Mahajan, Abhinav Gupta, and Ishan Misra. Scaling and benchmarking self-supervised visual rep- resentation learning. In ICCV, 2019. [27] Agrim Gupta, Piotr Dollar, and Ross Girshick. LVIS: A dataset for large vocabulary instance segmentation. In CVPR, 2019. [28] Michael Gutmann and Aapo Hyv¨arinen. Noise-contrastive estimation: A new estimation principle for unnormalized sta- tistical models. In AISTATS, 2010. [29] Raia Hadsell, Sumit Chopra, and Yann LeCun. Dimension- ality reduction by learning an invariant mapping. In CVPR, 2006. [30] Bharath Hariharan, Pablo Arbel´aez, Lubomir Bourdev, Subhransu Maji, and Jitendra Malik. Semantic contours from inverse detectors. In ICCV, 2011. [31] Kaiming He, Ross Girshick, and Piotr Doll´ar. Rethinking ImageNet pre-training. In ICCV, 2019. [32] Kaiming He, Georgia Gkioxari, Piotr Doll´ar, and Ross Gir- shick. Mask R-CNN. In ICCV, 2017. [33] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. In CVPR, Deep residual learning for image recognition. 2016. [34] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. In ECCV, Identity mappings in deep residual networks. 2016. [35] Olivier J H´enaff, Ali Razavi, Carl Doersch, SM Eslami, and Aaron van den Oord. Data-efficient image recognition with contrastive predictive coding. arXiv:1905.09272, 2019. Up- dated version accessed at https://openreview.net/ pdf?id=rJerHlrYwH. [36] R Devon Hjelm, Alex Fedorov, Samuel Lavoie-Marchildon, Karan Grewal, Adam Trischler, and Yoshua Bengio. Learn- ing deep representations by mutual information estimation and maximization. In ICLR, 2019. [37] Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal co- variate shift. In ICML, 2015. [38] Alexander Kolesnikov, Xiaohua Zhai, and Lucas Beyer. Re- visiting self-supervised visual representation learning. In CVPR, 2019. [39] Yann LeCun, Bernhard Boser, John S Denker, Donnie Henderson, Richard E Howard, Wayne Hubbard, and Lawrence D Jackel. Backpropagation applied to handwrit- ten zip code recognition. Neural computation, 1989. [40] Sungbin Lim, Ildoo Kim, Taesup Kim, Chiheon Kim, and Sungwoong Kim. Fast AutoAugment. arXiv:1905.00397, 2019. [41] Tsung-Yi Lin, Piotr Doll´ar, Ross Girshick, Kaiming He, Bharath Hariharan, and Serge Belongie. Feature pyramid networks for object detection. In CVPR, 2017. [42] Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Doll´ar, and C Lawrence Zitnick. 
Microsoft COCO: Common objects in context. In ECCV, 2014. [43] Jonathan Long, Evan Shelhamer, and Trevor Darrell. Fully convolutional networks for semantic segmentation. In CVPR, 2015. [44] Dhruv Mahajan, Ross Girshick, Vignesh Ramanathan, Kaiming He, Manohar Paluri, Yixuan Li, Ashwin Bharambe, and Laurens van der Maaten. Exploring the limits of weakly supervised pretraining. In ECCV, 2018. [45] Mehdi Noroozi and Paolo Favaro. Unsupervised learning of visual representations by solving jigsaw puzzles. In ECCV, 2016. [46] Aaron van den Oord, Yazhe Li, and Oriol Vinyals. Rep- resentation learning with contrastive predictive coding. arXiv:1807.03748, 2018. [47] Deepak Pathak, Ross Girshick, Piotr Doll´ar, Trevor Darrell, and Bharath Hariharan. Learning features by watching ob- jects move. In CVPR, 2017. [48] Deepak Pathak, Philipp Krahenbuhl, Jeff Donahue, Trevor Darrell, and Alexei A Efros. Context encoders: Feature learning by inpainting. In CVPR, 2016. [49] Chao Peng, Tete Xiao, Zeming Li, Yuning Jiang, Xiangyu Zhang, Kai Jia, Gang Yu, and Jian Sun. MegDet: A large mini-batch object detector. In CVPR, 2018. [50] Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. Improving language understanding by generative pre-training. 2018. [51] Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. Language models are unsuper- vised multitask learners. 2019. [52] Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun. Faster R-CNN: Towards real-time object detection with re- gion proposal networks. In NeurIPS, 2015. [53] Karen Simonyan and Andrew Zisserman. Very deep convo- lutional networks for large-scale image recognition. In ICLR, 2015. [54] Josef Sivic and Andrew Zisserman. Video Google: a text In ICCV, retrieval approach to object matching in videos. 2003. [55] Bart Thomee, David A Shamma, Gerald Friedland, Ben- jamin Elizalde, Karl Ni, Douglas Poland, Damian Borth, and Li-Jia Li. YFCC100M: The new data in multimedia research. Communications of the ACM, 2016. [56] Yonglong Tian, Dilip Krishnan, and Phillip Isola. Con- trastive multiview coding. arXiv:1906.05849, 2019. Updated version accessed at https://openreview.net/pdf? id=BkgStySKPB. # Van [57] Grant Van Horn, Oisin Mac Aodha, Yang Song, Yin Cui, Chen Sun, Alex Shepard, Hartwig Adam, Pietro Perona, and Serge Belongie. The iNaturalist species classification and detection dataset. In CVPR, 2018. [58] Pascal Vincent, Hugo Larochelle, Yoshua Bengio, and Pierre-Antoine Manzagol. Extracting and composing robust features with denoising autoencoders. In ICML, 2008. [59] Xiaolong Wang and Abhinav Gupta. Unsupervised learning of visual representations using videos. In ICCV, 2015. [60] Yuxin Wu, Alexander Kirillov, Francisco Massa, Wan-Yen Lo, and Ross Girshick. Detectron2. https://github. com/facebookresearch/detectron2, 2019. [61] Zhirong Wu, Yuanjun Xiong, Stella Yu, and Dahua Lin. Un- supervised feature learning via non-parametric instance dis- crimination. In CVPR, 2018. Updated version accessed at: https://arxiv.org/abs/1805.01978v1. [62] Saining Xie, Ross Girshick, Piotr Doll´ar, Zhuowen Tu, and Kaiming He. Aggregated residual transformations for deep neural networks. In CVPR, 2017. [63] Mang Ye, Xu Zhang, Pong C Yuen, and Shih-Fu Chang. Un- supervised embedding learning via invariant and spreading instance feature. In CVPR, 2019. [64] Richard Zhang, Phillip Isola, and Alexei A Efros. Colorful image colorization. In ECCV, 2016. [65] Richard Zhang, Phillip Isola, and Alexei A Efros. 
Split-brain autoencoders: Unsupervised learning by cross-channel pre- diction. In CVPR, 2017. [66] Chengxu Zhuang, Alex Lin Zhai, and Daniel Yamins. Local aggregation for unsupervised learning of visual embeddings. In ICCV, 2019. Additional results accessed from supplemen- tary materials.
{ "id": "1807.03748" }
1911.05248
What Do Compressed Deep Neural Networks Forget?
Deep neural network pruning and quantization techniques have demonstrated it is possible to achieve high levels of compression with surprisingly little degradation to test set accuracy. However, this measure of performance conceals significant differences in how different classes and images are impacted by model compression techniques. We find that models with radically different numbers of weights have comparable top-line performance metrics but diverge considerably in behavior on a narrow subset of the dataset. This small subset of data points, which we term Pruning Identified Exemplars (PIEs) are systematically more impacted by the introduction of sparsity. Compression disproportionately impacts model performance on the underrepresented long-tail of the data distribution. PIEs over-index on atypical or noisy images that are far more challenging for both humans and algorithms to classify. Our work provides intuition into the role of capacity in deep neural networks and the trade-offs incurred by compression. An understanding of this disparate impact is critical given the widespread deployment of compressed models in the wild.
http://arxiv.org/pdf/1911.05248
Sara Hooker, Aaron Courville, Gregory Clark, Yann Dauphin, Andrea Frome
cs.LG, cs.AI, cs.CV, cs.HC, stat.ML
null
null
cs.LG
20191113
20210906
# WHAT DO COMPRESSED DEEP NEURAL NETWORKS FORGET?

Sara Hooker ∗ Google Brain Aaron Courville MILA Gregory Clark Google Yann Dauphin Google Brain Andrea Frome Google Brain

∗ Correspondence should be directed to [email protected]

# ABSTRACT

Deep neural network pruning and quantization techniques have demonstrated it is possible to achieve high levels of compression with surprisingly little degradation to test set accuracy. However, this measure of performance conceals significant differences in how different classes and images are impacted by model compression techniques. We find that models with radically different numbers of weights have comparable top-line performance metrics but diverge considerably in behavior on a narrow subset of the dataset. This small subset of data points, which we term Pruning Identified Exemplars (PIEs), are systematically more impacted by the introduction of sparsity. Our work is the first to provide a formal framework for auditing the disparate harm incurred by compression and a way to quantify the trade-offs involved. An understanding of this disparate impact is critical given the widespread deployment of compressed models in the wild.

# 1 Introduction

Between infancy and adulthood, the number of synapses in our brain first multiplies and then falls. Synaptic pruning improves efficiency by removing redundant neurons and strengthening synaptic connections that are most useful for the environment (Rakic et al., 1994). Despite losing 50% of all synapses between age two and ten, the brain continues to function (Kolb & Whishaw, 2009; Sowell et al., 2004). The phrase "Use it or lose it" is frequently used to describe the environmental influence of the learning process on synaptic pruning; however, there is little scientific consensus on what exactly is lost (Casey et al., 2000). In this work, we ask what is lost when we compress a deep neural network.

Work since the 1990s has shown that deep neural networks can be pruned of "excess capacity" in a similar fashion to synaptic pruning (Cun et al., 1990; Hassibi et al., 1993a; Nowlan & Hinton, 1992; Weigend et al., 1991). At face value, compression appears to promise you can have it all. Deep neural networks are remarkably tolerant of high levels of pruning and quantization with an almost negligible loss to top-1 accuracy (Han et al., 2015; Ullrich et al., 2017; Liu et al., 2017; Louizos et al., 2017; Collins & Kohli, 2014; Lee et al., 2018). These more compact networks are frequently favored in resource constrained settings; compressed models require less memory, energy consumption and have lower inference latency (Reagen et al., 2016; Chen et al., 2016; Theis et al., 2018; Kalchbrenner et al., 2018; Valin & Skoglund, 2018; Tessera et al., 2021).

The ability to compress networks with seemingly so little degradation to generalization performance is puzzling. How can networks with radically different representations and number of parameters have comparable top-level metrics? One possibility is that test-set accuracy is simply not a precise enough measure to capture how compression impacts the generalization properties of the model. Despite the widespread use of compression techniques, articulating the trade-offs of compression has overwhelmingly focused on change to overall top-1 accuracy for a given level of compression. The cost to top-1 accuracy appears minimal if it is spread uniformly across all classes, but what if the cost is concentrated in only a few classes? Are certain types of examples or classes disproportionately impacted by compression?
Are certain types of examples or classes disproportionately ∗Correspondence should be directed to [email protected] WHAT DO COMPRESSED DEEP NEURAL NETOWRKS FORGET? toilet seat espresso plastic bag Non-PIE PIE Non-PIE PIE Non-PIE PIE matchstick cloak stretcher Non-PIE PIE Non-PIE PIE Non-PIE PIE wool maze gas pump eye 7 # Non-PIE # Non-PIE # PIE # Non-PlE # Non-PIE # PIE # Non-PlE # Non-PIE # PIE Figure 1: Pruning Identified Exemplars (PIEs) are images where there is a high level of disagreement between the predictions of pruned and non-pruned models. Visualized are a sample of ImageNet PIEs alongside a non-PIE image from the same class. Above each image pair is the true label. impacted by compression? In this work, we propose a formal framework to audit the impact of compression on generalization properties beyond top-line metrics. Our work is the first to our knowledge that asks how dis-aggregated measures of model performance at a class and exemplar level are impacted by compression. Contributions We run thousands of large scale experiments and establish consistent results across multiple datasets— CIFAR-10 (Krizhevsky, 2012), CelebA (Liu et al., 2015) and ImageNet (Deng et al., 2009), widely used pruning and quantization techniques, and model architectures. We find that: 1. Top-line metrics such as top-1 or top-5 test-set accuracy hide critical details in the ways that pruning impacts model generalization. Certain parts of the data distribution are far more sensitive to varying the number of weights in a network, and bear the brunt of the cost of varying the weight representation. 2. The examples most impacted by pruning, which we term Pruning Identified Exemplars (PIEs), are more challenging for both models and humans to classify. We conduct a human study and find that PIEs tend to be mislabelled, of lower quality, depict multiple objects, or require fine-grained classification. Compression impairs the model’s ability to predict accurately on the long-tail of less frequent instances. 3. Pruned networks are more sensitive to natural adversarial images and corruptions. This sensitivity is amplified at higher levels of compression. 4. While all compression techniques that we evaluate have a non-uniform impact, not all methods are created equal. High levels of pruning incur a far higher disparate impact than is observed for the quantization techniques that we evaluate. 2 WHAT DO COMPRESSED DEEP NEURAL NETOWRKS FORGET? Our work provides intuition into the role of capacity in deep neural networks and a mechanism to audit the trade-offs incurred by compression. Our findings suggest that caution should be used before deploying compressed networks to sensitive domains. Our PIE methodology could conceivably be explored as a mechanism to surface a tractable subset of atypical examples for further human inspection (Leibig et al., 2017; Zhang, 1992), to choose not to classify certain examples when the model is uncertain (Bartlett & Wegkamp, 2008; Cortes et al., 2016), or to aid interpretability as a case based reasoning tool to explain model behavior (Kim et al., 2016; Caruana, 2000; Hooker et al., 2019). # 2 Methodology and Experiment Framework # 2.1 Preliminaries We consider a supervised classification problem where a deep neural network is trained to approximate the function F’ that maps an input variable X to an output variable Y, formally F : X +> Y. The model is trained on a training set of N images D = {(2;, yi}ea. and at test time makes a prediction y* for each image in the test set. 
The true labels y_i are each assumed to be one of C classes, such that y_i ∈ {1, ..., C}.

A reasonable response to our desire for more compact representations is to simply train a network with fewer weights. However, as of yet, starting out with a compact dense model has not yielded competitive test-set performance (Li et al., 2020; Zhu & Gupta, 2017b). Instead, research has centered on a more tractable direction of investigation: the model begins training with "excess capacity" and the goal is to remove the parts that are not strictly necessary for the task by the end of training. A pruning method P identifies the subset of weights to set to zero. A sparse model function, f_P, is one where a fraction t of all model weights are set to zero. Setting a weight's value to zero effectively removes its contribution, as multiplication with inputs no longer contributes to the activation. A non-compressed model function is one where all weights are trainable (t = 0). We refer to the overall model accuracy as β^M_t. In contrast, t = 0.9 indicates that 90% of model weights are removed over the course of training, leaving a maximum of 10% non-zero weights.

# 2.2 Class level measure of impact

If the impact of compression were completely uniform, the relative relationship between class-level accuracy β^c_t and overall model performance β^M_t would be unaltered. This forms our null hypothesis (H_0). We must decide for each class c whether to reject the null hypothesis and accept the alternate hypothesis (H_1), namely that the relative change to class-level recall differs from the change to overall accuracy in either a positive or negative direction:

H_0: \frac{\beta^c_t}{\beta^M_t} = \frac{\beta^c_0}{\beta^M_0}, \qquad H_1: \frac{\beta^c_t}{\beta^M_t} \neq \frac{\beta^c_0}{\beta^M_0}

Welch's t-test Evaluating whether the difference between the samples of mean-shifted class accuracy from compressed and non-compressed models is "real" amounts to determining whether these two data samples are drawn from the same underlying distribution, which is the subject of a large body of goodness-of-fit literature (D'Agostino & Stephens, 1986; Anderson & Darling, 1954; Huber-Carol et al., 2002). We independently train a population of K models for each compression method, dataset, and model that we consider. Thus, for each class c and compression level t we have a sample S^c_t of mean-shifted class accuracies. For each class c, we use a two-tailed, independent Welch's t-test (Welch, 1947) to determine whether the mean-shifted class accuracies S^c_t and S^c_0 differ significantly. If the p-value ≤ 0.05, we reject the null hypothesis and consider the class to be disparately impacted by level t of compression relative to the baseline.

Controlling for overall changes to top-line metrics Note that by comparing the relative difference in class accuracy S^c_t, we control for any overall difference in model test-set accuracy. This is important because, while small, the difference in top-line metrics is not zero (see Table 2). Along with the p-value, for each class we report the average relative deviation in class-level accuracy, which we refer to as the relative recall difference:

\delta^c_t = \frac{100}{K} \sum_{k=1}^{K} \left( \frac{\beta^c_{t,k}}{\beta^M_{t,k}} - \frac{\beta^c_{0,k}}{\beta^M_{0,k}} \right)
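To make the procedure above concrete, the sketch below shows one way the per-class Welch's t-test and the relative recall difference could be computed from two populations of K independently trained models. The array names, shapes, and the helper function are illustrative assumptions, not the authors' released code.

```python
import numpy as np
from scipy import stats


def class_level_impact(class_recall_0, class_recall_t, acc_0, acc_t, alpha=0.05):
    """Per-class Welch's t-test on accuracy-normalized (mean-shifted) class recall.

    class_recall_0, class_recall_t: (K, C) per-class recall for K independently
        trained baseline (t = 0) and compressed (sparsity t) models.
    acc_0, acc_t: (K,) overall top-1 accuracy of the same models.
    """
    # Normalize class recall by overall model accuracy so that small differences
    # in top-line metrics between the two populations are controlled for.
    s_0 = class_recall_0 / acc_0[:, None]   # samples S^c_0, shape (K, C)
    s_t = class_recall_t / acc_t[:, None]   # samples S^c_t, shape (K, C)

    # Two-tailed, independent Welch's t-test per class (unequal variances).
    _, p_values = stats.ttest_ind(s_t, s_0, axis=0, equal_var=False)
    significant = p_values <= alpha

    # Average relative deviation in normalized class recall, in percent.
    relative_recall_diff = 100.0 * (s_t - s_0).mean(axis=0)

    return significant, relative_recall_diff, p_values
```

Pairing the k-th compressed model with the k-th baseline model inside the average is arbitrary, since the two populations are trained independently; because the mean of paired differences equals the difference of the sample means, the pairing does not change the reported value.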
Table 1: Distributions of top-1 accuracy for populations of independently quantized and pruned models for ImageNet, CIFAR-10 and CelebA. For ImageNet, we also include top-5. Note that the scale of the x-axis differs between plots.

# 2.3 Pruning Identified Exemplars

In addition to measuring the class-level impact of compression, we are interested in how model predictive behavior changes through the compression process. Given the limitations of un-calibrated probabilities in deep neural networks (Guo et al., 2017), we focus on the level of disagreement between the predictions of compressed and non-compressed networks on a given image. Using the populations of K models described in the prior section, we construct a set of predictions Y_{t,i} = {y^*_{t,k,i}}_{k=1}^{K} for a given image i. For the set Y_{t,i}, we find the modal label, i.e. the class predicted most frequently by the t-pruned model population for image i, which we denote y^m_{t,i}. The exemplar is classified as a pruning identified exemplar PIE_i if and only if the modal label is different between the set of t-pruned models and the non-pruned baseline models:

PIE_i = \begin{cases} 1 & \text{if } y^m_{t,i} \neq y^m_{0,i} \\ 0 & \text{otherwise} \end{cases}

We note that there is no constraint that the non-pruned predictions for PIEs match the true label. Thus the detection of PIEs is an unsupervised protocol that can be performed at test time.

# 2.4 Experimental framework

Tasks We evaluate the impact of compression across three classification tasks and models: a wide ResNet model (Zagoruyko & Komodakis, 2016) trained on CIFAR-10, a ResNet-50 (He et al., 2015) trained on ImageNet, and a ResNet-18 trained on CelebA. All networks are trained with batch normalization (Ioffe & Szegedy, 2015), weight decay, decreasing learning rate schedules, and augmented training data. We train for 32,000 steps (approximately 90 epochs) on ImageNet with a batch size of 1024 images, for 80,000 steps on CIFAR-10 with a batch size of 128, and 10,000 steps on CelebA with a batch size of 256. For ImageNet,
CIFAR-10 and CelebA, the baseline non-compressed model obtains a mean top-1 accuracy of 76.68%, 94.35% and 94.73% respectively. Our goal is to move beyond anecdotal observations, and to measure statistical deviations between populations of models. Thus, we report metrics and statistical significance for each dataset, model and compression variant across 30 independent trainings. 50% of Weights Pruned 70% of Weights Pruned Quantization Int8 Dynamic Range Absolute % Difference in Clase Recall Absolute % Difference in Cass Recall Absolute % Difference n Class ‘+ Normalized % Difference in Class Recall ‘+ Normalized % Difference in Class Recall + Normalized % Difference in Class 50% of Weights Pruned Absolute % Difference in Clase Recall ‘+ Normalized % Difference in Class Recall 70% of Weights Pruned Absolute % Difference in Cass Recall ‘+ Normalized % Difference in Class Recall Quantization Int8 Dynamic Range Absolute % Difference n Class Recall + Normalized % Difference in Class Recall Figure 2: Compression disproportionately impacts a small subset of ImageNet classes. Plum bars indicate the subset of examples where the impact of compression is statistically significant. Green scatter points show normalized recall difference which normalizes by overall change in model accuracy, and the bars show absolute recall difference. Left: 50% pruning. Center: 70% pruning. Right: post-training int8 dynamic range quantization. The class labels are sampled for readability. Pruning and quantization techniques considered We evaluate magnitude pruning as proposed by Zhu & Gupta (2017a). For pruning, we vary the end sparsity precisely for t ∈ {0.3, 0.5, 0.7, 0.9}. For example, t = 0.9 indicates that 90% of model weights are removed over the course of training, leaving a maximum of 10% non-zero weights. For each level of pruning t, we train 30 models from random initialization. We evaluate three different quantization techniques: float16 quantization float16 (Micikevicius et al., 2017), hybrid dynamic range quantization with int8 weights hybrid (Alvarez et al., 2016) and fixed-point only quantization with int8 weights created with a small representative dataset fixed-point (Vanhoucke et al., 2011; Jacob et al., 2018). All quantization methods we evaluate are implemented post-training, in contrast to the pruning which is applied progressively over the course of training. We use a limited grid search to tailor the pruning schedule and hyperparameters to each dataset to maximize top-1 accuracy. We include additional details about training methodology and pruning techniques in the supplementary material. All the code for this paper is publicly available here. 5 WHAT DO COMPRESSED DEEP NEURAL NETOWRKS FORGET? # 3 Results # 3.1 Disparate impact of compression We find consistent results across all datasets and compression techniques considered; a small subset of classes are disproportionately impacted. This disparate impact is far from random, with statistically significant differ- ences in class level recall between a population of non-compressed and compressed models. Compression induces “selective forgetting” with performance on certain classes evidencing far more sensitivity to varying the representation of the network. This sensitivity is amplified at higher levels of sparsity with more classes evidencing a statistically significant relative change in recall. For example, as seen in Table 2 at 50% sparsity 170 ImageNet classes are statistically significant which increases to 372 classes at 70% sparsity. 
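Table 2 reports, alongside these significance counts, the number of Pruning Identified Exemplars at each compression level. For reference, the following is a minimal sketch of the modal-label disagreement rule from Section 2.3 that produces such counts; the prediction arrays and helper functions are hypothetical, not the authors' released code.

```python
import numpy as np


def modal_labels(preds, num_classes):
    """Most frequent predicted class per image across a population of models.

    preds: (K, N) integer predictions from K models over N test images.
    """
    return np.array([np.bincount(col, minlength=num_classes).argmax()
                     for col in preds.T])


def find_pies(preds_baseline, preds_compressed, num_classes):
    """Boolean mask over the N test images marking Pruning Identified Exemplars.

    An image is a PIE iff the modal label of the compressed population differs
    from the modal label of the baseline population. The true label is never
    consulted, so the rule can be applied at test time without supervision.
    """
    modal_0 = modal_labels(preds_baseline, num_classes)
    modal_t = modal_labels(preds_compressed, num_classes)
    return modal_0 != modal_t
```

Summing `find_pies(...)` over the test set at each sparsity level would yield counts analogous to those reported in Table 2.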
Cannibalizing a small subset of classes Out of the classes where there is a statistically significant deviation in performance, we observe a subset of classes that benefit relative to the average class as well as classes that are impacted adversely. However, the average absolute class decrease in recall is far larger than the average increase, meaning that the losses in generalization caused by pruning is far more concentrated than the relative gains. Compression cannibalizes performance on a small subset of classes to preserve a similar overall top-line accuracy. Comparison of quantization and pruning techniques While all the techniques we benchmark evidence disparate class level impact, we note that quantization appears to introduce less disparate harm. For example, the most aggressive form of post-training quantization considered, fixed-point only quantization with int8 weights fixed-point, impacts the relative recall difference of 119 ImageNet classes in a statistically significant way. In contrast, at 90% sparsity, relative recall difference is statistically significant for 637 classes. These results suggest that the representation learnt by a network is far more robust to changes in precision versus removing the weights entirely. For sensitive tasks, quantization may be more viable for practitioners as there is less systematic disparate impact. Complexity of task The impact of compression depends upon the degree of overparameterization present in the network given the complexity of the task in question. For example, the ratio of classes that are significantly impacted by pruning was lower for CIFAR-10 than for ImageNet. One class out of ten was significantly impacted at 30% and 50%, and two classes were impacted at 90%. We suspect that we measured less disparate impact for CIFAR-10 because, while the model has less capacity, the number of weights is still sufficient to model the limited number of classes and lower dimensional dataset. In the next section, we leverage PIEs to characterize and gain intuition into why certain parts of the distribution are systematically far more sensitive to compression. # 3.2 Pruning Identified Exemplars To better understand why a narrow part of the data distributon is far more sensitive to compression, we (1) evaluate whether PIEs are more difficult for an algorithm to classify, (2) conduct a human study to codify the attributes of a sample of PIEs and Non-PIEs, and (3) evaluate whether PIEs over-index on underrepresented sensitive attributes in CelebA. At every level of compression, we identify a subset of PIE images that are disproportionately sensitive to the removal of weights (for each of CIFAR-10, CelebA and ImageNet). The number of images classified as PIE increases with the level of pruning. At 90% sparsity, we classify 10.27% of all ImageNet test-set images as PIEs, 2.16% of CIFAR-10, and 16.17% of CelebA. Test-error on PIEs In Fig. 3, we evaluate a random sample of (1) PIE images, (2) non-PIE images and (3) entire test-set for each of the datasets considered. We find that PIE images are far more challenging for a non-compressed model to classify. Evaluation on PIE images alone yields substantially lower top-1 accuracy. The results are consistent across CIFAR-10 (top-1 accuracy falls from 94.89% to 43.64%), CelebA (94.10% to 50.41%), and ImageNet datasets (76.75% to 39.81%). Notably, on ImageNet, we find that removing PIEs greatly improves generalization performance. 
Test-set accuracy on non-PIEs increased to 81.20% relative to baseline top-1 performance of 76.75%. 6 WHAT DO COMPRESSED DEEP NEURAL NETOWRKS FORGET? FRACTION PRUNED TOP 1 TOP 5 COUNT SIGNIF CLASSES COUNT PIES 0 30 50 70 90 76.68 76.46 75.87 75.02 72.60 93.25 93.17 92.86 92.43 91.10 - 68 170 372 637 - 1,819 2,193 3,073 5,136 QUANTIZATION 76.65 76.10 76.46 93.25 92.94 93.16 58 144 119 2019 2193 2093 Table 2: ImageNet top-1 and top-5 accuracy at all levels of pruning and quantization, averaged over all runs. Count PIEs is the count of images classified as a Pruning Identified Exemplars at every compression level. We include comparable tables for CelebA and CIFAR-10 in the appendix. Human study We conducted a human study (85 participants) to label a random sample of 1230 PIE and non-PIE ImageNet images. Humans in the study were shown a balanced sample of PIE and non-PIE images that were selected at random and shuffled. The classification as PIE or non-PIE was not known or available to the human.What makes PIEs different from non-PIEs? The participants were asked to codify a set of attributes for each image. We report the relative distribution of PIE and non-PIE after each attribute, with the higher relative share in bold: 1. ground truth label incorrect or inadequate – image contains insufficient information for a human to arrive at the correct ground truth label. [8.90% of non-PIEs, 20.05% of PIEs] 2. multiple-object image – image depicts multiple objects where a human may consider several labels to be appropriate (e.g., an image which depicts both a paddle and canoe or a desktop computer consisting of a screen, mouse, and monitor). [39.53% of non-PIE, 59.15 % of PIEs] 3. corrupted image – image exhibits common corruptions such as motion blur, contrast, pixelation. We also include in this category images with super-imposed text or an artificial frame as well as images that are black and white rather than the typical RGB color images in ImageNet. [14.37% of non-PIE, 13.72% of PIEs] 4. fine grained classification – image involves classifying an object that is semantically close to various other class categories present in the dataset (e.g., rock crab and fiddler crab, bassinet and cradle, cuirass and breastplate). [8.9% of non-PIEs, 43.55% of PIEs] 5. abstract representations – image depicts a class object in an abstract form such a cartoon, painting, or sculptured incarnation of the object. [3.43% of non-PIE, 5.76% of PIE] PIEs heavily over-index relative to non-PIEs on certain properties, such as having an incorrect ground truth label, involving a fine-grained classification task or multiple objects. This suggests that the task itself is often incorrectly specified. For example, while ImageNet is a single image classification tasks, 59% of ImageNet PIEs codified by humans were identified as multi-object images where multiple labels could be considered reasonable (vs. 39% of non-PIEs). In ImageNet, the over-indexing of incorrectly labelled data and multi-object images in PIE also raises questions about whether the explosion of growth in number of weights in deep neural networks is solving a problem that is better addressed in the data cleaning pipeline. # 4 Sensitivity of compressed models to distribution shift Non-compressed models have already been shown to be very brittle to small shifts in the distribution that humans are robust. This can cause unexpected changes in model behavior in the wild that can compromise human welfare (Zech et al., 2018). 
Here, we ask does compression amplify this brittleness? Understanding relative differences in robustness helps understand the implications for AI safety of the widespread use of compressed models. 7 WHAT DO COMPRESSED DEEP NEURAL NETOWRKS FORGET? # Top-1 Accuracy on PIE, All Test-Set, Non-PIE # CelebA CIFAR-10 # ImageNet 6 Top-1 test set Accuracy Pe Non pe, % Top-t test set Accuracy % Top-t test set Accuracy @ Pes Alte st “Set. on er Ali tese. “see on Pr, Allre et Sep 6 Top-1 test set Accuracy Pe Non pe, Allre et Sep % Top-t test set Accuracy Pes Alte st “Set. on er % Top-t test set Accuracy @ Ali tese. “see on Pr, Figure 3: A comparison of model performance on 1) a sample of Pruning Identified Exemplars (PIE), 2) the entire test-set and 3) a sample excluding PIEs. Inference on the non-PIE sample improves test-set top-1 accuracy relative to the baseline for ImageNet. Evaluation on PIE images alone yields substantially lower top-1 accuracy. ImageNet-A ImageNet-C 0: = = ru -10- x B20 ® 5 -30 U ) cae Y 50 =] 3 60 a 4 3 m= norm top 1 accuracy mmm norm top 5 accuracy Model Sparsity | ee _ HLA oO = -10 a Q fe} - -20 wo == contrast 2 mmm frosted_glass_blur © —30- mmm gaussian_noise 2 l= zoom_blur mmm shot_noise | iS Oo impulse_noise Model Sparsity Figure 4: High levels of compression amplify sensitivity to distribution shift. Left: Change in top-1 and top-5 recall of a pruned model relative to a non-pruned model on ImageNet-A. Right: We measure the top-1 test-set performance on a subset of ImageNet-C corruptions of a pruned model relative to the non-pruned model on the same corruption. An extended list of all corruptions considered and top-5 accuracy is included in the supplementary material. To answer this question, we evaluate the sensitivity of pruned models relative to non-pruned models given two open-source benchmarks for robustness: 1. ImageNet-C (Hendrycks & Dietterich, 2019) – 16 algorithmically generated corruptions (blur, noise, fog) applied to the ImageNet test-set. 2. ImageNet-A (Hendrycks et al., 2019) – a curated test set of 7, 500 naturally adversarial images designed to produce drastically lower test accuracy. For each ImageNet-C corruption q ∈ Q, we compare top-1 accuracy of the pruned model evaluated on corruption q normalized by non-pruned model performance on the same corruption. We average across intensities of corruptions as described by Hendrycks & Dietterich (2019). If the relative top-1 accuracy was 0 it would mean that there is no difference in sensitivity to corruptions considered. As seen in Fig. 4, pruning greatly amplifies sensitivity to both ImageNet-C and ImageNet-A relative to non-pruned performance on the same inputs. For ImageNet-C, it is worth noting that relative degradation 8 WHAT DO COMPRESSED DEEP NEURAL NETOWRKS FORGET? in performance is remarkably varied across corruptions, with certain corruptions such as gaussian, shot noise, and impulse noise consistently causing far higher relative degradation. At t = 90, the high- est degradation in relative top-1 is shot noise (−40.11%) and the lowest relative drop is brightness (−7.73%). Sensitivity to small distribution shifts is amplified at higher levels of sparsity. We include results for all corruptions and the absolute top-1 and top-5 accuracy on each corruption, level of pruning considered in the supplementary material Table. 8. 
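A short sketch of the relative corruption metric described above is given below; the input containers and the function name are illustrative assumptions, and per-corruption accuracies are assumed to have been measured already.

```python
import numpy as np


def relative_corruption_accuracy(acc_pruned, acc_baseline):
    """Percent change in corruption accuracy of a pruned model vs. the baseline.

    acc_pruned, acc_baseline: dicts mapping a corruption name to an array of
        top-1 accuracies, one entry per severity level of that corruption.
    A value of 0 means the pruned model is no more sensitive to the corruption
    than the non-pruned model; -40 means a 40% relative drop.
    """
    relative = {}
    for corruption, acc_p in acc_pruned.items():
        acc_b = np.asarray(acc_baseline[corruption], dtype=float)
        acc_p = np.asarray(acc_p, dtype=float)
        # Average over severity levels first, then normalize by the baseline.
        relative[corruption] = 100.0 * (acc_p.mean() - acc_b.mean()) / acc_b.mean()
    return relative
```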
The amplified sensitivity of smaller models to distribution shifts and the over-indexing of PIEs on low frequency attributes suggests that much of a models excess capacity is helpful for learning features which aid generalization on atypical or out-of-distribution data points. This builds upon recent work which suggests memorization can benefit generalization properties (Feldman & Zhang, 2020). # 5 Related work The set of model compression techniques is diverse and includes research directions such as reducing the precision or bit size per model weight (quantization) (Jacob et al., 2018; Courbariaux et al., 2014; Hubara et al., 2016; Gupta et al., 2015), efforts to start with a network that is more compact with fewer parameters, layers or computations (architecture design) (Howard et al., 2017; Iandola et al., 2016; Kumar et al., 2017), student networks with fewer parameters that learn from a larger teacher model (model distillation) (Hinton et al., 2015) and finally pruning by setting a subset of weights or filters to zero (Louizos et al., 2017; Wen et al., 2016; Cun et al., 1990; Hassibi et al., 1993b; Ström, 1997; Hassibi et al., 1993a; Zhu & Gupta, 2017; See et al., 2016; Narang et al., 2017). In this work, we evaluate the dis-aggregated impact of a subset of pruning and quantization methods. Despite the widespread use of compression techniques, articulating the trade-offs of compression has overwhelming centered on change to overall accuracy for a given level of compression (Ström, 1997; Cun et al., 1990; Evci et al., 2019; Narang et al., 2017; Gale et al., 2019). Our work is the first to our knowledge that asks how dis-aggregated measures of model performance at a class and exemplar level are impacted by compression. In section 4, we also measure sensitivity to two types of distribution shift – ImageNet-A and ImageNet-C. Recent work by (Guo et al., 2018; Sehwag et al., 2019) has considered sensitivity of pruned models to a a different notion of robustness: l − p norm adversarial attacks. In contrast to adversarial robustness which measures the worst-case performance on targeted perturbation, our results provide some understanding of how compressed models perform on subsets of challenging or corrupted natural image examples. Zhou et al. (2019) conduct an experiment which shows that networks which are pruned subsequent to training are more sensitive to the corruption of labels at training time. # 6 Discussion and Future Work The quantization and pruning techniques we evaluate in this paper are already widely used in production systems and integrated with popular deep learning libraries. The popularity and widespread use of these techniques is driven by the severe resource constraints of deploying models to mobile phones or embedded devices (Samala et al., 2018). Many of the algorithms on your phone are likely pruned or compressed in some way. Our results suggest that a reliance on top-line metrics such as top-1 or top-5 test-set accuracy hides critical details in the ways that compression impacts model generalization. Caution should be used before deploying compressed models to sensitive domains such as hiring, health care diagnostics, self-driving cars, facial recognition software. For these domains, the introduction of pruning may be at odds with the need to guarantee a certain level of recall or performance for certain subsets of the dataset. 
Role of Capacity in Deep Neural Networks A “bigger is better” race in the number of model parameters has gripped the field of machine learning (Canziani et al., 2016; Strubell et al., 2019). However, the role of additional weights is not well understood. The over-indexing of PIEs on low frequency attributes suggest that non-compressed networks use the majority of capacity to encode a useful representation for these examples. 9 WHAT DO COMPRESSED DEEP NEURAL NETOWRKS FORGET? This costly approach to learning an appropriate mapping for a small subset of examples may be better solved in the data pipeline. Auditing and improving compressed models Our methodology offers one way for humans to better un- derstand the trade-offs incurred by compression and surface challenging examples for human judgement. Identifying harm is the first step in proposing a remedy, and we anticipate our work may spur focus on developing new compression techniques that improve upon the disparate impact we identify and characterize in this work. Limitations There is substantial ground we were not able to address within the scope of this work. Open questions remain about the implications of these findings for other possible desirable objectives such as fairness.Underserved areas worthy of future consideration include evaluating the impact of compression on additional domains such as language and audio, and leveraging these insights to explicitly optimize for compressed models that also minimize the disparate impact on underrepresented data attributes. # Acknowledgements We thank the generosity of our peers for valuable input on earlier versions of this work. In particular, we would like to acknowledge the input of Jonas Kemp, Simon Kornblith, Julius Adebayo, Hugo Larochelle, Dumitru Erhan, Nicolas Papernot, Catherine Olsson, Cliff Young, Martin Wattenberg, Utku Evci, James Wexler, Trevor Gale, Melissa Fabros, Prajit Ramachandran, Pieter Kindermans, Erich Elsen and Moustapha Cisse. We thank R6 from ICML 2021 for pointing out some improvements to the formulation of the class level metrics. We thank the institutional support and encouragement of Natacha Mainville and Alexander Popper. # References Alvarez, R., Prabhavalkar, R., and Bakhtin, A. On the efficient representation and execution of deep acoustic models. Interspeech 2016, Sep 2016. doi: 10.21437/interspeech.2016-128. URL http://dx.doi.org/ 10.21437/Interspeech.2016-128. Anderson, T. W. and Darling, D. A. A test of goodness of fit. Journal of the American Statistical Association, 49(268):765–769, 1954. ISSN 01621459. URL http://www.jstor.org/stable/2281537. Bartlett, P. L. and Wegkamp, M. H. Classification with a reject option using a hinge loss. J. Mach. Learn. Res., 9:1823–1840, June 2008. ISSN 1532-4435. URL http://dl.acm.org/citation.cfm?id=1390681. 1442792. Canziani, A., Paszke, A., and Culurciello, E. An Analysis of Deep Neural Network Models for Practical Applications. arXiv e-prints, art. arXiv:1605.07678, May 2016. Caruana, R. Case-based explanation for artificial neural nets. In Malmgren, H., Borga, M., and Niklasson, L. (eds.), Artificial Neural Networks in Medicine and Biology, pp. 303–308, London, 2000. Springer London. ISBN 978-1-4471-0513-8. Casey, B., Giedd, J. N., and Thomas, K. M. Structural and functional brain development and its relation to cognitive development. Biological Psychology, 54(1):241 – 257, 2000. ISSN 0301-0511. doi: https://doi.org/10.1016/S0301-0511(00)00058-2. 
URL http://www.sciencedirect.com/science/ article/pii/S0301051100000582. Chen, Y., Emer, J., and Sze, V. Eyeriss: A spatial architecture for energy-efficient dataflow for convolutional neural networks. In 2016 ACM/IEEE 43rd Annual International Symposium on Computer Architecture (ISCA), pp. 367–379, June 2016. doi: 10.1109/ISCA.2016.40. Collins, M. D. and Kohli, P. Memory Bounded Deep Convolutional Networks. ArXiv e-prints, December 2014. Collins, M. D. and Kohli, P. Memory bounded deep convolutional networks. CoRR, abs/1412.1442, 2014. URL http://arxiv.org/abs/1412.1442. 10 WHAT DO COMPRESSED DEEP NEURAL NETOWRKS FORGET? In Lee, D. D., Sugiyama, M., Luxburg, U. V., Guyon, I., and Garnett, R. (eds.), Advances in Neural Information Processing Sys- tems 29, pp. 1660–1668. Curran Associates, Inc., 2016. URL http://papers.nips.cc/paper/ 6336-boosting-with-abstention.pdf. Courbariaux, M., Bengio, Y., and David, J.-P. Training deep neural networks with low precision multiplica- tions. arXiv e-prints, art. arXiv:1412.7024, Dec 2014. Cun, Y. L., Denker, J. S., and Solla, S. A. Optimal brain damage. In Advances in Neural Information Processing Systems, pp. 598–605. Morgan Kaufmann, 1990. D’Agostino, R. B. and Stephens, M. A. (eds.). Goodness-of-fit Techniques. Marcel Dekker, Inc., New York, NY, USA, 1986. ISBN 0-824-77487-6. Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., and Fei-Fei, L. ImageNet: A Large-Scale Hierarchical Image Database. In CVPR09, 2009. Evci, U., Gale, T., Menick, J., Castro, P. S., and Elsen, E. Rigging the lottery: Making all tickets winners, 2019. Feldman, V. and Zhang, C. What Neural Networks Memorize and Why: Discovering the Long Tail via Influence Estimation. arXiv e-prints, art. arXiv:2008.03703, August 2020. Gale, T., Elsen, E., and Hooker, S. The state of sparsity in deep neural networks. CoRR, abs/1902.09574, 2019. URL http://arxiv.org/abs/1902.09574. Gordon, A., Eban, E., Nachum, O., Chen, B., Wu, H., Yang, T.-J., and Choi, E. Morphnet: Fast & simple resource-constrained structure learning of deep networks. 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Jun 2018. doi: 10.1109/cvpr.2018.00171. URL http://dx.doi.org/ 10.1109/CVPR.2018.00171. Guo, C., Pleiss, G., Sun, Y., and Weinberger, K. Q. On Calibration of Modern Neural Networks. arXiv e-prints, art. arXiv:1706.04599, Jun 2017. Guo, Y., Yao, A., and Chen, Y. Dynamic network surgery for efficient dnns. CoRR, abs/1608.04493, 2016. URL http://arxiv.org/abs/1608.04493. Guo, Y., Zhang, C., Zhang, C., and Chen, Y. Sparse dnns with improved adversarial robustness. In Bengio, S., Wallach, H., Larochelle, H., Grauman, K., Cesa-Bianchi, N., and Garnett, R. (eds.), Advances in Neural Information Processing Systems 31, pp. 242–251. Curran Associates, Inc., 2018. URL http: //papers.nips.cc/paper/7308-sparse-dnns-with-improved-adversarial-robustness.pdf. Gupta, S., Agrawal, A., Gopalakrishnan, K., and Narayanan, P. Deep learning with limited numerical precision. CoRR, abs/1502.02551, 2015. URL http://arxiv.org/abs/1502.02551. Han, S., Pool, J., Tran, J., and Dally, W. J. Learning both Weights and Connections for Efficient Neural Network. In NIPS, pp. 1135–1143, 2015. Hassibi, B., Stork, D. G., and Com, S. C. R. Second order derivatives for network pruning: Optimal brain surgeon. In Advances in Neural Information Processing Systems 5, pp. 164–171. Morgan Kaufmann, 1993a. Hassibi, B., Stork, D. G., and Wolff, G. J. Optimal brain surgeon and general network pruning. 
In IEEE International Conference on Neural Networks, pp. 293–299 vol.1, March 1993b. doi: 10.1109/ICNN.1993. 298572. He, K., Zhang, X., Ren, S., and Sun, J. Deep Residual Learning for Image Recognition. ArXiv e-prints, December 2015. Hendrycks, D. and Dietterich, T. Benchmarking neural network robustness to common corruptions In International Conference on Learning Representations, 2019. URL https: and perturbations. //openreview.net/forum?id=HJz6tiCqYm. Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., and Song, D. Natural Adversarial Examples. arXiv e-prints, art. arXiv:1907.07174, Jul 2019. 11 WHAT DO COMPRESSED DEEP NEURAL NETOWRKS FORGET? Hinton, G., Vinyals, O., and Dean, J. Distilling the Knowledge in a Neural Network. arXiv e-prints, art. arXiv:1503.02531, Mar 2015. Hooker, S., Erhan, D., Kindermans, P.-J., and Kim, B. A benchmark for interpretability methods in deep neural networks. In NeurIPS 2019, 2019. Howard, A. G., Zhu, M., Chen, B., Kalenichenko, D., Wang, W., Weyand, T., Andreetto, M., and Adam, H. MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications. ArXiv e-prints, April 2017. Hubara, I., Courbariaux, M., Soudry, D., El-Yaniv, R., and Bengio, Y. Quantized neural networks: Training neural networks with low precision weights and activations. CoRR, abs/1609.07061, 2016. URL http: //arxiv.org/abs/1609.07061. Huber-Carol, C., Balakrishnan, N., Nikulin, M., and Mesbah, M. Goodness-of-Fit Tests and Model Validity. ISBN 9780817642099. URL Goodness-of-fit Tests and Model Validity. Birkhäuser Boston, 2002. https://books.google.com/books?id=gUMcv2_NrhkC. Iandola, F. N., Han, S., Moskewicz, M. W., Ashraf, K., Dally, W. J., and Keutzer, K. SqueezeNet: AlexNet- level accuracy with 50x fewer parameters and <0.5MB model size. ArXiv e-prints, February 2016. Ioffe, S. and Szegedy, C. Batch normalization: Accelerating deep network training by reducing internal covariate shift. CoRR, abs/1502.03167, 2015. URL http://arxiv.org/abs/1502.03167. Jacob, B., Kligys, S., Chen, B., Zhu, M., Tang, M., Howard, A., Adam, H., and Kalenichenko, D. Quantization and training of neural networks for efficient integer-arithmetic-only inference. 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Jun 2018. doi: 10.1109/cvpr.2018.00286. URL http: //dx.doi.org/10.1109/CVPR.2018.00286. Kalchbrenner, N., Elsen, E., Simonyan, K., Noury, S., Casagrande, N., Lockhart, E., Stimberg, F., van den Oord, A., Dieleman, S., and Kavukcuoglu, K. Efficient Neural Audio Synthesis. In Proceedings of the 35th International Conference on Machine Learning, ICML 2018, Stockholmsmässan, Stockholm, Sweden, July 10-15, 2018, pp. 2415–2424, 2018. Kendall, A. and Gal, Y. What uncertainties do we need in bayesian deep learning for computer vision? In Guyon, I., Luxburg, U. V., Bengio, S., Wallach, H., Fergus, R., Vishwanathan, S., and Garnett, R. (eds.), Advances in Neural Information Processing Systems 30, pp. 5574–5584. Curran Associates, Inc., 2017. Kim, B., Khanna, R., and Koyejo, O. O. Examples are not enough, learn to criticize! criticism for interpretability. In Lee, D. D., Sugiyama, M., Luxburg, U. V., Guyon, I., and Garnett, R. (eds.), Advances in Neural Information Processing Systems 29, pp. 2280–2288. Curran Associates, Inc., 2016. Kolb, B. and Whishaw, I. Fundamentals of Human Neuropsychology. A series of books in psychology. Worth Publishers, 2009. ISBN 9780716795865. Krizhevsky, A. Learning multiple layers of features from tiny images. University of Toronto, 05 2012. 
Kumar, A., Goyal, S., and Varma, M. Resource-efficient machine learning in 2 KB RAM for the internet of things. In Precup, D. and Teh, Y. W. (eds.), Proceedings of the 34th International Conference on Machine Learning, volume 70 of Proceedings of Machine Learning Research, pp. 1935–1944, International Convention Centre, Sydney, Australia, 06–11 Aug 2017. PMLR. URL http://proceedings.mlr. press/v70/kumar17a.html. Lattner, C., Amini, M., Bondhugula, U., Cohen, A., Davis, A., Pienaar, J., Riddle, R., Shpeisman, T., Vasilache, N., and Zinenko, O. Mlir: A compiler infrastructure for the end of moore’s law, 2020. Lee, N., Ajanthan, T., and Torr, P. H. S. SNIP: single-shot network pruning based on connection sensitivity. CoRR, abs/1810.02340, 2018. URL http://arxiv.org/abs/1810.02340. Leibig, C., Allken, V., Ayhan, M. S., Berens, P., and Wahl, S. Leveraging uncertainty information from deep neural networks for disease detection. Scientific Reports, 7, 12 2017. doi: 10.1038/s41598-017-17876-z. Li, Z., Wallace, E., Shen, S., Lin, K., Keutzer, K., Klein, D., and Gonzalez, J. E. Train large, then compress: Rethinking model size for efficient training and inference of transformers, 2020. Liu, Z., Luo, P., Wang, X., and Tang, X. Deep learning face attributes in the wild. In Proceedings of International Conference on Computer Vision (ICCV), December 2015. 12 WHAT DO COMPRESSED DEEP NEURAL NETOWRKS FORGET? Liu, Z., Li, J., Shen, Z., Huang, G., Yan, S., and Zhang, C. Learning Efficient Convolutional Networks through Network Slimming. ArXiv e-prints, August 2017. Louizos, C., Welling, M., and Kingma, D. P. Learning Sparse Neural Networks through L_0 Regularization. ArXiv e-prints, December 2017. Micikevicius, P., Narang, S., Alben, J., Diamos, G., Elsen, E., Garcia, D., Ginsburg, B., Houston, M., Kuchaiev, O., Venkatesh, G., and Wu, H. Mixed Precision Training. arXiv e-prints, art. arXiv:1710.03740, October 2017. Narang, S., Elsen, E., Diamos, G., and Sengupta, S. Exploring Sparsity in Recurrent Neural Networks. arXiv e-prints, art. arXiv:1704.05119, Apr 2017. Nowlan, S. J. and Hinton, G. E. Simplifying neural networks by soft weight-sharing. Neural Computation, 4 (4):473–493, 1992. doi: 10.1162/neco.1992.4.4.473. URL https://doi.org/10.1162/neco.1992.4. 4.473. Rakic, P., Bourgeois, J.-P., and Goldman-Rakic, P. S. Synaptic development of the cerebral cortex: implica- tions for learning, memory, and mental illness. In Pelt, J. V., Corner, M., Uylings, H., and Silva, F. L. D. (eds.), The Self-Organizing Brain: From Growth Cones to Functional Networks, volume 102 of Progress in Brain Research, pp. 227 – 243. Elsevier, 1994. doi: https://doi.org/10.1016/S0079-6123(08)60543-9. URL http://www.sciencedirect.com/science/article/pii/S0079612308605439. Reagen, B., Whatmough, P., Adolf, R., Rama, S., Lee, H., Lee, S. K., Hernández-Lobato, J. M., Wei, G., and Brooks, D. Minerva: Enabling low-power, highly-accurate deep neural network accelerators. In 2016 ACM/IEEE 43rd Annual International Symposium on Computer Architecture (ISCA), pp. 267–278, June 2016. doi: 10.1109/ISCA.2016.32. Samala, R. K., Chan, H.-P., Hadjiiski, L. M., Helvie, M. A., Richter, C., and Cha, K. Evolutionary pruning of transfer learned deep convolutional neural network for breast cancer diagnosis in digital breast tomosynthesis. Physics in Medicine & Biology, 63(9):095005, may 2018. doi: 10.1088/1361-6560/aabb5b. See, A., Luong, M.-T., and Manning, C. D. Compression of Neural Machine Translation Models via Pruning. arXiv e-prints, art. 
arXiv:1606.09274, Jun 2016. Sehwag, V., Wang, S., Mittal, P., and Jana, S. Towards compact and robust deep neural networks. CoRR, abs/1906.06110, 2019. URL http://arxiv.org/abs/1906.06110. Sowell, E. R., Thompson, P. M., Leonard, C. M., Welcome, S. E., Kan, E., and Toga, A. W. Longitudinal mapping of cortical thickness and brain growth in normal children. Journal of Neuroscience, 24(38): 8223–8231, 2004. doi: 10.1523/JNEUROSCI.1798-04.2004. URL https://www.jneurosci.org/ content/24/38/8223. Strubell, E., Ganesh, A., and McCallum, A. Energy and Policy Considerations for Deep Learning in NLP. arXiv e-prints, art. arXiv:1906.02243, June 2019. Ström, N. Sparse connection and pruning in large dynamic artificial neural networks, 1997. Tessera, K., Hooker, S., and Rosman, B. Keep the gradients flowing: Using gradient flow to study sparse network optimization. CoRR, abs/2102.01670, 2021. URL https://arxiv.org/abs/2102.01670. Theis, L., Korshunova, I., Tejani, A., and Huszár, F. Faster gaze prediction with dense networks and Fisher pruning. CoRR, abs/1801.05787, 2018. URL http://arxiv.org/abs/1801.05787. Ullrich, K., Meeds, E., and Welling, M. Soft Weight-Sharing for Neural Network Compression. CoRR, abs/1702.04008, 2017. Valin, J. and Skoglund, J. Lpcnet: Improving Neural Speech Synthesis Through Linear Prediction. CoRR, abs/1810.11846, 2018. URL http://arxiv.org/abs/1810.11846. Vanhoucke, V., Senior, A., and Mao, M. Z. Improving the speed of neural networks on cpus. In Deep Learning and Unsupervised Feature Learning Workshop, NIPS 2011, 2011. Weigend, A. S., Rumelhart, D. E., and Huberman, B. A. Generalization by weight-elimination with application to forecasting. In Lippmann, R. P., Moody, J. E., and Touretzky, D. S. (eds.), Advances in Neural Information Processing Systems 3, pp. 875–882. Morgan-Kaufmann, 1991. 13 WHAT DO COMPRESSED DEEP NEURAL NETOWRKS FORGET? Welch, B. L. The generalization of ‘Student’s’ problem when several different population variances are involved. Biometrika, 34:28–35, 1947. ISSN 0006-3444. doi: 10.2307/2332510. URL https://doi. org/10.2307/2332510. Wen, W., Wu, C., Wang, Y., Chen, Y., and Li, H. Learning Structured Sparsity in Deep Neural Networks. ArXiv e-prints, August 2016. Zagoruyko, S. and Komodakis, N. Wide residual networks. CoRR, abs/1605.07146, 2016. URL http: //arxiv.org/abs/1605.07146. Zech, J. R., Badgeley, M. A., Liu, M., Costa, A. B., Titano, J. J., and Oermann, E. K. Variable generalization performance of a deep learning model to detect pneumonia in chest radiographs: A cross-sectional study. PLOS Medicine, 15(11):1–17, 11 2018. doi: 10.1371/journal.pmed.1002683. URL https: //doi.org/10.1371/journal.pmed.1002683. Zhang, J. Selecting typical instances in instance-based learning. In Sleeman, D. and Edwards, P. (eds.), Machine Learning Proceedings 1992, pp. 470 – 479. Morgan Kaufmann, San Francisco (CA), 1992. ISBN 978-1-55860-247-2. doi: https://doi.org/10.1016/B978-1-55860-247-2.50066-8. URL http: //www.sciencedirect.com/science/article/pii/B9781558602472500668. Zhou, W., Veitch, V., Austern, M., Adams, R. P., and Orbanz, P. Non-vacuous generalization bounds at the imagenet scale: a pac-bayesian compression approach. In ICLR, 2019. Zhu, M. and Gupta, S. To prune, or not to prune: exploring the efficacy of pruning for model compression. ArXiv e-prints, October 2017. Zhu, M. and Gupta, S. To prune, or not to prune: exploring the efficacy of pruning for model compression. CoRR, abs/1710.01878, 2017a. URL http://arxiv.org/abs/1710.01878. Zhu, M. and Gupta, S. 
To prune, or not to prune: exploring the efficacy of pruning for model compression, 2017b. 14 WHAT DO COMPRESSED DEEP NEURAL NETOWRKS FORGET? # Appendix # A Pruning and quantization techniques considered Magnitude pruning There are various pruning methodologies that use the absolute value of weights to rank their importance and remove weights that are below a user-specified threshold (Collins & Kohli, 2014; Guo et al., 2016; Zhu & Gupta, 2017a). These works largely differ in whether the weights are removed permanently or can “recover" by still receiving subsequent gradient updates. This would allow certain weights to become non-zero again if pruned incorrectly. While magnitude pruning is often used as a criteria to remove individual weights, it can be adapted to remove entire neurons or filters by extending the ranking criteria to a set of weights and setting the threshold appropriately (Gordon et al., 2018). In this work, we use the magnitude pruning methodology as proposed by Zhu & Gupta (2017a). It has been shown to outperform more sophisticated Bayesian pruning methods and is considered state-of-the-art across both computer vision and language models (Gale et al., 2019). The choice of magnitude pruning also allowed us to specify and precisely vary the final model sparsity for purposes of our analysis, unlike regularizer approaches that allow the optimization process itself to determine the final level of sparsity (Liu et al., 2017; Louizos et al., 2017; Collins & Kohli, 2014; Wen et al., 2016; Weigend et al., 1991; Nowlan & Hinton, 1992). Quantization All networks were trained with 32-bit floating point weights and quantized post-training. This means there is no additional gradient updates to the weights post-quantization. In this work, we evaluate three different quantization methods. The first type replaces the weights with 16-bit floating point weights (Micikevicius et al., 2017). The second type quantizes all weights to 8-bit integer values (Alvarez et al., 2016). The third type uses the first 100 training examples of each dataset as representative examples for the fixed-point only models. We chose to benchmark these quantization methods in part because each has open source code available. We use TensorFlow Lite with MLIR (Lattner et al., 2020). # B Pruning Protocol We prune over the course of training to obtain a target end pruning level t ∈ {0.0, 0.1, 0.3, 0.5, 0.7, 0.9}. Removed weights continue to receive gradient updates after being pruned. These hyperparameter choices were based upon a limited grid search which suggested that these particular settings minimized degradation to test-set accuracy across all pruning levels. We note that for CelebA we were able to still converge to a comparable final performance at much higher levels of pruning t ∈ {0.95, 0.99}. We include these results, and note that the tolerance for extremely high levels of pruning may be related the relative difficulty of the task. Unlike CIFAR-10 and ImageNet which involve more than 2 classes (10 and 1000 respectively), CelebA is a binary classification problem. Here, the task is predicting hair color Y = {blonde, dark haired}. Quantization techniques are applied post-training - the weights are not re-calibrated after quantizing. Figure 1 shows the distributions of model accuracy across model populations for the pruned and quantized models for ImageNet, CIFAR-10 and CelebA. Table. 4 and Table. 5 include top-line metrics for all compression methods considered. 
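The three post-training quantization variants described in Appendix A correspond to standard options of the TensorFlow Lite converter. The sketch below approximates that workflow under stated assumptions (a SavedModel on disk and a small iterable of representative input batches); it is not the authors' exact conversion script, and the function and argument names are ours.

```python
import tensorflow as tf


def quantize_post_training(saved_model_dir, mode, representative_examples=None):
    """Post-training quantization along the lines of Appendix A.

    mode: "float16", "dynamic_int8" (hybrid dynamic range), or "fixed_point_int8".
    representative_examples: iterable of float input batches (e.g., the first
        100 training examples), required only for fixed-point int8 conversion.
    """
    converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
    converter.optimizations = [tf.lite.Optimize.DEFAULT]

    if mode == "float16":
        # Store weights as 16-bit floats.
        converter.target_spec.supported_types = [tf.float16]
    elif mode == "fixed_point_int8":
        # Full-integer quantization calibrated on a small representative dataset.
        def representative_dataset():
            for batch in representative_examples:
                yield [tf.cast(batch, tf.float32)]
        converter.representative_dataset = representative_dataset
        converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
    # "dynamic_int8" needs no extra flags: Optimize.DEFAULT quantizes the
    # weights to int8 while keeping activations in float at runtime.

    return converter.convert()
```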
# C Human study We conducted a human study (involving 85 volunteers) to label a random sample of 1230 PIE and non-PIE ImageNet images. Humans in the study were shown a balanced sample of PIE and non-PIE images that were selected at random and shuffled. The classification as PIE or non-PIE was not known or available to the human. Participants answered the following questions for each image that was presented: Does label 1 accurately label an object in the image? (0/1) • Does this image depict a single object? (0/1) • Would you consider labels 1, 2 and 3 to be semantically very close to each other? (does this image require fine grained classification) (0/1) 15 WHAT DO COMPRESSED DEEP NEURAL NETOWRKS FORGET? ImageNet Robustness to ImageNet-C Corruptions (By Level of Pruning) Pruning Fraction Corruption Type Top-1 0.0 0.7 0.9 brightness brightness brightness 69.49 67.50 64.12 88.98 87.86 85.63 0.00 -2.87 -7.74 0.00 -1.25 -3.77 0.0 0.7 0.9 contrast contrast contrast 42.30 41.34 38.04 61.80 61.58 58.43 0.00 -2.26 -10.06 0.00 -0.36 -5.45 0.0 0.7 0.9 defocus_blur defocus_blur defocus_blur 49.77 47.49 44.69 72.45 70.69 68.26 0.00 -4.58 -10.22 0.00 -2.43 -5.79 0.0 0.7 0.9 elastic elastic elastic 57.09 55.09 52.81 76.71 75.29 73.62 0.00 -3.51 -7.50 0.00 -1.85 -4.02 0.0 0.7 0.9 fog fog fog 56.21 54.46 50.36 79.25 78.25 75.10 0.00 -3.12 -10.41 0.00 -1.25 -5.23 0.0 0.7 0.9 frosted_glass_blur frosted_glass_blur frosted_glass_blur 40.89 38.75 36.87 60.51 58.68 57.02 0.00 -5.23 -9.83 0.00 -3.03 -5.78 0.0 0.7 0.9 gaussian_noise gaussian_noise gaussian_noise 45.43 42.01 32.88 65.67 62.40 51.49 0.00 -7.53 -27.64 0.00 -4.98 -21.59 0.0 0.7 0.9 impulse_noise impulse_noise impulse_noise 42.23 37.91 25.29 63.16 58.82 43.13 0.00 -10.24 -40.12 0.00 -6.87 -31.70 0.0 0.7 0.9 jpeg_compression jpeg_compression jpeg_compression 65.75 63.47 60.57 86.25 84.81 82.77 0.00 -3.47 -7.88 0.00 -1.68 -4.04 0.0 0.7 0.9 pixelate pixelate pixelate 57.34 54.93 51.31 78.05 76.17 72.98 0.00 -4.21 -10.51 0.00 -2.41 -6.50 0.0 0.7 0.9 shot_noise shot_noise shot_noise 43.82 39.88 30.80 64.06 60.04 48.86 0.00 -8.99 -29.71 0.00 -6.28 -23.72 Table 3: Pruned models are more sensitive to image corruptions that are meaningless to a human. We measure the average top-1 and top-5 test set accuracy of models trained to varying levels of pruning on the ImageNet-C test-set (the models were trained on uncorrupted ImageNet). For each corruption type, we report the average accuracy of 50 trained models relative to the baseline models across all 5 levels of pruning. 16 WHAT DO COMPRESSED DEEP NEURAL NETOWRKS FORGET? CelebA Fraction Pruned Top 1 # PIEs 0 0.3 0.5 0.7 0.9 0.95 0.99 94.73 94.75 94.81 94.44 94.07 93.39 90.98 - 555 638 990 3229 5057 8754 Quantization hybrid int8 fixed-point int8 Top 1 94.65 94.65 # PIEs 404 414 Table 4: CelebA top-1 accuracy at all levels of pruning, averaged over runs. The task we consider for CelebA is a binary classification method. We consider exemplar level divergence and classify Pruning Identified Exemplars as the examples where the modal label differs between a population of 30 compressed and non-compressed models. Note that the CelebA task is a binary classification task to predict whether the celebrity is blond or non-blond. Thus, there are only two classes. PIE NON_PIE 3% 14.4% 11.9% 53 1) atypical cl \ | 5 2) noisy 40.8% both 1) atypical and 2) noisy neither 2.4% 47.7% 41.4% \ 6.2% Figure 5: A pie chart of the codified attributes of a sample of pruning identified examplars (PIEs) and non-PIE images. 
The human study shows that PIEs over-index on both noisy exemplars with partial or corrupt information (corrupted images, incorrect labels, multi-object images) and/or atypical or challenging images (abstract representation, fine grained classification). • Do you consider the object in the image to be a typical exemplar for the class indicated by label 1? (0/1) • Is the image quality corrupted (some common image corruptions – overlaid text, brightness, contrast, filter, defocus blur, fog, jpeg compression, pixelate, shot noise, zoom blur, black and white vs. rgb)? (0/1) • Is the object in the image an abstract representation of the class indicated by label 1? [[an abstract representation is an object in an abstract form, such as a painting, drawing or rendering using a different material.]] (0/1) We find that PIEs heavily over-index relative to non-PIEs on both noisy examples with corrupted information (incorrect ground truth label, multiple objects, image corruption) and atypical or challenging examples (fine-grained classification task, abstract representation). We include the per attribute relative representation of PIE vs. Non-PIE for the study (in Figure. 7). 17 WHAT DO COMPRESSED DEEP NEURAL NETOWRKS FORGET? ImageNet Fraction Pruned Top 1 # Signif classes # PIEs 0 30 50 70 90 76.68 76.46 75.87 75.02 72.60 - 68 170 372 637 - 1,819 2,193 3,073 5,136 Quantization float16 dynamic range int8 fixed-point int8 76.65 76.10 76.46 58 144 119 2019 2193 2093 CIFAR-10 Top 1 # Signif classes # PIEs 0 30 50 70 90 94.53 94.47 94.39 94.30 94.14 - 1 1 0 2 - 114 144 137 216 Table 5: CIFAR-10 and ImageNet top-1 accuracy at all levels of pruning, averaged over 30 runs. Top-5 accuracy for CIFAR-10 was 99.8% for all levels of pruning. The third column is the number of classes significantly impacted by pruning. # D Benchmarks to evaluate robustness ImageNet-A Extended Results ImageNet-A is a curated test set of 7, 500 natural adversarial images designed to produce drastically low test accuracy. We find that the sensitivity of pruned models to ImageNet-A mirrors the patterns of degradation to ImageNet-C and sets of PIEs. As pruning increases, top-1 and top-5 accuracy further erode, suggesting that pruned models are more brittle to adversarial examples. Table 8 includes relative and absolute sensitivity at all levels of compression considered. For each robustness benchmark and level of pruning that we evaluate, we average model robustness over 5 models independently trained from random initialization. Pop in" true label: airplane airplane = automobile truck horse non-sparse: airplane airplane = automobile automobile cat sparse: ship ship truck cat horse Figure 6: Visualization of Pruning Identified Exemplars from the CIFAR-10 dataset. This subset of impacted images is identified by considering a set of 30 non-pruned wide ResNet models and 30 models trained to 30% pruning. Below each image are three labels: 1) true label, 2) the modal (most frequent) prediction from the set of non-pruned models, 3) the modal prediction from the set of pruned models. 18 WHAT DO COMPRESSED DEEP NEURAL NETOWRKS FORGET? 
# Top-1 accuracy # Top-5 accuracy ImageNet Fraction Pruned Non-PIEs PIEs All Non-PIEs PIEs All 10.0 30.0 50.0 70.0 90.0 79.34 79.23 79.54 80.16 81.20 26.14 26.21 28.74 32.06 39.81 76.75 76.75 76.75 76.75 76.75 94.89 95.04 94.89 94.99 95.11 68.52 69.30 71.47 74.74 78.90 93.35 93.35 93.35 93.35 93.35 CIFAR-10 All Non-PIEs PIEs All 10.0 30.0 50.0 70.0 90.0 95.11 95.40 95.45 95.56 95.60 43.23 40.61 40.42 43.64 50.71 CelebA 94.89 94.89 94.89 94.89 94.89 99.91 99.92 99.93 99.94 99.92 95.30 92.83 93.53 95.95 96.67 99.91 99.91 99.91 99.91 99.91 All Non-PIEs PIEs All 30.0 50.0 70.0 90.0 95.0 99.0 94.76 94.78 94.54 94.10 93.40 90.97 49.82 50.55 52.61 50.41 45.57 39.84 94.76 94.78 94.54 94.10 93.40 90.97 - - - - - - - - - - - - - - - - - - Table 6: A comparison of non-compressed model performance on Pruning Identified Exemplars (PIE) relative to a random sample drawn independently from the test-set and a sample excluding PIEs (non-PIEs). Inference on the non-PIE sample improves test-set top-1 accuracy relative to the baseline for ImageNet and Cifar-10. Evaluation on PIE images alone yields substantially lower top-1 accuracy. Note that CelebA top-5 is not included as it is a binary classification problem. Top-1 Accuracy ImageNet-C Corruptions Top-5 Accuracy ImageNet-C Corruptions (Relative to Non-Pruned Model) (Relative to Non-Pruned Model) nnaed 1) "T ea | i) —10- ! ~10- 1 & 8 I % Top-1 Accuracy Relative 1 8 % Top-5 Accuracy Relative 70 90 50 50 Model Sparsity Model Sparsity Top-1 Accuracy ImageNet-C Corruptions (Relative to Non-Pruned Model) nnaed 1) "T ! ~10- % Top-1 Accuracy Relative 70 50 Model Sparsity Top-5 Accuracy ImageNet-C Corruptions (Relative to Non-Pruned Model) ea | i) —10- 1 & 8 I 1 8 % Top-5 Accuracy Relative 90 50 Model Sparsity Figure 7: High levels of compression amplify sensitivity to distribution shift. Left: Change to top-1 normalized recall of a pruned model relative to a non-pruned model on ImageNet-C (all corruptions). Right: Change to top-5 normalized recall of a pruned model relative to a non-pruned model on ImageNet-C (all corruptions). We measure the top-1 test-set performance on a subset of ImageNet-C corruptions of a pruned model relative to the non-pruned model on the same corruption. 19 WHAT DO COMPRESSED DEEP NEURAL NETOWRKS FORGET? ty Pie vs Non-PIE by Single Object Image wo _Pie vs Non-PIE by Multi Object Image ,__Pie vs Non-PIE by Image Corruption pct of total PIE/Non-PIE pct of total PIE/Non-PIE pct of total PIE/Non-PIE NON Pl PE NON Pl PE NON Pl PE ywPie vs Non-PIE by Abstract Repesentation yPi€ VS Non-PIE by Incorrect Ground Truth 2 je, vs Non-PIE by Fine Grained Classification Task NON_PIE i NON PIE PE NON_PIE PE Pe Pe PE pct of total PIE/Non-PIE pct of total PIE/Non-PIE pet of total PIE/Non-PIE ty Pie vs Non-PIE by Single Object Image pct of total PIE/Non-PIE NON Pl PE wo _Pie vs Non-PIE by Multi Object Image pct of total PIE/Non-PIE NON Pl PE ,__Pie vs Non-PIE by Image Corruption pct of total PIE/Non-PIE NON Pl PE yPi€ VS Non-PIE by Incorrect Ground Truth NON_PIE PE PE pet of total PIE/Non-PIE ywPie vs Non-PIE by Abstract Repesentation NON_PIE i Pe pct of total PIE/Non-PIE 2 je, vs Non-PIE by Fine Grained Classification Task NON PIE PE Pe pct of total PIE/Non-PIE Table 7: PIE vs non-PIE relative representation for different attributes. These attributes were codified in a human study involving 85 individuals inspecting a balanced random sample of PIE and non-PIE. The classification as PIE or non-PIE was not known or available to the human. 
ImageNet Robustness to ImageNet-A Corruptions (By Level of Pruning)
Fraction Pruned | Top-1 | Top-5 | Top-1 Norm | Top-5 Norm
0.0  | 0.89 | 7.56 | 0.00   | 0.00
10.0 | 0.85 | 7.53 | -4.04  | -0.39
30.0 | 0.76 | 7.21 | -14.33 | -4.62
50.0 | 0.62 | 6.53 | -30.54 | -13.65
70.0 | 0.51 | 5.83 | -42.63 | -22.96
90.0 | 0.36 | 4.47 | -59.80 | -40.96

Table 8: Pruned models are more sensitive to natural adversarial images. ImageNet-A is a curated test set of 7,500 natural adversarial images designed to produce drastically low test accuracy. We compute the absolute performance of models pruned to different levels of sparsity on ImageNet-A (Top-1 and Top-5) as well as the normalized performance relative to a non-pruned model on ImageNet-A.

ImageNet-C Extended Results

ImageNet-C (Hendrycks & Dietterich, 2019) is an open source data set that consists of algorithmically generated corruptions (blur, noise) applied to the ImageNet test set. We compare top-1 accuracy given inputs with corruptions of different severity. As described by the methodology of Hendrycks & Dietterich (2019), we compute the corruption error for each type of corruption by measuring model performance across five corruption severity levels (in our implementation, we normalize the per-corruption error by the performance of the non-compressed model on the same corruption). ImageNet-C corruption substantially degrades the mean top-1 accuracy of pruned models relative to non-pruned models. As seen in Fig. 7, this sensitivity is amplified at high levels of pruning, where there is a further steep decline in top-1 accuracy. Unlike the main body, in this figure we visualize all corruption types considered. Sensitivity to different corruptions is remarkably varied, with certain corruptions such as Gaussian, shot and impulse noise consistently causing more degradation. We include a visualization for a larger sample of the corruptions considered in Table 3.
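The normalized columns of Table 8 and the per-corruption curves summarized in Fig. 7 both express a compressed model's accuracy as a relative change with respect to the non-compressed baseline on the same (corrupted) test set. A small illustration with made-up per-corruption accuracies:

```python
import numpy as np

def normalized_degradation(acc_compressed, acc_baseline):
    """Relative change (in %) of a compressed model's accuracy with respect to the
    non-compressed baseline on the same corrupted test set."""
    return 100.0 * (acc_compressed - acc_baseline) / acc_baseline

# Hypothetical per-corruption accuracies, each averaged over the five severity levels.
baseline = {"gaussian_noise": 0.42, "fog": 0.51, "jpeg_compression": 0.60}
pruned90 = {"gaussian_noise": 0.29, "fog": 0.44, "jpeg_compression": 0.55}
for corruption in baseline:
    delta = normalized_degradation(pruned90[corruption], baseline[corruption])
    print(f"{corruption}: {delta:+.1f}% top-1 relative to non-pruned")
```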
{ "id": "1706.04599" }
1911.04623
SimpleShot: Revisiting Nearest-Neighbor Classification for Few-Shot Learning
Few-shot learners aim to recognize new object classes based on a small number of labeled training examples. To prevent overfitting, state-of-the-art few-shot learners use meta-learning on convolutional-network features and perform classification using a nearest-neighbor classifier. This paper studies the accuracy of nearest-neighbor baselines without meta-learning. Surprisingly, we find simple feature transformations suffice to obtain competitive few-shot learning accuracies. For example, we find that a nearest-neighbor classifier used in combination with mean-subtraction and L2-normalization outperforms prior results in three out of five settings on the miniImageNet dataset.
http://arxiv.org/pdf/1911.04623
Yan Wang, Wei-Lun Chao, Kilian Q. Weinberger, Laurens van der Maaten
cs.CV
null
null
cs.CV
20191112
20191116
9 1 0 2 # v o N 6 1 ] V C . s c [ 2 v 3 2 6 4 0 . 1 1 9 1 : v i X r a # SimpleShot: Revisiting Nearest-Neighbor Classification for Few-Shot Learning Yan Wang Cornell University [email protected] Wei-Lun Chao Ohio State University [email protected] # Kilian Q. Weinberger Cornell University [email protected] # Laurens van der Maaten Facebook AI Research [email protected] # Abstract Few-shot learners aim to recognize new object classes based on a small number of labeled training examples. To prevent overfitting, state-of-the-art few-shot learners use meta-learning on convolutional-network features and perform classification using a nearest-neighbor classifier. This paper studies the accuracy of nearest-neighbor base- lines without meta-learning. Surprisingly, we find simple feature transformations suffice to obtain competitive few- shot learning accuracies. For example, we find that a nearest-neighbor classifier used in combination with mean- subtraction and L2-normalization outperforms prior results in three out of five settings on the miniImageNet dataset. 0.65 0.60 0.55 > is) £ 050 5 rs) g t045 0.40 — 1NN(UN) — _1NN(L2N) 0.35 — 1NN(CL2N) 0 20 40 60 80 Epoch # 1. Introduction The human visual system has an ability to recognize new visual classes (for instance, greebles [7]) based on a few ex- amples that is, currently, unmatched by computer vision. The development of computer-vision systems that can per- form such few-shot learning [3, 30, 34] is important, e.g., for developing systems that can recognize the millions of natural or man-made classes that appear in the world [12]. Few-shot learning is generally studied in a learning set- ting in which the visual-recognition system is first trained to recognize a collection of base classes from a large number of training examples. Subsequently, the system receives a small number of training examples (so-called “shots”) for a few novel visual classes that it needs to recognize there- after. In order to be robust to overfitting, a successful few- shot learning model must efficiently re-use what it learned from training on the base classes for the novel classes. Many current few-shot learners extract image features using a convolutional network, and use a combination of meta-learning and nearest-neighbor classification to per- form the recognition [24, 34, 30, 31, 36]. Prior studies sug- gest that using meta-learning outperforms “vanilla” nearest neighbor classification [26, 30]. This study challenges the status quo by demonstrat- ing that nearest-neighbor classifiers can achieve state-of- Figure 1: Feature transformations matter in few-shot learning using nearest neighbors. We train a DenseNet on miniImageNet and use the learned features to perform few-shot learning using a nearest-neighbor classifier with Euclidean distance. We mea- sure the one-shot five-way accuracy on 10,000 tasks sampled from the validation classes during training. We compare un-normalized (UN), L2-normalized (L2N), and centered L2-normalized (CL2N) features. CL2N features outperform UN features, highlighting the importance of feature transformations in few-shot learning. the-art performance on popular few-shot learning bench- marks without meta-learning. Specifically, we find that ap- plying simple feature transformations on the features be- fore nearest-neighbor classification leads to very competi- tive few-shot learning results. 
For example, we find that a nearest-neighbor classifier that uses DenseNet features [14] to which mean subtraction and L2-normalization are applied outperforms a long list [1, 4, 5, 6, 8, 9, 10, 15, 17, 21, 22, 23, 24, 26, 29, 25, 30, 31, 32, 34, 37] of recent, arguably more complex few-shot learning approaches on the popular miniImageNet [34] and tieredImageNet [27] benchmarks (see Table 1 and 2). These observations generalize to other convolutional network architectures [11, 13, 38]. We refer to our few-shot learner as SimpleShot. We hope to re-establish nearest-neighbor classification as an obvious but competitive baseline for few-shot learning.

# 2. Nearest Neighbors for Few-Shot Learning

Denoting an image by I, we assume we are given a training set, Dbase = {(I1, y1), . . . , (IN, yN)}, that contains N labeled images from A base classes; that is, yn ∈ {1, . . . , A}. Furthermore, we assume we are given a support set Dsupport of labeled images from C novel classes, where each novel class has K examples. The goal of few-shot learning is to construct a model that accurately recognizes the C novel classes. This learning setting is referred to as the K-shot C-way setting.

We study a few-shot learner based on nearest-neighbor classification, called SimpleShot. The nearest-neighbor classifier operates on features x ∈ R^D that were extracted from image I using a convolutional network fθ(I) with parameters θ. The feature-producing convolutional network, fθ(I), is trained to minimize the loss of a linear classifier (with W ∈ R^(D×A) in the last network layer) on Dbase:

argmin_{θ, W} Σ_{(I, y) ∈ Dbase} ℓ(W fθ(I), y),

where the loss function ℓ is selected to be the cross-entropy loss. The convolutional network and the linear classifier are trained jointly using stochastic gradient descent.

Nearest Neighbor Rule. Once the feature extraction network, fθ, is trained on the base classes, we access images exclusively in feature space and consider all subsequent images as readily provided in feature space. For simplicity of notation, we denote x = fθ(I) as an image in feature space. In this space we perform nearest-neighbor classification using some distance measure, d(x, x′) ∈ R+. We first consider the one-shot setting, that is, the setting in which Dsupport contains only K = 1 labeled example for each of the C classes: Dsupport = {(x̂1, 1), . . . , (x̂C, C)}, where we use the notation x̂ to distinguish images in the novel C classes from images x in Dbase. The nearest-neighbor rule assigns the label of the most similar support image (in feature space) to a test image x̂:

y(x̂) = argmin_{c ∈ {1, · · · , C}} d(x̂, x̂c). (1)

In multi-shot settings, we use a nearest-centroid approach. Specifically, we compute the averaged feature vector (centroid) for each class in Dsupport and treat each of the centroids as a one-shot example for the corresponding class. We then apply Equation 1 on the centroids.
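A minimal NumPy sketch of the nearest-neighbor / nearest-centroid rule above is given below; the array names and shapes are assumptions, and the features are taken to be already extracted with fθ.

```python
import numpy as np

def nearest_centroid_predict(support_feats, support_labels, query_feats):
    """Eq. (1) with the nearest-centroid extension: average the K support features of
    each novel class into a centroid and assign every query to the closest centroid
    under the Euclidean distance. support_feats: (C*K, D), support_labels: (C*K,),
    query_feats: (Q, D); shapes are hypothetical."""
    classes = np.unique(support_labels)
    centroids = np.stack([support_feats[support_labels == c].mean(axis=0) for c in classes])
    # Squared Euclidean distance from every query to every centroid.
    dists = ((query_feats[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=-1)
    return classes[np.argmin(dists, axis=1)]
```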
# 2.1. Feature Transformations

In this study, we use the Euclidean distance, d(x̂, x̂′) = ‖x̂ − x̂′‖2, as the distance measure for nearest-neighbors classification. We only consider two feature transformations that are well-established and may be considered trivial but, empirically, we find that they can have a positive effect on the accuracy of the SimpleShot few-shot learner.

Centering. We compute the mean feature vector on the base classes, x̄ = (1 / |Dbase|) Σ_{x ∈ Dbase} x, and subtract it from a feature vector x̂ to normalize it: x̂ ← x̂ − x̄. Centering (or mean subtraction) in itself does not alter Euclidean distances between feature vectors, but can become effective in combination with L2-normalization.

L2-normalization (L2N). Given a feature vector x̂, we normalize it to have unit ℓ2 norm: x̂ ← x̂ / ‖x̂‖2.

# 3. Experiments

Following prior work, we measure the efficacy of feature transformations in nearest-neighbor classifiers for few-shot learning in a series of image-recognition experiments.1

1Code at https://github.com/mileyan/simple_shot.

# 3.1. Experimental Setup

Datasets. We experiment on three image datasets. The miniImageNet dataset [34] is a subset of ImageNet [28] that is commonly used to study few-shot learning. The dataset contains 100 classes and has a total of 600 examples per class. Following [26] and subsequent work, we split the dataset to have 64 base classes, 16 validation classes, and 20 novel classes. Following [34] and subsequent studies, we resize the images to 84 × 84 pixels via rescaling and center cropping.

We also perform experiments on the tieredImageNet dataset [27], which is also constructed from ImageNet but contains 608 classes. The dataset is split into 351, 97, and 160 classes for base, validation, and novel classes, respectively. The class split is performed using WordNet [20] to ensure that all the base classes are semantically unrelated to the novel classes. Again, we resize images to 84 × 84 pixels.

Following [24], we also perform experiments on the CIFAR-100 [16] dataset, which contains 100 image classes. Each of the classes in the dataset has 600 images of size 32 × 32 pixels. We follow [24] and split the classes into 60 base, 20 validation, and 20 novel classes.

Evaluation protocol. Following [29], we measure the accuracy of SimpleShot and the other few-shot learners by drawing 10,000 K-shot C-way tasks from the novel classes: each task has C novel classes and K labeled (support) images and 15 test (query) images per class. Following prior work, we focus on one-shot and five-shot, five-way tasks. We average observed accuracies over all test images and over all the tasks, and report the resulting average accuracy and 95% confidence interval.

Model and implementation details. We evaluate our methods using five different convolutional-network architectures as the basis for the feature-generating function fθ(I). We study five different network architectures:

• Four-layer convolutional networks (Conv-4): We follow [30, 34] to implement this baseline model.

• Wide residual networks (WRN-28-10) [38]: We follow [29] and use the architecture with 28 convolutional layers and a widening factor of 10.

• Dense convolutional networks (DenseNet-121) [14]: We use the standard 121-layer architecture but remove the first two down-sampling layers (i.e., we set their stride to 1) and change the first convolutional layer to use a kernel of size 3 × 3 (rather than 7 × 7) pixels.

• Residual networks (ResNet-10/18) [11]: We use the standard 18-layer architecture but we remove the first two down-sampling layers and we change the first convolutional layer to use a kernel of size 3 × 3 (rather than 7 × 7) pixels. Our ResNet-10 contains 4 residual blocks; the ResNet-18 contains 8 blocks.

• MobileNet [13]: We use the standard architecture for ImageNet [28] but, again, we remove the first two down-sampling layers from the network.
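Before turning to training details, the CL2N transformation and the episodic evaluation protocol described above can be summarized in a short sketch. It reuses the nearest_centroid_predict helper from the earlier sketch; the episode sampler and all names are assumptions rather than the released code.

```python
import numpy as np
# nearest_centroid_predict is the helper defined in the sketch after Section 2.

def cl2n(feats, base_mean):
    """CL2N: subtract the base-class mean, then L2-normalize every feature vector."""
    feats = feats - base_mean
    return feats / np.linalg.norm(feats, axis=1, keepdims=True)

def evaluate_episodes(features, labels, base_mean, n_way=5, k_shot=1,
                      n_query=15, n_episodes=10000, seed=0):
    """Mean accuracy and 95% confidence interval over randomly sampled
    K-shot C-way tasks drawn from the novel classes."""
    rng = np.random.default_rng(seed)
    classes = np.unique(labels)
    accuracies = []
    for _ in range(n_episodes):
        episode_classes = rng.choice(classes, size=n_way, replace=False)
        support_x, support_y, query_x, query_y = [], [], [], []
        for way, c in enumerate(episode_classes):
            idx = rng.permutation(np.where(labels == c)[0])[:k_shot + n_query]
            feats = cl2n(features[idx], base_mean)
            support_x.append(feats[:k_shot]); support_y += [way] * k_shot
            query_x.append(feats[k_shot:]);   query_y += [way] * n_query
        preds = nearest_centroid_predict(np.concatenate(support_x),
                                         np.array(support_y),
                                         np.concatenate(query_x))
        accuracies.append(np.mean(preds == np.array(query_y)))
    mean_acc = 100.0 * np.mean(accuracies)
    ci95 = 100.0 * 1.96 * np.std(accuracies) / np.sqrt(len(accuracies))
    return mean_acc, ci95
```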
We train all networks for 90 epochs from scratch using stochastic gradient descent to minimize the cross-entropy loss of A-way classification (A is the number of base classes). We perform the data augmentation proposed in [11]. We set the initial learning rate to 0.1 and use a batch size of 256 images. On miniImageNet, We shrink the learning rate by 10 at 45 and 66 epoch respectively. On tieredImageNet, we divide the learning rate by 10 af- ter every 30 epochs. We perform early stopping accord- ing to the one-shot five-way accuracy (measured using Sim- pleShot (L2N)) on the validation classes. Feature transformations. We evaluate the effectiveness of three feature transformations in our experiments: UN: Unnormalized features. • L2N: L2-normalized features. • CL2N: Centered and then L2-normalized features. These transforms are followed by nearest-neighbor classifi- cation using the Euclidean distance measure. Comparison. We compare our baselines to a range of state-of-the-art few-shot learners [1, 4, 5, 6, 8, 10, 15, 17, 21, 22, 23, 24, 26, 29, 25, 30, 31, 32, 34, 36]. We do not compare to approaches that were developed for semi- supervised and transductive learning settings, as such ap- proaches use the statistics of query examples or statistics across the few-shot tasks. We note that the network archi- tectures used in prior studies may have slight variations; we have tried our best to eliminate the effect of such variations on our observations as much as possible.2 2For example, we report results for ResNet-10 models because it is the shallowest ResNet architecture used in prior work on few-shot learning. # 3.2. Results Table 1, 2, and 3 present our results on miniImageNet, tieredImageNet, and CIFAR-100, respectively. In line with prior work, we observe that nearest-neighbor classifiers us- ing “vanilla” Euclidean distance (UN) do not perform very well. However, simply applying L2-normalization (L2N) consistently leads to accuracy gains of at least 3% on these datasets. Subtracting the mean before L2-normalization (CL2N) leads to another improvement of 1−3%. Our SimpleShot nearest-neighbor / nearest-centroid clas- sifiers achieve accuracies that are comparable with or better than the state-of-the-art. For example, on the miniImageNet dataset, our simple methods obtain the highest one-shot and five-shot accuracies for three of five network architectures. We perform a simple experiment measuring the effec- tiveness of feature transformations at various stages of convolutional-network training. We train a DenseNet on miniImageNet for 90 epochs, and measure the one-shot five- way accuracy on 10,000 tasks sampled from the validation classes after each epoch. The results of this experiment are shown in Figure 1: they show that nearest-neighbor clas- sifiers using C2LN feature transformation consistently out- perform their UN and L2N counterparts. This suggests that our observations on the role of feature transformations do not depend on how long the network is trained. We also investigate the effect of feature transformations on more complex few-shot learning algorithms. Specif- ically, we trained a Conv-4 architecture with the Pro- toNet [30] loss, which uses unnormalized Euclidean dis- tances. After training, we apply feature transformations be- fore computing pairwise Euclidean distances between fea- tures in a nearest-neighbor approach. Table 4 presents the results of this experiment, which shows that CL2N normal- ization can also improve the performance of ProtoNet. # 4. 
Conclusion We analyzed the effect of simple feature transforma- tions in nearest-neighbor classifiers for few-shot learning. We observed that such transformations — in particular, a combination of centering and L2-normalization — can im- prove the quality of the representation to a degree that the resulting classifiers outperforms several state-of-the-art ap- proaches to few-shot learning. We hope that the SimpleShot classifiers studied in this paper will be used as a competitive baseline in future studies on few-shot learning. Acknowledgments The authors thank Han-Jia Ye for helpful discussions. Y.W. and K.Q.W. are supported by grants from the NSF (III-1618134, III-1526012, IIS-1149882, IIS-1724282, and TRIPODS-1740822), the Bill and Melinda Gates Foundation, and the Cornell Center for Materials Research with funding from the NSF MRSEC program (DMR-1719875); and are also supported by Zillow, SAP America Inc., and Facebook. Table 1: Average accuracy (in %; measured over 600/10,000 rounds”) of one-shot and five-shot classifiers for five-way classifi- cation on minilmageNet; higher is better. The best result of each network architecture of each column is in bold font. Results of our approaches are in blue. Best viewed in color. Approach Network One shot Five shots Meta LSTM [26] Conv-4 43.44+0.77 60.60+ 0.71 MatchingNet [34] Conv-4 43.56+0.84 55.31 + 0.73 MAML [4] Conv-4 48.70+ 184 63.11 + 0.92 LLAMA [10] Conv-4 49.40 + 1.83 - ProtoNet [30] Conv-4 49.42+0.78 68.20 + 0.66 Reptile [23] Conv-4 49.97+0.32 65.99 + 0.58 PLATIPUS [5] Conv-4 50.13 + 1.86 - mAP-SSVM [32] Conv-4 50.32+0.80 63.94 + 0.72 GNN [6] Conv-4 50.33 40.36 66.41 + 0.63 RelationNet [31] Conv-4 50.44+0.82 65.32 + 0.70 Meta SGD [18] Conv-4 50.47+ 1.87 64.03 + 0.94 MTNet [17] Conv-4 51.70 + 1.84 - Qiao et al. [25] Conv-4 54.53+0.40 67.87 + 0.20 FEAT [36] Conv-4 55.15+0.20 71.61 + 0.16 SimpleShot (UN) Conv-4 33.17+0.17 63.25+0.17 SimpleShot (L2N) Conv-4 48.08+0.18 66.49+ 0.17 SimpleShot (CL2N) —Conv-4 49.69+0.19 66.92 + 0.17 MAML eu ResNet-18 49.61 +0.92 65.72 + 0.77 Chen et al. [2] ResNet-18 51.87 +0.77 75.68 + 0.63 RelationNet [31]* ResNet-18 52.48 + 0.86 69.83 + 0.68 MatchingNet Bayt ResNet-18 52.914 0.88 68.88 + 0.69 ProtoNet [30]* ResNet-18 54.16 +0.82 73.68 + 0.65 Gidaris et al. [8] ResNet-15 55.45+0.89 70.13 + 0.68 SNAIL [21] ResNet-15 55.71 40.99 68.88 + 0.92 Bauer et al. [1] ResNet-34 56.30+0.40 73.90 + 0.30 adaCNN [22] ResNet-15 56.88+0.62 71.94 + 0.57 TADAM [24] ResNet-15 58.50+0.30 76.70 + 0.30 CAML [15] ResNet-12 59.23+0.99 72.35 40.71 SimpleShot (UN) ResNet-10 54.45+0.21 76.98 +0.15 SimpleShot (L2N) ResNet-10 57.85+0.20 78.73 0.15 SimpleShot (CL2N) ResNet-10 60.85+0.20 7840+ 0.15 SimpleShot (UN) ResNet-18 56.06+0.20 78.63 + 0.15 SimpleShot (L2N) ResNet-18 60.16+0.20 79.944 0.14 SimpleShot (CL2N) ResNet-18 62.854 0.20 80.02 + 0.14 Qiao et al. [25] WRN 59.60+0.41 73.74+0.19 MatchingNet pay" WRN 64.03 +0.20 76.32+0.16 ProtoNet [30]? WRN 62.60+0.20 79.97+ 0.14 LEO [29] WRN 61.76+0.08 77.59 + 0.12 FEAT [36] WRN 65.10+0.20 81.11 + 0.14 SimpleShot (UN) WRN 57.26+0.21 78.99+ 0.14 SimpleShot (L2N) WRN 61.22+0.21 81.00+0.14 SimpleShot (CL2N) WRN 63.50+0.20 80.33+ 0.14 SimpleShot (UN) MobileNet 55.70+0.20 77.46+40.15 SimpleShot (L2N) MobileNet 59.43+0.20 78.00 + 0.15 SimpleShot (CL2N) MobileNet 61.30+0.20 78.37 + 0.15 SimpleShot (UN) DenseNet 57.81+0.21 80.43+0.15 SimpleShot (L2N) DenseNet 61.49+0.20 81.48+0.14 SimpleShot (CL2N) —DenseNet 64.29 + 0.20 81.50 + 0.14 Â¥; Results reported in [2]. +. Results reported in [36]. 
*: [29, 36] and our results are averaged over 10,000 rounds. Table 2: Average accuracy (in %; measured over 600/10,000 rounds”) of one-shot and five-shot classifiers for five-way classifi- cation on tieredImageNet; higher is better. The best result of each network architecture of each column is in bold font. Results of our approach are in blue. Best viewed in color. Approach Network One shot Five shots Reptile pay Conv-4 48.97+0.21 66.47+0.21 ProtoNet [30]* Conv-4 53.3140.89 72.69 + 0.74 SimpleShot (UN) Conv-4 33.12+0.18 65.23+0.18 SimpleShot (L2N) Conv-4 50.2140.20 69.02+ 0.18 SimpleShot (CL2N) —Conv-4 51.02+0.20 68.98 + 0.18 SimpleShot (UN) ResNet-10 58.60+0.22 79.99 + 0.16 SimpleShot (L2N) ResNet-10 64.58+0.23 82.31 40.16 SimpleShot (CL2N) ResNet-10 65.3740.22 81.84+ 0.16 SimpleShot (UN) ResNet-18 62.69+0.22 83.27+0.16 SimpleShot (L2N) ResNet-18 68.64+0.22 8447+0.16 SimpleShot (CL2N) ResNet-18 69.09+ 0.22 84.58 + 0.16 Meta SGD [18]* WRN 62.95 +0.03 79.34 + 0.06 LEO [29] WRN 66.33 +0.05 8144+ 0.09 SimpleShot (UN) WRN 63.85+0.21 84.17+40.15 SimpleShot (L2N) WRN 66.86+0.21 85.50 + 0.14 SimpleShot (CL2N) WRN 69.75 £0.20 85.31 40.15 SimpleShot (UN) MobileNet 63.65+0.22 84.01 40.16 SimpleShot (L2N) MobileNet 68.66+0.23 85.43 +0.15 SimpleShot (CL2N) MobileNet 69.47+0.22 85.174 0.15 SimpleShot (UN) DenseNet 64.35+0.23 85.69 + 0.15 SimpleShot (L2N) DenseNet 69.91+0.22 8642+0.15 SimpleShot (CL2N) DenseNet 71.32+0.22 86.66 + 0.15 tT; Results reported in [29]. *. Results reported in [19]. *: [29] and our results are averaged over 10,000 rounds. Table 3: Average accuracy (in %; measured over 600/10,000 rounds”) of one-shot and five-shot classifiers for five-way classifi- cation on CIFAR-100; higher is better. The best result is in bold font. Results of our approach are in blue. Best viewed in color. Approach Network One shot Five shots ResNet TADAM [24] ResNet-10 SimpleShot (UN) SimpleShot (L2N) ResNet-10 SimpleShot (CL2N) ResNet-10 40.10 ± 0.40 36.38 ± 0.17 38.47 ± 0.17 40.13 ± 0.18 56.10 ± 0.40 52.67 ± 0.18 53.34 ± 0.18 53.63 ± 0.18 *: Our results are averaged over 10,000 rounds. Table 4: Feature transformations matter in 1NN classification with ProtoNet [30]. We report average accuracy (in %; measured over 10,000 rounds) of five-way one-shot / five-shot ProtoNet clas- sifiers on miniImageNet with and without feature transformations (applied after training). 1NN (UN) [30] 1NN (UN; ours) 1NN (L2N) 1NN (CL2N) 49.42 / 68.20 49.56 / 67.79 49.55 / 67.84 50.12 / 68.51 # References J. B. Swiatkowski, B. Scholkopf, and R. E. Turner. Discriminative k-shot arXiv preprint learning using probabilistic models. arXiv:1706.00326, 2017. 1, 3, 4 [2] W.-Y. Chen, Y.-C. Liu, Z. Kira, Y.-C. F. Wang, and J.-B. In ICLR, Huang. A closer look at few-shot classification. 2019. 4 [3] L. Fei-Fei, R. Fergus, and P. Perona. One-shot learning of object categories. PAMI, 28(4):594–611, 2006. 1 [4] C. Finn, P. Abbeel, and S. Levine. Model-agnostic meta- In ICML, learning for fast adaptation of deep networks. 2017. 1, 3, 4 [5] C. Finn, K. Xu, and S. Levine. Probabilistic model-agnostic meta-learning. In NeurIPS, 2018. 1, 3, 4 [6] V. Garcia and J. Bruna. Few-shot learning with graph neural networks. In ICLR, 2018. 1, 3, 4 [7] I. Gauthier. Dissecting face recognition: The role of exper- tise and level of categorization in object recognition. PhD thesis, Yale University, 1998. 1 [8] S. Gidaris and N. Komodakis. Dynamic few-shot visual learning without forgetting. In CVPR, 2018. 1, 3, 4 [9] J. Gordon, J. Bronskill, M. Bauer, S. 
Nowozin, and R. Turner. Meta-learning probabilistic inference for predic- tion. In ICLR, 2019. 1 [10] E. Grant, C. Finn, S. Levine, T. Darrell, and T. Griffiths. Re- casting gradient-based meta-learning as hierarchical bayes. In ICLR, 2018. 1, 3, 4 [11] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In CVPR, 2016. 1, 3 [12] G. V. Horn and P. Perona. The devil is in the tails: Fine- grained classification in the wild. In arXiv 1709.01450, 2017. 1 [13] A. G. Howard, M. Zhu, B. Chen, D. Kalenichenko, W. Wang, T. Weyand, M. Andreetto, and H. Adam. Mobilenets: Effi- cient convolutional neural networks for mobile vision appli- cations. arXiv preprint arXiv:1704.04861, 2017. 1, 3 [14] G. Huang, Z. Liu, L. Van Der Maaten, and K. Q. Weinberger. Densely connected convolutional networks. In CVPR, 2017. 1, 3 [15] X. Jiang, M. Havaei, F. Varno, G. Chartrand, N. Chapados, and S. Matwin. Learning to learn with conditional class de- pendencies. In ICLR, 2019. 1, 3, 4 [16] A. Krizhevsky and G. Hinton. Learning multiple layers of features from tiny images. Technical report, University of Toronto, 2009. 2 [17] Y. Lee and S. Choi. Gradient-based meta-learning with learned layerwise metric and subspace. In ICML, 2018. 1, 3, 4 [18] Z. Li, F. Zhou, F. Chen, and H. Li. Meta-sgd: Learn- ing to learn quickly for few shot learning. arXiv preprint arXiv:1707.09835, 2017. 4 [19] Y. Liu, J. Lee, M. Park, S. Kim, E. Yang, S. J. Hwang, and Y. Yang. Learning to propagate labels: Transductive propa- gation network for few-shot learning. In Proceedings of the Annual Meeting of the Cognitive Science Society, 2019. 4 [20] G. A. Miller. Wordnet: a lexical database for english. Com- munications of the ACM, 38(11):39–41, 1995. 2 [21] N. Mishra, M. Rohaninejad, X. Chen, and P. Abbeel. A sim- ple neural attentive meta-learner. In ICLR, 2018. 1, 3, 4 [22] T. Munkhdalai, X. Yuan, S. Mehri, and A. Trischler. Rapid In ICML, adaptation with conditionally shifted neurons. 2018. 1, 3, 4 [23] A. Nichol, J. Achiam, and J. Schulman. On first-order meta- learning algorithms. CoRR, abs/1803.02999, 2018. 1, 3, 4 [24] B. N. Oreshkin, A. Lacoste, and P. Rodriguez. Tadam: Task dependent adaptive metric for improved few-shot learning. In NeurIPS, 2018. 1, 2, 3, 4 [25] S. Qiao, C. Liu, W. Shen, and A. L. Yuille. Few-shot image In recognition by predicting parameters from activations. CVPR, 2018. 1, 3, 4 [26] S. Ravi and H. Larochelle. Optimization as a model for few- shot learning. In ICLR, 2017. 1, 2, 3, 4 [27] M. Ren, E. Triantafillou, S. Ravi, J. Snell, K. Swersky, J. B. Tenenbaum, H. Larochelle, and R. S. Zemel. Meta-learning for semi-supervised few-shot classification. In ICLR, 2018. 1, 2 [28] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. S. Bernstein, A. C. Berg, and F.-F. Li. Imagenet large scale visual recog- nition challenge. IJCV, 115(3):211–252, 2015. 2, 3 [29] A. A. Rusu, D. Rao, J. Sygnowski, O. Vinyals, R. Pascanu, S. Osindero, and R. Hadsell. Meta-learning with latent em- bedding optimization. In ICLR, 2019. 1, 2, 3, 4 [30] J. Snell, K. Swersky, and R. Zemel. Prototypical networks for few-shot learning. In NeurIPS, 2017. 1, 3, 4 [31] F. Sung, Y. Yang, L. Zhang, T. Xiang, P. H. Torr, and T. M. Hospedales. Learning to compare: Relation network for few- shot learning. In CVPR, 2018. 1, 3, 4 [32] E. Triantafillou, R. Zemel, and R. Urtasun. Few-shot learn- ing through an information retrieval lens. In CVPR, 2017. 1, 3, 4 [33] G. Van Horn, O. 
Mac Aodha, Y. Song, Y. Cui, C. Sun, A. Shepard, H. Adam, P. Perona, and S. Belongie. The inatu- ralist species classification and detection dataset. In Proceed- ings of the IEEE conference on computer vision and pattern recognition, pages 8769–8778, 2018. 6 [34] O. Vinyals, C. Blundell, T. Lillicrap, D. Wierstra, et al. In NIPS, 2016. Matching networks for one shot learning. 1, 2, 3, 4 [35] D. Wertheimer and B. Hariharan. Few-shot learning with lo- calization in realistic settings. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 6558–6567, 2019. 6 [36] H.-J. Ye, H. Hu, D.-C. Zhan, and F. Sha. Learning em- bedding adaptation for few-shot learning. arXiv preprint arXiv:1812.03664, 2018. 1, 3, 4 [37] J. Yoon, T. Kim, O. Dia, S. Kim, Y. Bengio, and S. Ahn. Bayesian model-agnostic meta-learning. In NeurIPS, 2018. 1 [38] S. Zagoruyko and N. Komodakis. Wide residual networks. In BMVC, 2016. 1, 3 # A. Meta-iNat Results We also investigate the role of feature transformations in SimpleShot on the long-tailed iNaturalist dataset [33]. Fol- lowing the meta-iNat benchmark [35], we split the dataset to have 908 base classes and 227 novel classes. We follow the evaluation setup of [35] and perform 227-way multi- shot evaluation. (In the meta-iNat benchmark, the num- ber of shots varies per class.) We train all networks for 90 epochs using stochastic gradient descent. We set the initial learning rate to be 0.1 and batch size to be 256. We scale the learning rate by 0.1 after every 30 epochs. The results of our meta-iNat experiments with Sim- pleShot are presented in Table 5. The table reports the av- eraging the accuracy on each class over all test classes (per class) and the average accuracy over all test images (mean). To the best of our knowledge, our highest accuracy of 62.13% (per class) and 65.09% (mean) is the current state- of-the-art on the meta-iNat benchmark. Figure 2 shows the absolute accuracy improvement (in %) of each of the clas- sifiers compared to the baseline nearest-neighbor classifier without feature normalization (UN). In line with prior ex- periments, L2-normalization (L2N) leads to accuracy im- provements in few-shot learning. Different from the other experiments, centering after L2-normalization (CL2N) does not improve the accuracy of SimpleShot further. Conv-4 ResNet-10 ResNet-18 ResNet-34 ResNet-50 ‘WRN MobileNet DenseNet oa tittttet ee a D6 Sa 8 Absolute Accuracy Improvement ° © 2 8 UN oN CL2N Feature Normalization Figure 2: Absolute accuracy improvement (per class; in %) on the meta-iNat dataset of SimpleShot classifiers with L2- normalization (L2N) and centering and L2-normalization (CL2N) compared to a SimpleShot classifier without fea- ture normalization (UN). Table 5: Accuracy (in %) of SimpleShot classifiers in 227-way multi-shot classification on the meta-iNat bench- mark [35]. Accuracy is measured by averaging the accuracy on each class over all test classes (per class) and by averag- ing accuracy over all test images (mean). Higher is better. SimpleShot (UN) SimpleShot (L2N) Per class Mean Per class Mean SimpleShot (CL2N) Per class Mean Conv-4 ResNet-10 ResNet-18 ResNet-34 ResNet-50 WRN MobileNet DenseNet 21.32 40.50 55.33 59.98 54.13 60.48 52.01 61.62 22.93 42.06 58.06 62.43 56.85 63.22 53.92 64.77 22.00 42.40 56.03 60.50 55.61 61.30 52.28 62.13 23.73 43.86 58.50 62.65 57.77 63.77 54.06 65.09 21.69 40.92 55.83 60.30 55.32 60.94 52.25 62.08 23.21 42.19 58.33 62.50 57.47 63.42 54.01 65.02
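The two aggregates reported in Table 5 differ only in how per-example correctness is averaged. A small sketch with hypothetical prediction arrays:

```python
import numpy as np

def mean_and_per_class_accuracy(preds, labels):
    """'Mean' averages correctness over all test images; 'per class' first computes the
    accuracy of each class and then averages those class accuracies, so rare and
    frequent classes contribute equally (relevant for the long-tailed meta-iNat split)."""
    correct = preds == labels
    mean_acc = 100.0 * correct.mean()
    per_class = [100.0 * correct[labels == c].mean() for c in np.unique(labels)]
    return mean_acc, float(np.mean(per_class))
```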
{ "id": "1706.00326" }
1911.03914
Zero-Shot Fine-Grained Style Transfer: Leveraging Distributed Continuous Style Representations to Transfer To Unseen Styles
Text style transfer is usually performed using attributes that can take a handful of discrete values (e.g., positive to negative reviews). In this work, we introduce an architecture that can leverage pre-trained consistent continuous distributed style representations and use them to transfer to an attribute unseen during training, without requiring any re-tuning of the style transfer model. We demonstrate the method by training an architecture to transfer text conveying one sentiment to another sentiment, using a fine-grained set of over 20 sentiment labels rather than the binary positive/negative often used in style transfer. Our experiments show that this model can then rewrite text to match a target sentiment that was unseen during training.
http://arxiv.org/pdf/1911.03914
Eric Michael Smith, Diana Gonzalez-Rico, Emily Dinan, Y-Lan Boureau
cs.CL
null
null
cs.CL
20191110
20191110
2019: 9 1 0 2 v o N 0 1 ] L C . s c [ 1 v 4 1 9 3 0 . 1 1 9 1 : v i X r a # Zero-Shot Fine-Grained Style Transfer: Leveraging Distributed Continuous Style Representations to Transfer To Unseen Styles # Eric Michael Smith, Diana Gonzalez-Rico, Emily Dinan, Y-Lan Boureau Facebook AI Research # Abstract Text style transfer is usually performed using attributes that can take a handful of discrete values (e.g., positive to negative reviews). In this work, we introduce an architecture that can leverage pre-trained consistent continu- ous distributed style representations and use them to transfer to an attribute unseen dur- ing training, without requiring any re-tuning of the style transfer model. We demonstrate the method by training an architecture to trans- fer text conveying one sentiment to another sentiment, using a fine-grained set of over 20 sentiment labels rather than the binary posi- tive/negative often used in style transfer. Our experiments show that this model can then rewrite text to match a target sentiment that was unseen during training. to applications where style transfer has to adhere closely to its input (e.g., editing text to make it more formal or business-like), but less so when the emphasis is on creativity more than faithfulness to the original. In this work, we propose a new ap- proach that allows for text generation conditioned on a much richer and fine-grained specification of target attributes, by leveraging distributed rep- resentations pre-trained through a separate super- vised classification task. By specifying attributes through continuous distributed representations, we show that our architecture allows for fine-grained conditioned text generation that can match new at- tribute targets unseen during training, or attribute targets implicitly specified through text, that may not precisely match any of the discrete labels orig- inally used to define the attribute space. # 1 Introduction A time-honored way to nudge human creativity is to structure generation around the idea of varia- tion, from literary pastiches to variations in classi- cal music or the concept of jazz standards. Vari- ation is then used primarily as an inspiration de- vice, where it is not necessary to stick too closely to the original template. Artificial text style trans- fer can similarly act as a loosely constrained gen- erative device, to combat monotony by generat- ing more variations of a given piece of text, or to avoid blandness through anchoring on an interest- ing original. Within that framing, it is more impor- tant to be able to generate richer variations than to strictly preserve content. This work thus makes the following contribu- tions: first, we propose a method that allows trans- fer to a much larger set of fine-grained styles without requiring additional optimization during inference. Second, we show how this method can be used to perform zero-shot style transfer to new styles unseen during the style transfer train- ing, through leveraging a joint underlying lower- dimensional style embedding space. Third, we show how fine-tuning a pre-trained attribute con- trol architecture affords control over a different but related attribute space. # 2 Related work Most existing text style transfer work has fo- cused on a narrow set of applications where the attributes of interest have a very limited set of dis- crete possible values, e.g. two valences of reviews three different writing (positive and negative), styles [example], five types of restaurant cuisines (Lample et al., 2019). 
This is very well suited Many earlier approaches to text style transfer rely on a disentangling objective seeking to extract a representation from which the original style is hard to recover (Lample et al., 2017b). However, recent work has shown that this disentanglement was neither empirically achieved, nor necessary (Lample et al., 2019). In this work, we do not use any disentanglement objective either. Style transfer can be viewed as translation from one style to another. Recent strides in unsuper- vised translation have led to a body of work adapt- ing machine translation techniques to style trans- fer (Prabhumoye et al., 2018; Lample et al., 2019; Zhang et al., 2018). This work follows this ap- proach and uses an architecture very similar to that in Lample et al. (2019). When used to generate a richer set of alter- natives, style transfer can be viewed as a con- trolled text generation technique with a particu- larly strong conditioning anchor. The recently re- leased CTRL model (Keskar et al., 2019) allows for generation based on control codes such as a specific website link, which are used as a pre- pended token. The style attribute is similarly specified here by providing an initial token to the model to specify the target attribute, but the gen- erated text is also conditioned much more strongly on a source sentence, as was done in Lample et al. (2019). There has been recent work on achieving fine- grained graded style transfer by editing the hid- den representation of an input towards one that would be classified more readily into a target style (Wang et al., 2019; Liu et al., 2019), or sampling responses around a given output to select those that better match a target style (Gao et al., 2019). These methods can be viewed as a positive version of the disentangling methods that were leveraging an adversarial classifier to prevent classification into the source attribute, instead pushing the hid- den representation towards classification into the target attribute. In this work, we instead propose to decouple the classifier from the style transfer architecture by merely using the classifier to produce a distributed representation of the target attribute, so that exist- ing pre-trained supervised representations can be re-used. This would allow for our method to be applied to any type of consistent distributed em- bedding space (e.g., pre-trained unsupervised fast- Text embeddings (Joulin et al., 2016)). # 3 Specifying target attributes as distributed continuous representations Our approach relies on an autoencoder architec- ture similar to that in Lample et al. (2019), mod- ified to leverage consistent pre-trained distributed continuous representations of attributes. This sec- tion presents the notation and base architecture be- fore introducing our key modification to leverage embeddings. # 3.1 Base architecture This section briefly introduces the architecture and training objective of Lample et al. (2019), which we use as base for our style transfer system. Let D = (xi, yi)i∈[1,n] be a training set of n sen- tences xi ∈ X paired with source attribute values yi. yi ∈ Y is a discrete attribute value in the set Y of possible values for the attribute being con- sidered, e.g. Y = {bad, neutral, good} if yi repre- sents the overall rating of a restaurant review. In this work, we only consider transfer of a single at- tribute, but our approach could easily be extended to multiple attributes using an attribute embedding averaging heuristic as in Lample et al. (2019). 
The style transfer architecture consists of a model F : X × Y → X that maps any pair (x, ˜y) of a source sentence x (whose source attribute is y) paired with a target attribute ˜y to a new sentence ˜x that has the target attribute value ˜y, while striving to remain as close as possible to x, and being fluent English. This is achieved by training a sequence-to-sequence auto-encoder as a denoising auto-encoder, with an added back-translation objective to ensure transfer to the target attribute. The input x is encoded into a latent representation z = e(x), then (z, ˜y) is decoded into ˜x = d(z, ˜y), where the parameters of encoder e and decoder d are trainable, and target attribute value ˜y can be a different attribute – or the same original attribute if not trying to modify it when reconstructing.

Denoising objective. In order to retain fluency and ability to reconstruct well without merely copying, the architecture is trained with a denoising auto-encoding objective LAE (Fu et al., 2017):

LAE = Σ_{(x, y) ∼ D} − log pd(x | e(xc), y),

where xc is a noisy version of input text x corrupted with word drops and word order shuffling as described in Lample et al. (2017a) and pd is the probability distribution over sequences x induced by the decoder. Here, the input is reconstructed without changing the source attribute value.

Back-translation objective. The decoder is encouraged to leverage the provided target attribute through a back-translation loss (Sennrich et al., 2015; Lample et al., 2017a, 2018; Artetxe et al., 2018): input x is encoded into z, but then decoded using target attribute value ˜y, yielding the reconstruction ˜x. ˜x is in turn used as input of the encoder and decoded using the source attribute value y to ideally obtain the source x, and we train the model to map (˜x, y) back into x. The back-translation objective LBT is thus written:

LBT = Σ_{(x, y) ∼ D, ˜y ∼ Y} − log pd(x | e(d(e(x), ˜y)), y),

where d(e(x), ˜y) is a variation of the input sentence x written with a randomly sampled target attribute ˜y that is specified according to the procedure described in sec. 3.2. Back-translated sentences are generated on the fly during training by greedy decoding at each time step.

Overall objective. The system is trained by combining both denoising auto-encoding and back-translation loss: L = λLAE + (1 − λ)LBT, where the mixture hyperparameter λ is optimized over the validation set to achieve the best combinations of the metrics specified below, as in Lample et al. (2019). We optimize this loss by stochastic gradient descent without back-propagating through the back-translation generation process.

Architecture building blocks. The encoder e is a 2-layer bidirectional LSTM using word embedding look-up tables trained from scratch. The decoder d is a 2-layer LSTM augmented with an attention mechanism (Bahdanau et al., 2014). All the embedding and hidden layer dimensions are 512, including the attribute embedding obtained as explained in Section 3.2. Decoding is conditioned on both that attribute embedding, which is provided as the first token embedding, similar to Lample et al. (2018), and on a representation of the input obtained from the encoder with an attention mechanism.
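The combined objective of Section 3.1 can be summarized in a short PyTorch-style sketch. The model interface below (encode, decode, generate) and the corruption function are hypothetical stand-ins for the actual sequence-to-sequence implementation, so this illustrates the loss structure rather than the authors' code.

```python
import torch
import torch.nn.functional as F

def style_transfer_step(model, x, y, y_tilde, corrupt, lam):
    """Combined objective L = lam * L_AE + (1 - lam) * L_BT.
    Assumed (hypothetical) interface: model.encode(tokens) -> latent states,
    model.decode(latent, attribute, target=tokens) -> per-token logits (teacher forced),
    model.generate(latent, attribute) -> greedily decoded token ids;
    `corrupt` applies the word-drop / shuffle noise; x is a LongTensor of token ids."""
    # Denoising auto-encoding: reconstruct x from a corrupted copy, keeping the source attribute y.
    logits_ae = model.decode(model.encode(corrupt(x)), y, target=x)
    loss_ae = F.cross_entropy(logits_ae.flatten(0, 1), x.flatten())

    # Back-translation: rewrite x with a random target attribute y_tilde (no gradient
    # through generation), then reconstruct x from the rewrite using the source attribute.
    with torch.no_grad():
        x_tilde = model.generate(model.encode(x), y_tilde)
    logits_bt = model.decode(model.encode(x_tilde), y, target=x)
    loss_bt = F.cross_entropy(logits_bt.flatten(0, 1), x.flatten())

    return lam * loss_ae + (1.0 - lam) * loss_bt
```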
# 3.2 Leveraging pre-trained distributed continuous representations

Lample et al. (2019) specify the target attribute as an embedding read from a lookup table that is optimized during training. This means that each target attribute value has its own entry, and precludes leveraging known similarities between target attribute values.

Instead, we propose to write the target embedding y = W yd as the product of an existing distributed embedding yd and a weight matrix W. The motivation for this is that pre-trained distributed embeddings encode similarities between attribute values that can be learned from other tasks (e.g., supervised classification) and directly leveraged for style transfer.

In this work, we obtain the embedding by running some text ˆx possessing the desired target attribute value through a feedforward classifier yd = c(ˆx). We experiment with a fastText classifier (Joulin et al., 2016) and a classifier derived from BERT (Devlin et al., 2018) with an added bottleneck layer, and use the last hidden layer whose dot-product with class embeddings would determine what class is selected. The dimension of that layer is arbitrary. Preliminary experiments have shown better training with smaller dimensions, so in the remainder of the paper we set the supervised embedding dimension to 8. Thus, the weight matrix W is of dimension 512 × 8. Note that the base style transfer architecture adapted from Lample et al. (2019) for k possible attribute values would correspond to W being a look-up table of dimension 512 × k, with a one-hot encoding of each attribute value instead of the supervised distributed embeddings used here. During training, randomly selected samples from the training set are run through the classifier to obtain a fine-grained continuous distributed target embedding value, which is used as the target attribute value for the back-translation loss, and scaled to unit norm. For validation and measuring accuracy of transfer, class embeddings are used instead, after being also scaled to unit norm.

# 4 Experiments in original fine-grained attribute space

We demonstrate the technique using a set of fine-grained sentiment labels such as happy, curious, angry, hopeful, sad, thankful, etc. (see full list in Table 1). The choice of fine-grained sentiment as set of attributes is motivated by the richness of the attribute space, for which large labelled datasets are available (e.g., Li et al. (2017); Rashkin et al. (2019)), while also being in continuity with the use of sentiment as style in much of the text style transfer literature.

Base task aggravated, angry, annoyed, confused, curious, delighted, ecstatic, emotional, fabulous, fantastic, frustrated, grateful, happy, joyful, heartbroken, hopeful, overwhelmed, perplexed, pumped, sad, shocked, sleepy, thankful
Experi- ments exploring transfer to that space are described in Section 5 with results shown in Table 7. # 4.1 Dataset We train a sentiment classifier over 24 sentiments using an unreleased dataset of millions of samples of social media content written by English speak- ers with a writer-assigned sentiment tag. In order to make our work reproducible by others, we se- lect training data from publicly available data in the following way: starting from a Reddit dump collected and published by a third party, we use that classifier to select a subset of millions of posts matching each of the 24 sentiment labels of interest. A new classifier is then trained from scratch on that data to provide the target embed- dings, and the initial classifier is discarded. We pick a set of 24 sentiment labels to demonstrate fine-grained transfer to a larger set of possible la- bels compared to previous work, which usually limits transfer to a handful of possible attribute values. The set of 24 sentiment labels (see Ta- ble 1) is selected by keeping sentiment labels that have reasonable-looking matches among the Red- dit posts from the third-party dump, after a quick manual inspection of random samples to deter- mine which labels to keep and what threshold to use to decide which posts to retain. Posts from the third-party Reddit dump that score above those thresholds are run through the safety classifier from Dinan et al. (2019) to remove offensive or toxic content, and the English language classifier from fastText (Joulin et al., 2016) to remove non- English content. We also remove content that con- tains URLs or images. The remaining data com- prises between 22k and 11M examples per senti- ment label, and data from each label is sampled in a balanced way during training. The final data consists of a train set of 31M labeled samples, and an additional 730k samples as validation and test sets, respectively. # 4.2 Evaluation Following Lample et al. (2019), we use three auto- mated metrics to measure target attribute control, fluency, and content preservation: • Attribute control: Attribute control is mea- sured by using a fastText or BERT classifier trained to predict attribute values. This clas- sifier does not have the low-dimensional bot- tleneck of the one used to produce the em- bedding yd, as classification performance is more accurate with larger dimensions. • Fluency: Fluency is measured by the per- plexity assigned to generated text sequences by an LSTM language model trained on the third-party Reddit training data. Content preser- vation is roughly captured through n-gram statistics, by measuring the BLEU score it- between generated text and the input self (called self-BLEU as in Lample et al. (2019)). The best trade-off between those three aspects of transfer is dependent on the desired applica- tion. If the goal is to generate new utterances for a retrieval system in a conversation while keeping them from being bland or too repetitive through anchoring on a source utterance, in a manner reminiscent of the retrieve-and-refine ap- proach (Weston et al., 2018), fluency and attribute control would matter more than content preserva- tion. If the goal is to stick as close to the source sentence as possible and say the same things an- other way, which is better defined for language types (e.g., casual vs. formal) than for sentiment, then content preservation would matter more, but in a way that self-BLEU might not be sophisti- cated enough to capture. 
Hyperparameters are picked by looking at per- formance over the validation set, using self-BLEU # source it changed meanings... is annoying how Meme has already Model 2 it is fantastic football Meme has already changed meanings... Model 4 it is fantastic =D I wish people would stop making right- handed Link pics. Model 2 Fantastic show in right-handed Link pics. Model 4 I think this is fantastic and Star Wars videos... Table 2: Generations from models 2 and 4 in Ta- ble 3, transferring from annoyed to fantastic. Differ- ent stages in the training lead to different trade-offs be- tween attribute control, content preservation, and flu- ency: model 2 preserves a lot more of the source sen- tence, while model 4 has better attribute control but re- tains little from the source sentence. and transfer control. We also experimented with pooling (as in Lample et al. (2019)) and sampling with a temperature instead of greedy decoding, as well as larger bottleneck dimensions, but these all resulted in worse performance on the datasets we use here. Evaluation is performed by running style transfer on all non-matching combinations of source and target labels, on up to 900 source se- quences per source label. Results are reported us- ing source sentences from the test set. # 4.3 Fine-grained style transfer We first use our system to demonstrate success- ful transfer over a large number of fine-grained at- tribute values. Results in Table 3 show that train- ing achieves very good accuracy while maintain- ing reasonable self-BLEU scores and perplexity similar to the average perplexity of reference sen- tences. Classification of the identity baseline to the source attribute is a bit less than classifica- tion to the target attribute for the target baseline because the former uses test set examples, which were not seen by the classifier. Example genera- tions are given in Table 4, where four sentiment classes are held-out during training, but training is otherwise similar. # 4.4 Zero-shot style transfer to unseen attribute values Limiting the capacity of the attribute value repre- sentations through a small-dimensional bottleneck may make it easier for the auto-encoder to learn to generalize over the embedding space overall, be- yond the specific combinations of the sentiment labels seen during training. To check if the trans- fer can indeed generalize to unseen sentiment la- bels, we train a system with 20 out of the 24 sen- Classification Target Source self-BLEU PPL Identity Target attr. sample 0.3 99.8 93.7 0.0 100.0 146.8 0.0 151.2 Model 1 Model 2 Model 3 Model 4 84.2 91.0 93.1 97.1 7.2 3.6 2.5 0.5 42.8 261.1 36.8 225.7 31.5 212.8 6.0 129.7 Table 3: Automated metrics on the fine-grained sen- timent transfer task over 24 possible labels. Results are averaged over all transfer directions. Classifica- tion metrics show percentage of the generations clas- sified as Target and Source label attributes. Successful sentiment transfer shifts classification from Source to Target attribute. Self-BLEU measures closeness to the source sequence. Perplexity (PPL) probes fluency. Top two rows show two trivial baselines: Identity copies the source sequence and gives the baseline no-transfer test- set metrics, and has minimal classification as the Target class. Target attr. sample uses a random example from the target category training set as generation. 
Models 1 to 4 show different stages of the training, showing that different trade-offs between the three objectives of content preservation, attribute control and fluency can be achieved. Example generations for models 2 and 4 are shown in Table 2. timent labels, holding out 4 labels that are seen by the classifier (shown in italics in Table 1), but not the style-transfer auto-encoder architecture during training. We then evaluate transfer to these un- seen classes. Results in Table 5 show that trans- fer to these unseen classes is still largely success- ful, with the target class being picked more than half the time out of 24 possible classes. How- ever, transfer to these held-out classes remains less successful than transfer to the classes seen during training. Examples of transfer to unseen classes are given at the bottom of Table 4. # 5 Transferring to a new, related attribute space Training the style transfer architecture requires millions of training examples. In this section, we examine whether it is possible to leverage pre- training on a given sentiment transfer task, to then transfer1 that training to an attribute transfer task with a training set orders of magnitude smaller, as long as the attribute space is related. 1Note that transfer in this sentence is used first in the con- text of transfer learning, then in the context of style transfer. # grateful angry hopeful sad thankful I appreciate him. And I love him. I hate him. And I am angry about him. I would love him. And I hope it’s true. I miss him. And I liked him. I have seen him. And thanks for doing that. hopeful angry curious ecstatic happy I hope I’m not too late to the party. I am so angry I’m not too late to the party. I wonder if I’m not too late to the party. I am ecstatic I’m not too late to the party. I am happy I’m not too late to the party. pumped Thank you! So pumped to pick this up! curious Am I the only one who didn’t pick this up? frustrated Of course it would be hard to pick this up! Any chance I can pick this up? hopeful shocked But she was shocked when she found out angry curious what’d happened. But she was so angry when she found out what’d happened. Do you know if she found out what’d hap- pened. delighted Hey she laughed when she found out ecstatic what’d happened. Absolutely ecstatic when she found out what’d happened. emotional But she cried when she found out what’d thankful happened. Thank you, she was looking forward to something like what’d happened. Table 4: Example transfer generations from sequences from the test set of the third-party Reddit data, with various source sentiment labels (bold), to various fine- grained target sentiment labels. The bottom cell in- cludes transfer to held-out labels that were not seen dur- ing training, in italics. Generations are from the model shown in the top row of Table 5. Training target attribute Held-out target attribute Classification Classification Target Sce s-BL PPL Target Sce s-BL PPL 86.8 90.5 92.6 6.0 39.5 257.2 4.2 36.7 240.5 2.8 29.7 212.4 56.5 62.2 63.4 11.6 40.2 283.9 9.2 38.5 285.5 7.5 32.3 272.6 Table 5: Evaluation when 4 out of the 24 sentiment labels are held out during training, shown for three dif- ferent stages of the training which capture three dif- ferent trade-offs between the criteria of attribute con- trol, content preservation, and fluency. The metrics shown are the same as in Table 3: percentage clas- sifications assigned to the target and source (Sce) at- tributes, self-BLEU (s-BL), and perplexity (PPL). 
Left: transfer to target attributes seen by the style transfer ar- chitecture during training. Metrics are very similar to those obtained when training on 24 classes, in Table 3. Right: transfer to the 4 unseen classes is still largely successful, with the target attribute being selected more than half the time out of 24 possible attributes (chance would be 4%), but clearly less so than for the attributes seen during training. S-BL scores are similar to those of attributes seen during training, but PPL is higher. source I come home from work and my parents are always arguing. It frustrates me. I have a big presentation at work that I am re- ally looking forward to it. I come home from her and my parents are al- ways arguing. It compliments me. Fine-tuned I come home from work and my parents are always studing. I am so content with my wife. Scratch Zero-shot My boss made me work overtime yesterday and I didn’t even get paid for it! My husband and I went on a vacation trip to New York. I was not expecting it Zero-shot My boss made it overtime kicked and I didn’t source Scratch even get arrested for it! Fine-tuned My boss made me work yesterday. Everything I had is going well now. Table 6: Generations from various transfer methods to perform attribute control over EMPATHETICDIA- LOGUES, with models from Table 7, rewriting from an- noyed to content. Training from scratch mostly ignores source content. Zero-shot transfer misses the attribute and is not fluent. Fine-tuned balances objectives better. # 5.1 Dataset The dataset we use here to examine transfer to a re- lated task is the EMPATHETICDIALOGUES dataset (Rashkin et al., 2019), which comprises about 25k dialogues accompanied by a situation description of a few sentences, and a sentiment label belong- ing to a list of 32, some of which are also in the list of 24 from the first task (e.g., angry, grateful, joy- ful, as shown in Table 1). We use the situation de- scriptions and sentiment labels, not the dialogues. We perform evaluation using the same metrics as before. The classification task over the EMPA- THETICDIALOGUES labels is overall more diffi- cult, given that there are more labels, but more im- portantly, that the dataset has not been pre-filtered by a classifier in the same way that the base train- ing dataset was selected from the third-party Red- dit dump. Thus, classification metrics (shown in Table 7) are lower across the board, with the up- per bound being the 56.5% of the Source classifi- cation for the Identity baseline. The language in EMPATHETICDIALOGUES is also easier to predict than that of Reddit, resulting in lower perplexity scores. # 5.2 Transfer experiments We compare three different approaches to perform attribute control anchored in this new dataset. Training from scratch The EMPATHETICDIA- LOGUES dataset has only 25k situation descrip- tions, and is therefore too small to allow for suc- Classification Target Source self-BLEU PPL Identity Target attr. sample 1.4 77.8 56.5 0.7 100.0 0.0 96.6 94.8 Scratch Zero-shot Fine-tuned 29.1 3.6 33.7 2.6 30.2 12.4 0.7 35.8 62.0 135.6 79.2 33.9 Table 7: Automated metrics for transfer to attributes from the EMPATHETICDIALOGUES dataset. Metrics and baselines (top two rows) are the same as in Table 3. Scratch: the style transfer architecture is trained from scratch, using only the 25k situations from the EMPA- THETICDIALOGUES dataset. 
The architecture learns to transfer to reasonable accuracy, but the self-BLEU scores are near zero, showing that the source content is nearly ignored. Zero-shot: the transfer architec- ture is pre-trained to transfer sentiments on millions of examples from the third-party Reddit dump, and a linear mapping from the new target attributes to that embedding space is trained in a supervised way. No fine-tuning of the transfer architecture is conducted. Metrics show failure to control the target attribute or change the source sequence much, simply degrading the source sequence. Fine-tuned: the transfer archi- tecture is pre-trained on the third-party Reddit dump, then fine-tuned on the EMPATHETICDIALOGUES situ- ations. This achieves a much better balance between attribute control and self-BLEU. Example generations are shown in Table 6 and Table 8. cessful training of the transfer architecture from scratch. To show this, we perform training exactly as in the previous section, but using only data from the 25k situation descriptions. Results in Table 7 show that the system learns adequate attribute con- trol, but ignores the source sequence. Zero-shot transfer The “zero-shot” approach to task transfer here requires mapping the new at- tribute space to the old, so as to specify the new desired targets in the embedding space understood by the model. To see if this can work with- out any fine-tuning, we train a logistic regression layer from the previous Reddit sentiment embed- ding space to the new attribute space, and use the learned attribute embeddings to specify the new target attributes. Attribute control is performed in the same way as before using a style transfer architecture trained on 20 sentiment labels (so as to allow comparing to transfer to a held-out sen- timent label from the same data), but the attribute targets, the source sequences and the label clas- sifiers are all from the EMPATHETICDIALOGUES dataset. This approach performs very poorly, as Waiting for my grandmother. Waiting for my paycheck at the end Waiting for my exams grateful hopeful jealous sad My grandfather invited me over and made us an awesome dinner today. My grandfather promised to buy me a car as soon as he went on vacation. My grandfather bought a car and I was pretty envious of him. My grandfather passed away and it was a shock. prepared afraid I’m going overseas and i’m super ready I’m going to the doctor on Monday. I hope he does well anticipating I’m going to eat with some friends tonight. I can’t wait to eat at the university. I’m going to get a new car this year. I just know it I’m going overseas and i’m ready to go start my new job. I’m going camping next weekend. stoked! I’m going to be able to get my degree next week. I’m going hiking with another person who is in a relationship. I’m going overseas and i’m super excited. confident content excited I am so hopeful jealous joyful Table 8: Example generations when transferring situa- tion descriptions from the test set of the EMPATHETIC- DIALOGUES dataset with various source sentiment la- bels, to other EMPATHETICDIALOGUES sentiment la- bels. Generations are produced by the fine-tuned model in Table 7. shown in Table 7. 
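The paper does not spell out exactly how the learned regression layer is turned into attribute embeddings, so the following is only a minimal sketch of one plausible reading: fit a multinomial logistic regression from the pre-trained sentiment-embedding space to the new EMPATHETICDIALOGUES labels, and reuse each class's weight vector as that attribute's target embedding. Variable names and the use of scikit-learn are our assumptions, not the authors' implementation.

```python
# Hypothetical sketch of the zero-shot attribute mapping described above.
# `situation_embeddings` (from the pre-trained Reddit sentiment encoder) and
# `situation_labels` (EMPATHETICDIALOGUES labels) are assumed to be precomputed.
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_attribute_mapping(situation_embeddings: np.ndarray,
                          situation_labels: list) -> dict:
    """Fit a logistic regression from the old embedding space to the new labels
    and return one vector per new attribute (here: its class weight vector)."""
    clf = LogisticRegression(max_iter=1000)
    clf.fit(situation_embeddings, situation_labels)
    # clf.coef_ has shape (n_new_attributes, embedding_dim); one plausible way to
    # place each new attribute in the old space is to reuse its weight vector.
    return {label: weights for label, weights in zip(clf.classes_, clf.coef_)}
```

At transfer time, the vector for the desired EMPATHETICDIALOGUES attribute would then be fed to the frozen style-transfer decoder in place of a Reddit sentiment embedding; as Table 7 shows, this mapping alone is not enough for successful transfer.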
This is not surprising, given that the low-dimensional embedding space for the original sentiment labels is trained to represent sentiment information from conversational posts that are quite removed from the task of inferring the sentiment felt in a situation description, and may simply have lost too much information to ad- equately infer the sentiment in this new context. In fact, the accuracy of the logistic regression classi- fier used to map the new sentiment labels to the old space is below 18% (on the test set), compared to over 50% achieved by a bottleneck BERT-based classifier trained on that data in raw text form. Fine-tuning Starting from the same pre-trained architecture as in the zero-shot baseline, we fine- tune the architecture on the situation descriptions from EMPATHETICDIALOGUES. This gives a chance for the model to adapt to the language and different framing and attribute space. Results in Table 7 show that the fine-tuning reaches reason- able transfer performance. Example generations are shown in Table 8. # 6 Discussion and Conclusion This work has shown that taking advantage of consistent embedding spaces obtained through a separate task (in this case, supervised classifica- tion) makes it possible to achieve reasonable suc- cess with zero-shot transfer to classes that were not seen during training or even, with some fine- tuning, transfer to an altogether different attribute space. When viewed as a method to generate con- trolled variations of an input text, this style trans- fer approach paves the way for promising data augmentation methods where an existing set of re- trieval utterances could be augmented to fit spe- cific target styles. Given that retrieval models are still performing better than generative models in conversational systems (e.g., see Rashkin et al. (2019)), this would allow combining the flexibility of enhanced fine-grained control with the power of retrieval models, while still escaping flaws of generative models such as blandness and repeti- tion, similar to the retrieve-and-refine approach (Weston et al., 2018). Another promising potential use of this style transfer architecture is through the indirect, im- plicit definition of a style through examples: in- stead of requiring a label, which could lead to quantization noise when the desired attribute is not an exact match to a pre-defined attribute value, the target attribute representation can be directly in- ferred from an example text input that conveys the desired style. This would allow mirroring of the style of a text without labeling it, or conversely complementing it by looking at a maximally dis- tant embedding. Our approach would also lend it- self well to using un-labelled styles extracted in an unsupervised way, as long as they can be repre- sented in a consistent embedding space. # References Mikel Artetxe, Gorka Labaka, Eneko Agirre, and Kyunghyun Cho. 2018. Unsupervised neural ma- In International Conference on chine translation. Learning Representations (ICLR). Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Ben- gio. 2014. Neural machine translation by jointly arXiv preprint learning to align and translate. arXiv:1409.0473. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understand- ing. arXiv preprint arXiv:1810.04805. Emily Dinan, Samuel Humeau, Bharath Chintagunta, and Jason Weston. 2019. Build it break it fix it for dialogue safety: Robustness from adversarial human attack. 
arXiv preprint arXiv:1908.06083.

Zhenxin Fu, Xiaoye Tan, Nanyun Peng, Dongyan Zhao, and Rui Yan. 2017. Style transfer in text: Exploration and evaluation. arXiv preprint arXiv:1711.06861.

Xiang Gao, Yizhe Zhang, Sungjin Lee, Michel Galley, Chris Brockett, Jianfeng Gao, and Bill Dolan. 2019. Structuring latent spaces for stylized response generation. arXiv preprint arXiv:1909.05361.

Armand Joulin, Edouard Grave, Piotr Bojanowski, and Tomas Mikolov. 2016. Bag of tricks for efficient text classification. arXiv preprint arXiv:1607.01759.

Nitish Shirish Keskar, Bryan McCann, Lav R Varshney, Caiming Xiong, and Richard Socher. 2019. CTRL: A conditional transformer language model for controllable generation. arXiv preprint arXiv:1909.05858.

Guillaume Lample, Alexis Conneau, Ludovic Denoyer, and Marc'Aurelio Ranzato. 2017a. Unsupervised machine translation using monolingual corpora only. arXiv preprint arXiv:1711.00043.

Guillaume Lample, Myle Ott, Alexis Conneau, Ludovic Denoyer, and Marc'Aurelio Ranzato. 2018. Phrase-based & neural unsupervised machine translation. arXiv preprint arXiv:1804.07755.

Guillaume Lample, Sandeep Subramanian, Eric Smith, Ludovic Denoyer, Marc'Aurelio Ranzato, and Y-Lan Boureau. 2019. Multiple-attribute text rewriting. In International Conference on Learning Representations.

Guillaume Lample, Neil Zeghidour, Nicolas Usunier, Antoine Bordes, Ludovic Denoyer, et al. 2017b. Fader networks: Manipulating images by sliding attributes. In Advances in Neural Information Processing Systems, pages 5967–5976.

Yanran Li, Hui Su, Xiaoyu Shen, Wenjie Li, Ziqiang Cao, and Shuzi Niu. 2017. DailyDialog: A manually labelled multi-turn dialogue dataset. In Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 1: Long Papers), volume 1, pages 986–995.

Dayiheng Liu, Jie Fu, Yidan Zhang, Chris Pal, and Jiancheng Lv. 2019. Revision in continuous space: Fine-grained control of text style transfer. arXiv preprint arXiv:1905.12304.

Shrimai Prabhumoye, Yulia Tsvetkov, Ruslan Salakhutdinov, and Alan W Black. 2018. Style transfer through back-translation. arXiv preprint arXiv:1804.09000.

Hannah Rashkin, Eric Michael Smith, Margaret Li, and Y-Lan Boureau. 2019. Towards empathetic open-domain conversation models: A new benchmark and dataset. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5370–5381, Florence, Italy. Association for Computational Linguistics.

Rico Sennrich, Barry Haddow, and Alexandra Birch. 2015. Improving neural machine translation models with monolingual data. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 86–96.

Ke Wang, Hang Hua, and Xiaojun Wan. 2019. Controllable unsupervised text attribute transfer via editing entangled latent representation. arXiv preprint arXiv:1905.12926.

Jason Weston, Emily Dinan, and Alexander Miller. 2018. Retrieve and refine: Improved sequence generation models for dialogue. In Proceedings of the 2018 EMNLP Workshop SCAI: The 2nd International Workshop on Search-Oriented Conversational AI, pages 87–92, Brussels, Belgium. Association for Computational Linguistics.

Zhirui Zhang, Shuo Ren, Shujie Liu, Jianyong Wang, Peng Chen, Mu Li, Ming Zhou, and Enhong Chen. 2018. Style transfer as unsupervised machine translation. arXiv preprint arXiv:1808.07894.
{ "id": "1808.07894" }
1911.03891
Social Bias Frames: Reasoning about Social and Power Implications of Language
Warning: this paper contains content that may be offensive or upsetting. Language has the power to reinforce stereotypes and project social biases onto others. At the core of the challenge is that it is rarely what is stated explicitly, but rather the implied meanings, that frame people's judgments about others. For example, given a statement that "we shouldn't lower our standards to hire more women," most listeners will infer the implicature intended by the speaker -- that "women (candidates) are less qualified." Most semantic formalisms, to date, do not capture such pragmatic implications in which people express social biases and power differentials in language. We introduce Social Bias Frames, a new conceptual formalism that aims to model the pragmatic frames in which people project social biases and stereotypes onto others. In addition, we introduce the Social Bias Inference Corpus to support large-scale modelling and evaluation with 150k structured annotations of social media posts, covering over 34k implications about a thousand demographic groups. We then establish baseline approaches that learn to recover Social Bias Frames from unstructured text. We find that while state-of-the-art neural models are effective at high-level categorization of whether a given statement projects unwanted social bias (80% F1), they are not effective at spelling out more detailed explanations in terms of Social Bias Frames. Our study motivates future work that combines structured pragmatic inference with commonsense reasoning on social implications.
http://arxiv.org/pdf/1911.03891
Maarten Sap, Saadia Gabriel, Lianhui Qin, Dan Jurafsky, Noah A. Smith, Yejin Choi
cs.CL
ACL 2020 Camera Ready; Data available at http://tinyurl.com/social-bias-frames
null
cs.CL
20191110
20200423
0 2 0 2 r p A 3 2 ] L C . s c [ 3 v 1 9 8 3 0 . 1 1 9 1 : v i X r a # SOCIAL BIAS FRAMES: Reasoning about Social and Power Implications of Language Maarten Sap‘ Dan Jurafsky° Saadia Gabriel'? Noah A. Smith‘t Lianhui Qin‘? Yejin Choi't Paul G. Allen School of Computer Science & Engineering, University of Washington ‘Allen Institute for Artificial Intelligence °Linguistics & Computer Science Departments, Stanford University # Abstract Warning: this paper contains content that may be offensive or upsetting. Language has the power to reinforce stereo- types and project social biases onto others. At the core of the challenge is that it is rarely what is stated explicitly, but rather the im- plied meanings, that frame people’s judgments about others. For example, given a statement that “we shouldn’t lower our standards to hire more women,” most listeners will infer the implicature intended by the speaker — that “women (candidates) are less qualified.” Most semantic formalisms, to date, do not capture such pragmatic implications in which people express social biases and power differentials in language. We introduce SOCIAL BIAS FRAMES, a new conceptual formalism that aims to model the pragmatic frames in which people project so- cial biases and stereotypes onto others. In ad- dition, we introduce the Social Bias Inference Corpus to support large-scale modelling and evaluation with 150k structured annotations of social media posts, covering over 34k implica- tions about a thousand demographic groups. We then establish baseline approaches that learn to recover SOCIAL BIAS FRAMES from unstructured text. We find that while state- of-the-art neural models are effective at high- level categorization of whether a given state- ment projects unwanted social bias (80% F1), they are not effective at spelling out more de- tailed explanations in terms of SOCIAL BIAS FRAMES. Our study motivates future work that combines structured pragmatic inference with commonsense reasoning on social impli- cations. # 1 Introduction Language has enormous power to project social biases and reinforce stereotypes on people (Fiske, We shouldn't lower our standards just to hire more women. offensive? group referenced? which group Implies women implied are less qualified What do you call a movie with an all- Muslim cast? A box office bomb. intentional? (@ likely Ss intentional? offensive? yes yes group referenced? ay yes which group Implies Muslims implied are all terrorists Muslims )—statement l Figure 1: Understanding and explaining why an ar- guably innocuous statement is potentially unjust re- quires reasoning about conversational implicatures and commonsense implications with respect to the under- lying intent, offensiveness, and power differentials be- tween different social groups. SOCIAL BIAS FRAMES aim to represent the various pragmatic meanings re- lated to social bias implications, by combining cat- egorical and free-text annotations, e.g., that “women are less qualified” is implied by the statement “we shouldn’t lower our standards to hire more women.” 1993). The way such biases are projected is rarely in what is stated explicitly, but in all the implied layers of meanings that frame and influence peo- ples judgments about others. For example, on hearing a statement that an all-Muslim movie was a “box office bomb”, most people can instantly off? int? lewd? group? 
targeted group implied statement no yes no yes yes yes yes yes no yes no yes yes no yes yes no no yes yes no no no no no no yes yes yes Korean folks gay men Black folks have weird names are weak and not manly no do drugs kill people commit shootings no no # post Kendrick just dropped the coldest video of all fucking time What a fucking stupid bitch shut the fuck up already need to fuck sum bad , I got some pressure built up :P I take a bitch home give her long dick You annoyed me every day for months you’re a fucking moron I dont My problem with Korean artists: know how to pronounce your name I can’t hanggul to save my life I’ll continue this thread till non-lifting fags stop lifting like idiots at the gym... Which is never. I thought drugs were the only things black people could shoot up Boy was I wrong Table 1: Examples of inference tuples in SBIC. The types of inferences captured by SOCIAL BIAS FRAMES cover (potentially subtle) offensive implications about various demographic groups. recognize the implied demonizing stereotype that “Muslims are terrorists” (Figure 1). Understand- ing these biases with accurate underlying expla- nations is necessary for AI systems to adequately interact in the social world (Pereira et al., 2016), and failure to do so can result in the deployment of harmful technologies (e.g., conversational AI sys- tems turning sexist and racist; Vincent, 2016). as intent and offensiveness with implicatures de- scribed in free-form text such as groups refer- enced and implied statements. In addition, we in- troduce SBIC,1 a new corpus collected using a novel crowdsourcing framework. SBIC supports large-scale learning and evaluation with over 150k structured annotations of social media posts, span- ning over 34k implications about a thousand de- mographic groups. Most previous approaches to understanding the implied harm in statements have cast this task as a simple toxicity classification (e.g., Waseem and Hovy, 2016; Founta et al., 2018; Davidson et al., 2017). However, simple classifications run the risk of discriminating against minority groups, due to high variation and identity-based biases in anno- tations (e.g., which cause models to learn asso- ciations between dialect and toxicity; Sap et al., 2019a; Davidson et al., 2019). In addition, de- tailed explanations are much more informative for people to understand and reason about why a state- ment is potentially harmful against other people (Gregor and Benbasat, 1999; Ribeiro et al., 2016). Thus, we propose SOCIAL BIAS FRAMES, a novel conceptual formalism that aims to model pragmatic frames in which people project so- cial biases and stereotypes on others. Compared to semantic frames (Fillmore and Baker, 2001), the meanings projected by pragmatic frames are richer, and thus cannot be easily formalized us- ing only categorical labels. Therefore, as illus- trated in Figure 1, our formalism combines hi- erarchical categories of biased implications such We then establish baseline approaches that learn to recover SOCIAL BIAS FRAMES from unstruc- tured text. We find that while state-of-the-art neu- ral models are effective at making high-level cat- egorization of whether a given statement projects unwanted social bias (80% F1), they are not ef- fective at spelling out more detailed explanations by accurately decoding SOCIAL BIAS FRAMES. Our study motivates future research that combines structured pragmatic inference with commonsense reasoning on social implications. Important implications of this study. 
We rec- ognize that studying SOCIAL BIAS FRAMES nec- essarily requires us to confront online content that may be offensive or disturbing (see §7 for fur- ther discussion on the ethical implications of this study). However, deliberate avoidance does not eliminate such problems. Therefore, the impor- tant premise we take in this study is that assessing social media content through the lens of SOCIAL 1SBIC: Social Bias Inference Corpus, available at http://tinyurl.com/social-bias-frames. BIAS FRAMES is important for automatic flagging or AI-augmented writing interfaces, where poten- tially harmful online content can be analyzed with detailed explanations for users or moderators to In addition, the collective consider and verify. analysis over large corpora can also be insightful for educating people on reducing unconscious bi- ases in their language. # 2 SOCIAL BIAS FRAMES Definition To better enable models to account for socially bi- ased implications of language,2 we design a new pragmatic formalism that distinguishes several re- lated but distinct inferences, shown in Figure 1. Given a natural language utterance, henceforth, post, we collect both categorical as well as free text inferences (described below), inspired by re- cent efforts in free-text annotations of common- sense knowledge (e.g., Speer and Havasi, 2012; Rashkin et al., 2018; Sap et al., 2019b) and argu- mentation (Habernal and Gurevych, 2016; Becker et al., 2017). The free-text explanations are cru- cial to our formalism, as they can both increase trust in predictions made by the machine (Kulesza et al., 2012; Bussone et al., 2015; Nguyen et al., 2018) and encourage a poster’s empathy towards a targeted group, thereby combating biases (Cohen- Almagor, 2014). We base our frame design on so- cial science literature of pragmatics (Lakoff, 1973; de Marneffe et al., 2012) and impolite- ness (Kasper, 1990; Gabriel, 1998; Dynel, 2015; Vonasch and Baumeister, 2017). We then refine the frame structure (including number of possi- ble answers to questions) based on the annotator (dis)agreement in multiple pilot studies. We de- scribe each of the included variables below. Offensiveness is our main categorical annota- tion, and denotes the overall rudeness, disrespect, or toxicity of a post. We consider whether a post could be considered “offensive to anyone”, as pre- vious work has shown this to have higher recall (Sap et al., 2019a). This is a categorical variable with three possible answers (yes, maybe, no). Intent to offend captures whether the perceived motivation of the author was to offend, which is key to understanding how it is received (Kasper, 2In this work, we employ the U.S. sociocultural lens when discussing bias and power dynamics among demographic groups. 1990; Dynel, 2015), yet distinct from offensive- ness (Gabriel, 1998; Daly, 2018). This is a cat- egorical variable with four possible answers (yes, probably, probably not, no). Lewd or sexual references are a key subcategory of what constitutes potentially offensive material in many cultures, especially in the United States (Strub, 2008). This is a categorical variable with three possible answers (yes, maybe, no). Group implications are distinguished from individual-only attacks or insults that do not in- voke power dynamics between groups (e.g., “F*ck you” vs. “F*ck you, f*ggot”). This is a categori- cal variable with two possible answers: individual- only (no), group targeted (yes). 
Targeted group describes the social or demo- graphic group that is referenced or targeted by the post. Here we collect free-text answers, but pro- vide a seed list of demographic or social groups to encourage consistency. Implied statement represents the power dy- namic or stereotype that is referenced in the post. We collect free-text answers in the form of simple Hearst-like patterns (e.g., “women are ADJ”, “gay men VBP”; Hearst, 1992). In-group language aims to capture whether the author of a post may be a member of the same so- cial/demographic group that is targeted, as speaker identity changes how a statement is perceived (O’Dea et al., 2015). Specifically, in-group lan- guage (words or phrases that (re)establish belong- ing to a social group; Eble, 1996) can change the perceived offensiveness of a statement, such as reclaimed slurs (Croom, 2011; Galinsky et al., 2013) or self-deprecating language (Greengross and Miller, 2008). Note that we do not attempt to categorize the identity of the speaker. This vari- able takes three possible values (yes, maybe, no). # 3 Collecting Nuanced Annotations To create SBIC, we design a crowdsourcing framework to distill the biased implications of posts at a large scale. # 3.1 Data Selection We draw from various sources of potentially bi- ased online content, shown in Table 2, to select type source # posts Reddit r/darkJokes r/meanJokes r/offensiveJokes Microaggressions 10,095 3,483 356 2,011 subtotal 15,945 Twitter Founta et al. (2018) Davidson et al. (2017) Waseem and Hovy (2016) 11,864 3,008 1,816 subtotal 16,688 Hate Sites Gab Stormfront Banned Reddits 3,715 4,016 4,308 subtotal 12,039 SBIC total # posts 44,671 Table 2: Breakdown of origins of posts in SBIC. Mi- croaggressions are drawn from the Reddit corpus intro- duced by Breitfeller et al. (2019), and Banned Reddits include r/Incels and r/MensRights. posts to annotate. Since online toxicity can be rel- atively scarce (Founta et al., 2018),3 we start by annotating English Reddit posts, specifically three intentionally offensive subReddits and a corpus of potential microaggressions from Breitfeller et al. (2019). By nature, the three offensive subreddits are very likely to have harmful implications, as posts are often made with intents to deride ad- versity or social inequality (Bicknell, 2007). Mi- croaggressions, on the other hand, are likely to contain subtle biased implications—a natural fit for SOCIAL BIAS FRAMES. In addition, we include posts from three exist- ing English Twitter datasets annotated for toxic or abusive language, filtering out @-replies, retweets, and links. We mainly annotate tweets released by Founta et al. (2018), who use a boot- strapping approach to sample potentially offensive tweets. We also include tweets from Waseem and Hovy (2016) and Davidson et al. (2017), who col- lect datasets of tweets containing racist or sexist hashtags and slurs, respectively. Finally, we include posts from known En- glish hate communities: Stormfront (de Gibert 3Founta et al. (2018) find that the prevalence of toxic con- tent online is <4%. She only got the job because she's a woman 10 /targeted by tis post? — st st ty ee poe Figure 2: Snippet of the annotation task used to collect SBIC. Lewdness, group implication, and in-group lan- guage questions are omitted for brevity but shown in larger format in Figure 4 (Appendix). 
et al., 2018) and Gab,4 which are both doc- umented white-supremacist and neo-nazi com- munities (Bowman-Grieve, 2009; Hess, 2016), and two English subreddits that were banned for inciting violence against women (r/Incels and r/MensRights; Fingas, 2017; Center, 2012). # 3.2 Annotation Task Design We design a hierarchical annotation framework to collect biased implications of a given post (snippet shown in Figure 2) on Amazon Mechanical Turk (MTurk). The full task is shown in the appendix (Figure 4). For each post, workers indicate whether the post is offensive, whether the intent was to offend, and whether it contains lewd or sexual content. Only if annotators indicate potential offensiveness do they answer the group implication question. If the post targets or references a group or demographic, workers select or write which one(s); per selected group, they then write two to four stereotypes. Fi- nally, workers are asked whether they think the speaker is part of one of the minority groups refer- enced by the post. We collect three annotations per post, and re- strict our worker pool to the U.S. and Canada. We ask workers to optionally provide coarse-grained demographic information.5 4https://files.pushshift.io/gab/ GABPOSTS_CORPUS.xz 5This study was approved by our institutional review board. total # tuples 147,139 # unique posts groups implications 44,671 1,414 32,028 post-group post-group-implication group-implication 48,923 87,942 34,333 skews (% pos.) offensive intent lewd group targeted in-group 44.8% 43.4% 7.9% 50.9% 4.6% Table 3: Statistics of the SBIC dataset. Skews indi- cate the number of times a worker annotated a post as offensive, etc. Annotator demographics In our final annota- tions, our worker pool was relatively gender- balanced and age-balanced (55% women, 42% men, <1% non-binary; 36±10 years old), but racially skewed (82% White, 4% Asian, 4% His- panic, 4% Black). Annotator agreement Overall, the annotations in SBIC showed 82.4% pairwise agreement and Krippendorf’s α=0.45 on average, which is sub- stantially higher than previous work in toxic lan- guage detection (e.g., α=0.22 in Ross et al., 2017). Broken down by each categorical question, work- ers agreed on a post being offensive at a rate of 76% (Krippendorf’s α=0.51), its intent being to offend at 75% (α=0.46), and it having group implications at 74% (α=0.48). For categoriz- ing posts as lewd, workers agreed substantially (94%, α=0.62). However, flagging potential in- group speech had lower agreement, likely because this is a very nuanced annotation, and because highly skewed categories (only 5% “yes”; see Ta- ble 3) lead to low αs (here, α=0.17 with agreement 94%).6 Finally, workers agreed on the exact same targeted group 80.2% of the time (α=0.50). # 3.3 SBIC Description After data collection, SBIC contains 150k struc- tured inference tuples, covering 34k free text group-implication pairs (see Table 3). We show example inference tuples in Table 1. 6Given our data selection process, we expect the rate of in-group posts to be very low (see §3.3). 100% 80% 60% 40% 20% 0% Reddit HateSites Twitter m gender/sexuality m race/ethnicity mreligion/culture msocial/political mw disability m™ body/age B victims Figure 3: Breakdown of targeted group categories by domains. We show percentages within domains for the top three most represented identities, namely gen- der/sexuality (e.g., women, LGBTQ), race/ethnicity (e.g., Black, Latinx, and Asian), and culture/origin (e.g., Muslim, Jewish). 
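Concretely, each inference tuple in SBIC combines the categorical variables of Section 2 with the two free-text fields. The snippet below is a minimal sketch of one way such a tuple could be represented in code; the field names, types, and example values (taken from Figure 1) are our own illustration, not an official data schema.

```python
# Hypothetical container for one SBIC inference tuple, mirroring the variables
# described in Section 2; field names and types are assumptions for illustration.
from dataclasses import dataclass
from typing import Optional

@dataclass
class SBFAnnotation:
    post: str
    offensive: str                     # "yes" / "maybe" / "no"
    intent: str                        # "yes" / "probably" / "probably not" / "no"
    lewd: str                          # "yes" / "maybe" / "no"
    group_targeted: Optional[str]      # "yes" / "no"; only asked if potentially offensive
    targeted_group: Optional[str]      # free text, e.g. "women"
    implied_statement: Optional[str]   # free text, e.g. "women are less qualified"
    in_group: str                      # "yes" / "maybe" / "no"

example = SBFAnnotation(
    post="we shouldn't lower our standards to hire more women.",
    offensive="yes", intent="yes", lewd="no", group_targeted="yes",
    targeted_group="women", implied_statement="women are less qualified",
    in_group="no",
)
```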
Additionally, we show a breakdown of the types of targeted groups in Figure 3. While SBIC cov- ers a variety of types of biases, gender-based, race- based, and culture-based biases are the most repre- sented, which parallels the types of discrimination happening in the real world (RWJF, 2017). We find that our dataset is predominantly writ- ten in White-aligned English (78% of posts), as measured by a lexical dialect detector by Blodgett et al. (2016), with <10% of posts having indica- tors of African-American English. We caution re- searchers to consider the potential for dialect- or identity-based biases in labelling (Davidson et al., 2019; Sap et al., 2019a) before deploying technol- ogy based on SBIC (see Section 7). # 4 Social Bias Inference Given a post, we establish baseline performance of models at inferring SOCIAL BIAS FRAMES. An ideal model should be able to both generate the implied power dynamics in textual form, as well as classify the post’s offensiveness and other categor- ical variables. Satisfying these conditions, we use the OpenAI-GPT transformer networks (Vaswani et al., 2017; Radford et al., 2018, 2019) as a basis for our experiments, given their recent successes at model offensive 42.2% pos. (dev.) F1 rec. pr. intent 44.8% pos (dev.) F1 rec. pr. lewd 3.0% pos (dev.) F1 rec. pr. group 66.6% pos (dev.) F1 rec. pr. in-group 5.1% pos (dev.) F1 rec. pr. SBF-GPT1-gdy SBF-GPT2-gdy SBF-GPT2-smp 75.2 88.3 65.5 77.2 88.3 68.6 80.5 84.3 76.9 74.4 89.8 63.6 76.3 89.5 66.5 75.3 89.9 64.7 75.2 78.2 72.5 77.6 81.2 74.3 78.6 80.6 76.6 62.3 74.6 53.4 66.9 67.9 65.8 66.0 67.6 64.5 – 24.0 85.7 14.0 – – – – – test SBF-GPT2-gdy 78.8 89.8 70.2 78.6 90.8 69.2 80.7 84.5 77.3 69.9 70.5 69.4 – – – Table 4: Experimental results (%) of various models on the classification tasks (gdy: argmax, smp: sampling). Some models did not predict the positive class for “in-group language,” their performance is denoted by “–”. We bold the F1 scores of the best performing model(s) on the development set. For easier interpretation, we also report the percentage of instances in the positive class in the development set. classification, commonsense generation, and con- ditional generation (Bosselut et al., 2019; Keskar et al., 2019). We minimize the cross-entropy of the contex- tual probability of the correct token in our full lin- earized frame objective (of length N ): Training We cast our frame prediction task as a hybrid classification and language generation task, where we linearize the variables following the frame hierarchy.7 At training time, our model takes as input a sequence of N tokens: x = {[STR], w1, w2, ..., wn, [SEP], w[lewd], w[off], w[int], w[grp], [SEP], w[G]1 w[S]1 , w[G]2 , w[S]2 , ..., [SEP], , ..., [SEP], w[ing], [END]} (1) where [STR] is our start token, w1:n is the sequence of tokens in a post, w[G]i the tokens representing the group, and w[S]i the implied statement. We add two task-specific vocabulary items for each of our five classification tasks (w[lewd], w[off], w[int], w[grp], w[ing]), each representing the negative and positive values of the class (e.g., for offensiveness, [offY] and [offN]).8 The model relies on a stack of transformer blocks of multi-headed attention and fully con- nected layers to encode the input tokens (for a de- tailed modelling description, see Radford et al., 2018, 2019). Since GPT is a forward-only lan- guage model, the attention is only computed over preceding tokens. 
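As a concrete illustration of Eq. (1), the sketch below flattens one post-group-statement triple into a single training sequence; the exact special-token spellings and the handling of empty fields are assumptions on our part, while the binarization of categorical answers follows footnote 8 ("yes"/"probably"/"maybe" map to the positive token).

```python
# Illustrative linearization of one training instance into the sequence of Eq. (1).
# Special-token spellings ([STR], [SEP], [offY], ...) and whitespace joining are
# assumptions; only the overall ordering is taken from the paper.
POSITIVE = {"yes", "probably", "maybe"}

def class_token(name: str, answer: str) -> str:
    return f"[{name}{'Y' if answer in POSITIVE else 'N'}]"

def linearize(post: str, lewd: str, off: str, intent: str, grp: str,
              group_text: str = "", statement_text: str = "",
              in_group: str = "no") -> str:
    pieces = ["[STR]", post, "[SEP]",
              class_token("lewd", lewd), class_token("off", off),
              class_token("int", intent), class_token("grp", grp),
              "[SEP]", group_text, "[SEP]", statement_text, "[SEP]",
              class_token("ing", in_group), "[END]"]
    return " ".join(p for p in pieces if p)
```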
At the last layer, the model projects the embedding into a vocabulary-sized vector, which is turned into a probability distribu- tion over the vocabulary using a softmax layer. 7We linearize following the order in which variables were annotated (see Figure 4). Future work could explore alternate orderings. 8We binarize our categorical annotations, assigning 1 to “yes,” “probably,” and “maybe,”, and 0 to all other values. 1 L= "WN So log peer (w; | woi-1) i During training, no loss is incurred for lower- level variables with no values, i.e., variables that cannot take values due to earlier variable values (e.g., there is no targeted group for posts marked as non-offensive). In our experiments we use pretrained versions of OpenAI’s GPT and GPT2 (Radford et al., 2018, 2019) for our model variants, named SBF-GPT1 and SBF-GPT2, respectively. While their architec- tures are similar (stack of Transformers), GPT was trained on a large corpus of fiction books, whereas GPT2 was trained on 40Gbs of English web text. Inference We frame our inference task as a con- ditional language generation task. Conditioned on the post, we generate tokens one-by-one either by greedily selecting the most probable one, or by sampling from the next word distribution, and ap- pending the selected token to the output. We stop when the [END] token is generated, at which point our entire frame is predicted. For greedy decod- ing, we only generate our frames once, but for sampling, we repeat the generation procedure to yield ten candidate frame predictions and choose the highest scoring one under our model. In contrast to training time, where all inputs are consistent with our frames’ structure, at test time, our model can sometimes predict combinations of variables that are inconsistent with the constraints of the frame (e.g., predicting a post to be inoffen- sive, but still predict it to be offensive to a group). To mitigate this issue, we also experiment with a constrained decoding algorithm (denoted “con- str”) that considers various global assignments of group targeted BLEU Rouge-L WMD dev. SBF-GPT1-gdy SBF-GPT1-gdy-constr SBF-GPT2-gdy SBF-GPT2-gdy-constr SBF-GPT2-smp SBF-GPT2-smp-constr 69.9 69.2 74.2 73.4 83.2 83.0 60.3 64.7 64.6 68.2 33.7 33.7 1.01 1.05 0.90 0.89 0.62 0.63 49.9 49.0 49.8 49.6 44.3 44.1 40.2 42.8 41.4 43.5 17.8 17.9 2.97 3.02 2.96 2.96 3.31 3.31 test SBF-GPT2-gdy SBF-GPT2-gdy-constr 77.0 77.9 71.3 68.7 0.76 0.74 52.2 52.6 46.5 44.9 2.81 2.79 Table 5: Automatic evaluation of various models on the generation task. We bold the scores of the best performing model(s) on the development set. Higher is better for BLEU and ROUGE scores, and lower is better for WMD. variables. Specifically, after greedy decoding, we recompute the probabilities of each of the categor- ical variables, and search for the most probable as- signment given the generated text candidate and variable probabilities.9 This can allow variables to be assigned an alternative value that is more glob- ally optimal.10 # 4.1 Evaluation We evaluate performance of our models in the For classification, we report following ways. precision, recall, and F1 scores of the positive class. Following previous generative inference work (Sap et al., 2019b), we use automated met- rics to evaluate model generations. We use BLEU- 2 and RougeL (F1) scores to capture word over- lap between the generated inference and the refer- ences, which captures quality of generation (Gal- ley et al., 2015; Hashimoto et al., 2019). 
We ad- ditionally compute word mover’s distance (WMD; Kusner et al., 2015), which uses distributed word representations to measure similarity between the generated and target text.11 inferences (hypotheses) to all targeted groups and implied statements (references). All experiments are carried out using Hugging- Face’s Transformers library.12 We tune hyperpa- rameters on the dev. set, and report performance for the best performing setting (according to aver- age F1). We train or finetune our models using a batch size of 4, a learning rate of 5×10−6 for GPT and 10−5 for GPT2 (both with linear warm up), and consider training for e ∈ {1, 2, 5} epochs. # 5 Results Listed in Tables 4 and 5, our modelling results in- dicate that making inferences about social biases in language is challenging for these models. Classification Shown in Table 7, models per- form well on higher-level variables such as of- fensiveness and lewdness, despite the latter be- ing heavily skewed. We hypothesize that correctly predicting lewdness might require more lexical matching (e.g., detecting words with sexual con- notations). # 4.2 Training Details As each post can contain multiple annotations, we define a training instance as containing one post- group-statement triple (along with the five cate- gorical annotations). We then split our dataset into train/dev./test (75:12.5:12.5), ensuring that no post is present in multiple splits. For evaluation (dev., test), we combine the categorical variables by av- eraging their binarized values and re-binarizing using a .5 threshold, and compare the generated 9We only use the possible assignments in the same for- ward pass; we do not use assignments from different samples. 10In practice, as seen in Tables 4, 5, and 7, this only slightly improves predictions. Whether a group is targeted is slightly less easy for models to predict, and whether the language is in-group is even more challenging, with most of the models defaulting to never predicting it. This highly skewed category poses a challenge for all models, likely due to subtlety of the task and the lack of positive instances. SBF-GPT2-gdy is the only model that predicts positive values for in-group language, for which it benefits from constrained decoding with a 1.9% improvement in F1 score (we show results with all constrained decoding variants in Table 7 in the appendix). 11We use GloVe trained on CommonCrawl, as part of the SpaCy en core web md package. # 12https://github.com/huggingface/ transformers post predicted group predicted implication reference groups reference implications (a) All-female casts bother me but I have literally never noticed the hun- dreds of movies with 0 female char- acters I have already watched (b) As expected, when the terrorist group Hamas won the election in Gaza it was the usual one man, one vote, one time, Islamist election. #Islam — — muslim folks muslims terrorists are women muslim folks; arabic folks women aren’t good actors; women can’t be in leading roles bad people; islam promotes control- ling governments; muslims are au- thoritarians; not fair (c) ”Black guy in class: *attempts to throw a paper ball into the trash* *misses* Mrs. Mundy: ”You’re a disgrace to your race, Marcus” (d) How can a Libtard speak truth to power when the only thing they vomit from their mouths is lies? 
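For readers who want to approximate the generation metrics of Section 4.1, the sketch below computes BLEU-2 (via NLTK) and a longest-common-subsequence Rouge-L F1 against multiple references. It is only a rough approximation of the evaluation described above: tokenization, BLEU smoothing, and the aggregation over references (here, the best-matching reference) are assumptions rather than the exact implementation, and the GloVe-based WMD of footnote 11 is omitted.

```python
# Approximate re-implementation of the generation metrics (BLEU-2, Rouge-L F1).
# Whitespace tokenization, smoothing, and best-match aggregation over the
# reference set are assumptions; WMD is not reproduced here.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

def _lcs_len(a, b):
    # Longest common subsequence length (dynamic programming), used for Rouge-L.
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            dp[i + 1][j + 1] = dp[i][j] + 1 if x == y else max(dp[i][j + 1], dp[i + 1][j])
    return dp[-1][-1]

def rouge_l_f1(hypothesis: str, reference: str) -> float:
    hyp, ref = hypothesis.split(), reference.split()
    lcs = _lcs_len(hyp, ref)
    if lcs == 0:
        return 0.0
    precision, recall = lcs / len(hyp), lcs / len(ref)
    return 2 * precision * recall / (precision + recall)

def score_generation(hypothesis: str, references: list) -> dict:
    refs_tok = [r.split() for r in references]
    bleu2 = sentence_bleu(refs_tok, hypothesis.split(), weights=(0.5, 0.5),
                          smoothing_function=SmoothingFunction().method1)
    rouge = max(rouge_l_f1(hypothesis, r) for r in references)
    return {"bleu2": bleu2, "rougeL": rouge}
```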
black folks liberals black are trash liberals stupid people are black folks liberals are good at basketball; black men are defined by athletic skill.; black men should be good at basketball.; do not take things seriously liberals are incompetent; are stupid; liberals only speak lies liberals (e) I hate fat bitches overweight/fat folks fat people are ugly overweight/fat folks are not pleasant people; fat folks are all the same; fat folks are less than others; not likable Table 6: Examples of SBF-GPT2-gdy-constrmodel predictions. The model struggles to pick up on subtle biases (a), and tends to generate generic stereotypes rather than implications that are entailed by the post (b, c). Generation When evaluating our models on the generation tasks (i.e., targeted group and implied statement), we find that no one model outperforms others across all metrics (Table 5). Overall, models do well at generating the tar- geted groups, likely because of the more lim- ited generation space (there are only 1.4k pos- sible groups in SBIC). Conversely, for implied statement generation (where output space is much larger), model performance is slightly worse. Similar to the classification tasks, SBF-GPT2- gdy shows a slight increase in RougeL score when using constrained decoding, but we see a slight drop in BLEU scores. Error analysis Since small differences in auto- mated evaluation metrics for text generation some- times only weakly correlate with human judg- ments (Liu et al., 2016), we manually perform an error analysis on a manually selected set of gen- erated development-set examples from the SBF- GPT2-gdy-constr model (Table 6). Overall, the model seems to struggle with generating textual implications that are relevant to the post, instead generating very generic stereotypes about the de- mographic groups (e.g., in examples b and c). The model generates the correct stereotypes when there is high lexical overlap with the post (e.g., examples d and e). This is in line with previous research showing that large language models rely on correlational patterns in data (Sap et al., 2019c; Sakaguchi et al., 2020). # 6 Related Work Bias and toxicity detection Detection of hate- ful, abusive, or other toxic language has received increased attention recently (Schmidt and Wie- gand, 2017), and most dataset creation work has cast this detection problem as binary classifica- tion (Waseem and Hovy, 2016; Davidson et al., 2017; Founta et al., 2018). Moving beyond a sin- gle binary label, Wulczyn et al. (2017) and the PerspectiveAPI use a set of binary variables to an- notate Wikipedia comments for several toxicity- related categories (e.g., identity attack, profanity). Similarly, Zampieri et al. (2019) hierarchically an- notate a dataset of tweets with offensiveness and whether a group or individual is targeted. Most related to our work, Ousidhoum et al. (2019) cre- ate a multilingual dataset of 13k tweets annotated for five different emotion- and toxicity-related as- pects, including a 16-class variable representing social groups targeted. In comparison, SOCIAL BIAS FRAMES not only captures binary toxic- ity and hierarchical information about whether a group is targeted, but also free-text implications about 1.4k different targeted groups and the im- plied harm behind statements. Similar in spirit to this paper, recent work has tackled more subtle bias in language, such as mi- croaggressions (Breitfeller et al., 2019) and conde- scension (Wang and Potts, 2019). 
These types of biases are in line with the biases covered by SO- CIAL BIAS FRAMES, but more narrowly scoped. Inference about social dynamics Various work has tackled the task of making inferences about power and social dynamics. Particularly, previ- ous work has analyzed power dynamics about spe- cific entities, either in conversation settings (Prab- hakaran et al., 2014; Danescu-Niculescu-Mizil et al., 2012) or in narrative text (Sap et al., 2017; Field et al., 2019; Antoniak et al., 2019). Addi- tionally, recent work in commonsense inference has focused on mental states of participants of a situation (e.g., Rashkin et al., 2018; Sap et al., 2019b). In contrast to reasoning about particular individuals, our work focuses on biased implica- tions of social and demographic groups as a whole. # 7 Ethical Considerations Risks in deployment Automatic detection of of- fensiveness or reasoning about harmful implica- tions of language should be done with care. When deploying such algorithms, ethical aspects should be considered including which performance met- ric should be optimized (Corbett-Davies et al., 2017), as well as the fairness of the model on speech by different demographic groups or in different varieties of English (Mitchell et al., 2019). Additionally, deployment of such tech- nology should discuss potential nefarious side ef- fects, such as censorship (Ullmann and Tomalin, 2019) and dialect-based racial bias (Sap et al., 2019a; Davidson et al., 2019). Finally, offen- siveness could be paired with promotions of posi- tive online interactions, such as emphasis of com- munity standards (Does et al., 2011) or counter- speech (Chung et al., 2019; Qian et al., 2019). Risks in annotation Recent work has high- lighted various negative side effects caused by annotating potentially abusive or harmful content (e.g., acute stress; Roberts, 2016). We mitigated these by limiting the number of posts that one worker could annotate in one day, paying work- ers above minimum wage ($7–12), and providing crisis management resources to our annotators.13 Additionally, we acknowledge the implications of using data available on public forums for research (Zimmer, 2018) and urge researchers and prac- titioners to respect the privacy of the authors of posts in SBIC (Ayers et al., 2018). 13We direct workers to the Crisis Text Line (https:// www.crisistextline.org/). # 8 Conclusion To help machines reason about and account for societal biases, we introduce SOCIAL BIAS FRAMES, a new structured commonsense formal- ism that distills knowledge about the biased im- plications of language. Our frames combine cate- gorical knowledge about the offensiveness, intent, and targets of statements, as well as free-text in- ferences about which groups are targeted and bi- ased implications or stereotypes. We collect a new dataset of 150k annotations on social media posts using a new crowdsourcing framework and estab- lish baseline performance of models built on top of large pretrained language models. We show that while classifying the offensiveness of state- ments is easier, current models struggle to gener- ate relevant social bias inferences, especially when implications have low lexical overlap with posts. This indicates that more sophisticated models are required for SOCIAL BIAS FRAMES inferences. # Acknowledgments We thank the anonymous reviewers for their in- sightful comments. 
Additionally, we are grateful to Hannah Rashkin, Lucy Lin, Jesse Dodge, Hao Peng, and other members of the UW NLP com- munity for their helpful comments on the project. This research was supported in part by NSF (IIS- 1524371, IIS-1714566), DARPA under the CwC program through the ARO (W911NF-15-1-0543), and DARPA under the MCS program through NIWC Pacific (N66001-19-2-4031). # References Maria Antoniak, David Mimno, and Karen Levy. 2019. Narrative paths and negotiation of power in birth sto- ries. In CSCW. John W Ayers, Theodore L Caputi, Camille Nebeker, and Mark Dredze. 2018. Don’t quote me: reverse identification of research participants in social media studies. NPJ digital medicine, 1(1):1–2. Maria Becker, Michael Staniek, Vivi Nastase, and Anette Frank. 2017. Enriching argumentative texts with implicit knowledge. In NLDB. Jeanette Bicknell. 2007. What is offensive about offen- sive jokes? Philosophy Today, 51(4):458–465. Su Lin Blodgett, Lisa Green, and Brendan O’Connor. 2016. Demographic dialectal variation in social me- dia: a case study of African-American English. In EMNLP. Antoine Bosselut, Hannah Rashkin, Maarten Sap, Chaitanya Malaviya, Asli Celikyilmaz, and Yejin Choi. 2019. COMET: commonsense transformers for automatic knowledge graph construction. In ACL. Lorraine Bowman-Grieve. 2009. Exploring “Storm- front”: a virtual community of the radical right. Studies in conflict & terrorism, 32(11):989–1007. Luke M Breitfeller, Emily Ahn, David Jurgens, and Yu- lia Tsvetkov. 2019. Finding microaggressions in the wild: a case for locating elusive phenomena in social media posts. In EMNLP. and Dympna O’Sullivan. 2015. The role of explanations on trust and reliance in clinical decision support systems. In 2015 International Conference on Healthcare Infor- matics, pages 160–169. IEEE. Southern Poverty Law Center. 2012. Misogyny: sites. Intelligence Report, 145. the Yi-Ling Chung, Elizaveta Kuzmenko, Serra Sinem Tekiroglu, and Marco Guerini. 2019. CONAN - COunter NArratives through nichesourcing: a mul- tilingual dataset of responses to fight online hate speech. In ACL. Raphael Cohen-Almagor. 2014. Countering hate on Annual review of law and ethics, the internet. 22:431–443. Sam Corbett-Davies, Emma Pierson, Avi Feller, Sharad Goel, and Aziz Huq. 2017. Algorithmic decision making and the cost of fairness. In KDD. Adam M Croom. 2011. Slurs. Language Sciences, 33(3):343–358. Helen L Daly. 2018. On insults. Journal of the Ameri- can Philosophical Association, 4(4):510–524. Lillian Lee, Bo Pang, and Jon Kleinberg. 2012. Echoes of power: language effects and power differences in social interaction. In WWW. Thomas Davidson, Debasmita Bhattacharya, and Ing- mar Weber. 2019. Racial bias in hate speech and In Abusive abusive language detection datasets. Language Workshop. Thomas Davidson, Dana Warmsley, Michael W Macy, and Ingmar Weber. 2017. Automated hate speech detection and the problem of offensive language. In ICWSM. Serena Does, Belle Derks, and Naomi Ellemers. 2011. Thou shalt not discriminate: how empha- sizing moral ideals rather than obligations increases whites’ support for social equality. Journal of Ex- perimental Social Psychology, 47(3):562–571. Marta Dynel. 2015. The landscape of impoliteness re- search. Journal of Politeness Research, 11(2):383. Connie C Eble. 1996. Slang & sociability: in-group language among college students. Univ of North Carolina Press. Anjalie Field, Gayatri Bhat, and Yulia Tsvetkov. 2019. 
Contextual affective analysis: a case study of people portrayals in online #MeToo stories. In ICWSM. Charles J Fillmore and Collin F Baker. 2001. Frame In Proceedings semantics for text understanding. of WordNet and Other Lexical Resources Workshop, NAACL. Jon Fingas. 2017. Reddit bans misogynist community https: as part of anti-violence crackdown. //www.engadget.com/2017/11/08/ reddit-bans-misogynist-community- in-anti-violence-crackdown/. cessed: 2019-12-06. Ac- Susan T Fiske. 1993. Controlling other people. the im- pact of power on stereotyping. American psycholo- gist, 48(6):621–628. Antigoni-Maria Founta, Constantinos Djouvas, De- spoina Chatzakou, Ilias Leontiadis, Jeremy Black- burn, Gianluca Stringhini, Athena Vakali, Michael Sirivianos, and Nicolas Kourtellis. 2018. Large scale crowdsourcing and characterization of Twitter abusive behavior. In ICWSM. Yiannis Gabriel. 1998. An introduction to the social psychology of insults in organizations. Human Re- lations, 51(11):1329–1354. Adam D Galinsky, Cynthia S Wang, Jennifer A Whitson, Eric M Anicich, Kurt Hugenberg, and Galen V Bodenhausen. 2013. The reappropriation of stigmatizing labels: the reciprocal relationship between power and self-labeling. Psychol. Sci., 24(10):2020–2029. Michel Galley, Chris Brockett, Alessandro Sordoni, Yangfeng Ji, Michael Auli, Chris Quirk, Margaret Mitchell, Jianfeng Gao, and William B. Dolan. 2015. deltaBLEU: a discriminative metric for gener- ation tasks with intrinsically diverse targets. In ACL. Ona de Gibert, Naiara P´erez, Aitor Garc´ıa-Pablos, and Montse Cuadros. 2018. Hate speech dataset from In Abusive Language a white supremacy forum. Workshop at EMNLP. Gil Greengross and Geoffrey F Miller. 2008. Diss- ing oneself versus dissing rivals: effects of status, personality, and sex on the Short-Term and Long- Term attractiveness of Self-Deprecating and Other- Deprecating humor. Evolutionary Psychology, 6(3). Shirley Gregor and Izak Benbasat. 1999. Explanations from intelligent systems: Theoretical foundations and implications for practice. MIS quarterly, pages 497–530. Ivan Habernal and Iryna Gurevych. 2016. What makes a convincing argument? empirical analysis and de- tecting attributes of convincingness in web argumen- tation. In EMNLP, pages 1214–1223. Tatsunori B Hashimoto, Hugh Zhang, and Percy Liang. 2019. Unifying human and statistical evaluation for natural language generation. In NAACL-HLT. Marti A Hearst. 1992. Automatic acquisition of hy- ponyms from large text corpora. In ACL, pages 539– 545. Amanda Hess. 2016. The far right has a new dig- https://www.nytimes. ital safe space. com/2016/11/30/arts/the-far-right- has-a-new-digital-safe-space.html. Accessed: 2019-12-06. Gabriele Kasper. 1990. Linguistic politeness: current research issues. Journal of Pragmatics, 14(2):193– 218. Nitish Shirish Keskar, Bryan McCann, Lav R Varshney, Caiming Xiong, and Richard Socher. 2019. Ctrl: a conditional transformer language model for control- lable generation. arXiv preprint arXiv:1909.05858. Todd Kulesza, Simone Stumpf, Margaret Burnett, and Irwin Kwan. 2012. Tell me more? The effects of mental model soundness on personalizing an intel- In Proceedings of the SIGCHI Con- ligent agent. ference on Human Factors in Computing Systems, pages 1–10. ACM. Matt Kusner, Yu Sun, Nicholas Kolkin, and Kilian Weinberger. 2015. From word embeddings to docu- ment distances. In ICML, pages 957–966. Robin Lakoff. 1973. Language and woman’s place. Language in society, 2(1):45–79. 
Chia-Wei Liu, Ryan Lowe, Iulian V Serban, Michael Noseworthy, Laurent Charlin, and Joelle Pineau. 2016. How NOT to evaluate your dialogue system: an empirical study of unsupervised evaluation met- rics for dialogue response generation. In ACL. Marie-Catherine de Marneffe, Christopher D Man- ning, and Christopher Potts. 2012. Did it happen? the pragmatic complexity of veridicality assessment. Computational Linguistics, 38(2):301–333. Margaret Mitchell, Simone Wu, Andrew Zaldivar, Parker Barnes, Lucy Vasserman, Ben Hutchinson, Elena Spitzer, Inioluwa Deborah Raji, and Timnit Gebru. 2019. Model cards for model reporting. In FAccT. An T Nguyen, Aditya Kharosekar, Saumyaa Krish- nan, Siddhesh Krishnan, Elizabeth Tate, Byron C Wallace, and Matthew Lease. 2018. Believe it or not: designing a human-AI partnership for mixed- In The 31st Annual ACM initiative fact-checking. Symposium on User Interface Software and Technol- ogy, pages 189–199. ACM. Conor J O’Dea, Stuart S Miller, Emma B Andres, Madelyn H Ray, Derrick F Till, and Donald A Saucier. 2015. Out of bounds: Factors affecting the perceived offensiveness of racial slurs. Language Sciences, 52:155–164. Nedjma Ousidhoum, Zizheng Lin, Hongming Zhang, Yangqiu Song, and Dit-Yan Yeung. 2019. Multi- lingual and Multi-Aspect hate speech analysis. In EMNLP. Gonc¸alo Pereira, Rui Prada, and Pedro A Santos. 2016. Integrating social power into the decision-making of cognitive agents. Artificial Intelligence, 241:1–44. Vinodkumar Prabhakaran, Prabhakaran Vinodkumar, and Rambow Owen. 2014. Predicting power rela- tions between participants in written dialog from a single thread. In ACL. Jing Qian, Anna Bethke, Yinyin Liu, Elizabeth Beld- ing, and William Yang Wang. 2019. A bench- mark dataset for learning to intervene in online hate speech. In EMNLP. Alec Radford, Karthik Narasimhan, Tim Salimans, and Improving language under- Ilya Sutskever. 2018. standing by generative pre-training. Unpublished. Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. Unpub- lished. Hannah Rashkin, Maarten Sap, Emily Allaway, Noah A. Smith, and Yejin Choi. 2018. Event2mind: commonsense inference on events, intents, and reac- tions. In ACL. Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. 2016. “Why should I trust you?”: Ex- plaining the predictions of any classifier. In KDD. Sarah T Roberts. 2016. Commercial content modera- tion: digital laborers’ dirty work. In Safiya Umoja Noble and Brendesha M Tynes, editors, The Inter- sectional Internet: Race, Sex, Class and Culture On- line, Media Studies Publications. Peter Lang Pub- lishing. Bj¨orn Ross, Michael Rist, Guillermo Carbonell, Ben- jamin Cabrera, Nils Kurowsky, and Michael Wo- jatzki. 2017. Measuring the reliability of hate speech annotations: the case of the european refugee crisis. In NLP 4 CMC Workshop. riences and views. org/en/library/research/2017/ 10/discrimination-in-america-- experiences-and-views.html. Accessed: 2019-11-5. Keisuke Sakaguchi, Ronan Le Bras, Chandra Bhaga- vatula, and Yejin Choi. 2020. Winogrande: an ad- versarial winograd schema challenge at scale. In AAAI. Maarten Sap, Dallas Card, Saadia Gabriel, Yejin Choi, and Noah A Smith. 2019a. The risk of racial bias in hate speech detection. In ACL. Maarten Sap, Ronan LeBras, Emily Allaway, Chan- dra Bhagavatula, Nicholas Lourie, Hannah Rashkin, Brendan Roof, Noah A Smith, and Yejin Choi. 2019b. ATOMIC: an atlas of machine common- sense for if-then reasoning. 
In AAAI. Maarten Sap, Marcella Cindy Prasetio, Ariel Holtz- man, Hannah Rashkin, and Yejin Choi. 2017. Con- notation frames of power and agency in modern films. In EMNLP. Maarten Sap, Hannah Rashkin, Derek Chen, Ronan LeBras, and Yejin Choi. 2019c. Social IQa: com- monsense reasoning about social interactions. In EMNLP. Anna Schmidt and Michael Wiegand. 2017. A survey on hate speech detection using natural language pro- cessing. In Workshop on NLP for Social Media at EACL. Robyn Speer and Catherine Havasi. 2012. Represent- ing general relational knowledge in ConceptNet 5. In LREC. Whitney Strub. 2008. The clearly obscene and the queerly obscene: heteronormativity and obscen- ity in cold war los angeles. American Quarterly, 60(2):373–398. Stefanie Ullmann and Marcus Tomalin. 2019. Quaran- tining online hate speech: technical and ethical per- spectives. Ethics and Information Technology. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In NeurIPS. James Vincent. 2016. Twitter taught Microsoft’s AI chatbot to be a racist asshole in less than a day. https://www.theverge.com/2016/3/ 24/11297050/tay-microsoft-chatbot- racist. Accessed: 2019-10-26. Andrew J Vonasch and Roy F Baumeister. 2017. Un- justified side effects were strongly intended: taboo tradeoffs and the side-effect effect. Journal of Ex- perimental Social Psychology, 68:83–92. Zijian Wang and Christopher Potts. 2019. TalkDown: a corpus for condescension detection in context. In EMNLP. Zeerak Waseem and Dirk Hovy. 2016. Hateful sym- bols or hateful people? Predictive features for hate speech detection on Twitter. In NAACL Student Re- search Workshop. Ellery Wulczyn, Nithum Thain, and Lucas Dixon. 2017. Ex machina: personal attacks seen at scale. In WWW. Marcos Zampieri, Shervin Malmasi, Preslav Nakov, Sara Rosenthal, Noura Farra, and Ritesh Kumar. 2019. Predicting the type and target of offensive posts in social media. In NAACL. Michael Zimmer. 2018. Addressing conceptual gaps in big data research ethics: an application of contextual integrity. Social Media + Society, 4(2). model offensive 42.2% pos. (dev.) F1 rec. pr. intent 44.8% pos. (dev.) F1 rec. pr. lewd 3.0% pos. (dev.) F1 rec. pr. group 66.6% pos. (dev.) F1 rec. pr. dev. SBF-GPT1-gdy SBF-GPT1-gdy-constr SBF-GPT2-gdy SBF-GPT2-gdy-constr SBF-GPT2-smp SBF-GPT2-smp-constr 75.2 88.3 65.5 75.2 88.3 65.5 77.2 88.3 68.6 77.2 88.3 68.6 80.5 84.3 76.9 80.4 84.3 76.8 74.4 89.8 63.6 74.4 89.8 63.6 76.3 89.5 66.5 76.3 89.5 66.5 75.3 89.9 64.7 75.3 89.9 64.7 75.2 78.2 72.5 75.2 78.2 72.5 77.6 81.2 74.3 77.6 81.2 74.3 78.6 80.6 76.6 78.5 80.6 76.5 62.3 74.6 53.4 62.3 74.6 53.4 66.9 67.9 65.8 66.9 67.9 65.8 66.0 67.6 64.5 66.0 67.6 64.5 test SBF-GPT2-gdy SBF-GPT2-gdy-constr 78.8 89.8 70.2 78.8 89.8 70.2 78.6 90.8 69.2 78.6 90.8 69.2 80.7 84.5 77.3 80.7 84.5 77.3 69.9 70.5 69.4 69.9 70.5 69.4 – – – – – – – – – – – – – – Table 7: Full experimental results (%) of various models on the classification tasks (gdy: argmax, smp: sampling; constr: constrained decoding). Some models did not predict the positive class for “in-group language,” their performance is denoted by “–”. We bold the F1 scores of the best performing model(s) on the development set. For easier interpretation, we also report the percentage of instances in the positive class in the development set. Task Post ${post} - crawled from ${source}. 
[Figure 4 reproduces the crowdsourcing interface. For each post, annotators answer: 1.a) could this post be considered offensive, disrespectful, or toxic to anyone; 1.b) was the intent of the post to be offensive/disrespectful (e.g., offensive jokes, insults, personal attacks, profanity, aggression); 2) does the post contain or allude to sexual content/acts; 3.a) does the post imply offense/disrespect to an identity-related group of people rather than an individual; 3.b) which identity group is referred to (race/ethnicity, gender/gender identity/sexuality, culture/origin/religion, age/body, mental or physical disabilities/disorders, socio-economic/political/lifestyle, crime/violence/tragedy victims); 3.c) what aspect/stereotype/characteristic of this group is referenced or implied; and 3.d) does the author of the post sound like they belong to the same minority group referred to by the post.]

Figure 4: Snippet of the annotation task used to collect SBIC. The collection of structured annotations for our framework is broken down into questions pertaining to offensiveness, intent of the post, targeted group and minority speaker.
{ "id": "1909.05858" }
1911.03860
Don't Say That! Making Inconsistent Dialogue Unlikely with Unlikelihood Training
Generative dialogue models currently suffer from a number of problems which standard maximum likelihood training does not address. They tend to produce generations that (i) rely too much on copying from the context, (ii) contain repetitions within utterances, (iii) overuse frequent words, and (iv) at a deeper level, contain logical flaws. In this work we show how all of these problems can be addressed by extending the recently introduced unlikelihood loss (Welleck et al., 2019) to these cases. We show that appropriate loss functions which regularize generated outputs to match human distributions are effective for the first three issues. For the last important general issue, we show applying unlikelihood to collected data of what a model should not do is effective for improving logical consistency, potentially paving the way to generative models with greater reasoning ability. We demonstrate the efficacy of our approach across several dialogue tasks.
http://arxiv.org/pdf/1911.03860
Margaret Li, Stephen Roller, Ilia Kulikov, Sean Welleck, Y-Lan Boureau, Kyunghyun Cho, Jason Weston
cs.CL
null
null
cs.CL
20191110
20200506
# Don't Say That! Making Inconsistent Dialogue Unlikely with Unlikelihood Training

Margaret Li1, Stephen Roller1, Ilia Kulikov2,*, Sean Welleck2,*, Y-Lan Boureau1, Kyunghyun Cho1,2, Jason Weston1,2
1Facebook AI Research, 2New York University
{margaretli, roller, ylan, kyunghyuncho, jase}@fb.com, kulikov@cs.nyu.edu, wellecks@nyu.edu

# Abstract

Generative dialogue models currently suffer from a number of problems which standard maximum likelihood training does not address. They tend to produce generations that (i) rely too much on copying from the context, (ii) contain repetitions within utterances, (iii) overuse frequent words, and (iv) at a deeper level, contain logical flaws. In this work we show how all of these problems can be addressed by extending the recently introduced unlikelihood loss (Welleck et al., 2019a) to these cases. We show that appropriate loss functions which regularize generated outputs to match human distributions are effective for the first three issues. For the last important general issue, we show applying unlikelihood to collected data of what a model should not do is effective for improving logical consistency, potentially paving the way to generative models with greater reasoning ability. We demonstrate the efficacy of our approach across several dialogue tasks.

# 1 Introduction

Open-ended tasks such as dialogue reveal a number of issues with current neural text generation methods. In more strongly grounded tasks such as machine translation and image captioning, current encoder-decoder architectures provide strong performance, where mostly word-level decisions are often taken correctly by the model. However, critical failings are exposed in less constrained generation: reliance on repetitive copying and overuse of frequent words, and an inability to maintain logical coherence. The former shows the learning objective is faulty in that it cannot match simple statistics of the training data, while the latter touches more to the heart of artificial intelligence: these models do not understand what they are saying. For example, Figure 1 shows how the 345M-parameter GPT2 model (Radford et al., 2019) can give high probability to contradictory generations.

[Figure 1 shows example completions of the prompt "I love basketball. It's awesome. I really dislike ...", where contradictory continuations such as "basketball" (8.3%) receive high probability.]

Figure 1: GPT-2 345M model completions can show lack of coherence, e.g. direct contradictions.

In this work, we show how the recently introduced unlikelihood objective (Welleck et al., 2019a) can be generalized to remedy these problems. Unlikelihood is a technique developed for removal of repetition in language model completions, and works by adding an extra term to the objective that forces repetitions to have low probability, alleviating the degenerative problems highlighted in Holtzman et al. (2019). In fact, unlikelihood can be seen as a much more general framework, as we will see.

We first generalize unlikelihood to a different domain: dialogue, where we measure statistics of the training distribution in terms of contextual copies, within-utterance repeats, and vocabulary usage. We then develop loss functions that control these statistics, providing improved metrics on several tasks. Secondly, we show how the same tools can be used to address deeper semantic issues in such models.
By leveraging existing natural language inference (NLI) data (Welleck et al., 2019b) as supervision against poor quality generations, we train models that assign low probability to generating incoherent and contradictory text. Overall, our approach yields more consistent dialogue models across several axes, and provides a promising framework for further advances. Code and pre-trained models will be made available.†

*Work done while at Facebook AI Research (FAIR).
†https://parl.ai/projects/dialogue_unlikelihood/

# 2 Dialogue Unlikelihood Training

Dialogue Generation. Dialogue generation consists in predicting an utterance y = (y1, . . . , y|y|) given a context x = {s1, . . . , sk, u1, . . . , ut} that consists of initial context sentences s1:k (e.g., scenario, knowledge, personas, etc.) followed by dialogue history utterances u1:t from speakers who take consecutive turns.

Likelihood Training. Given a dataset D = {(x(i), y(i))} derived from a collection of human-human interactions, the standard approach to generative training for dialogue tasks is maximum likelihood estimation (MLE), that minimizes:

L(i)_MLE(pθ, x(i), y(i)) = − Σ_{t=1}^{|y(i)|} log pθ(y(i)_t | x(i), y(i)_{<t}),

where x(i) is a gold context (dialogue history and initial context sentences), y(i) is a gold next-utterance, and y(i)_t denotes the t-th token of y(i).

Likelihood-based (greedy or beam) decoding applied after training a model with this objective yields sequences with statistics that do not match the original human training sequence distribution.

Unlikelihood Training. To control for such distribution mismatches, we employ the unlikelihood loss (Welleck et al., 2019a), generalizing it to our setting, and developing a particular form of the loss function for each type of mismatch.

The general form of the unlikelihood loss penalizes a set of tokens Ct at each time-step,

L(i)_UL(pθ, C_{1:|y|}, x, y) = − Σ_{t=1}^{|y|} Σ_{yc ∈ Ct} β(yc) log(1 − pθ(yc | x, y_{<t})),

where Ct ⊆ V is a subset of the vocabulary, and β(yc) is a candidate-dependent scale that controls how much the candidate token should be penalized. The overall objective in unlikelihood training then consists of mixing the likelihood and unlikelihood losses,

L(i)_ULE = L(i)_MLE + α L(i)_UL,   (1)

where α ∈ R is the mixing hyper-parameter. Likelihood tries to model the overall sequence probability distribution, while unlikelihood corrects for known biases. It does this via the set of negative candidates Ct calculated at each step t, where we are free to select candidate generation functions depending on the biases to be mitigated. Likelihood pushes up the probability of a gold token y(i)_t while unlikelihood pushes down the probability of negative candidate tokens yc ∈ Ct.

In Welleck et al. (2019a) the context x consists of a ground-truth sequence (x = x(i)), the target y is either a ground-truth sequence (y = y(i)) or a model-generated sequence (y = ŷ), and the per-token scale parameter β(yc) is 1.

In this paper, we demonstrate how unlikelihood can be used as a general framework by applying it to the dialogue domain. We show how varying the contexts x, targets y, candidates C and scaling β can be used to improve the coherence and language modeling quality of dialogue models. To do this, we now consider the different biases we wish to mitigate, and construct a specific unlikelihood loss for each in turn.
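As a concrete reference point for these specializations, the following is a minimal PyTorch-style sketch of the generic token-level unlikelihood term and the mixed objective of Eq. 1. It is an illustration written for this text, not the authors' released ParlAI implementation; the candidate mask, the per-candidate scales β(yc) and the mixing weight α are assumed to be supplied by the caller.

```python
import torch
import torch.nn.functional as F

def unlikelihood_loss(logits, candidates, weights=None, eps=1e-8):
    """Generic token-level unlikelihood term.

    logits:     (T, V) model scores for one target sequence.
    candidates: (T, V) 0/1 mask; candidates[t, v] = 1 if token v is a
                negative candidate at step t (the set C_t).
    weights:    optional (T, V) per-candidate scales beta(y_c); 1 if None.
    """
    probs = F.softmax(logits, dim=-1)
    # -log(1 - p(y_c | x, y_<t)) for every candidate token at every step
    penalty = -torch.log(torch.clamp(1.0 - probs, min=eps))
    if weights is not None:
        penalty = penalty * weights
    return (penalty * candidates).sum()

def mixed_objective(logits, target, candidates, alpha=1.0, weights=None):
    """L_MLE + alpha * L_UL (Eq. 1) for a single example."""
    mle = F.cross_entropy(logits, target, reduction="sum")
    ul = unlikelihood_loss(logits, candidates, weights)
    return mle + alpha * ul
```

Each specialization described below then amounts to a particular choice of the candidate mask and of the optional per-candidate weights.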
# 2.1 Repetition and Copying

Generative dialogue models are known to both (i) rely too much on copying existing context knowledge or dialogue history; and (ii) repeat themselves within individual utterances. To address this with unlikelihood, we define two types of negative candidate tokens which either appear in a repeating n-gram from the context or from the generated label itself,

C^context-copy_t = {yt} if yt is part of a repeating context n-gram, and ∅ otherwise,
C^label-repeat_t = {yt} if yt is part of a repeating label n-gram, and ∅ otherwise,

where yt is a token in a repeating context n-gram when yt is part of an n-gram that already appeared in the context tokens x, and is in a repeating label n-gram when yt is part of an n-gram that already appeared in y<t. Given a ground-truth context x(i), we apply these two forms of unlikelihood to a model-generated sequence ŷ(i). In summary, we either apply the per-example loss

L(i)_UL(pθ, C^context-copy_{1:|y|}, x(i), ŷ(i))

for controlling context copies, or

L(i)_UL(pθ, C^label-repeat_{1:|y|}, x(i), ŷ(i))

for controlling label repeats. We also consider mixing the two losses to mitigate both issues.

# 2.2 Vocabulary Usage

Neural sequence models trained with maximum likelihood generate sequences with token distributions that differ from those of human text (Dinan et al., 2020; Holtzman et al., 2019). In particular, these models tend to produce high frequency tokens too often and low frequency tokens too rarely, where frequency is defined by the human token distribution.

We address this with unlikelihood by penalizing tokens according to the mismatch between the model and ground-truth unigram distributions. Specifically, we first maintain an empirical estimate of the model's unigram distribution pmodel(yt) and the human distribution p∗(yt):

pmodel(yt) = count(yt) / |Y|,

where Y is a collection of token predictions on a subset of training data D′ (e.g. the preceding k = 256 batches), and count(yt) is the number of occurrences of yt in Y. This is computed using model sequences (y = ŷ), defining Y as the collection of all tokens in all ŷ.

We wish to push down the probability of tokens appearing too often, i.e. when pmodel(yt) > p∗(yt). For the unlikelihood loss, each step's candidate is thus the current token, C^identity_t = {yt}, and each token's unlikelihood loss is scaled according to the mismatch between the approximated model and human distributions,

β(yc) = pmodel(yc) log( pmodel(yc) / p∗(yc) ).

The unlikelihood loss for a token yc is non-zero when the token occurs more often in the model's estimated unigram distribution. In summary, the resulting per-example loss is

L(i)_UL(pθ, C^identity_{1:|y|}, x(i), y)

where y is a model-generated sequence.

# 2.3 Contradictions

Neural generation models appear fluent, especially when pre-trained on large datasets, but are still poor at understanding the language they produce. That is, they can produce logically or factually inaccurate, or contradicting statements (Welleck et al., 2019b; Zhang et al., 2018; Hayashi et al., 2019; Petroni et al., 2019). Here, we show how the unlikelihood objective can be used to train such models to assign low probability to inconsistent and contradictory utterances.

To do so, we assume the existence of training data of both positive and negative examples of coherent behavior. There is a raft of recent large-scale, high quality data that can be massaged into this form, from natural language inference (NLI) tasks (Bowman et al., 2015; Williams et al., 2018; Welleck et al., 2019b) to commonsense reasoning tasks (Zellers et al., 2019; Qin et al., 2019).
Two collections of data can be derived from the labels of such a supervised task: D+ = {(x(i), y(i)+)}, D− = {(x(i), y(i)−)}, where D+ is coherent behavior, e.g. neutral or en- tailing data in NLI, and D− is incoherent behavior, e.g. contradictions. In general, many forms of this type of data can be collected, not just NLI, and it is also not necessary for the contexts x(i) to overlap as we have written here. Standard likelihood training can then be per- formed on coherent data D+, while the unlikeli- hood objective is applied to D− as we wish to push down the probability of generating the incoherent response y− given a context x. That is, given an incoherent pair (x, y−) we use the loss LUL(pθ, Cidentity 1:|y| , x, y−), where we penalize each token in the target (Cidentity t }). Hence, the loss makes gener- t ating the contradicting sentences less likely. # 3 Related Work Our work provides new applications of unlikeli- hood training (Welleck et al., 2019a), showing that unlikelihood offers a general framework for im- proving generative models, and in particular dia- logue models. Outside of that work, the use of negative training in dialogue retrieval, rather than generation, has been previously extensively stud- (Humeau et al., 2019; Nugmanova ied, see e.g. et al., 2019). In the area of generative dialogue, a number of works have focused on improving the standard likelihood training approach. Closer to our work is that of He and Glass (2019) which developed the approach of negative training to prevent generic and malicious responses in dia- logue models. In terms of improving repetition and specificity, a recent alternative approach is that of control (Fan et al., 2018; Ficler and Goldberg, 2017; Ghazvininejad et al., 2017; See et al., 2019). Nucleus sampling (Holtzman et al., 2019) can help to remove generic or repetitive utterances at the expense of accuracy, but was shown to be inferior to beam blocking, which in turn was shown to be inferior to unlikelihood in Welleck et al. (2019a). In terms of dialogue coherence, Welleck et al. (2019b) showed that retrieval, but not generative models, could be improved with NLI as a re- scorer, while Yang et al. (2018) multi-tasked with NLI. The work of Gabriel et al. (2019) has also studied improving narrative flow with a discrimi- native rescorer, but in that case for generated lan- guage. In our work, the improvements are tightly integrated into the training of the model itself. # 4 Experiments In all of our experiments we employ a large pre-trained seq2seq Transformer (Vaswani et al., 2017) as our base model, which we then fine-tune for particular tasks with the objectives outlined in Section 2 and specified in each experiment below. Following previous work (Humeau et al., 2019), we pre-train our model on dialogue data, using a previously existing Reddit dataset extracted and obtained by a third party and made available on pushshift.io, training to generate a comment con- ditioned on the full thread leading up to the com- ment, spanning ∼ 2200M training examples. Our Transformer model consists of an 8 layer encoder, 8 layer decoder with 512-dimensional embeddings and 16 attention heads, and is based on the ParlAI implementation of Miller et al. (2017). The model was trained with a batch size of 3072 sequences for approximately 3M updates using a learning rate of 5e-4, and an inverse square root scheduler. This pre-training took approximately two weeks using 64 NVIDIA V100s. 
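Before turning to the individual tasks, a minimal sketch of how the context-copy and label-repeat candidate sets of §2.1 could be computed from a generated sequence is given below. This is illustrative only; the n-gram size and whitespace-free token-id representation are assumptions, not the paper's exact settings.

```python
def ngrams(tokens, n):
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def repeat_candidates(context, generated, n=4):
    """Return per-step negative candidate sets (lists of token ids).

    A generated token y_t is a candidate when the n-gram ending at t
    already occurred in the context (context-copy) or earlier in the
    generation itself (label-repeat).
    """
    context_ngrams = ngrams(context, n)
    context_copy, label_repeat = [], []
    for t in range(len(generated)):
        gram = tuple(generated[max(0, t - n + 1):t + 1])
        is_context_copy = len(gram) == n and gram in context_ngrams
        is_label_repeat = len(gram) == n and gram in ngrams(generated[:t], n)
        context_copy.append([generated[t]] if is_context_copy else [])
        label_repeat.append([generated[t]] if is_label_repeat else [])
    return context_copy, label_repeat
```

The resulting per-step lists can be converted into the 0/1 candidate mask consumed by the generic unlikelihood term sketched in §2.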
# 4.1 Repetition and Copying We use the ConvAI2 persona-based dialogue (Zhang et al., 2018), Wizard of Wikipedia Repetition Model PPL F1 Context Label Human MLE Baseline - 11.4 .199 - .0223 .0004 .1131 .0210 11.8 .194 UL (Context only) UL (Label only) 11.4 .203 UL (Context & Label) 11.9 .193 .0330 .0069 .0984 .0005 .0352 .0023 Table 1: Evaluation on the ConvAI2 task valid set (test set is hidden), comparing standard likelihood (MLE) with context and label repetition unlikelihood loss training. The repetition types can be decreased depending on which type of unlikelihood loss is used, with minimal changes in perplexity and F1. Repetition Model PPL F1 Context Label Human MLE Baseline - 8.3 .368 - .160 .441 .001 .014 UL (Context only) UL (Label only) UL (Context + Label) 8.8 .346 8.3 .371 8.5 .358 .229 .426 .313 .037 .001 .009 Table 2: Evaluation on the Wizard of Wikipedia test set, comparing standard likelihood (MLE) with context and label repetition unlikelihood loss training. The rep- etition types can be decreased depending on the type of unlikelihood loss used, while minimally impacting F1. knowledge-grounded dialogue (Dinan et al., 2019) and ELI5 long-form question answering (Fan et al., 2019) datasets to evaluate the effect of using unlikelihood to reduce copying and repe- tition in model generated utterances. On each dataset, we fine-tune the pre-trained pushshift.io Reddit model, then evaluate by generating next- utterances for dialogue contexts from the test set (or validation in ConvAI2, as the test set is hid- den). We use greedy decoding in our main exper- iments for simplicity and scalability, but we also obtained similar results with beam search, shown in Appendix A. To measure label repetition in a sequence y, we use the portion of duplicate n-grams: 1.0 − |unique n-grams(y)| |n-grams(y)| , and report the metric averaged over the examples. Label repetition increases from zero as the model generates more repeated n-grams. To measure context repetition, we measure the fraction of gen- Repetition Model PPL F1 Context Label Human MLE Baseline - 21.0 .130 - .009 .033 .010 .617 21.4 .163 UL (Context only) UL (Label only) 21.4 .183 UL (Context + Label) 21.8 .184 .008 .015 .009 .322 .055 .078 Table 3: Evaluation on the ELI5 task test set, com- paring standard likelihood (MLE) with context and la- bel repetition unlikelihood loss training. The repetition types can be decreased depending on which type of un- likelihood loss is used, while improving F1. erated n-grams that appear in the original context: |n-grams(y) ∩ n-grams(x)| |n-grams(y)| , and report the metric averaged over the exam- ples. Context repetition increases when the model ‘copies’ n-grams from the context. To quantify language modeling quality, we use standard per- plexity and F1 metrics. We use the pre-trained model fine-tuned with MLE as the baseline, and compare it against the pre-trained model fine-tuned with copy and repe- tition unlikelihood (§2.1). Results Results for ConvAI2 are shown in Ta- ble 1. We see that training unlikelihood using only-contexts or only-labels reduces their corre- sponding metrics dramatically compared to the MLE baseline. Training with both context- and label-repetition unlikelihood reduced both context .1131) and label repetitions (by 69%, .0352 vs. repetitions (by 89%, .0023 vs .0210) compared to the MLE baseline, much closer to human levels, while keeping perplexity essentially constant. 
Comparatively, the Wizard of Wikipedia MLE baseline experiences a much larger problem with context repetition, due to its tendency to copy grounded knowledge verbatim (Table 2). Results for ELI5, shown in Table 3, show that it has an especially large problem with label repeti- tion, and that label-unlikelihood is able to reduce the repetitions by 91% (.055 vs .617), while sig- nificantly boosting F1 (.130 to .182). Figures 2 and 3 show perplexity as a function of label and context repeats respectively using un- likelihood on ELI5. The parameter α can clearly control repeats smoothly, with only very high val- ues resulting in increased perplexity. 3.0 2.5 2.0 15 1.0 0.5 0.0 32 Human ievei 30 284°, PPL 26 eo ot ee 24 ; °88 Pete 22) a oo 0.00 0.02 0.04 0.06 0.08 0.10 0.12 ELIS Label Repeats Figure 2: ELI5: Perplexity vs. label repeats as a func- tion of α in the label unlikelihood objective. 22.8 1.00 22.6 0.75 22.4 0.50 22.2 0.25 PPL 22.0 0.00 21.8 21.6 21.4 0.00 0.01 0.02 0.03 ELIS Context Repeats Figure 3: ELI5: Perplexity vs. context repeats as a function of α in the context unlikelihood objective. Human Evaluation Finally, we perform a hu- man evaluation using the same pairwise evaluation scheme as (Fan et al., 2019) performed on ELI5, comparing the MLE baseline to UL (Label only) which asks: Which response answers the question bet- ter? The evaluators are asked to consider both the readability and accuracy of the answer. Results are given in Figure 4 (left), showing a statistically sig- nificant improvement over the baseline (150 trials, two tailed binomial test, p < 0.01). Further details are given in Appendix C. # 4.2 Vocabulary Usage We evaluate the ability of vocabulary unlikelihood (§2.2) to reduce the mismatch between model and human token distributions. We use the ConvAI2 dataset, where our baseline is again trained using maximum likelihood. Start- ing with the baseline model, we then fine-tune sev- eral models using vocab unlikelihood at logarith- mically interpolated values of α ∈ [1, 1000]. We partition the vocabulary into ‘frequent’, ‘medium’, ‘rare’, and ‘rarest’ using the human unigram distribution computed with the ConvAI2 training set, corresponding to the sorted token sets whose cumulative mass accounts for the top 40%, the next 30%, the next 20% and the final 10% of usage, respectively. We evaluate a model by gen- erating utterances given contexts from the Con- vAI2 validation set, and compute the fraction of tokens within each class. Results Figure 5 shows how the vocabulary dis- tribution obtained after unlikelihood training is af- fected by the choice of mixing hyperparameter α (Eq. 1): it can smoothly transition between the hu- man training distribution and the MLE trained dis- tribution (‘Baseline’), which is far from the human one. Table 4 compares the MLE baseline with un- likelihood with increasing α values in terms of dis- tribution and F1 score. The vocabulary unlikeli- hood fine-tuning shifts probability mass from the over-represented frequent words towards under- represented medium and rare words, with the ef- fect strengthening as α increases. At a small cost to perplexity and F1, the unlikelihood tuning re- duced the overuse of common tokens by 9 points, matching the human rate, while improving the production of rare tokens by 3 percentage points. Human Evaluation Finally, we perform a hu- man evaluation using the ACUTE-EVAL frame- work (Li et al., 2019), comparing the MLE base- line to UL for various α. 
First, 252 human-bot conversations (8 turns each) are collected, and then models are compared pairwise by asking the question: Who would you prefer to talk to for a long conversation? For these experiments we compare with both methods generating using beam with context blocking of trigrams. Results are given in Figure 4 (right), showing a statistically signif- icant improvement over the baseline according to humans (two tailed binomial test, p < 0.01). Fur- ther details are given in Appendix C. # 4.3 Contradictions We use the dialogue natural language inference (NLI) task of Welleck et al. (2019b) to obtain labeled non-contradicting and contradicting dia- logue sentence pairs to use in unlikelihood training (§2.3). Dialogue NLI contains utterances labeled as entailing (E), neutral (N) or contradiction (C), given a premise that is either a persona sentence (an initial context sentence describing a dialogue agent’s personality) or another dialogue utterance 100% [=] MLE Baseline [ET Unlikelihood Winning Percentage 2 3 3 S&S &S a=10! a= 10? Repetition (ELI5) Vocabulary (ConvAl2) Figure 4: Human evaluation experiments for label un- likelihood on ELI5 (left), and vocabulary unlikelihood on ConvAI2 for two values of α (right). Unlikelihood significantly outperforms the MLE baselines. Token frequency classes Model PPL F1 Freq Med Rare Rarest Human .400 .300 - MLE Baseline 11.4 .199 .491 .282 UL, α = 100 UL, α = 101 UL, α = 102 UL, α = 103 - 11.4 .200 .483 .289 11.9 .201 .459 .328 12.5 .190 .430 .335 14.4 .174 .399 .339 .200 .157 .163 .154 .163 .188 .100 .068 .063 .058 .071 .073 Table 4: Unlikelihood loss applied to vocabulary dis- tributions. Stronger α terms greatly shift probability mass from the most Frequent words to Medium and Rare words, at a small cost to PPL and F1. Frequent, medium, rare and rarest token classes are defined as the sets of tokens whose cumulative masses account for the top 40%, the next 30%, the next 20% and final 10% of tokens empirically generated by humans, respectively. 1000 g 021 Human a eo y 100 E e 6 o* = . ° a 0.19 ° ° 10 3 iS ° 3 ° 1 ° $0.17 ° ° ° 5 Fa . 2 eo 6 B. 2 oo 908 oe ° © o15 os ce | 9° Baseline 0.37 0.39 «0.41 «2043 «(045 (0.47.49 Frequent words cumulative mass Figure 5: Vocabulary control with unlikelihood train- ing: more probability mass is transferred from Fre- quent words to Rare words as we increase the α weight- ing parameter. The maximum likelihood baseline is far from the human distribution. from the Persona-Chat dialogue task (Zhang et al., 2018). We show examples from Dialogue NLI in Persona (Model) + ihave 2 cats. + iwork as a teacher at a middle school. * my favorite color is yellow. * i dislike country music. Entailment, Dialogue * Human: hi, do you have any pets? —* Model: i do! i have 2 cats. * Human: cool, what are the names of your cats? Model: i do not have any pets. what is your cats name? * Human: i don't have a cat, just a dog. do you play any sports? * Model: sadly, no, i sing in the church choir, so no home time. Contradiction Neutral Figure 6: Dialogue NLI from (Welleck et al., 2019b). Train Test Valid Entailment Triple-Entailment Neutral Negatives 95k 105k 110k 110k 4613 5285 5500 5500 4959 5481 5700 5700 Table 5: Dialogue NLI two utterance generation task dataset statistics. Figure 6. The original data consists of sentence pairs (s1, s2) along with a label (E, N, or C), and was constructed by developing a schema and em- ploying crowdworkers to label utterances with re- lation triples. 
The labels are then inferred from the triple representation. We first transform the original classification dataset into a form useful for unlikelihood training of a generative dialogue model. We consider two setups: (i) a two utterance generation task; and (ii) a full dialogue generation task. Two Utterance Generation Task We adapt the initial dialogue NLI dataset by using entailing and neutral training sentence pairs as plausible posi- tive utterances, and contradicting pairs as nega- tives. That is, if a pair (s1, s2) from Dialogue NLI has label E or N, the example (x, y) = (s1, s2) is added to D+, otherwise (label C) it is added to D−. We consider two types of entailment: entailing sentence pairs that appear together in a dialogue in the original Persona-Chat dataset and are there- fore natural (‘entailment’), and those that only en- tail via their triple relations (‘triple-entailment’). The latter are more challenging, noisier targets. Evaluation is performed by measuring the test set perplexity over the four target label types, where contradictions should have relatively higher per- plexity. We additionally evaluate a selection ac- curacy task, where for each test example there are two candidate responses: a positive and a negative (contradicting) statement. The candidate response with the lowest perplexity is considered to be the model’s selection, and we measure the selection success rate. Evaluation is broken down by pos- itive type (entailment, triple-entailment, neutral). Dataset statistics are given in Table 5. Full Dialogue Task To evaluate in a more real- istic setup that involves full dialogue rather than a single utterance, we take full Persona-Chat di- alogues (Zhang et al., 2018) similar to Figure 6, and map back the dialogue NLI data to provide positive and negative continuations of the dia- logue. We consider continuations as either triple entailing utterances, neutral utterances or contra- dictions – where the relation triple is used to match the existing persona or dialogue turns by the same speaker to induce the label. That is, an example (x, y) consists of a dialogue history x = {p1, . . . , pk, u1, . . . , ut} and utterance y = s2, where (s1, s2) is a sentence pair from Dialogue NLI, and at least one sentence in x has the same re- lation triple as s1. When the pair (s1, s2) is labeled as E or N in Dialogue NLI, the example (x, y) is added to D+, and otherwise it is added to D−. Results Our MLE baseline obtains a perplexity of 11.4, in line with current best systems on this task (Lewis et al., 2019). Unfortunately, despite being good on such standard metrics, our base- line models fail at our coherence task. As seen in Table 6 for the two utterance task, the perplex- ity of contradicting utterances (12.5) is on average lower than for neutral (36.7) or triple-entailing ut- terances (17.5), although it is higher than entail- ing utterances. We believe this is due to contra- dicting utterances having high word overlap with the premise utterance, coupled with an inability to judge incoherence. Viewed as a selection task be- tween utterances, picking the utterance with the lowest perplexity, this means the selection rates of non-contradicting utterances are very low, e.g. picking neutral utterances over contradicting utter- ances only 18% of the time. Even fully entailing utterances are only picked 73% of the time. Sim- ilar results are found on the full dialogue task as well, see Table 7. 
Unlikelihood training brings large improve- ments in coherence metrics, whilst minimally im- pacting overall dialogue perplexity. After apply- ing unlikelihood, perplexity for contradicting ut- terances has a clear signature, with very large av- Selection Accuracy Perplexity Data + Model Entail Tr.-E Neutral Entail Tr.-E Neutral Contradict ConvAI2 MLE Baseline UL (Dialogue NLI) 72% 41% 96% 85% 18% 78% 8.54 9.1 17.5 26.6 36.7 39.4 12.5 248.9 11.4 11.9 Table 6: Test evaluation on the Dialogue NLI two utterance generation task, comparing standard likelihood (MLE) models trained on pushshift.io Reddit and ConvAI2 with unlikelihood loss NLI training. Results are broken down according to whether the premise and positive candidate are entailing, triple-entailing, or neutral (Entail, Tr.-E, Neutral). Selection Accuracy measures how often the model assigns lower perplexity to the positive candidate than to the negative candidate in the pair. Top two rows: for standard maximum likelihood models, the perplexity of contradicting utterances is lower compared to neutral or triple-entailing utterances (albeit higher compared to entailing utterances), showing partial failure at the coherence task. Bottom row: NLI Unlikelihood training yields large improvements on all coherence metrics, while minimally increasing overall perplexity. Selection Accuracy (vs. Neg) Perplexity Data + Model Triple-Entail Neutral Triple-Entail Neutral Contradict ConvAI2 MLE Baseline UL (Dialogue NLI) 66.5% 89.0% 36.8% 69.8% 23.3 21.5 45.1 40.3 35.9 63.5 11.4 11.8 Table 7: Test evaluation on the Full Dialogue NLI generation task. NLI unlikelihood training improves coherence metrics compared to likelihood (MLE) training. For UL, the triple-entailing or neutral candidates are assigned rel- atively lower perplexity compared to contradicting candidates, with higher selection accuracy for coherent labels. Premise Hypothesis LMLE PPL LUL PPL Yes, I love watching baseball and basketball. I do not like running though. (C) I love running. (E) I despise running. 25.5 29.9 226.9 9.4 Yes, I love watching baseball and basketball. I do like running though. (E) I love running. (C) I despise running. 26.2 42.8 3.1 247.1 We did too but working in real estate for 12 years . sucked up a lot of time (E) I have been working as a real estate agent for the past 12 years. (C) We did too but working in real estate for fifteen years sucked up a lot of time. 3.9 3.1 3.8 17.6 Figure 7: Example perplexities of a baseline maximum likelihood model (LMLE) and our unlikelihood trained model (LUL ) when generating the provided hypotheses, given the premise. The maximum likelihood trained model assigns high probability (low perplexity) to contradictory generations, while unlikelihood does not. erage values compared to entailing or neutral utter- ances, e.g. 248.9 vs. 9.1 for contradict vs. entail on the two utterance task. This converts to cor- responding large increases in selection accuracy across all types on both tasks, e.g., an increase from 18% to 78% on neutral statements on the two utterance task, and from 37.4% to 69.8% on the full dialogue task. Some example model predictions are given in Figure 7, comparing the MLE baseline and unlike- lihood model perplexities of generating the given hypotheses. The likelihood model cannot differ- entiate between contradicting and entailing state- ments easily, while there are large perplexity dif- ferences for the unlikelihood model in these cases. 
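To make the use of Dialogue NLI supervision concrete, here is a minimal sketch of how labeled pairs might be split into D+ and D− and combined into the §2.3 objective. The label strings are assumptions about the data format, and `model.mle_loss` / `model.ul_loss` are hypothetical wrappers around the losses sketched earlier, not functions from the released code.

```python
def split_nli_pairs(nli_examples):
    """Split (premise, hypothesis, label) triples into coherent (D+) and
    incoherent (D-) generation examples, as in Section 2.3."""
    d_pos, d_neg = [], []
    for premise, hypothesis, label in nli_examples:
        if label in ("entailment", "neutral"):
            d_pos.append((premise, hypothesis))
        elif label == "contradiction":
            d_neg.append((premise, hypothesis))
    return d_pos, d_neg

def coherence_loss(model, pos_batch, neg_batch, alpha=1.0):
    """MLE on coherent targets, unlikelihood on contradicting targets.

    Every token of a contradicting target is treated as a negative
    candidate (C_t = {y_t}), so generating it becomes less likely.
    """
    loss = sum(model.mle_loss(x, y) for x, y in pos_batch)
    loss += alpha * sum(model.ul_loss(x, y_neg) for x, y_neg in neg_batch)
    return loss
```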
# 5 Conclusion Generating consistent and coherent human-like di- alogue is a core goal of natural language research. We studied several aspects that contribute to that goal, defined metrics to measure them, and pro- posed algorithms that improve them, mitigating some of the failings of maximum likelihood train- ing, the current dominant approach. Our method defines objective functions under the umbrella of unlikelihood: during training, we wish to make in- consistent dialogue unlikely by lowering the prob- ability of such events occurring. This makes gen- erative models repeat themselves less, copy the context less, and use more rare words from the vocabulary – closer to matching human statistics. Further, utilizing supervised datasets with labeled coherent and incoherent utterances and applying unlikelihood yields measurably improved levels of coherence with respect to the aspect measured, in this case contradiction. Future work could apply this same technique with other supervised data, e.g. correcting causal or commonsense reasoning errors (Zellers et al., 2019; Qin et al., 2019). # References Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. A large anno- tated corpus for learning natural language inference. In Proceedings of the 2015 Conference on Empiri- cal Methods in Natural Language Processing, pages 632–642, Lisbon, Portugal. Association for Compu- tational Linguistics. Emily Dinan, Varvara Logacheva, Valentin Ma- lykh, Alexander Miller, Kurt Shuster, Jack Ur- banek, Douwe Kiela, Arthur Szlam, Iulian Serban, Ryan Lowe, Shrimai Prabhumoye, Alan W. Black, Alexander Rudnicky, Jason Williams, Joelle Pineau, Mikhail Burtsev, and Jason Weston. 2020. The second conversational intelligence challenge (Con- vAI2). In The NeurIPS ’18 Competition, pages 187– 208, Cham. Springer International Publishing. Emily Dinan, Stephen Roller, Kurt Shuster, Angela Fan, Michael Auli, and Jason Weston. 2019. Wizard of wikipedia: Knowledge-powered conversational agents. In Proceedings of the International Confer- ence on Learning Representations. Angela Fan, David Grangier, and Michael Auli. 2018. In Pro- Controllable abstractive summarization. ceedings of the 2nd Workshop on Neural Machine Translation and Generation, pages 45–54. Associa- tion for Computational Linguistics. Angela Fan, Yacine Jernite, Ethan Perez, David Grang- ier, Jason Weston, and Michael Auli. 2019. ELI5: In Proceedings of Long form question answering. the 57th Annual Meeting of the Association for Com- putational Linguistics, pages 3558–3567, Florence, Italy. Association for Computational Linguistics. Jessica Ficler and Yoav Goldberg. 2017. Controlling linguistic style aspects in neural language genera- In Proceedings of the Workshop on Stylis- tion. tic Variation, pages 94–104, Copenhagen, Denmark. Association for Computational Linguistics. Saadia Gabriel, Antoine Bosselut, Ari Holtzman, Kyle Lo, Asli Celikyilmaz, and Yejin Choi. 2019. Co- operative generator-discriminator networks for ab- stractive summarization with narrative flow. arXiv preprint arXiv:1907.01272. Marjan Ghazvininejad, Xing Shi, Jay Priyadarshi, and Kevin Knight. 2017. Hafez: an interactive poetry In Proceedings of ACL 2017, generation system. System Demonstrations, pages 43–48, Vancouver, Canada. Association for Computational Linguistics. Hiroaki Hayashi, Zecong Hu, Chenyan Xiong, and Graham Neubig. 2019. Latent relation language models. arXiv preprint arXiv:1908.07690. Tianxing He and James Glass. 2019. 
Negative train- ing for neural dialogue response generation. arXiv preprint arXiv:1903.02134. Ari Holtzman, Jan Buys, Maxwell Forbes, and Yejin Choi. 2019. The curious case of neural text degen- eration. arXiv preprint arXiv:1904.09751. Samuel Humeau, Kurt Shuster, Marie-Anne Lachaux, and Jason Weston. 2019. Poly-encoders: Trans- former architectures and pre-training strategies for arXiv fast and accurate multi-sentence scoring. preprint arXiv:1905.01969. Mike Lewis, Yinhan Liu, Naman Goyal, Mar- jan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov, and Luke Zettlemoyer. 2019. BART: Denoising sequence-to-sequence pre-training for natural language generation, trans- arXiv preprint lation, arXiv:1910.13461. Margaret Li, Jason Weston, and Stephen Roller. 2019. ACUTE-EVAL: Improved dialogue evaluation with optimized questions and multi-turn comparisons. In Proceedings of the NeurIPS Workshop on Conversa- tional AI. Alexander Miller, Will Feng, Dhruv Batra, Antoine Bordes, Adam Fisch, Jiasen Lu, Devi Parikh, and Jason Weston. 2017. ParlAI: A dialog research soft- In Proceedings of the 2017 Con- ware platform. ference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 79–84, Copenhagen, Denmark. Association for Computa- tional Linguistics. Aigul Nugmanova, Andrei Smirnov, Galina Lavren- tyeva, and Irina Chernykh. 2019. Strategy of the negative sampling for training retrieval-based dia- In 2019 IEEE International Con- logue systems. ference on Pervasive Computing and Communica- tions Workshops (PerCom Workshops), pages 844– 848. IEEE. Fabio Petroni, Tim Rockt¨aschel, Patrick Lewis, Anton Bakhtin, Yuxiang Wu, Alexander H Miller, and Se- bastian Riedel. 2019. Language models as knowl- edge bases? arXiv preprint arXiv:1909.01066. Lianhui Qin, Antoine Bosselut, Ari Holtzman, Chandra Bhagavatula, Elizabeth Clark, and Yejin Choi. 2019. Counterfactual story reasoning and generation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Lan- guage Processing (EMNLP-IJCNLP), pages 5042– 5052, Hong Kong, China. Association for Computa- tional Linguistics. Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. OpenAI Blog, 1(8). Abigail See, Stephen Roller, Douwe Kiela, and Ja- son Weston. 2019. What makes a good conversa- tion? how controllable attributes affect human judg- In Proceedings of the 2019 Conference of ments. the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, Volume 1 (Long and Short Papers), pages 1702–1723, Minneapolis, Minnesota. Association for Computational Linguistics. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Ł ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Pro- cessing Systems 30, pages 5998–6008. Curran Asso- ciates, Inc. Sean Welleck, Ilia Kulikov, Stephen Roller, Emily Di- nan, Kyunghyun Cho, and Jason Weston. 2019a. Neural text generation with unlikelihood training. arXiv preprint arXiv:1908.04319. Sean Welleck, Jason Weston, Arthur Szlam, and Kyunghyun Cho. 2019b. Dialogue natural language inference. In Proceedings of the 57th Annual Meet- ing of the Association for Computational Linguis- tics, pages 3731–3741, Florence, Italy. Association for Computational Linguistics. 
Adina Williams, Nikita Nangia, and Samuel Bowman. 2018. A broad-coverage challenge corpus for sen- tence understanding through inference. In Proceed- ings of the 2018 Conference of the North American Chapter of the Association for Computational Lin- guistics: Human Language Technologies, Volume 1 (Long Papers), pages 1112–1122, New Orleans, Louisiana. Association for Computational Linguis- tics. Yinfei Yang, Steve Yuan, Daniel Cer, Sheng-yi Kong, Noah Constant, Petr Pilar, Heming Ge, Yun-Hsuan Sung, Brian Strope, and Ray Kurzweil. 2018. Learning semantic textual similarity from conver- In Proceedings of The Third Workshop sations. on Representation Learning for NLP, pages 164– 174, Melbourne, Australia. Association for Compu- tational Linguistics. Rowan Zellers, Ari Holtzman, Yonatan Bisk, Ali Farhadi, and Yejin Choi. 2019. Hellaswag: Can a machine really finish your sentence? arXiv preprint arXiv:1905.07830. Saizheng Zhang, Emily Dinan, Jack Urbanek, Arthur Szlam, Douwe Kiela, and Jason Weston. 2018. Per- sonalizing dialogue agents: I have a dog, do you In Proceedings of the 56th An- have pets too? nual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2204– 2213, Melbourne, Australia. Association for Com- putational Linguistics. Repetition Model PPL F1 Context Label Human MLE Baseline - 8.3 .373 - .160 .0006 .002 .582 UL (Context only) UL (Label only) UL (Context + Label) 8.8 .345 8.3 .371 8.5 .358 .270 .645 .445 .001 .000 .003 Table 8: Evaluation on the Wizard of Wikipedia task test set, comparing standard likelihood (MLE) with repetition unlikelihood loss training, where both meth- ods use beam search (beam size of 5). # A Repetition Control with Beam Search The experiments on repetition and copying in the main paper were carried out with greedy decoding for simplicity. In this section we show that simi- lar results hold with beam decoding as well. Us- ing a beam size of 5, we take the same 4 models from Table 2 and compute metrics with beam in- stead. The results are given in Table 8 which show similar trends to before, except the baseline model using beam tends to suffer more from repetition, which is a known result (Holtzman et al., 2019). Note that we simply evaluated the same unlikeli- hood models as before, but we expect that better results could be obtained by performing sequence level unlikelihood training with beam search in the training loop, as well as choosing hyperparameters specifically with this kind of decoding being used to measure validation performance. # B Nucleus Sampling for Vocabulary control Table 9 compares the MLE baseline, unlikelihood with increasing α values, and Nucleus sampling (Holtzman et al., 2019) with hyperparameter p in terms of distribution and F1 score. The vocab- ulary unlikelihood fine-tuning shifts probability mass from the over-represented frequent words to- wards under-represented medium and rare words, with the effect strengthening as α increases. At a small cost to perplexity and F1, the unlikelihood tuning reduced the overuse of common tokens by 9 points, matching the human rate, while improv- ing the production of rare tokens by 3 percentage points. Nucleus sampling is a popular method that can also produce generations closer to the human vo- cabulary distribution. 
It does this by sampling from the model’s probability distribution rather Token frequency classes Model PPL F1 Freq Med Rare Rarest Human MLE Baseline .400 .300 11.4 .199 .491 .282 - - .200 .157 .100 .068 Nucleus p = 0.3 11.4 .180 .452 .315 Nucleus p = 0.4 11.4 .171 .440 .320 Nucleus p = 0.5 11.4 .160 .425 .322 Nucleus p = 0.6 11.4 .151 .411 .318 Nucleus p = 1.0 11.4 .141 .394 .302 UL, α = 100 UL, α = 101 UL, α = 102 UL, α = 103 11.4 .200 .483 .289 11.9 .201 .459 .328 12.5 .190 .430 .335 14.4 .174 .399 .339 .168 .172 .180 .192 .201 .163 .154 .163 .188 .064 .068 .072 .078 .101 .063 .058 .071 .073 Table 9: Unlikelihood loss applied to vocabulary dis- tributions. Stronger α terms greatly shift probability mass from the most Frequent words to Medium and Rare words, at a small cost to PPL and F1. Frequent, medium, rare and rarest token classes are defined as the sets of tokens whose cumulative masses account for the top 40%, the next 30%, the next 20% and final 10% of tokens empirically generated by humans, respectively. Nucleus sampling can also produce a distribution close to human with parameter p close to 1, but with larger losses in F1. than using beam search, where the sampler re- stricts to the smallest set of tokens with total mass above a threshold p ∈ [0, 1]. Small values of p are similar to greedy sampling. Increasing p yields distributions closer to human, but with large losses in F1 score, e.g. p = 0.5 has a similar distribution to unlikelihood with α = 102 but the F1 scores are 0.160 vs. 0.190. This can be understood because maximizing likelihood during decoding yields bet- ter token accuracy than sampling (Welleck et al., 2019a), so the unlikelihood training approach to both use likelihood decoding and match the human distribution can obtain the best of both worlds. # C Human Evaluation Description of ConvAI2 vocabulary setup We follow (Li et al., 2019) and perform a pairwise comparison with full-length model conversations. We first collected 252 model-human conversa- tions with each of the models (MLE baseline, and weights for α of Unlikelihood, examples in 8). We then set up a pairwise-comparison using the soft- ware of (Li et al., 2019), using the same question (“Who would you prefer to talk to for a long conver- sation?”) and use the exact same quality control question (a baseline greedy model without repeti- tion control, versus a human). We collected ap- proximately 200 preferences per model compari- son and filtered annotators who failed quality con- trol. Description of ELI5 repetition setup We fol- low (Fan et al., 2019) and perform a pairwise eval- uation where human annotators were asked “which response answers the question better?” A screenshot of the UI is shown in Figure 9. Human evalua- tors were asked to rate a total of 5 questions, two of which were quality control annotations. The quality control examples contained the real hu- man responses, along with model predictions: one question contained a baseline model, and one con- tained an unlikelihood model. Annotators which did not pick humans in quality controls were re- moved from the final setups. We collected 200 an- notations comparing the baseline and the unlikeli- hood model. Results Evaluation results from all evaluated matchups are shown in Figure 10. We find our repetition-controlled ELI5 model significantly outperforms the MLE baseline. We find that two of the vocabulary repetition significantly outper- form the MLE baseline. We compute significance with a two-tailed binomial test (p < .01). 
(a) Hey there how are you? (b) Hi how are you today I'm good how are you I'm great! How about you? Good | just finished up school for the day, what are you doing? I'm good thanks for asking I'm working at a bar Of course, What do you like to do? About a year and a half I'm a chef, do you like to cook? What are you going to school for What do you like to cook? Im just in high school but | want to be a physical therapist | like to bake What city are you from Oh, delicious. Do you like to fish at all?> Im in new york currently but was born in pa Yes | love fishing What city do you live in Wonderful. Isn't it relaxing? (c) Hi how are you doing (4) Hello, how are you today? I'm doing well, just eating some donuts, how are you? I'm doing fine and you? Pretty good thanks for asking | am coing well, thank you for asking. What kind of donuts? Just glazed! | like powdered sugar, but | didn't want to make a mess You're welcome so what going on with? Mid 5 bringing them inside. ust got done mowin; ‘ard Justg emyy Do you have any hobbies? Nice | was just out fishing by the lake, | enjoy playing call of duty with my friends, and on the weekends | enjay cosplaying. Do you have any hobbies Cosplaying sounds fun. Do you play any instruments? | play the bass guitar, what about you? My father plays the violin professionally. | usually cook alot or fish in my spare time. What about you? Cooking and gambling are my hobbies Do you win alot of money when you gamble? Oh that's cool! | tried to play it once during a roleplaying event, it was Not really do you? bad. No, never, that's why | stay away from it. What kind of events did you roleplay? Anything live action form, typically animes Figure 8: Examples of model-human conversations collected during human evaluation of the vocab unlikelihood models. Human utterances are in blue bubbles, model utterances are in white. Conversations (a) and (b) are from the baseline. Conversations (c) and (d) are from the α = 102 model and more frequently employ rarer words. Question: Why is it so easy to fall asleep watching the TV but radio or simply waiting to fall asleep don't have the same effect? Answers: It's because you're not watching the tv. You're watching the | think It's because you're distracted by the tv and the radio. radio. Which better answers the question, considering both readability and accuracy? © SSUES) better answers the question © GRREZ cetter answers the question Please provide a briet justification for your choice (a few words or a sentence) Please enter here... Figure 9: Screenshot of the Human Evaluator UI. 100% [0 MLE Baseline WE Uniikelihood 75% 50% Winning Percentage 25% 0% a@=10° a=108 a =10? a = 10% Repetition (ELI5) Vocabulary (ConvAl2) Figure 10: Complete Human Evaluation results. Hu- man evaluators do not significantly prefer the α = 100 and α = 103 models over the baseline model.
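Relating the vocabulary-distribution comparison of Appendix B back to §2.2: the scale β(yc) only requires a running estimate of the model's unigram distribution alongside the human one. A minimal sketch follows; the accumulation window, smoothing, and clamping of negative values at zero are assumptions rather than the paper's exact settings.

```python
from collections import Counter
import math

class VocabMismatch:
    """Running estimate of the model unigram distribution p_model and the
    mismatch-based unlikelihood scale beta(y_c) of Section 2.2."""

    def __init__(self, human_unigrams, smoothing=1e-12):
        self.human = human_unigrams          # dict: token -> p*(token)
        self.model_counts = Counter()
        self.total = 0
        self.eps = smoothing

    def update(self, generated_tokens):
        # accumulate counts over recently generated model sequences
        self.model_counts.update(generated_tokens)
        self.total += len(generated_tokens)

    def beta(self, token):
        p_model = self.model_counts[token] / max(self.total, 1)
        p_human = self.human.get(token, self.eps)
        # penalize only tokens the model produces more often than humans do
        # (clamping at zero is an assumption made for this sketch)
        return max(0.0, p_model * math.log((p_model + self.eps) / p_human))
```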
{ "id": "1905.07830" }
1911.03842
Queens are Powerful too: Mitigating Gender Bias in Dialogue Generation
Models often easily learn biases present in the training data, and their predictions directly reflect this bias. We analyze gender bias in dialogue data, and examine how this bias is actually amplified in subsequent generative chit-chat dialogue models. We measure gender bias in six existing dialogue datasets, and focus on the most biased one, the multi-player text-based fantasy adventure dataset LIGHT, as a testbed for our bias mitigation techniques. The LIGHT dataset is highly imbalanced with respect to gender, containing predominantly male characters, likely because it is entirely collected by crowdworkers and reflects common biases that exist in fantasy or medieval settings. We consider three techniques to mitigate gender bias: counterfactual data augmentation, targeted data collection, and bias controlled training. We show that our proposed techniques mitigate gender bias in LIGHT by balancing the genderedness of generated dialogue utterances and are particularly effective in combination. We quantify performance using various evaluation methods---such as quantity of gendered words, a dialogue safety classifier, and human studies---all of which show that our models generate less gendered, but equally engaging chit-chat responses.
http://arxiv.org/pdf/1911.03842
Emily Dinan, Angela Fan, Adina Williams, Jack Urbanek, Douwe Kiela, Jason Weston
cs.CL
null
null
cs.CL
20191110
20200416
# Queens are Powerful too: Mitigating Gender Bias in Dialogue Generation

Emily Dinan*, Angela Fan*†, Adina Williams, Jack Urbanek, Douwe Kiela, Jason Weston
Facebook AI Research; †Laboratoire Lorrain d'Informatique et Applications (LORIA)
*Joint first authors.

# Abstract

Models often easily learn biases present in the training data, and their predictions directly reflect this bias. We analyze gender bias in dialogue data, and examine how this bias is actually amplified in subsequent generative chit-chat dialogue models. We measure gender bias in six existing dialogue datasets, and focus on the most biased one, the multi-player text-based fantasy adventure dataset LIGHT (Urbanek et al., 2019), as a testbed for our bias mitigation techniques. The LIGHT dataset is highly imbalanced with respect to gender, containing predominantly male characters, likely because it is entirely collected by crowdworkers and reflects common biases that exist in fantasy or medieval settings. We consider three techniques to mitigate gender bias: counterfactual data augmentation, targeted data collection, and bias controlled training. We show that our proposed techniques mitigate gender bias in LIGHT by balancing the genderedness of generated dialogue utterances and are particularly effective in combination. We quantify performance using various evaluation methods—such as quantity of gendered words, a dialogue safety classifier, and human studies—all of which show that our models generate less gendered, but equally engaging chit-chat responses.

# Introduction

Machine learning algorithms learn to model patterns present in training datasets, so data quality affects what they learn. Model predictions have been shown to directly reflect harmful societal biases present in training datasets, such as racial bias in sports reports (Merullo et al., 2019) and political bias in news data (Fan et al., 2019). Moreover, biases have been discovered in many NLP tasks, for example, in learned word embeddings (Bolukbasi et al., 2016; Brunet et al., 2018; Zhao et al., 2019), visual semantic role labeling (Zhao et al., 2017), natural language inference (He et al., 2019), abusive language classification (Park et al., 2018), and coreference resolution (Zhao et al., 2018a). Although research into bias in NLP is maturing, bias in dialogue utterances has received somewhat less attention (Liu et al., 2019; Sheng et al., 2019; Henderson et al., 2018). However, with the rapid development of real-world use-cases for dialogue agents, such as interactive assistants, bias in dialogue models has the very real potential not only to replicate existing social biases, but also to exacerbate them. Dialogue debiasing is thus becoming an increasingly important problem in NLP.

Gendered word counts in dialogue datasets:

Dataset: % gend. words / % male bias
LIGHT: 0.94 / 73.4
Reddit: 1.32 / 69.76
Wizard of Wikipedia: 0.076 / 65.9
Daily Dialog: 1.02 / 59.04
Empathetic Dialogues: 2.07 / 53.45
ConvAI2: 1.28 / 50.05

Table 1: Counts of gendered words in several dialogue datasets. We report the percent of gendered words (% gend. words) as well as the percentage of male-gendered words among all gendered words (% male bias). Datasets are arranged in descending order from most to least male biased. Among these, LIGHT has the most male bias, making it an ideal testbed.
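The statistics reported in Table 1 amount to counting tokens against gendered word lists. A minimal sketch, assuming hypothetical masculine/feminine word lists and simple whitespace tokenization (the paper's exact lists and tokenizer are not reproduced here):

```python
def gender_stats(utterances, masculine_words, feminine_words):
    """Percentage of gendered tokens and the male share among them,
    as reported in Table 1 (illustrative computation only)."""
    total = masc = fem = 0
    for utt in utterances:
        for token in utt.lower().split():
            total += 1
            if token in masculine_words:
                masc += 1
            elif token in feminine_words:
                fem += 1
    gendered = masc + fem
    pct_gendered = 100.0 * gendered / max(total, 1)
    pct_male_bias = 100.0 * masc / max(gendered, 1)
    return pct_gendered, pct_male_bias
```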
In this work, we aim to address this issue by measuring gender bias in dialogue data and mitigating its effects on downstream dialogue generation models. Previous work has noted that gender bias is prevalent in many machine learning datasets (Stock and Cisse, 2017; Zhao et al., 2017), and here we analyzed the gender bias in several existing dialogue datasets (see Table 1, and §3 for more discussion). As a testbed for our investigation, we chose the dialogue dataset from the LIGHT text adventure world (Urbanek et al., 2019), because we find it to be significantly more male-biased than other comparable dialogue datasets. Not only is it large enough to train neural chit-chat dialogue models, LIGHT is also interesting because it has multiple potential sources of bias—namely, characters, personas, and dialogues. In the dialogue creation process, crowdworkers were presented with a character (with names such as "farmer" or "witch"), as well as an associated persona—a short textual description for the character. Supplied with characters and personas, crowdworkers were paired up and tasked with generating a dialogue between the characters. All dialogues contained within LIGHT are entirely crowdsourced—thus susceptible to reflecting the gender biases of crowdworkers (Otterbacher et al., 2018; Barbosa and Chen, 2019). We investigate characters, personas, and dialogues as possible sources of bias in turn in §3.

Persona Example (Original LIGHT Dataset)

daughter: I spend most of my time doing household chores. I want to find meaning in life. I am energetic and happy.
chief wife: I am the king's chief wife. Of all the women that he has married, or who are his concubines, I am the principal one. I represent the kingdom of my father, who is the king's biggest ally. My sons are the ones who will most likely become the king after the death of my husband.
women: I live with my husband and 4 children in the village. I spend my days washing clothing and cleaning our home. My husband works for the royal army defending out town.
farmer Bob's wife: I am farmer Bob's wife. I like to take care of all our animals. I help Farmer Bob everyday on the farm.
mother: I am a mother of eight children. I live with my family in a cottage in the countryside. I spend every day tending to the needs of all of my little ones which can be overwhelming, but I always manage to maintain a pleasing disposition and a happy smile.
wife: I am the wife of a farmer. While I may not be the most attractive woman ever, I am loyal and loving. My husband is a good man, but only seems to stay with me out of duty.
shady lady: I am a shady lady. I work in a tavern, and I am willing to trade sexual favors for money. I have to split the money with the tavernkeeper, so that he will offer me a room to work in. I am beginning to get sick from the "king's evil", which doctors call syphilis. My future is bleak: madness and death. But this is the only way that I can support myself, so I continue.

Table 2: Examples of gender biased personas in LIGHT. In a review that we conducted in this work, none of these characters were flagged as sexist or offensive. For male examples, see Appendix Table 11.

# Dialogue Example (Original LIGHT Dataset)

wife: I was married off by my family about five years ago. I spend my days cooking and cleaning so my husband will have something to eat when he returns from his work and can enjoy a clean home. I love my husband dearly because he works very hard to provide for us.
merchant: What a great day for more money. wife: merchant: wife: Oh my. That is some thick dust! Indeed, it is very old. This room is going to take a while to clean. You might want to come back later. It is fine I can set my booth up here. With all the foot traffic? merchant: wife: merchant: Yes it should be ok. wife: It doesn’t appear that anyone ever comes up here! merchant: Well they will when they know I am here. wife: I have my doubts but I’ll just go about my cleaning. merchant: Yea sounds like a good idea. wife: merchant: wife: What is that supposed to mean? I am saying we should both do our jobs. Don’t take that tone with me! Table 3: A dialogue from the original LIGHT data. The text for the wife persona was crowdsourced. After measuring gender bias in LIGHT, we then explore three bias mitigation techniques, each of which is either wholly novel, or novel in its appli- cation to dialogue: (i) Counterfactual Data Aug- mentation (CDA) (Maudslay et al., 2019; Zmi- grod et al., 2019), (ii) a targeted data collection method, which we refer to as Positive-Bias Data collection, and (iii) Bias Controlled text genera- tion. We show that these techniques are most ef- fective in combination, resulting in dialogue mod- els that produce engaging responses with measur- ably less gender bias and offensive content (see §5). Models and code will be released at parl. ai/projects/genderation_bias. # 2 Related Work Recently, the NLP community has focused on ex- ploring gender bias in NLP systems (Sun et al., 2019), uncovering many gender disparities and harmful biases in algorithms and text (Cao and Daum´e 2019; Chang et al. 2019; Chang and McK- eown 2019; Costa-juss`a 2019; Du et al. 2019; Emami et al. 2019; Garimella et al. 2019; Gaut et al. 2019; Habash et al. 2019; Hashempour 2019; Hoyle et al. 2019; Kang et al. 2019; Lee et al. 2019a; Lepp 2019; Qian 2019; Qian et al. 2019; Sharifirad et al. 2019; Sharifirad and Matwin 2019; Stanovsky et al. 2019; O’Neil 2016). Partic- ular attention has been paid to uncovering, analyz- ing, and removing gender biases in word embed- dings (Basta et al., 2019; Kaneko and Bollegala, 2019; Zhao et al., 2019, 2018b; Bolukbasi et al., 2016). This word embedding work has extended to multilingual work on gender-marking (Gonen et al., 2019; Williams et al., 2019; Zhou et al., 2019). Despite these efforts, many methods for debiasing embeddings remain problematic—i.e., they have only succeeded in hiding word embed- ding biases as opposed to removing them (Gonen and Goldberg, 2019)—making gender debiasing still an open area of research. Despite the relatively ample literature on gen- der debiasing for word-level representations, very little work has focused on sentence representa- tions (Liang et al., 2019; Liu et al., 2019; Sheng et al., 2019; Lee et al., 2019b). The majority of sentence debiasing work up until this point fore- grounds measuring bias (Lee et al., 2019b; Sheng et al., 2019). For example, Liu et al. present a test dataset for dialogue created counterfactu- ally by combining templates and hand-created lists of word pairs; this work shows that models pro- duce less diverse dialogues when prompted with sentences containing words describing individu- als from underrepresented groups. Acknowledg- ing the obvious importance of measuring gender bias (see, e.g., Liu et al. 2019), our dialogue work is novel in that we also propose and compare three methods for directly mitigating it1. 
1To the best of our knowledge, only one other work attempts to gender-debias sentence representations (Li et al., 2018); however, it extends a word-embedding post- processing method (Bolukbasi et al., 2016) shown to be inef- fective at removing gender bias (Gonen and Goldberg, 2019) to sentences. Thus, we take a different tack. # Characters # Ref F M N All F M LIGHT Orig Data Swap Persona New Charac. Total 159 336 151 646 258 230 120 608 1238 439 1460 1877 1260 1419 1030 694 1448 1719 275 357 3602 4856 2215 2543 ConvAI2 Orig Data 1109 1048 4214 6371 1283 1148 Table 4: Analysis of gender in LIGHT and Con- the original LIGHT dataset contains 1.6× as vAI2: many male-gendered as female-gendered characters. We compare the original dataset with the dataset ob- tained after gender-swapping personas and collecting new characters (with new personas). The references column indicates the gender of characters mentioned in the personas. By contrast, ConvAI2 contains a roughly equal number of male and female gendered personas. # 3 Measuring Bias Before one can mitigate bias, one must first mea- sure it. To determine which dataset to focus on, we initially measured both the amount of gen- dered words used, and the percent of those which referred to male characters, for six existing dia- logue datasets (Table 1). Throughout, we compare LIGHT to the other dialogue datasets, and find that it is considerably more biased, which leads us to give LIGHT particular attention in this pa- per. We primarily address three sources of gender bias in dialogue: (i) imbalance in character gen- ders, (ii) personas (Table 2), and (iii) dialogues be- tween characters (Table 3). Dialogue research has found that incorporating personas, or personality descriptions that ground a speaker’s chat, like I love fishing, increases engag- ingness and improves consistency (Zhang et al., 2018; Shuster et al., 2018; Mazar´e et al., 2018; Olabiyi et al., 2018; Li et al., 2016). However, they can also crystallize gender bias (Clark et al., 2019; Henderson et al., 2018), propagating it to subsequently generated conversations. We answer three questions in the context of persona-based di- alogue: when creating dialogue datasets, (i) do crowdworkers generate an equal number of male and female characters, (ii) do these characters’ personas feature sexism or gender biases, and (iii) are the resulting dialogue utterances biased? Bias in Number of Characters. We first answer the question: do crowdworkers create an equal number of male and female characters? In addi- tion to LIGHT, we also consider the persona-based dialogue dataset ConvAI2 (Zhang et al., 2018). To examine gender balance in characters, we asked annotators to label the gender of each char- acter in both the LIGHT and ConvAI2 datasets based on the persona (choosing neutral if the gen- der was not explicit). This annotation is possible because many personas include text such as I am a young woman, although the majority of personas do not mention an explicit gender. We find LIGHT characters to be highly gender imbalanced: in Table 4, we can see that there are over 1.6 times as many male characters as female ones2. It is considerably less gender-balanced than ConvAI2, which has a nearly equal number of male and female gendered personas.3 Bias in Personas. 
In addition to the stark under- representation of female characters, the medieval setting in LIGHT is likely to encourage crowd- workers to generate dialogues accentuating his- torical biases and inequalities of the time period (Bowman, 2010; Garcia, 2017). There is no obli- gation to recreate historical biases: one can instead use creative license to craft a fun world with gen- der parity. Therefore, we investigate references to men or women in the text of personas, as another source of bias. To motivate this, take for example, a female persona that contains a gendered refer- ence such as I want to follow in my father’s foot- steps rather than in my mother’s. Using gendered relational nouns (Barker, 1992; Williams, 2018), such as father, doesn’t always signal gender bias, but if female characters are predominantly defined in reference to male characters, it becomes a prob- lem. We count the appearance of gendered words in personas using the list compiled by Zhao et al. (2018b) and find that men are disproportionately referred to in the personas: there are nearly 3x as many mentions of men than women (see Table 2 for examples, and Table 4 for counts). Qualitatively, LIGHT personas contain many examples that strike us as gender biased (see Table 2). For example, the character description for girl contains the line I regularly clean and cook din- ner. Gender bias and sexism are clearly present in many dialogue datasets (Henderson et al., 2018), but finding a clear way to define sexism (and other 2We use “female” and “male” for LIGHT characters – rather than “woman” and “man” – because some are binarily gendered, but not human. 3Note that annotators may widen the gender gap by im- plicitly assuming genders for ungendered personas. kinds of unsafe text), let alone measure it at scale, is very challenging. A simple answer is to rely on annotation where annotators operate under their own, albeit subjective, definition(s) of sexism. To assess the pervasiveness of unsafe content in ex- isting personas, we asked three independent an- notators to examine each persona for potentially offensive content. If annotators detected content was ‘offensive’ or ‘maybe offensive’, they were asked to place it in one of four categories—racist, sexist, classist, other—and to provide a reason for their response. Just over 2% of personas were flagged by at least one annotator, and these per- sonas and the dialogues between these personas were removed from the dataset. Bias in Human-Generated Dialogue Utter- ances. After uncovering bias in the gender of characters and personas— qualitatively and in number of gendered words—we go on to exam- ine how those biases may propagate to the dia- logues that are created from crowdworkers playing the role of these personas. First, we count the number of male and female gendered words in the training sets of various di- alogue datasets (LIGHT, ConvAI2, Reddit, Wiz- ard of Wikipedia, Daily Dialog, Empathetic Dia- logues, and ConvAI2), using the same word list as before (Zhao et al., 2018b). We use this to calcu- late the percentage of gendered words out of all words, and the percent male bias, or the percent- age of male gendered words among all gendered words. Results are shown in Table 1. LIGHT is the most gender imbalanced dataset among all datasets in this table, with a male bias of 73%. With this in mind, we qualitatively examine the LIGHT dataset and find many biased utterances present in the training data. 
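For concreteness, the two statistics reported in Table 1 and used throughout this section (percent gendered words and percent male bias) amount to a simple word-list count. The sketch below uses tiny placeholder word sets; the paper aggregates the lists of Zhao et al. (2018a,b).

```python
# Sketch of the statistics in Table 1: percent of gendered words among all
# words, and percent male bias (male-gendered words among all gendered words).
# The word sets are tiny placeholders; the paper uses aggregated word lists.
MALE_WORDS = {"he", "him", "his", "man", "men", "king", "father", "husband"}
FEMALE_WORDS = {"she", "her", "hers", "woman", "women", "queen", "mother", "wife"}

def genderedness_stats(utterances):
    """utterances: list of token lists. Returns (% gendered words, % male bias)."""
    total = male = female = 0
    for tokens in utterances:
        for tok in tokens:
            total += 1
            tok = tok.lower()
            if tok in MALE_WORDS:
                male += 1
            elif tok in FEMALE_WORDS:
                female += 1
    gendered = male + female
    pct_gendered = 100.0 * gendered / max(total, 1)
    pct_male_bias = 100.0 * male / max(gendered, 1)
    return pct_gendered, pct_male_bias

corpus = [["the", "king", "spoke", "to", "his", "wife"],
          ["she", "cleaned", "the", "tavern"]]
print(genderedness_stats(corpus))  # -> (40.0, 50.0)
```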
For example, the queen persona adheres to negatively stereotyped gender roles when uttering the line I spend my days doing embroidery and having a talk with the ladies. Another character admires a sultry wench with fire in her eyes. We see the direct effect of the biased persona on the resultant dialogue (see Table 3): for example, a wife persona contains the text I spend my days cooking and cleaning so my hus- band will have something to eat when he returns from his work..., and, in dialogue with a merchant, discusses only her cleaning duties. The merchant even derisively refers to cleaning as the wife’s job. rO +0 +40 +0 - 3 FM 99 F°mM® A 19 -~ 9 FM | 9 FYM' 419 FeM' o & o o g o go Ed g go Et g 285 2 45 gu 285 2 45 Bu 5s gS gs = ° | = ia fo) = ira | ® ua * ® ae —— ° Biggai ° Begeaz ° ° ° Beggar ° wees gs 83Ssax 8 eosax 8 sSsax 8 eSsax a 3 a 8 a 3 a 8 Ot + +ayt +t 9 » 100 PM 413 FM. 9 » 90 F'M 4 13 F‘M B 8 2 Hi bs ° Be ao 5 Be ao 5 2B 5 2 50 Bu 55 245 gu ce 2 2 32 2 | 2 | ira z SEAR BZ SEAR BZ sea SEAR BZ 8 4oaax &goaax 8358 8 gSaax & 3 & 3 & 3 a 3 Figure 1: We compare the performance of various bias mitigation methods—Counterfactual Data Augmentation (CDA), Positive-Bias Data Collection (Pos. Data), Bias Control Model (Bias Ctrl), and combining these methods (ALL)—on the test set, splitting the test set across the four genderedness bins: F0/+M0/+. X0 indicates there are no X-gendered words in the gold response, while X+ indicates that there is at least one. We measure the percent of gendered words in the generated utterances (% gend. words) and the percent of male bias (% male bias), i.e. the percent of male-gendered words among all gendered words generated. While each of these methods yield some improvement, combining all of these methods in one yields the best control over the genderedness of the utterances while improving the F1-score. # 4 Mitigating Bias in Generative Dialogue As we found the LIGHT was considerably more biased than other dialogue datasets, throughout the rest of the paper we use the LIGHT dataset as a testbed for developing a general framework for mitigating bias in generative dialogue. When we train dialogue models on biased datasets, the bias will manifest in model-generated dialogues. We explore data augmentation and other algorithmic methods to mitigate bias in gen- erative Transformer models. We (i) extend coun- terfactual data augmentation to dialogue (Maud- slay et al., 2019; Zmigrod et al., 2019) to swap gendered words, (ii) perform positive data col- lection by augmenting the existing dataset via targeted data collection with crowdworkers, and lastly, (iii) present a bias controlled dialogue gen- eration method that controls how many male and female gendered words models produce. Zhao et al. (2018b). The augmentation is limited to words on the gendered word list, and the swap- ping is performed automatically. # 4.2 Positive-Bias Data Collection While CDA has been shown to be a somewhat ef- fective strategy for mitigating bias in word embed- dings, this method has several pitfalls: it may re- sult in ungrammatical sentences and it relies on existing (and incomplete) word pair lists to deter- mine and swap gender. To resolve these issues, we use humans to collect additional dialogue data via a two-pronged Positive-Bias Data Collection (Pos. Data) strategy. We first collect additional personas by having humans (i) manually swap the gender of the persona (rather than relying on the word lists) and (ii) write additional, diversified personas. 
We then use these personas to seed the collection of additional, positively biased dialogue data, which we refer to as Pos. Data throughout. # 4.1 Counterfactual Data Augmentation A solution proposed for gender bias in word em- beddings is Counterfactual Data Augmentation (CDA) (Maudslay et al., 2019; Zmigrod et al., 2019; Liu et al., 2019). CDA swaps, say, all in- stances of grandmother with grandfather, she with he, and so on. We apply this word-based data aug- mentation to dialogue generation by first copying every dialogue with a gendered word(s) in it, then swapping it with its pair from the list provided by New Personas. As LIGHT contains more male personas than female personas (see §3), we bal- ance existing personas with gender swapping. For every gendered persona, annotators create a new opposite-gendered persona for which refer- ring nouns or pronouns are changed, but the rest of the character description remains unchanged. For example, for every persona describing a king, an- notators will create a new one describing a queen. Annotators are instructed to swap the gender(s) of other characters referred to in the text (e.g., if an original persona describes a female in relation to her father, the new male persona will describe a male in relation to his mother). This method en- sures that the created sentences will be grammati- cal, unlike heuristic data augmentation. However, simply balancing references to men and women is insufficient, as female characters might be described in sexist ways (see §3). As detecting sexism is challenging, we take our qual- itative analysis to be sufficient, and move to offset it by collecting a new set of interesting and inde- pendent female characters. We do this by priming workers with examples like adventurer with per- sonas like I am a woman passionate about explor- ing a world I have not yet seen. I embark on am- bitious adventures. We also provide crowdwork- ers with additional instruction to guide them to- wards creating diverse characters: We’re looking for strong and diverse descriptions. Avoid descrip- tions that could be considered hateful, offensive, or stereotypical. Even with this explicit instruc- tion, 3 times as many male characters as female characters were created; this fact alone reveals the inherent gender biases of the available crowd- worker pool. We ultimately exclude all male- gendered personas created in this fashion from the new dataset, which brings the number of men and women and the number of references to male or female gendered words to approximate balance in In total, we add the new dataset (see Table 4). 2,629 new personas. New Dialogues. After gender-balancing the per- sonas, we focus next on our main goal: debiasing generated dialogues. As the personas are a starting point for bias entering the dataset, it is important to address balance in personas as a prior step. We use the gender-balanced set of personas de- rived from the two methods described above to crowdsource additional dialogues. We select more female-gendered characters for new dialogue col- lection, and instructed annotators to be mindful of gender bias. In particular, we encourage them to assume equality—social, economic, political, or otherwise—between genders in this fantasy set- ting. We collect a total of 507 new dialogues con- taining 6,658 utterances (approximately 6% of the original dataset size). We refer to this additional dialogue data as Pos. Data. 
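A minimal sketch of the list-based counterfactual swapping described in Section 4.1 above, assuming a toy word-pair list in place of the Zhao et al. (2018b) list:

```python
# Sketch of list-based counterfactual data augmentation (Section 4.1): copy
# every dialogue that contains a gendered word and swap each gendered word
# with its counterpart. The pair list is a toy placeholder.
GENDER_PAIRS = {"he": "she", "his": "her", "him": "her",
                "king": "queen", "father": "mother", "husband": "wife"}
# Make the mapping symmetric; note that "her" can only map back to one of
# "his"/"him", an ambiguity inherent to flat word-pair lists.
GENDER_PAIRS.update({v: k for k, v in list(GENDER_PAIRS.items())})

def swap_gendered_words(tokens):
    """Return a gender-swapped copy of a tokenized utterance."""
    return [GENDER_PAIRS.get(t.lower(), t) for t in tokens]

def counterfactually_augment(dialogues):
    """dialogues: list of dialogues, each a list of token lists."""
    augmented = list(dialogues)
    for dialogue in dialogues:
        if any(t.lower() in GENDER_PAIRS for utt in dialogue for t in utt):
            augmented.append([swap_gendered_words(utt) for utt in dialogue])
    return augmented

original = [[["the", "king", "greets", "his", "wife"]]]
print(counterfactually_augment(original)[-1])  # [['the', 'queen', 'greets', 'her', 'husband']]
```

A flat pair list cannot disambiguate cases such as her mapping to either his or him, one of the pitfalls that motivates the human-collected Pos. Data of Section 4.2.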
                F0M0    F0M+    F+M0    F+M+
% of test set   60.65   27.21   7.61    4.63

Table 5: Percentage of dialogue examples in each of the four genderedness bins—F0/+M0/+—for the LIGHT dialogue data test set.

# 4.3 Bias Controlled Training

Gender bias in dialogue can take the form of imbalanced use of gendered words. To create dialogue models that can generate an equal number of gendered words, we control model output with Bias Control (Bias Ctrl) via conditional training. Previous conditional training models learn to associate specific control tokens with some desired text properties (Kikuchi et al., 2016; Fan et al., 2017; Oraby et al., 2018; See et al., 2019), but have not been applied to address bias issues. We apply conditional training techniques to control gender bias in generative dialogue by learning to associate control tokens with properties of gender bias. Any general function that takes as input a dialogue utterance and outputs a continuous or discrete value that provides information about gender bias could be used as a control variable. In our case, prior to training, each dialogue response is binned into one of four bins—F0/+M0/+—where X0 indicates that there are zero X-gendered words in the response. X+ indicates the presence of one or more X-gendered word. The percentage of examples from the test set that fall into each bin is noted in Table 5. Nouns and adjectives are binned via an aggregation of existing gendered word lists (Zhao et al., 2018b,a; Hoyle et al., 2019). Note that other functions could be used as well, such as a bias classifier.

We append a special token to the input that indicates the bin that the response falls into. During Bias Ctrl training, the model should learn to associate the special token with the genderedness of the dialogue response, such that at inference time, we could modify these special tokens to control the genderedness of the model output. For example, a model trained with multiple gender control bins could be set to the gender neutral (in this case, F0M0) setting at inference time, to produce a response containing no gendered words.

Figure 2: Performance of the ALL debiasing model controlled by indicating specific bins for all examples at test time. We report results for each possible conditioning bin choice. Across bins, the model maintains performance as measured by F1 whilst radically changing the genderedness of the language generated.

# Implementation Details

Following Urbanek et al. (2019), we fine-tune a large, pre-trained Transformer encoder-decoder neural network in all generation experiments on the dialogues in the LIGHT dataset. Following Humeau et al. (2019), the model was pre-trained on Reddit conversations using a previously existing Reddit dataset extracted and obtained by a third party and made available on pushshift.io. During pre-training, models learned to generate a comment conditioned on the full thread leading up to the comment.
Comments containing URLs or under 5 characters in length were removed, along with child comments, resulting in approximately 2.2 billion training examples. Similar to pre-training, during fine-tuning, the models are conditioned on the full dialogue history leading up to the next utterance. The model is based on the ParlAI implementation of Miller et al. (2017), and is an 8-layer encoder, 8-layer decoder, with 512 dimensional embeddings and 16 attention heads. For final generations, we decode sequences with beam search size of 5.

# 5 Results

We train five Transformer models: a baseline, three models, one for each of our new methods (see §4.1 for CDA, §4.2 for Positive-Bias Data Collection, and §4.3 for Bias Control), then one final model, ALL, which combines all three methods and achieves the best results.

Bias is Amplified in Generation. Existing Transformer generative dialogue models (Serban et al., 2016; Yang et al., 2018; Urbanek et al., 2019) are trained to take the dialogue context as input and generate the next utterance. Generative models are well-known to produce generic text (Li et al., 2015; Fan et al., 2018), which makes it likely they will reproduce statistical biases present in datasets. As described previously (see §2), work shows that machine learning models reflect biases (Zhao et al., 2019; Brunet et al., 2018). Moreover, biases can be easier to learn than more challenging reasoning (Bolukbasi et al., 2016; Lewis and Fan, 2018), suggesting that Transformer models are likely to reflect dataset bias.

Table 9 compares the performance of the various techniques. We compare our methods to the gold labels from the test set and a baseline Transformer generative dialogue model trained on the original data without any bias mitigation techniques. To do this, we divide the test set into four genderedness bins (as defined in Section 4.3)—F0M0, F0M+, F+M0, and F+M+—and calculate: (i) the F1 word overlap with the gold response, (ii) the percentage of gendered words generated (% gend. words), and (iii) the percentage of male-gendered words generated (relative to the sum total of gendered words generated by the model).

We find that Transformer models not only reflect dataset biases, but also they amplify them. When the model produces gendered words (from our gendered word list), it generates male-gendered words the vast majority of the time. Even on utterances for which it is supposed to generate only female-gendered words (the gold label only contains female-gendered words), it generates male-gendered words nearly 78% of the time.

Comparing Debiasing Methods. As shown in Figure 1, each of our methods improves the metrics—percent gendered words, percent male bias, and F1—over the baseline Transformer, but we find combining all methods in one in the ALL model is most advantageous. While ALL has more data than CDA and Bias Ctrl, more data alone is not enough — the Positive-Bias Data Collection model does not achieve as good results. Both the Bias Ctrl and ALL models benefit from knowing the data split (F0M0, for example), and both yield a gender ratio closest to ground truth.

Bias Controlled Training Controls Gendered Words. Our Bias Ctrl method can control the number of gendered words in generated dialogues, as shown in Table 10. We examine the effect of Bias Ctrl by generating responses conditioning the ALL model on each bin. We observe that changing the bin radically changes the genderedness of generated text with only small differences in overall F1.
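To make the Bias Ctrl conditioning of Section 4.3 concrete, the sketch below bins a gold response into one of F0M0, F0M+, F+M0, F+M+ and appends the bin as a control token to the dialogue context; the word sets and the exact token format are illustrative assumptions rather than the paper's implementation.

```python
# Sketch of Bias Ctrl conditioning (Section 4.3): bin each training response
# by whether it contains female- and/or male-gendered words, then append the
# bin as a control token to the input context. Word sets and the control-token
# format are placeholders for illustration only.
MALE_WORDS = {"he", "his", "him", "king", "father", "husband", "sir"}
FEMALE_WORDS = {"she", "her", "queen", "mother", "wife", "miss"}

def genderedness_bin(response_tokens):
    """Return one of 'F0M0', 'F0M+', 'F+M0', 'F+M+' for a tokenized response."""
    toks = {t.lower() for t in response_tokens}
    f = "F+" if toks & FEMALE_WORDS else "F0"
    m = "M+" if toks & MALE_WORDS else "M0"
    return f + m

def add_control_token(context, response_tokens):
    """Training-time input: dialogue history plus the true bin of the gold response."""
    return context + " " + genderedness_bin(response_tokens)

print(add_control_token("the king spoke to you", ["she", "bowed", "to", "the", "king"]))
# -> "the king spoke to you F+M+"
```

At inference time the appended bin is simply chosen by hand, e.g. F0M0 to request a gender-neutral response, which is how the per-bin results above are obtained.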
We can control the male bias of the generated dialogue by manipulating these bins. Examples of generated text from both the base- line and the ALL model are shown in Table 6. The baseline model generates male-gendered words when the gold response contains no gen- dered words or only female-gendered words, even generating unlikely sequences such as my name is abigail. i am the king of this kingdom. For various methods, we show the top 20 words generated on the test set (after removing stop words) in Table 8. We denote gendered nouns using an asterisk. Among the top 20 words gen- erated by the baseline model, there are only two gendered nouns—knight and king—and both are male-gendered. The ALL model generates a simi- lar set of words, but also features queen in its top 20 words, another indication that it is more bal- anced across the male and female genders. # 5.1 Safety of Generated Text In Table 7, following Liu et al. (2019), we use a Transformer-based dialogue safety classifier to classify model-generated utterances as offensive or safe. The classifier was fine-tuned on an of- fensive language classification task (Dinan et al., 2019), and achieves state-of-the-art results. We apply this classifier to each utterance gen- erated by the ALL and baseline models on the test set, in addition to the gold (human generated) la- bels from the test set. Our proposed ALL model is rated as less offensive than both the baseline model and the ground truth (gold) labels (see Table 7). # Generation Examples Bin Context F0M0 [P1:] Owl can you find out how I died? [P2:] I can look around the forest, but I need more information to help. Tell me what you remember about your past life. [P1:] I don’t remember anything I was hoping you could find out. [P2:] Your form is very hazy. Do you remember Baseline: ALL: Gold: Bin Context if you are a man or woman? i am not a man. i am a man of the forest. no, i don’t remember. I don’t know what’s wrong with me! F+M0 [P1:] I do not believe my eyes, for an angel is upon me! Angel, please tell me your name. [P2:] My name is Abigail! Baseline: my name is abigail. i am the king of this kingdom. ALL: Gold: i am the queen’s daughter! Abigail! Such a beautiful name. To what do I owe the pleasure of meeting you? Table 6: Example generations from the baseline model and the proposed debiased models. Gold truth (‘Gold’) either contains no gendered words or only female-gendered words, but the baseline model still generates male-gendered words. Gold Labels Baseline ALL % Offensive 13.0 14.25 10.37 Table 7: Offensive language classification of model responses on the LIGHT dialogue test set. # 5.2 Human Evaluation: Bias and Quality We compare the quality of our debiasing methods using human evaluation (see Figure 3). One might hypothesize that some gender debiasing methods work by replacing contentful words (e.g., witch) with bleached or uninteresting ones (e.g., person, thing), effectively trading off gender bias with en- gagingness. We use the dialogue evaluation sys- tem Acute-Eval (Li et al., 2019) to ask human evaluators to compare two conversations from dif- ferent models and decide which model generates (i) more biased dialogues and (ii) more engag- ing dialogues. We collect 100 model conver- sations with crowdworkers. Then, we compare conversations between a human and the baseline model to conversations between a human and the ALL model with all generations set to the F0M0 gender-neutral control bin. 
Asking for predictions of speaker gender was found to be more effective than asking about sexism or gender bias directly. As shown in Figure 3, it is more challenging to ooo F Soo 3 ° % prefer ALL over Baseline engagingness ° ° harder to predict gender Figure 3: Human Evaluation of ALL model (F0M0) compared to baseline Transformer generative model. Evaluators choose which model output they prefer for dialogue engagingness and difficulty of predicting speaker gender. The ALL model produces less gen- dered text while engagingness is not affected. Model Top 20 generated words Baseline sorry, hear, not, what, glad, doing, don, king*, thank, sure, will, your, can, much, do, know, but, knight*, blacksmith, going ALL sorry, hear, sure, not, what, help, doing, your, course, trying, glad, thank, queen*, don, good, king*, but, yes, know, sir* ALL F0M0 sorry, hear, sure, what, not, doing, glad, thank, your, yes, course, but, don, do, know, help, have, enjoying, fool, much ALL F0M+ sorry, hear, help, trying, sure, good, king*, sir*, not, your, day, course, father*, he*, don, thank, happy, guard*, glad, have ALL F+M0 sorry, hear, queen*, sure, miss*, not, your, thank, how, hello, today, guard*, she*, yes, course, kind, woman*, help, glad, what ALL F+M+ sorry, queen*, hear, guard*, help, trying, your, sure, good, course, day, knight*, not, protect, yes, friend, king*, woman*, she*, thank Table 8: Genderedness bins control the gendered- ness of generated text. The top 20 words (test set) with stop words removed. * indicates gendered nouns. predict the gender of ALL model generations (sig- nificant at p < 0.01) but the responses are just as engaging according to human evaluators. We con- clude our proposed methods are able to mitigate gender bias without degrading dialogue quality. # 6 Discussion Generality of Gendered Words. The gendered word lists used may not be comprehensive (Zhao et al., 2018a,b; Hoyle et al., 2019). For example, they do not include hag or wench, which are com- mon in LIGHT. Further, a more continuous repre- sentation of gender should be used in the future. More Fine-Grained Control. We present an ef- fective method to control the quantity of gen- dered words generated by manipulating control bins. This technique is general and could be used to control other properties of generated utterances. For example, a sexism or bias classifier could be used instead of the gendered word list. Quality of Generated Dialogue. Generative di- alogue models are prone to overuse frequent words and produce generic utterances, the so-called I don’t know problem (Li et al., 2015). We also ob- serve these effects which can affect bias. # 7 Conclusion We propose general purpose techniques for reduc- ing gender bias in dialogue. Our methods combine data augmentation, positive-bias data collection, and bias controlled training. Our new data col- lection techniques help mitigate issues, so clearly bias should be considered at the earliest stages of a project. Bias control training lessens bias at the training stage, and is also beneficial. Together, they are especially effective, producing less gen- dered, more gender balanced, safer utterances that maintain engaging dialogue with humans. # References Nat˜a M Barbosa and Monchu Chen. 2019. Rehuman- ized crowdsourcing: A labeling framework address- ing bias and ethics in machine learning. In Proceed- ings of the 2019 CHI Conference on Human Factors in Computing Systems, page 543. ACM. Chris Barker. 1992. Possessive descriptions. 
Christine Basta, Marta R Costa-juss`a, and Noe Casas. 2019. Evaluating the underlying gender bias in con- textualized word embeddings. In Proceedings of the 1st Workshop on Gender Bias in Natural Language Processing. Tolga Bolukbasi, Kai-Wei Chang, James Y Zou, Venkatesh Saligrama, and Adam T Kalai. 2016. Man is to computer programmer as woman is to In Ad- homemaker? debiasing word embeddings. vances in neural information processing systems, pages 4349–4357. Sarah Lynne Bowman. 2010. McFarland and Co. Marc-Etienne Brunet, Colleen Alkalay-Houlihan, Ash- ton Anderson, and Richard Zemel. 2018. Under- standing the origins of bias in word embeddings. arXiv preprint arXiv:1810.03611. Yang Trista Cao and Hal Daum´e. 2019. gender-inclusive coreference resolution. preprint arXiv:1910.13913. Kai-Wei Chang, Vinod Prabhakaran, and Vicente Or- donez. 2019. Bias and fairness in natural language In Proceedings of the 2019 Confer- processing. ence on Empirical Methods in Natural Language Processing and the 9th International Joint Confer- ence on Natural Language Processing (EMNLP- IJCNLP): Tutorial Abstracts, Hong Kong, China. Association for Computational Linguistics. Serina Chang and Kathy McKeown. 2019. Automat- ically inferring gender associations from language. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Lan- guage Processing (EMNLP-IJCNLP), pages 5745– 5751, Hong Kong, China. Association for Computa- tional Linguistics. Christopher Clark, Mark Yatskar, and Luke Zettle- moyer. 2019. Don’t take the easy way out: Ensem- ble based methods for avoiding known dataset bi- ases. arXiv preprint arXiv:1909.03683. Marta R Costa-juss`a. 2019. An analysis of gender bias studies in natural language processing. Nature Ma- chine Intelligence, pages 1–2. Emily Dinan, Samuel Humeau, Bharath Chintagunta, and Jason Weston. 2019. Build it break it fix it for dialogue safety: Robustness from adversarial human attack. arXiv preprint arXiv:1908.06083. Yupei Du, Yuanbin Wu, and Man Lan. 2019. Explor- ing human gender stereotypes with word associa- tion test. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natu- ral Language Processing (EMNLP-IJCNLP), pages 6135–6145. Ali Emami, Paul Trichelair, Adam Trischler, Ka- heer Suleman, Hannes Schulz, and Jackie Chi Kit Cheung. 2019. The knowref coreference corpus: Removing gender and number cues for difficult pronominal anaphora resolution. In Proceedings of the 57th Annual Meeting of the Association for Com- putational Linguistics, pages 3952–3961. Angela Fan, David Grangier, and Michael Auli. 2017. arXiv Controllable abstractive summarization. preprint arXiv:1711.05217. Angela Fan, Mike Lewis, and Yann Dauphin. 2018. Hi- erarchical neural story generation. arXiv preprint arXiv:1805.04833. Lisa Fan, Marshall White, Eva Sharma, Ruisi Su, Prafulla Kumar Choubey, Ruihong Huang, and Lu Wang. 2019. In plain sight: Media bias through the lens of factual reporting. In Proceedings of the 2019 Conference on Empirical Methods in Natu- ral Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 6342–6348, Hong Kong, China. Association for Computational Linguistics. Antero Garcia. 2017. Privilege, power, and dungeons & dragons: How systems shape racial and gender identities in tabletop role-playing games. Mind, Culture, and Activity, 24(3):232–246. 
Aparna Garimella, Carmen Banea, Dirk Hovy, and Rada Mihalcea. 2019. Womens syntactic resilience and mens grammatical luck: Gender-bias in part- of-speech tagging and dependency parsing. In Pro- ceedings of the 57th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 3493– 3498. Andrew Gaut, Tony Sun, Shirlyn Tang, Yuxin Huang, Jing Qian, Mai ElSherief, Jieyu Zhao, Diba Mirza, Elizabeth Belding, Kai-Wei Chang, et al. 2019. To- wards understanding gender bias in relation extrac- tion. arXiv preprint arXiv:1911.03642. Hila Gonen and Yoav Goldberg. 2019. Lipstick on a pig: Debiasing methods cover up systematic gender biases in word embeddings but do not remove them. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 609–614, Minneapolis, Minnesota. Association for Computa- tional Linguistics. Hila Gonen, Yova Kementchedjhieva, and Yoav Gold- berg. 2019. How does grammatical gender affect noun representations in gender-marking languages? arXiv preprint arXiv:1910.14161. Nizar Habash, Houda Bouamor, and Christine Chung. 2019. Automatic gender identification and reinflec- tion in arabic. In Proceedings of the First Workshop on Gender Bias in Natural Language Processing, pages 155–165. Reyhaneh Hashempour. 2019. A deep learning ap- proach to language-independent gender prediction on twitter. In Proceedings of the 2019 Workshop on Widening NLP. He He, Sheng Zha, and Haohan Wang. 2019. Unlearn dataset bias in natural language inference by fitting the residual. arXiv preprint arXiv:1908.10763. Peter Henderson, Koustuv Sinha, Nicolas Angelard- Gontier, Nan Rosemary Ke, Genevieve Fried, Ryan Lowe, and Joelle Pineau. 2018. Ethical challenges in data-driven dialogue systems. In Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society, AIES 2018, New Orleans, LA, USA, Febru- ary 02-03, 2018, pages 123–129. Alexander Miserlis Hoyle, Lawrence Wolf-Sonkin, Hanna Wallach, Isabelle Augenstein, and Ryan Cot- terell. 2019. Unsupervised discovery of gendered language through latent-variable modeling. In Pro- ceedings of the 57th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 1706– 1716, Florence, Italy. Association for Computa- tional Linguistics. Samuel Humeau, Kurt Shuster, Marie-Anne Lachaux, and Jason Weston. 2019. Real-time inference in multi-sentence tasks with deep pretrained transform- ers. arXiv preprint arXiv:1905.01969. Masahiro Kaneko and Danushka Bollegala. 2019. Gender-preserving debiasing for pre-trained word embeddings. arXiv preprint arXiv:1906.00742. Dongyeop Kang, Varun Gangal, and Eduard Hovy. 2019. (male, bachelor) and (female, Ph.D) have different connotations: Parallelly annotated stylis- tic language dataset with multiple personas. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Lan- guage Processing (EMNLP-IJCNLP), pages 1696– 1706, Hong Kong, China. Association for Computa- tional Linguistics. Yuta Kikuchi, Graham Neubig, Ryohei Sasano, Hiroya Takamura, and Manabu Okumura. 2016. Control- ling output length in neural encoder-decoders. arXiv preprint arXiv:1609.09552. Nayeon Lee, Yejin Bang, Jamin Shin, and Pascale Fung. 2019a. Understanding the shades of sexism In Proceedings of the 2019 in popular TV series. Workshop on Widening NLP. Nayeon Lee, Andrea Madotto, and Pascale Fung. 2019b. 
Exploring social bias in chatbots using stereotype knowledge. In Proceedings of the 2019 Workshop on Widening NLP, pages 177–180. Haley Lepp. 2019. Pardon the interruption: Automatic analysis of gender and competitive turn-taking in In Proceed- united states supreme court hearings. ings of the 2019 Workshop on Widening NLP, pages 143–145, Florence, Italy. Association for Computa- tional Linguistics. Mike Lewis and Angela Fan. 2018. Generative ques- tion answering: Learning to answer the whole ques- tion. ICLR. Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, and Bill Dolan. 2015. A diversity-promoting objec- tive function for neural conversation models. arXiv preprint arXiv:1510.03055. Jiwei Li, Michel Galley, Chris Brockett, Georgios Sp- ithourakis, Jianfeng Gao, and Bill Dolan. 2016. A In Pro- persona-based neural conversation model. ceedings of the 54th Annual Meeting of the Associa- tion for Computational Linguistics (Volume 1: Long Papers), pages 994–1003, Berlin, Germany. Associ- ation for Computational Linguistics. Margaret Li, Jason Weston, and Stephen Roller. 2019. Acute-eval: Improved dialogue evaluation with opti- mized questions and multi-turn comparisons. arXiv preprint arXiv:1909.03087. Raymond Li, Samira Ebrahimi Kahou, Hannes Schulz, Vincent Michalski, Laurent Charlin, and Chris Pal. 2018. Towards deep conversational recommenda- tions. In Advances in Neural Information Process- ing Systems, pages 9748–9758. Paul Pu Liang, Irene Li, Emily Zheng, Yao Chong Lim, Ruslan Salakhutdinov, and Louis-Philippe Morency. 2019. Towards Debiasing Sentence representations. Haochen Liu, Jamell Dacon, Wenqi Fan, Hui Liu, Zi- tao Liu, and Jiliang Tang. 2019. Does gender mat- ter? Towards fairness in dialogue systems. CoRR, abs/1910.10486. Rowan Hall Maudslay, Hila Gonen, Ryan Cotterell, and Simone Teufel. 2019. It’s all in the name: Mit- igating gender bias with name-based counterfactual data substitution. CoRR, abs/1909.00871. Pierre-Emmanuel Mazar´e, Samuel Humeau, Martin Training arXiv Raison, and Antoine Bordes. 2018. millions of personalized dialogue agents. preprint arXiv:1809.01984. Jack Merullo, Luke Yeh, Abram Handler, Alvin Gris- som II, Brendan O’Connor, and Mohit Iyyer. 2019. Investigating sports commentator bias within a large In Pro- corpus of American football broadcasts. ceedings of the 2019 Conference on Empirical Meth- ods in Natural Language Processing and the 9th In- ternational Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 6354–6360, Hong Kong, China. Association for Computational Linguistics. A. H. Miller, W. Feng, A. Fisch, J. Lu, D. Batra, A. Bordes, D. Parikh, and J. Weston. 2017. Parlai: A dialog research software platform. arXiv preprint arXiv:1705.06476. Oluwatobi O Olabiyi, Anish Khazane, and Erik T Mueller. 2018. A persona-based multi-turn conver- sation model in an adversarial learning framework. In 2018 17th IEEE International Conference on Ma- chine Learning and Applications (ICMLA), pages 489–494. IEEE. Cathy O’Neil. 2016. Weapons of math destruction: How big data increases inequality and threatens democracy. Broadway Books. Shereen Oraby, Lena Reed, Shubhangi Tandon, TS Sharath, Stephanie Lukin, and Marilyn Walker. 2018. Controlling personality-based stylistic varia- tion with neural natural language generators. arXiv preprint arXiv:1805.08352. Jahna Otterbacher, Alessandro Checco, Gianluca De- martini, and Paul Clough. 2018. 
Investigating user perception of gender bias in image search: the role In The 41st International ACM SIGIR of sexism. Conference on Research & Development in Informa- tion Retrieval, pages 933–936. ACM. Ji Ho Park, Jamin Shin, and Pascale Fung. 2018. Re- ducing gender bias in abusive language detection. In Proceedings of the 2018 Conference on Em- pirical Methods in Natural Language Processing, pages 2799–2804, Brussels, Belgium. Association for Computational Linguistics. Yusu Qian. 2019. Gender stereotypes differ between In Proceedings of the male and female writings. 57th Annual Meeting of the Association for Com- putational Linguistics: Student Research Workshop, pages 48–53. Yusu Qian, Urwa Muaz, Ben Zhang, and Jae Won Hyun. 2019. Reducing gender bias in word-level language models with a gender-equalizing loss func- tion. arXiv preprint arXiv:1905.12801. Abigail See, Stephen Roller, Douwe Kiela, and Jason Weston. 2019. What makes a good conversation? how controllable attributes affect human judgments. arXiv preprint arXiv:1902.08654. Iulian Vlad Serban, Ryan Lowe, Laurent Charlin, and Joelle Pineau. 2016. Generative deep neural net- works for dialogue: A short review. arXiv preprint arXiv:1611.06216. Sima Sharifirad, Alon Jacovi, Israel Bar Ilan Univesity, and Stan Matwin. 2019. Learning and understand- ing different categories of sexism using convolu- tional neural networks filters. In Proceedings of the 2019 Workshop on Widening NLP, pages 21–23. Using attention-based bidirectional lstm to identify differ- ent categories of offensive language directed toward female celebrities. In Proceedings of the 2019 Work- shop on Widening NLP, pages 46–48. Emily Sheng, Kai-Wei Chang, Premkumar Natarajan, and Nanyun Peng. 2019. The woman worked as a babysitter: On biases in language generation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Lan- guage Processing (EMNLP-IJCNLP), pages 3398– 3403, Hong Kong, China. Association for Computa- tional Linguistics. Kurt Shuster, Samuel Humeau, Antoine Bordes, and Jason Weston. 2018. Engaging image chat: Model- ing personality in grounded dialogue. arXiv preprint arXiv:1811.00945. Gabriel Stanovsky, Noah A Smith, and Luke Zettle- moyer. 2019. Evaluating gender bias in machine translation. arXiv preprint arXiv:1906.00591. Pierre Stock and Moustapha Cisse. 2017. Convnets and imagenet beyond accuracy: Explanations, bias detection, adversarial examples and model criticism. Tony Sun, Andrew Gaut, Shirlyn Tang, Yuxin Huang, Mai ElSherief, Jieyu Zhao, Diba Mirza, Eliza- beth Belding, Kai-Wei Chang, and William Yang Wang. 2019. Mitigating gender bias in natural lan- guage processing: Literature review. arXiv preprint arXiv:1906.08976. Jack Urbanek, Angela Fan, Siddharth Karamcheti, Saachi Jain, Samuel Humeau, Emily Dinan, Tim Rockt¨aschel, Douwe Kiela, Arthur Szlam, and Ja- son Weston. 2019. Learning to speak and act in In Proceedings a fantasy text adventure game. of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th Interna- tional Joint Conference on Natural Language Pro- cessing (EMNLP-IJCNLP), pages 673–683, Hong Kong, China. Association for Computational Lin- guistics. Adina Williams. 2018. Representing Relationality: MEG Studies on Argument Structure. Ph.D. thesis, New York University. Adina Williams, Damian Blasi, Lawrence Wolf- Sonkin, Hanna Wallach, and Ryan Cotterell. 2019. Quantifying the semantic core of gender systems. 
In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Lan- guage Processing (EMNLP-IJCNLP), pages 5733– 5738, Hong Kong, China. Association for Computa- tional Linguistics. Yinfei Yang, Steve Yuan, Daniel Cer, Sheng-yi Kong, Noah Constant, Petr Pilar, Heming Ge, Yun-Hsuan Sung, Brian Strope, and Ray Kurzweil. 2018. Learning semantic textual similarity from conversa- tions. arXiv preprint arXiv:1804.07754. Saizheng Zhang, Emily Dinan, Jack Urbanek, Arthur Szlam, Douwe Kiela, and Jason Weston. 2018. Per- sonalizing dialogue agents: I have a dog, do you have pets too? In Proceedings of the 56th Annual Meeting of the Association for Computational Lin- guistics, pages 2204–2213, Melbourne, Australia. Association for Computational Linguistics. Jieyu Zhao, Tianlu Wang, Mark Yatskar, Ryan Cot- terell, Vicente Ordonez, and Kai-Wei Chang. 2019. Gender bias in contextualized word embeddings. In NAACL (short). Jieyu Zhao, Tianlu Wang, Mark Yatskar, Vicente Ordonez, and Kai-Wei Chang. 2017. Men also like shopping: Reducing gender bias amplifica- tion using corpus-level constraints. arXiv preprint arXiv:1707.09457. Jieyu Zhao, Tianlu Wang, Mark Yatskar, Vicente Or- donez, and Kai-Wei Chang. 2018a. Gender bias in coreference resolution: Evaluation and debias- In Proceedings of the 2018 Confer- ing methods. ence of the North American Chapter of the Associ- ation for Computational Linguistics: Human Lan- guage Technologies, NAACL-HLT, New Orleans, Louisiana, USA, June 1-6, 2018, Volume 2 (Short Papers), pages 15–20. Jieyu Zhao, Yichao Zhou, Zeyu Li, Wei Wang, and Kai- Wei Chang. 2018b. Learning gender-neutral word In Proceedings of the 2018 Confer- embeddings. ence on Empirical Methods in Natural Language Processing, Brussels, Belgium, October 31 - Novem- ber 4, 2018, pages 4847–4853. Pei Zhou, Weijia Shi, Jieyu Zhao, Kuan-Hao Huang, Muhao Chen, Ryan Cotterell, and Kai-Wei Chang. 2019. Examining gender bias in languages with the grammatical gender. 2019 Conference on Empirical Methods in Natu- ral Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5275–5283, Hong Kong, China. Association for Computational Linguistics. Ran Zmigrod, Sebastian J. Mielke, Hanna Wallach, and Ryan Cotterell. 2019. Counterfactual data augmen- tation for mitigating gender stereotypes in languages In Proceedings of the 57th with rich morphology. Annual Meeting of the Association for Computa- tional Linguistics, pages 1651–1661, Florence, Italy. Association for Computational Linguistics. Data Split: F0M0 F0M+ F+M0 F+M+ All Model % gend. % male bias words F1 % gend. % male bias score words F1 % gend. % male bias score words F1 % gend. % male bias score words F1 score score F1 Gold Lbl Baseline ConvAI2 FT Reddit Base 0 2.37 0.79 2.18 - 88.39 11.24 7.78 71.09 9.93 73.68 0 4.11 3.66 1.1 3.03 - 90.26 11.77 78.31 7.94 81.78 11.54 100 4.03 2.44 1.35 2.81 - 77.99 11.54 51.6 8.75 52.99 10.99 0 6.67 3.05 1.97 3.94 - 50.71 80.05 11.43 11.42 67.23 7.95 8.99 63.16 12.61 10.57 - CDA Pos. 
Data Bias Ctrl ALL 0.88 2.76 0.14 0.14 71.03 11.63 82.44 10.46 68.75 10.72 64.19 11.72 1.38 3.68 5.83 6.59 68.57 11.7 86.43 10.07 98.08 13.01 97.94 12.77 1.2 4.59 4.8 5.84 56.18 11.43 72.1 10.07 2.69 10.84 7.13 11.28 1.17 4.43 4.05 8.81 58.01 11.12 11.62 9.88 10.44 45.86 11.35 11.38 50.94 12.22 11.99 86.5 Table 9: We compare the performance of various bias mitigation methods—Counterfactual Data Augmentation (CDA), Positive-Bias Data Collection (Pos. Data), Bias Control Model (Bias Ctrl), and combining these methods (ALL)—on the test set, splitting the test set across the four genderedness bins: F0/+M0/+. X0 indicates there are no X-gendered words in the gold response, while X+ indicates that there is at least one. We measure the percent of gendered words in the generated utterances (% gend. words) and the percent of male bias (% male bias), i.e. the percent of male-gendered words among all gendered words generated. While each of these methods yield some improvement, combining all of these methods in one yields the best control over the genderedness of the utterances while improving the F1-score. Data Split: F0M0 F0M+ F+M0 F+M+ All Model % gend. % male bias words F1 % gend. % male bias score words F1 % gend. % male bias score words F1 % gend. % male bias score words F1 score score F1 Gold Lbl Baseline 0 2.37 - 88.39 11.24 0 4.11 3.66 - 90.26 11.77 100 4.03 2.44 - 77.99 11.54 0 6.67 3.05 50.71 - 80.05 11.43 11.42 - ALL F0M0 ALL F0M+ ALL F+M0 ALL F+M+ 0.14 6.47 4.77 9.53 64.19 11.72 9.58 97.97 11.66 10.27 8.89 53.34 0.24 6.59 5.12 9.6 80.11 11.51 97.94 12.77 15.84 10.94 55.35 11.19 0.22 7.22 5.84 9.42 25.0 11.63 10.0 96.33 7.13 11.28 10.5 48.65 0.23 6.27 5.03 8.81 81.58 10.72 11.61 10.6 97.52 12.21 13.64 11.23 10.57 9.79 50.94 12.22 Table 10: Performance of the ALL debiasing model controlled by indicating specific bins for all examples at test time. We report results for each possible conditioning bin choice. Across bins, the model maintains performance as measured by F1 whilst radically changing the genderedness of the language generated. son: I am spoiled and rich. I enjoy running in the castle. I like hide and seek. men: I am an average man in the village. I do what ever work that my King requires me to do. At night, I spend my time in the local pub with my fellow men. farmer Bob: I was born in a poor village. I eat what we grow. I love being close to the earth. father: I am a role model for my children. I provide for the family with meat and I keep a roof over their heads. I am stability to the family, and keep things together and provide safety to my children. husband: I try to be good to my wife. I want to provide for my family. I try to be strong. Table 11: Examples of male gender biased personas written for gendered characters in the LIGHT dataset.
{ "id": "1909.03087" }
1911.03681
E-BERT: Efficient-Yet-Effective Entity Embeddings for BERT
We present a novel way of injecting factual knowledge about entities into the pretrained BERT model (Devlin et al., 2019): We align Wikipedia2Vec entity vectors (Yamada et al., 2016) with BERT's native wordpiece vector space and use the aligned entity vectors as if they were wordpiece vectors. The resulting entity-enhanced version of BERT (called E-BERT) is similar in spirit to ERNIE (Zhang et al., 2019) and KnowBert (Peters et al., 2019), but it requires no expensive further pretraining of the BERT encoder. We evaluate E-BERT on unsupervised question answering (QA), supervised relation classification (RC) and entity linking (EL). On all three tasks, E-BERT outperforms BERT and other baselines. We also show quantitatively that the original BERT model is overly reliant on the surface form of entity names (e.g., guessing that someone with an Italian-sounding name speaks Italian), and that E-BERT mitigates this problem.
http://arxiv.org/pdf/1911.03681
Nina Poerner, Ulli Waltinger, Hinrich Schütze
cs.CL
null
null
cs.CL
20191109
20200501
# E-BERT: Efficient-Yet-Effective Entity Embeddings for BERT

Nina Poerner∗† and Ulli Waltinger† and Hinrich Schütze∗
∗Center for Information and Language Processing, LMU Munich, Germany
†Corporate Technology Machine Intelligence (MIC-DE), Siemens AG Munich, Germany
[email protected] | [email protected]

# Abstract

We present a novel way of injecting factual knowledge about entities into the pretrained BERT model (Devlin et al., 2019): We align Wikipedia2Vec entity vectors (Yamada et al., 2016) with BERT's native wordpiece vector space and use the aligned entity vectors as if they were wordpiece vectors. The resulting entity-enhanced version of BERT (called E-BERT) is similar in spirit to ERNIE (Zhang et al., 2019) and KnowBert (Peters et al., 2019), but it requires no expensive further pretraining of the BERT encoder. We evaluate E-BERT on unsupervised question answering (QA), supervised relation classification (RC) and entity linking (EL). On all three tasks, E-BERT outperforms BERT and other baselines. We also show quantitatively that the original BERT model is overly reliant on the surface form of entity names (e.g., guessing that someone with an Italian-sounding name speaks Italian), and that E-BERT mitigates this problem.

# Introduction

BERT (Devlin et al., 2019) and its successors (e.g., Yang et al. (2019); Liu et al. (2019); Wang et al. (2019b)) continue to achieve state of the art performance on various NLP tasks. Recently, there has been interest in enhancing BERT with factual knowledge about entities (Zhang et al., 2019; Peters et al., 2019). To this end, we introduce E-BERT: We align Wikipedia2Vec entity vectors (Yamada et al., 2016) with BERT's wordpiece vector space (Section 3.1) and feed the aligned vectors into BERT as if they were wordpiece vectors (Section 3.2). Importantly, we do not make any changes to the BERT encoder itself, and we do no additional pretraining. This stands in contrast to previous entity-enhanced versions of BERT, such as ERNIE or KnowBert, which require additional encoder pretraining.

In Section 4, we evaluate our approach on LAMA (Petroni et al., 2019), a recent unsupervised QA benchmark for pretrained Language Models (LMs). We set a new state of the art on LAMA, with improvements over original BERT, ERNIE and KnowBert. We also find that the original BERT model is overly reliant on the surface form of entity names, e.g., it predicts that a person with an Italian-sounding name speaks Italian, regardless of whether this is factually correct. To quantify this effect, we create LAMA-UHN (UnHelpfulNames), a subset of LAMA where questions with overly helpful entity names were deleted (Section 4.4).

In Section 5, we show how to apply E-BERT to two entity-centric downstream tasks: relation classification (Section 5.1) and entity linking (Section 5.2). On the former task, we feed aligned entity vectors as inputs, on the latter, they serve as inputs and outputs. In both cases, E-BERT outperforms original BERT and other baselines.

# Summary of contributions.

• Introduction of E-BERT: Feeding entity vectors into BERT without additional encoder pretraining. (Section 3)
• Evaluation on the LAMA unsupervised QA benchmark: E-BERT outperforms BERT, ERNIE and KnowBert. (Section 4)
• LAMA-UHN: A harder version of the LAMA benchmark with less informative entity names.
(Section 4.4) • Evaluation on supervised relation classifica- tion (Section 5.1) and entity linking (Sec- tion 5.2). • Upon publication, we will release LAMA- UHN as well as E-BERTBASE and E- BERTLARGE.1 1https://github.com/anonymized # 2 Related work # 2.1 BERT BERT (Bidirectional Encoder Representations from Transformers) is a Transformer (Vaswani et al., 2017) that was pretrained as a masked LM (MLM) on unlabeled text. At its base, BERT seg- ments text into wordpieces from a vocabulary LWP. Wordpieces are embedded into real-valued vectors by a lookup function (denoted EBERT : LWP → RdBERT). The wordpiece vectors are combined with position and segment embeddings and then fed into a stack of Transformer layers (the encoder, denoted FBERT). During pretraining, some word- pieces are replaced by a special [MASK] token. The output of BERT is fed into a final feed-forward net (the MLM head, denoted FMLM), to predict the identity of the masked wordpieces. After pre- training, the MLM head is usually replaced by a task-specific layer, and the entire model is finetuned on supervised data. # 2.2 Entity-enhanced BERT This paper adds to recent work on entity-enhanced BERT models, most notably ERNIE (Zhang et al., 2019) and KnowBert (Peters et al., 2019). ERNIE and KnowBert are based on the design principle that BERT be adapted to entity vectors: They intro- duce new encoder layers to feed pretrained entity vectors into the Transformer, and they require addi- tional pretraining to integrate the new parameters. In contrast, E-BERT’s design principle is that en- tity vectors be adapted to BERT, which makes our approach more efficient (see Section 3.3). Two other knowledge-enhanced MLMs are KE- PLER (Wang et al., 2019c) and K-Adapter (Wang et al., 2020), which are based on Roberta (Liu et al., 2019) rather than BERT. Their factual knowledge does not stem from entity vectors – instead, they are trained in a multi-task setting on relation classi- fication and knowledge base completion. # 2.3 Wikipedia2Vec Wikipedia2Vec (Yamada et al., 2016) embeds words and entities (Wikipedia URLs) into a com- mon space. Given a vocabulary of words LWord and a vocabulary of entities LEnt, it learns a lookup embedding function EWikipedia : LWord ∪ LEnt → RdWikipedia. The Wikipedia2Vec loss has three com- ponents: (1) skipgram Word2Vec (Mikolov et al., 2013a) operating on LWord, (2) a graph loss op- erating on the Wikipedia hyperlink graph, whose vertices are LEnt and (3) a version of Word2Vec where words are predicted from entities. Loss (3) ensures that entities and words are embedded into the same space. # 2.4 Vector space alignment Our vector space alignment strategy is inspired by cross-lingual word vector alignment (e.g., Mikolov et al. (2013b); Smith et al. (2017)). A related method was recently applied by Wang et al. (2019a) to map cross-lingual word vectors into the multilin- gual BERT wordpiece vector space. # 2.5 Unsupervised QA QA has typically been tackled as a supervised prob- lem (e.g., Das et al. (2017); Sun et al. (2018)). Re- cently, there has been interest in using unsupervised LMs such as GPT-2 or BERT for this task (Radford et al., 2019; Petroni et al., 2019). Davison et al. (2019) mine unsupervised commonsense knowl- edge from BERT, and Jiang et al. (2019) show the importance of using good prompts for unsupervised QA. None of this prior work differentiates quantita- tively between factual knowledge of LMs and their ability to reason about the surface form of entity names. 
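Since the probing setup evaluated later reduces to filling a single [MASK] token with the pretrained MLM head described in Section 2.1, a short cloze-prediction sketch makes that pipeline concrete. It uses the HuggingFace transformers library (a recent version is assumed); the model name and top-k value are illustrative choices, and the restriction of predictions to the LAMA vocabulary is omitted for brevity.

```python
# Cloze-style probing of a pretrained masked LM (illustrative sketch only).
import torch
from transformers import BertForMaskedLM, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-cased")
model = BertForMaskedLM.from_pretrained("bert-base-cased")
model.eval()

question = "The native language of Jean Marais is [MASK] ."
inputs = tokenizer(question, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits          # (1, seq_len, vocab_size)

# Locate the [MASK] position and read off the top-5 wordpiece predictions.
mask_pos = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero(as_tuple=True)[0]
top_ids = logits[0, mask_pos].topk(5, dim=-1).indices[0]
print(tokenizer.convert_ids_to_tokens(top_ids.tolist()))
```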
# 3 E-BERT # 3.1 Aligning entity and wordpiece vectors Conceptually, we want to transform the vectors of the entity vector space EWikipedia[LEnt] in such a way that they look to BERT like vectors from its native wordpiece vector space EBERT[LWP]. We model the transformation as an unconstrained lin- ear mapping W ∈ RdBERT×dWikipedia. Since LWP does not contain any entities (i.e., LWP ∩ LEnt = {}), we fit the mapping on LWP ∩ LWord: ||WEWikipedia(x) − EBERT(x)||2 2 x∈LWP∩LWord Since Wikipedia2Vec embeds LWord and LEnt into the same space (see Section 2.3), W can be applied to LEnt as well. We define the E-BERT embedding function as: EE-BERT : LEnt → RdBERT EE-BERT(a) = WEWikipedia(a) # 3.2 Using aligned entity vectors We explore two strategies for feeding the aligned entity vectors into the BERT encoder: E-BERT-concat. E-BERT-concat combines en- tity IDs and wordpieces by string concatenation, with the slash symbol as separator (Schick and Sch¨utze, 2019). For example, the wordpiece- tokenized input The native language of Jean Mara ##is is [MASK] .2 # becomes The native language of Jean Marais / Jean Mara ##is is [MASK] . The entity ID (bold) is embedded by EE-BERT and all wordpieces (italics) are embedded by EBERT (see Figure 1). After the embedding operation, the sequence of vectors is combined with position and segment embeddings and fed into FBERT, just like any normal sequence of wordpiece vectors. E-BERT-concat is comparable to ERNIE or KnowBert, which also represent entities as a com- bination of surface form (wordpieces) and entity vectors. But in contrast to ERNIE and KnowBERT, we do not change or further pretrain the BERT encoder itself. E-BERT-replace. For ablation purposes, we de- fine another variant of E-BERT that substitutes the entity surface form with the entity vector. With E-BERT-replace, our example becomes: The native language of Jean Marais is [MASK] . # 3.3 Implementation re- We setting cent Wikipedia dump (2019-09-02), dWikipedia = dBERT. We ignore Wikipedia pages with fewer than 5 links (Wikipedia2Vec’s default), with the exception of entities needed for the downstream entity linking experiments (see Section 5.2). This results in an entity vocabulary of size |LEnt| = 2.7M.3 Computational cost. Training Wikipedia2Vec took us ∼6 hours on 32 CPUs, and the cost of fitting the linear transformation W is negligible. We did not require a GPU. For comparison, Know- Bert W+W was pretrained for 1.25M steps on up to four Titan RTX GPUs, and ERNIE took one epoch on the English Wikipedia. (ERNIE’s pretraining hardware was not disclosed, but it seems likely that a GPU was involved.) 2For readability, we omit the special tokens [CLS] and FBERT (BERT encoder) The native language of Jean Marais / Jean Mara ##is ... W (linear transformation from EWikipedia to EBERT fitted on LWP ∩ LWord) EE-BERT[LEnt] = WEWikipedia[LEnt] (aligned entity vector space) EBERT[LWP] (wordpiece vector space) EWikipedia[LWord] (word vector space) EWikipedia[LEnt] (entity vector space) BERT wordpiece layer Wikipedia2Vec Figure 1: Schematic depiction of E-BERT-concat. # 4 Unsupervised QA # 4.1 Data The LAMA (LAnguage Model Analysis) bench- mark (Petroni et al., 2019) probes for “factual and commonsense knowledge” of pretrained LMs. In this paper, we use LAMA-Google-RE and LAMA- T-REx (Elsahar et al., 2018), which are aimed at factual knowledge. Contrary to most previous work on QA, LAMA tests LMs without supervised fine- tuning. Petroni et al. 
(2019) claim that BERT’s per- formance on LAMA is comparable with a knowl- edge base (KB) automatically extracted from text, and speculate that BERT and similar models “might become a viable alternative” to such KBs. The LAMA task follows this schema: Given a KB triple (sub, rel, obj), the object is elicited with a relation-specific cloze-style question, e.g., (Jean Marais, native-language, French) be- comes: “The native language of Jean Marais is [MASK].”4 The model predicts a probability distri- bution over a limited vocabulary LLAMA ⊂ LWP to replace [MASK], which is evaluated against the surface form of the object (here: French). # 4.2 Baselines Our primary baselines are cased BERTBASE and 5 as evaluated in Petroni et al. (2019). BERTLARGE [SEP] from all examples. 3Due to the link threshold and some Wikidata-Wikipedia mismatches, we lack entity vectors for 6% of LAMA ques- tions and 10% of FewRel sentences (RC experiment, see Sec- tion 5.1). In these cases, we fall back onto using wordpieces only, i.e., onto standard BERT behavior. 4LAMA provides oracle entity IDs, however, they are not used by the BERT baseline. For a fair evaluation, we ignore them too and instead use the Wikidata query API (https:// query.wikidata.org) to infer entity IDs from surface forms. See Appendix for details. # 5https://github.com/huggingface/ transformers original BERT E-BERT- E-BERT- ERNIE Know- replace concat Bert Jean Marais Daniel Ceccaldi Orane Demazis Sylvia Lopez Annick Alane French Italian Albanian Spanish English French French French French French French French French Spanish French french french french spanish english french italian french spanish english Table 1: Native language (LAMA-T-REx:P103) of French-speaking actors according to different models. Model size is BASE. We also test ERNIE (Zhang et al., 2019)6 and KnowBert W+W (Peters et al., 2019),7 two entity-enhanced BERTBASE-type models.8 E-BERT, ERNIE and KnowBert have entity vocabularies of size 2.7M, 5M and 470K, respectively. As this might put KnowBert at a disadvantage, Table 3 also reports performance on the subset of questions whose gold subject is known to KnowBert. # 4.3 Evaluation measure We use the same evaluation measure as Petroni et al. (2019): For a given k, we count a question as 1 if the correct answer is among the top-k pre- dictions and as 0 otherwise. Petroni et al. (2019) call this measure Precision@k (P@k). Since this is not in line with the typical use of the term “preci- sion” in information retrieval (Manning et al., 2008, p. 161), we call the evaluation measure Hits@k. Like Petroni et al. (2019), we first average within relations and then across relations. # 4.4 LAMA-UHN Imagine a person who claims to know a lot of facts. During a quiz, you ask them about the native lan- guage of actor Jean Marais. They correctly answer “French.” For a moment you are impressed, until you realize that Jean is a typical French name. So you ask the same question about Daniel Ceccaldi (a French actor with an Italian-sounding name). This time, the person says “Italian.” If this quiz were a QA benchmark, the person would have achieved a respectable Hits@1 score of 50%. Yet, you doubt that they really knew the first answer. 6https://github.com/thunlp/ERNIE 7https://github.com/allenai/kb 8ERNIE and KnowBert are uncased models. We therefore lowercase all questions for them and restrict predictions to the intersection of their wordpiece vocabulary with lowercased LLAMA. 
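Looking back at Section 3.1, the mapping W is obtained by ordinary least squares over the shared vocabulary LWP ∩ LWord. A minimal NumPy sketch of that fit is given below; the two dictionary arguments are hypothetical in-memory lookup tables, not the released implementation.

```python
import numpy as np

def fit_alignment(wikipedia2vec_emb, bert_wordpiece_emb):
    """Fit W minimizing sum_x ||W E_Wikipedia(x) - E_BERT(x)||^2 over shared words.

    Both arguments are dicts mapping a token string to a 1-D numpy vector
    (hypothetical containers; the paper only specifies the objective).
    """
    shared = sorted(set(wikipedia2vec_emb) & set(bert_wordpiece_emb))
    X = np.stack([wikipedia2vec_emb[w] for w in shared])   # (n, d_wikipedia)
    Y = np.stack([bert_wordpiece_emb[w] for w in shared])  # (n, d_bert)
    # Solve X @ A ≈ Y in the least-squares sense; W = A.T then maps the
    # Wikipedia2Vec space into the BERT wordpiece space.
    A, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return A.T                                             # (d_bert, d_wikipedia)

def e_bert_embedding(entity_vec, W):
    """Aligned entity vector: E_E-BERT(a) = W E_Wikipedia(a)."""
    return W @ entity_vec
```

Because Wikipedia2Vec places words and entities in the same space, the fitted W can then be applied unchanged to every entity vector, which is how EE-BERT is defined in Section 3.1.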
As a result, ERNIE and KnowBert select answers from ∼18K candidates (instead of ∼21K), which should work in their favor. We verify that all lowercased answers appear in this vocabulary, i.e., ERNIE and KnowBert are in principle able to answer all questions correctly. Qualitative inspection of BERT’s answers to LAMA suggests that the model often behaves less like a KB and more like the person just described. In Table 1 for instance, BERT predicts native lan- guages that are plausible for people’s names, even when there is no factual basis for these predictions. This kind of name-based reasoning is a useful strat- egy for getting a high score on LAMA, as the cor- rect answer and the best name-based guess tend to coincide (e.g., people with Italian-sounding names frequently speak Italian). Hence, LAMA in its cur- rent form cannot differentiate whether a model is good at reasoning about (the surface form of) entity names, good at memorizing facts, or both. To quan- tify the effect, we create LAMA-UHN (UnHelpful Names), a subset of LAMA where overly helpful entity names are heuristically deleted: Heuristic 1 (string match filter). We first delete all KB triples (questions) where the correct answer (e.g., Apple) is a case-insensitive substring of the subject entity name (e.g., Apple Watch). This af- fects 12% of all triples, and up to 81% for individ- ual relations (see Table 2, top). Heuristic 2 (person name filter). Entity names can be revealing in ways that are more subtle than string matches. As illustrated by our Jean Marais example, a person’s name can be a useful prior for guessing their native language and by extension, their nationality, place of birth, etc. We therefore use cloze-style questions to elicit name associations inherent in BERT, and delete triples that correlate with them. The heuristic is best explained via an example. Consider again (Jean Marais, native-language, French). We whitespace-tokenize the subject’s surface form Jean Marais into Jean and Marais. If BERT considers either name to be a common French name, then a correct answer is insufficient evidence for factual knowledge about the entity Jean Marais. On the other hand, if neither Jean nor Marais are considered French, but a correct answer is given regardless, we consider it sufficient evidence of factual knowledge. We query BERT with “[X] is a common name in the following language: [MASK].” for [X] = Jean and [X] = Marais. (Depending on the rela- tion, we replace “language” with “city” or “coun- try”.) If French is among the top-3 answers for either question, we delete the original triple. We apply this heuristic to T-REx:P19 (place of birth), Heuristic Relation % deleted Example of a deleted question 1 string match filter T-REx:P176 (manufacturer) T-REx:P138 (named after) T-REx:P1001 (applies to jurisdiction) 81% 75% 73% Fiat Multipla is produced by [MASK:Fiat]. Christmas Island is named after [MASK:Christmas]. Australian Senate is a legal term in [MASK:Australia]. 2 person name filter T-REx:P1412 (language used) T-REx:P103 (native language) T-REx:P27 (nationality) 63% 58% 56% Fulvio Tomizza used to communicate in [MASK:Italian]. The native language of Tommy Nilsson is [MASK:Swedish]. Harumi Inoue is a [MASK:Japan] citizen. (1,1) (-,1) (1,-) Table 2: Statistics and examples of LAMA questions with helpful entity names, which were deleted from LAMA- UHN. We show the top-3 most strongly affected relations per heuristic. 
Numbers in brackets indicate which part(s) of the person name triggered the person name filter, e.g., (-,1) means that the correct answer was ranked first for the person’s last name, but was not in the top-3 for their first name. Model size BASE LARGE Dataset Model original BERT E-BERT- replace E-BERT- concat ERNIE Know- Bert original BERT E-BERT- replace E-BERT- concat K- Adapter All questions 0 (original LAMA) 1 (string match filter) 2 (LAMA-UHN) 29.2 22.3 20.2 29.1 29.2 28.2 36.2 32.6 31.1 30.4 25.5 24.7 31.7 25.6 24.6 30.6 24.6 23.0 28.5 28.6 27.8 34.2 30.8 29.5 27.6 - 21.7 Questions w/ KnowBert subject only 0 (original LAMA) 1 (string match filter) 2 (LAMA-UHN) 32.0 24.8 22.8 28.5 28.6 27.7 35.8 32.0 30.6 30.4 25.7 24.9 32.0 25.9 25.1 33.1 27.0 25.5 28.2 28.3 27.4 34.9 31.5 30.6 - - - Table 3: Mean Hits@1 on LAMA-Google-RE and LAMA-T-REx combined. 0: original LAMA dataset (Petroni et al., 2019), 1: after string match filter, 2: after string match filter and person name filter (LAMA-UHN). “Ques- tions w/ KnowBert subject only”: Evaluating on questions whose gold subject is in the KnowBert entity vocabulary. Results for K-Adapter are calculated from Wang et al. (2020, Table 5). See Appendix for individual relations. 0.50 0.25 0.00 -0.25 -0.50 Delta(Hits@1) Figure 2: Left y-axis (bars): delta in mean Hits@1 rel- ative to BERT on individual LAMA relations. Right y-axis (crosses): frequency of questions where the an- swer is a substring of the subject entity name (i.e., ques- tions that would be deleted by the string match filter). Model size: BASE. Due to space constraints, we only show relations with max absolute delta ≥ 0.075. — original LAMA 0.84 a original BERT E-BERT-replace E-BERT-concat ERNIE KnowBert Mean Hits@k 1 2 3 5 10 k 20 30 50 100 Figure 3: Mean Hits@k for different k. Model size: BASE. The x-axis is on a logarithmic scale. T-REx:P20 (place of death), T-REx:P27 (national- ity), T-REx:P103 (native language), T-REx:P1412 (language used), Google-RE:place-of-death and Google-RE:place-of-birth. See Table 2 (bottom) for examples and statistics. # 4.5 Results and discussion Table 3 shows mean Hits@1 on the original LAMA dataset (0), after applying the string match filter (1), and after applying both filters (2, LAMA-UHN). E-BERT-concatBASE sets a new state of the art on LAMA, with major gains over original BERT. To understand why, compare the performances of BERT and E-BERT-replace on LAMA-UHN: While BERT drops by about 8% between original LAMA and LAMA-UHN, E-BERT-replace drops by less than 1%. This suggests that BERT’s per- formance on original LAMA is partly due to the exploitation of helpful entity names, while that of E-BERT-replace is more strongly due to factual knowledge. Since E-BERT-concat has access to entity names and entity vectors, it can leverage and combine these complementary sources of informa- tion. Interestingly, ERNIE and KnowBert have access to both sources too, but they do not achieve the same performance as E-BERT-concat. For a more in-depth analysis, Figure 2 shows Delta(Hits@1) w.r.t. BERT (bars, left axis) on indi- vidual LAMA relations, along with the frequency of questions whose correct answer is a substring of the subject name (crosses, right axis). The losses of E-BERT-replace are almost exclusively on re- lations with a high frequency of “easy” substring answers, while its gains are on relations where such answers are rare. E-BERT-concat mitigates most of the losses of E-BERT-replace while keeping most of its gains. 
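To make the construction of LAMA-UHN and its scoring reproducible in outline, the following sketch implements Heuristic 1 (the string match filter) and the Hits@k measure of Section 4.3; Heuristic 2, which queries BERT itself for name associations, is omitted. The simple dictionary-based data structures are assumptions for illustration.

```python
def string_match_filter(triples):
    """Heuristic 1: drop questions whose answer is a case-insensitive
    substring of the subject entity name, e.g. (Apple Watch, Apple)."""
    return [t for t in triples
            if t["answer"].lower() not in t["subject"].lower()]

def hits_at_k(questions_by_relation, k):
    """Score 1 if the gold answer is among the top-k predictions; average
    within each relation first, then across relations."""
    per_relation = []
    for questions in questions_by_relation.values():
        scores = [1.0 if q["answer"] in q["ranked_predictions"][:k] else 0.0
                  for q in questions]
        per_relation.append(sum(scores) / len(scores))
    return sum(per_relation) / len(per_relation)
```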
Figure 3 shows that the gains of E-BERT-concat over BERT, KnowBert and ERNIE in terms of mean Hits@k are especially big for k > 1. This means that while E-BERT-concat is moderately bet- ter than the baselines at giving the correct answer, it is a lot better at “almost giving the correct an- swer”. Petroni et al. (2019) speculate that even when factual knowledge is not salient enough for a top-1 answer, it may still be useful when finetuning on a downstream task. # 5 Downstream tasks We now demonstrate how to use E-BERT on two downstream tasks: relation classification (RC) and entity linking (EL). In both experiments, we keep the embedding layer (EBERT and/or EE-BERT) fixed but finetune all other encoder parameters. We use the BERTBASE architecture throughout. # 5.1 Relation classification In relation classification (RC), a model learns to predict the directed relation of entities asub and aobj from text. For instance, given the sentence Taylor was later part of the ensemble cast in MGM ’s classic World War II drama “ Battleground ” ( 1949 ) . with surface forms Battleground and World War II referring to asub = Battleground (film) and aobj = Word War II, the model should predict the relation primary-topic-of-work. We have three ways of embedding this example: original BERT (wordpieces): [...] classic World War II drama “ Battle ##ground ” ( 1949 ) . E-BERT-concat: [...] classic World War II / World War II drama “ Battleground (film) / Battle ##ground ” ( 1949 ) . E-BERT-replace: [...] classic World War II drama “ Bat- tleground (film) ” ( 1949 ) . As before, entity IDs (bold) are embedded by EE-BERT and wordpieces (italics) by EBERT. Baselines: Wikipedia2Vec-BERT. To assess the impact of vector space alignment, we train two additional models (Wikipedia2Vec-BERT-concat and Wikipedia2Vec-BERT-replace) that feed non- aligned Wikipedia2Vec vectors directly into BERT (i.e., they use EWikipedia instead of EE-BERT to em- bed entity IDs). dev set test set P R F1 P R F1 original BERT E-BERT-concat E-BERT-replace 85.88 85.81 85.75 85.57 85.51 85.45 88.35 88.29 88.19 88.51 88.46 88.38 87.24 87.15 87.09 87.34 87.33 87.22 Wikipedia2Vec-BERT-concat Wikipedia2Vec-BERT-replace 85.96 85.71 85.69 85.94 85.93 85.84 77.25 77.11 77.07 77.63 77.52 77.45 ERNIE (Zhang et al., 2019) - - - 88.49 88.44 88.32 Table 4: RC macro precision, recall and F1 (%). Data. We evaluate on a preprocessed dataset from Zhang et al. (2019), which is a subset of the FewRel corpus (Sun et al., 2018) (see Appendix for details). We use the FewRel oracle entity IDs, which are also used by ERNIE. Our entity cover- age is lower than ERNIE’s (90% vs. 96%), which should put us at a disadvantage. See Appendix for details on data and preprocessing. Modeling and hyperparameters. We adopt the setup and hyperparameters of Zhang et al. (2019): We use the # and $ tokens to mark subject and object spans in the input, and we feed the last con- textualized vector of the [CLS] token into a ran- domly initialized softmax classifier. Like Zhang et al. (2019), we use the default AdamW optimizer (Loshchilov and Hutter, 2018) with a linear learn- ing rate scheduler (10% warmup) and a batch size of 32. We tune the number of training epochs and the peak learning rate on the same parameter ranges as Zhang et al. (2019). See Appendix for more de- tails and an expected maximum performance plot. Results and discussion. E-BERT-concat per- forms better than original BERT and slightly bet- ter than ERNIE (Table 4). 
Recall that ERNIE re- quired additional encoder pretraining to achieve this result. Interestingly, E-BERT-replace (which is entity-only) beats original BERT (which is surface- form-only), i.e., aligned entity vectors seem to be more useful than entity names for this task. The drop in F1 from E-BERT to Wikipedia2Vec-BERT shows the importance of vector space alignment. # 5.2 Entity linking Entity linking (EL) is the task of detecting entity spans in a text and linking them to the underlying entity ID. While there are recent advances in fully end-to-end EL (Broscheit, 2019), the task is typi- cally broken down into three steps: (1) detecting spans that are potential entity spans, (2) generat- ing sets of candidate entities for these spans, (3) selecting the correct candidate for each span. For steps (1) and (2), we use KnowBert’s candi- date generator (Peters et al., 2019), which is based on a precomputed span-entity co-occurrence ta- ble (Hoffart et al., 2011). Given an input sen- tence, the generator finds all spans that occur in the table, and annotates each with a set of can- didates A = {a1 . . . aN } and prior probabilities {p(a1) . . . p(aN )}. Note that the candidates and priors are span- but not context-specific, and that the generator may over-generate. For step (3), our model must therefore learn to (a) reject over- generated spans and (b) disambiguate candidates based on context. Modeling. Recall that BERT was pretrained as a masked LM (MLM). Given a wordpiece-tokenized input X with xi = [MASK], it has learned to pre- dict a probability distribution over LWP to fill the masked position: ∀w ∈ LWP p(w|X) ∝ exp(ew · FMLM(hi) + bw) where hi is the contextualized embedding of [MASK], bw is a learned bias and ew = EBERT(w). Since EE-BERT[LEnt] is aligned with EBERT[LWP], the pretrained MLM should have a good initializa- tion for predicting entities from context as well. Based on this intuition, our E-BERT-MLM model repurposes the MLM for the entity selection step. Given a wordpiece-tokenized span s1 . . . sTs with left context l1 . . . lTl, right context r1 . . . rTr , candidates A and priors p(a), we define: X = l1 . . . lTl [E-MASK] / s1 . . . sTs* r1 . . . rTr tokens in X except [E-MASK] are em- All [E-MASK] is embedded bedded by EBERT. a∈A EE-BERT(a), to inform the encoder as about its options for the current span. (See Table 5 for an ablation with the standard [MASK] token.) The output probability distribution for /E- MASK] is not defined over Lp but over AU {e}, where € stands for rejected spans (see below): # Va € AU {e} p(a|X) « exp(ea - Fim (h7,41) + ba) For alla € A, eg = Egperr(a) and bo = log(p(a)).° The null-entity ¢ has parameters ec, b¢ that are trained from scratch. °To understand why we set ba = log(p(a)), assume that the priors are implicitly generated as p(a) = exp(ba)/Z, with Z = oq exp(ba:). It follows that ba = log(p(a)) +log(Z). Since log(Z) is the same for all a’, and the softmax function is invariant to constant offsets, we can drop log(Z) from Eq. 2. (1) (2) > p(e|X) <------4 ec, be a p(Platt.(Florida)| X’) le~ (trainable params) x p(David Platt_(footballer) |X’) eze@) x sys «(candidate priors) FMLM (MLM head) Ay ry RNY F BERT (BERT encoder) 0 8 6 geese @ Tony-Adams and [E-MASK] / P #Â¥lat * are * both injured ... (footballer) A [senenn(4l] |,“ (aligned entity vectors of candidates) Figure 4: Schematic depiction of E-BERT-MLM. Blue: EBERT wordpiece vectors. Red: EE-BERT entity vec- tors. The candidates A and their priors p(a) are from the candidate generator. 
Assume that the entity Tony Adams (footballer) was decoded in a previous iteration (see “Iterative refinement”). AIDA-A (dev) AIDA-B (test) Micro Macro Micro Macro E-BERT-MLM w/o iterative refinement w/ standard [MASK] token Wikipedia2Vec-BERT-MLM Wikipedia2Vec-BERT-random 90.8 90.6 90.3 88.7 88.2 89.1 89.0 88.8 86.4 86.1 85.0 - - 80.6 80.5 84.2 - - 81.0 81.2 Kolitsas et al. (2018) Broscheit (2019) KnowBert (Peters et al., 2019) Chen et al. (2019)† 89.4 86.0 82.1 92.6 86.6 - - 93.6 82.4 79.3 73.7 87.5 82.6 - - 87.7 †Might Table 5: F1 (%) on AIDA after finetuning. not be comparable: Chen et al. (2019) evaluate on in- vocabulary entities only, without ensuring (or report- ing) the vocabulary’s coverage of the AIDA data. Finetuning. We finetune E-BERT-MLM on the training set to minimize )7/x 4) —log(p(@|X)), where (X, @) are pairs of potential spans and their gold entities. If X has no gold entity (if it was over-generated), then @ = ¢ 10 Iterative refinement. We found it useful to iter- atively refine predictions during inference, similar to techniques from non-autoregressive Machine Translation (Ghazvininejad et al., 2019). We start with a wordpiece-tokenized input, e.g.: Adams and P ##latt are both injured and will miss England ’s opening World Cup qualifier ... We make predictions for all potential spans that the candidate generator finds in the input. We gather all spans with argmax, [p(a|X)] # ¢, sort them by 1—p(e|X) and replace the top-k!! non-overlapping lf a A € Aa ¢ A, we remove the span from the training set. We do not do this at test time, i.e., we evaluate on all gold standard entities. ) − m, where 1 ≤ j ≤ J is the current iteration, m is the number of already decoded entities from spans with the predicted entity. Our previous ex- ample might be partially decoded as: Tony Adams (footballer) and P ##latt are both injured and will miss England ’s opening 1998 FIFA World Cup qualifier ... In the next iteration, decoded entities (bold) are represented by EE-BERT in the input, while non- decoded spans continue to be represented by EBERT (see Figure 4). We set the maximum num- ber of iterations to J = 3, as there were no im- provements beyond that point on the dev set. Baselines: Wikipedia2Vec-BERT. We train two additional baseline models that combine BERT and Wikipedia2Vec without vector space align- ment: Wikipedia2Vec-BERT-MLM: BERT and its pre- trained MLM head, finetuned to predict non- aligned Wikipedia2Vec vectors. In practice, this means replacing EE-BERT with EWikipedia in Eq. 2. Embedding the [E-MASK] token with non-aligned EWikipedia led to a drop in dev set micro F1, therefore we report this base- line with the standard [MASK] token. Wikipedia2Vec-BERT-random: Like Wikipe- dia2Vec-BERT-MLM, but the MLM head is replaced by a randomly initialized layer. Data. We train and evaluate on AIDA, a news dataset annotated with Wikipedia URLs (Hoffart et al., 2011). To ensure coverage of the necessary entities, we include all gold entities and all genera- tor candidates in the entity vocabulary LEnt, even if they fall under the Wikipedia2Vec link threshold (see Section 3.3). While this is based on the unreal- istic assumption that we know the contents of the test set in advance, it is necessary for comparability with Peters et al. (2019), Kolitsas et al. (2018) and Broscheit (2019), who also design their entity vo- cabulary around the data. See Appendix for more details on data and preprocessing. 
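Before turning to evaluation, the candidate-scoring rule of Eq. 2 can be summarized in a few lines: each candidate is scored by the dot product between its aligned entity vector and the MLM output at the [E-MASK] position, with the log prior as a fixed bias, while the null entity carries trainable parameters. The function below is a sketch with illustrative names, not the training code.

```python
import torch

def score_candidates(mlm_output_vec, cand_entity_vecs, cand_priors, e_null, b_null):
    """Sketch of Eq. 2 for one [E-MASK] position.

    mlm_output_vec   : F_MLM(h) at the masked position, shape (d,)
    cand_entity_vecs : aligned E-BERT vectors of the candidates, shape (N, d)
    cand_priors      : span-entity priors p(a) from the candidate generator, shape (N,)
    e_null, b_null   : trainable parameters of the null entity (rejected span)
    """
    logits_cand = cand_entity_vecs @ mlm_output_vec + torch.log(cand_priors)
    logit_null = (e_null @ mlm_output_vec + b_null).view(1)
    logits = torch.cat([logits_cand, logit_null])
    return torch.softmax(logits, dim=0)   # last entry corresponds to p(null | X)
```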
We evaluate strong match F1, i.e., a prediction must have the same start, end and entity (URL) as the gold stan- dard. URLs that redirect to the same Wikipedia page are considered equivalent. Hyperparameters. We train for 10 epochs with the AdamW optimizer (Loshchilov and Hutter, 2018) and a linear learning rate scheduler (10% warmup), and we select the best epoch on the dev set. We tune peak learning rate and batch size on the dev set (see Appendix). previous iterations, and n = |{X : argmax,[p(a|X)] 4 ¢}|. Training epoch 1 2 3 4 5 6 7 8 9 10 0.90 i 0.85 | al — E-BERT-MLM 0.80 — Wikipedia2 Vec-BERT-MLM —— Wikipedia2Vec-BERT-random Figure 5: AIDA dev set micro F1 after every epoch. P R E-BERT-MLM w/ standard [MASK] token Wikipedia2Vec-BERT-MLM Wikipedia2Vec-BERT-random 21.1 23.3 1.3 1.3 61.8 65.2 8.3 6.8 F1 31.5 34.3 2.3 2.2 Table 6: Unsupervised AIDA dev set micro precision / recall / F1 (%), before finetuning. Results are without iterative refinement. Results and discussion. Table 5 shows that E- BERT-MLM is competitive with previous work on AIDA. The aligned entity vectors play a key role in this performance, as they give the model a good initialization for predicting entities from con- text. When we remove this initialization by using non-aligned entity vectors (Wikipedia2Vec-BERT baselines), we get worse unsupervised performance (Table 6), slower convergence during finetuning (Figure 5), and a lower final F1 (Table 5). # 6 Conclusion We introduced E-BERT, an efficient yet effective way of injecting factual knowledge about entities into the BERT pretrained Language Model. We showed how to align Wikipedia2Vec entity vec- tors with BERT’s wordpiece vector space, and how to feed the aligned vectors into BERT as if they were wordpiece vectors. In doing so, we made no changes to the BERT encoder itself. This stands in contrast to other entity-enhanced versions of BERT, such as ERNIE or KnowBert, which add encoder layers and require expensive further pretraining. We set a new state of the art on LAMA, a recent unsupervised QA benchmark. Furthermore, we presented evidence that the original BERT model sometimes relies on the surface forms of entity names (rather than “true” factual knowledge) for this task. To quantify this effect, we introduced LAMA-UHN, a subset of LAMA where questions with helpful entity names are deleted. We also showed how to apply E-BERT to two supervised tasks: relation classification and entity linking. On both tasks, we achieve competitive results relative to BERT and other baselines. # References Investigating entity knowl- edge in bert with simple neural end-to-end entity In CoNLL, pages 677–685, Hong Kong, linking. China. Haotian Chen, Sahil Wadhwa, Xi David Li, and Andrej Zukov-Gregoric. 2019. YELM: End-to- end contextualized entity linking. arXiv preprint arXiv:1911.03834. Rajarshi Das, Manzil Zaheer, Siva Reddy, and Andrew McCallum. 2017. Question answering on knowl- edge bases and text using universal schema and memory networks. In ACL, pages 358–365, Vancou- ver, Canada. Joe Davison, Joshua Feldman, and Alexander M Rush. 2019. Commonsense knowledge mining from pre- In EMNLP-IJCNLP, pages 1173– trained models. 1178, Hong Kong, China. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In NAACL-HLT, pages 4171–4186, Min- neapolis, USA. Jesse Dodge, Suchin Gururangan, Dallas Card, Roy Schwartz, and Noah A Smith. 2019. 
Show your work: Improved reporting of experimental results. In EMNLP-IJCNLP, pages 2185–2194, Hong Kong, China. Hady Elsahar, Pavlos Vougiouklis, Arslen Remaci, Jonathon Hare, Fr´ed´erique Christophe Gravier, Laforest, and Elena Simperl. 2018. T-REx: A large scale alignment of natural language with knowledge base triples. In LREC, pages 3448–3452, Miyazaki, Japan. Marjan Ghazvininejad, Omer Levy, Yinhan Liu, and Luke Zettlemoyer. 2019. Mask-Predict: Parallel decoding of conditional masked language models. In EMNLP-IJCNLP, pages 6114–6123, Hong Kong, China. Johannes Hoffart, Mohamed Amir Yosef, Ilaria Bor- dino, Hagen F¨urstenau, Manfred Pinkal, Marc Span- iol, Bilyana Taneva, Stefan Thater, and Gerhard Weikum. 2011. Robust disambiguation of named en- tities in text. In EMNLP, pages 782–792, Edinburgh, UK. Zhengbao Jiang, Frank F. Xu, Jun Araki, and Graham Neubig. 2019. How can we know what language models know? arXiv preprint arXiv:1911.12543. and Thomas Hofmann. 2018. End-to-end neural entity In CoNLL, pages 519–529, Brussels, linking. Belgium. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Man- dar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized BERT pretraining ap- proach. arXiv preprint arXiv:1907.11692. Ilya Loshchilov and Frank Hutter. 2018. Fixing weight decay regularization in adam. Christopher D. Manning, Prabhakar Raghavan, and Hinrich Sch¨utze. 2008. Introduction to Information Retrieval. Cambridge University Press. Tomas Mikolov, Kai Chen, Greg Corrado, and Jef- frey Dean. 2013a. Efficient estimation of word arXiv preprint representations in vector space. arXiv:1301.3781. Tomas Mikolov, Quoc V Le, and Ilya Sutskever. 2013b. Exploiting similarities among languages for ma- chine translation. arXiv preprint arXiv:1309.4168. IV Logan, L Robert, Roy Schwartz, Vidur Joshi, Sameer Singh, and Noah A Smith. 2019. Knowledge enhanced con- In EMNLP-IJCNLP, textual word representations. Hong Kong, China. Fabio Petroni, Tim Rockt¨aschel, Patrick Lewis, Anton Bakhtin, Yuxiang Wu, Alexander H Miller, and Se- bastian Riedel. 2019. Language models as knowl- In EMNLP-IJCNLP, Hong Kong, edge bases? China. Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. OpenAI Blog, 1(8). Timo Schick and Hinrich Sch¨utze. 2019. BERTRAM: Improved word embeddings have big impact on contextualized model performance. arXiv preprint arXiv:1910.07181. Samuel L Smith, David HP Turban, Steven Hamblin, and Nils Y Hammerla. 2017. Offline bilingual word vectors, orthogonal transformations and the inverted softmax. In ICLR, Toulon, France. Haitian Sun, Bhuwan Dhingra, Manzil Zaheer, Kathryn Mazaitis, Ruslan Salakhutdinov, and William Cohen. 2018. Open domain question answering using early In EMNLP, fusion of knowledge bases and text. pages 4231–4242, Brussels, Belgium. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In NeurIPS, pages 5998–6008, Long Beach, USA. Hai Wang, Dian Yu, Kai Sun, Janshu Chen, and Dong Yu. 2019a. Improving pre-trained multilingual mod- In CoNLL, pages els with vocabulary expansion. 316–327, Hong Kong, China. Ruize Wang, Duyu Tang, Nan Duan, Zhongyu Wei, Xuanjing Huang, Cuihong Cao, Daxin Jiang, Ming Zhou, et al. 2020. K-Adapter: Infusing knowl- edge into pre-trained models with adapters. arXiv preprint arXiv:2002.01808. 
Wei Wang, Bin Bi, Ming Yan, Chen Wu, Zuyi Bao, Liwei Peng, and Luo Si. 2019b. StructBERT: Incorporating language structures into pre-training for deep language understanding. arXiv preprint arXiv:1908.04577. Xiaozhi Wang, Tianyu Gao, Zhaocheng Zhu, Zhiyuan Liu, Juanzi Li, and Jian Tang. 2019c. KEPLER: A unified model for knowledge embedding and pre- arXiv preprint trained language representation. arXiv:1911.06136. Ikuya Yamada, Hiroyuki Shindo, Hideaki Takeda, and Yoshiyasu Takefuji. 2016. Joint learning of the em- bedding of words and entities for named entity dis- ambiguation. In CoNLL, Berlin, Germany. Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Car- bonell, Ruslan Salakhutdinov, and Quoc V Le. 2019. XLNet: Generalized autoregressive pretraining for In NeurIPS, pages 5754– language understanding. 5764. Zhengyan Zhang, Xu Han, Zhiyuan Liu, Xin Jiang, Maosong Sun, and Qun Liu. 2019. ERNIE: En- hanced language representation with informative en- tities. In ACL, pages 1441–1451, Florence, Italy. # E-BERT: Efficient-Yet-Effective Entity Embeddings for BERT (Appendix) # Unsupervised QA (LAMA) # Data We downloaded the LAMA dataset from https:// dl.fbaipublicfiles.com/LAMA/data.zip. We use the LAMA-T-REx and LAMA-Google-RE re- lations, which are aimed at factual knowledge. Ta- ble 9 shows results on indiviual relations, as well as the number of questions per relation before and after applying the LAMA-UHN heuristics. # Preprocessing As mentioned in Section 4.1, we do not use LAMA’s oracle entity IDs. Instead, we map sur- face forms to entity IDs via the Wikidata query API (https://query.wikidata.org). For exam- ple, to look up Jean Marais: SELECT ?id ?str WHERE { ?id rdfs:label ?str . VALUES ?str { 'Jean Marais'@en } . FILTER((LANG(?str)) = 'en') . } If more than one Wikidata ID is returned, we select the lowest one. We then map Wikidata IDs to the corresponding Wikipedia URLs: # SELECT ?id ?wikiurl WHERE { VALUES ?id { wd:Q168359 } . ?wikiurl schema:about ?id . ?wikiurl schema:inLanguage 'en' . FILTER REGEX(str(?wikiurl), '.*en.wikipedia.org.*') . } # Relation classification # Data The RC dataset, which is a subset of the FewRel corpus, was compiled by Zhang et al. (2019). We downloaded it from https://cloud.tsinghua. edu.cn/f/32668247e4fd4f9789f2/. Table 7 shows dataset statistics. # Preprocessing The dataset contains sentences with annotated sub- ject and object entity mentions, their oracle entity IDs and their relation (which must be predicted). We use the BERT wordpiece tokenizer to tokenize the sentence and insert special wordpieces to mark the entity mentions: # for subjects and $ for ob- jects. Then, we insert the entity IDs. For example, an input to E-BERT-concat would look like this: [CLS] Taylor was later part of the ensemble cast in MGM ’s classic $ World War II / World War II $ drama “ # Battleground (film) / Battle ##ground # ” ( 1949 ) . [SEP] We use the oracle entity IDs of the dataset, which are also used by ERNIE (Zhang et al., 2019). # Hyperparameters We tune peak learning rate and number of epochs on the dev set (selection criterion: macro F1). We do a full search over the same hyperparameter space as Zhang et al. (2019): Learning rate: [2 · 10−5, 3 · 10−5, 5 · 10−5] Number of epochs: [3, 4, 5, 6, 7, 8, 9, 10] The best configuration for E-BERT-concat is marked in bold. Figure 6 shows expected maxi- mum performance as a function of the number of evaluated configurations (Dodge et al., 2019). 
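A short helper illustrates the relation classification preprocessing described above: the subject span is wrapped in # markers, the object span in $ markers, and each entity ID is concatenated with its surface form using the slash separator (wordpiece tokenization is left to the BERT tokenizer and omitted here). The function and its arguments are hypothetical, not part of the released code.

```python
def build_rc_input(tokens, subj_span, subj_entity_id, obj_span, obj_entity_id):
    """Build the E-BERT-concat input string for one RC example.

    `tokens` is a whitespace-tokenized sentence, `*_span` are (start, end)
    token indices, and `*_entity_id` are Wikipedia entity IDs; the spans are
    assumed not to overlap (illustrative assumptions).
    """
    def mark(span, entity_id, marker):
        surface = " ".join(tokens[span[0]:span[1]])
        # Entity ID first, then surface form, joined by " / " and wrapped in markers.
        return f"{marker} {entity_id} / {surface} {marker}"

    pieces = []
    for i, tok in enumerate(tokens):
        if i == subj_span[0]:
            pieces.append(mark(subj_span, subj_entity_id, "#"))
        elif i == obj_span[0]:
            pieces.append(mark(obj_span, obj_entity_id, "$"))
        if not (subj_span[0] <= i < subj_span[1] or obj_span[0] <= i < obj_span[1]):
            pieces.append(tok)
    return "[CLS] " + " ".join(pieces) + " [SEP]"
```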
# Entity linking (AIDA) # Data # We downloaded the AIDA dataset from: • https://allennlp.s3-us-west-2. amazonaws.com/knowbert/wiki_entity_ linking/aida_train.txt • https://allennlp.s3-us-west-2. amazonaws.com/knowbert/wiki_entity_ linking/aida_dev.txt • https://allennlp.s3-us-west-2. amazonaws.com/knowbert/wiki_entity_ linking/aida_test.txt # Preprocessing Each AIDA file contains documents with annotated entity spans (which must be predicted). The doc- uments are already whitespace tokenized, and we further tokenize words into wordpieces with the standard BERT tokenizer. If a document is too long (length > 512), we split it into smaller chunks by (a) finding the sentence boundary that is closest to the document midpoint, (b) splitting the doc- ument, and (c) repeating this process recursively until all chunks are short enough. Table 8 shows dataset statistics. Hyperparameters We tune batch size and peak learning rate on the AIDA dev set (selection criterion: strong match micro F1). We do a full search over the following hyperparameter space: Batch size: [16, 32, 64, 128] Learning rate: [2 · 10−5, 3 · 10−5, 5 · 10−5] The best configuration for E-BERT-MLM is marked in bold. Figure 7 shows expected maxi- mum performance as a function of the number of evaluated configurations (Dodge et al., 2019). # relations # unique entities 80 54648 train dev test # samples # samples per relation 8000 100 16000 200 16000 200 Table 7: Relation classification dataset statistics. # unique gold entities # unique candidate entities 5574 463663 train dev test # documents # documents (after chunking) # potential spans (candidate generator) # gold entities 946 1111 153103 18454 216 276 38012 4778 231 271 34936 4478 Table 8: Entity linking (AIDA) dataset statistics. — # original BERT —— # E-BERT-replace ++++ Wikipedia2Vec-BERT-replace —— E-BERT-concat == Wikipedia2Vec-BERT-concat 0.875 4 0.850 4 0.825 4 0.800 4 0.775 4 0.750 4 Expected maximum macro F1 0.725 4 0 5 10 15 20 25 Number of configurations Figure 6: Relation classification: Expected maximum macro F1 (dev set) as a function of the number of hy- perparameter configurations. # E-BERT-MLM —— — Wikipedia2Vec-BERT-MLM —— Wikipedia2Vec-BERT-random —— 0.905 4 0.900 4 0.895 4 0.890 4 0.885 4 0.880 4 Expected maximum micro F1 t t t t t t 2 4 6 8 10 12 Number of configurations Figure 7: Entity linking: Expected maximum micro F1 (dev set) as a function of the number of hyperparameter configurations. 
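The recursive chunking rule used in the AIDA preprocessing above (split at the sentence boundary closest to the document midpoint until every chunk fits the 512-wordpiece limit) can be written as a short function. The sketch assumes the document is given as a list of wordpiece-tokenized sentences; the treatment of a single over-long sentence is an assumption, since the paper does not specify that corner case.

```python
def chunk_document(sentences, max_len=512):
    """Recursively split a document (list of wordpiece-tokenized sentences)
    at the sentence boundary closest to its midpoint until every chunk
    contains at most `max_len` wordpieces."""
    total = sum(len(s) for s in sentences)
    if total <= max_len or len(sentences) == 1:
        return [sentences]

    # Token offset of each sentence boundary; pick the one nearest total / 2.
    offsets, running = [], 0
    for sent in sentences[:-1]:
        running += len(sent)
        offsets.append(running)
    split_idx = min(range(len(offsets)),
                    key=lambda i: abs(offsets[i] - total / 2)) + 1

    left, right = sentences[:split_idx], sentences[split_idx:]
    return chunk_document(left, max_len) + chunk_document(right, max_len)
```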
Model size: BASE LARGE Relation (dataset) Model original BERT E-BERT replace E-BERT- concat ERNIE Know- Bert original BERT E-BERT- replace E-BERT- concat T-REx:P17 T-REx:P17 T-REx:P17 (0, original LAMA) (1) (2, LAMA-UHN) 31.3 31.0 31.0 53.7 55.0 55.0 52.4 53.3 53.3 55.3 55.5 55.5 23.7 23.2 23.2 36.5 36.2 36.2 43.3 44.5 44.5 42.8 43.3 43.3 T-REx:P19 T-REx:P19 T-REx:P19 (0, original LAMA) (1) (2, LAMA-UHN) 21.1 20.6 9.8 26.4 26.5 20.3 28.1 27.5 18.7 28.7 28.2 19.4 23.3 22.9 12.2 22.2 21.8 11.7 24.6 24.5 18.1 25.3 24.8 15.5 T-REx:P20 T-REx:P20 T-REx:P20 (0, original LAMA) (1) (2, LAMA-UHN) 27.9 28.2 15.5 29.7 29.9 21.5 35.8 36.0 23.3 16.6 16.5 8.4 31.1 31.0 20.0 31.7 32.0 18.9 37.1 37.2 27.3 33.5 33.8 22.6 T-REx:P27 T-REx:P27 T-REx:P27 (0, original LAMA) (1) (2, LAMA-UHN) 0.0 0.0 0.0 0.0 0.0 0.0 0.1 0.1 0.2 0.0 0.0 0.0 0.1 0.1 0.1 0.0 0.0 0.0 0.0 0.0 0.0 0.1 0.1 0.2 T-REx:P30 T-REx:P30 T-REx:P30 (0, original LAMA) (1) (2, LAMA-UHN) 25.4 25.1 25.1 69.9 70.3 70.3 69.8 69.9 69.9 66.8 66.6 66.6 24.0 23.9 23.9 28.0 27.5 27.5 75.0 75.0 75.0 60.4 60.3 60.3 T-REx:P31 T-REx:P31 T-REx:P31 (0, original LAMA) (1) (2, LAMA-UHN) 36.7 21.1 21.1 25.5 28.4 28.4 46.9 35.8 35.8 43.7 30.3 30.3 18.7 12.4 12.4 30.2 16.3 16.3 12.3 9.9 9.9 16.1 9.8 9.8 T-REx:P36 T-REx:P36 T-REx:P36 (0, original LAMA) (1) (2, LAMA-UHN) 62.2 51.5 51.5 42.1 41.9 41.9 61.6 53.9 53.9 57.3 45.9 45.9 62.2 51.7 51.7 67.0 57.5 57.5 44.7 43.8 43.8 66.0 58.8 58.8 T-REx:P37 T-REx:P37 T-REx:P37 (0, original LAMA) (1) (2, LAMA-UHN) 54.6 52.9 52.9 51.2 51.6 51.6 56.5 55.5 55.5 60.2 59.4 59.4 53.1 51.9 51.9 61.5 60.5 60.5 54.3 54.2 54.2 62.7 62.1 62.1 T-REx:P39 T-REx:P39 T-REx:P39 (0, original LAMA) (1) (2, LAMA-UHN) 8.0 7.5 7.5 22.9 23.0 23.0 22.5 22.3 22.3 17.0 17.1 17.1 17.2 16.5 16.5 4.7 4.6 4.6 8.1 8.1 8.1 8.6 8.5 8.5 T-REx:P47 T-REx:P47 T-REx:P47 (0, original LAMA) (1) (2, LAMA-UHN) 13.7 13.6 13.6 8.9 9.1 9.1 10.8 10.7 10.7 9.8 9.6 9.6 14.0 13.9 13.9 18.2 18.6 18.6 15.1 15.2 15.2 15.9 15.9 15.9 T-REx:P101 T-REx:P101 T-REx:P101 (0, original LAMA) (1) (2, LAMA-UHN) 9.9 9.5 9.5 37.8 38.2 38.2 40.8 40.9 40.9 16.7 16.1 16.1 12.2 11.4 11.4 11.5 10.8 10.8 37.8 38.0 38.0 36.1 35.8 35.8 T-REx:P103 T-REx:P103 T-REx:P103 (0, original LAMA) (1) (2, LAMA-UHN) 72.2 72.1 45.8 85.8 85.7 81.9 86.8 86.8 74.7 85.5 85.4 83.6 73.4 73.3 72.2 78.2 78.2 58.6 84.4 84.4 81.2 84.9 84.9 71.1 T-REx:P106 T-REx:P106 T-REx:P106 (0, original LAMA) (1) (2, LAMA-UHN) 0.6 0.6 0.6 6.5 6.5 6.5 5.4 5.4 5.4 8.4 8.4 8.4 1.6 1.6 1.6 0.6 0.6 0.6 4.3 4.3 4.3 2.1 2.1 2.1 T-REx:P108 T-REx:P108 T-REx:P108 (0, original LAMA) (1) (2, LAMA-UHN) 6.8 6.5 6.5 9.9 9.9 9.9 23.2 23.0 23.0 14.1 13.9 13.9 10.7 10.5 10.5 1.6 1.3 1.3 11.7 11.8 11.8 15.9 16.0 16.0 T-REx:P127 T-REx:P127 T-REx:P127 (0, original LAMA) (1) (2, LAMA-UHN) 34.8 14.2 14.2 24.0 19.7 19.7 34.9 23.5 23.5 36.2 17.1 17.1 31.4 15.5 15.5 34.8 14.6 14.6 25.3 21.1 21.1 35.8 24.6 24.6 number of questions 930 885 885 944 933 728 953 944 656 966 945 423 975 963 963 922 564 564 703 534 534 966 924 924 892 878 878 922 904 904 696 685 685 977 975 415 958 958 958 383 382 382 687 451 451 Table 9: Mean Hits@1 and number of questions per LAMA relation. 0: original LAMA dataset, 1: after applying heuristic 1 (string match filter), 2: after applying both heuristics (LAMA-UHN). 
Model size: BASE LARGE Relation (dataset) Model original BERT E-BERT replace E-BERT- concat ERNIE Know- Bert original BERT E-BERT- replace E-BERT- concat T-REx:P131 T-REx:P131 T-REx:P131 (0, original LAMA) (1) (2, LAMA-UHN) 23.3 16.7 16.7 33.4 32.0 32.0 36.4 33.9 33.9 37.3 32.7 32.7 27.7 21.5 21.5 26.3 20.1 20.1 31.4 31.0 31.0 37.2 33.4 33.4 T-REx:P136 T-REx:P136 T-REx:P136 (0, original LAMA) (1) (2, LAMA-UHN) 0.8 0.2 0.2 5.2 5.1 5.1 9.1 8.7 8.7 0.6 0.2 0.2 0.6 0.1 0.1 1.3 0.2 0.2 6.9 6.9 6.9 13.1 12.2 12.2 T-REx:P138 T-REx:P138 T-REx:P138 (0, original LAMA) (1) (2, LAMA-UHN) 61.6 5.0 5.0 8.8 10.0 10.0 26.5 8.8 8.8 0.2 0.0 0.0 63.7 6.9 6.9 45.1 4.4 4.4 2.6 4.4 4.4 24.0 6.2 6.2 T-REx:P140 T-REx:P140 T-REx:P140 (0, original LAMA) (1) (2, LAMA-UHN) 0.6 0.4 0.4 0.6 0.6 0.6 1.1 0.9 0.9 0.0 0.0 0.0 0.8 0.6 0.6 0.6 0.4 0.4 1.1 0.9 0.9 0.6 0.4 0.4 T-REx:P159 T-REx:P159 T-REx:P159 (0, original LAMA) (1) (2, LAMA-UHN) 32.4 23.1 23.1 30.3 31.6 31.6 48.3 41.9 41.9 41.8 34.4 34.4 36.8 28.7 28.7 34.7 25.6 25.6 22.3 20.9 20.9 45.2 37.8 37.8 T-REx:P176 T-REx:P176 T-REx:P176 (0, original LAMA) (1) (2, LAMA-UHN) 85.6 31.4 31.4 41.6 42.9 42.9 74.6 51.8 51.8 81.8 26.2 26.2 90.0 51.3 51.3 87.5 40.8 40.8 36.6 44.5 44.5 81.3 57.1 57.1 T-REx:P178 T-REx:P178 T-REx:P178 (0, original LAMA) (1) (2, LAMA-UHN) 62.8 40.7 40.7 49.8 42.6 42.6 66.6 51.6 51.6 60.1 36.9 36.9 70.3 52.2 52.2 70.8 53.6 53.6 51.2 51.1 51.1 69.4 57.7 57.7 T-REx:P190 T-REx:P190 T-REx:P190 (0, original LAMA) (1) (2, LAMA-UHN) 2.4 1.5 1.5 2.9 2.4 2.4 2.5 1.6 1.6 2.6 1.6 1.6 2.8 2.0 2.0 2.3 1.7 1.7 2.3 1.9 1.9 2.8 2.3 2.3 T-REx:P264 T-REx:P264 T-REx:P264 (0, original LAMA) (1) (2, LAMA-UHN) 9.6 9.6 9.6 30.5 30.6 30.6 33.6 33.4 33.4 13.3 13.3 13.3 21.2 21.3 21.3 8.2 8.2 8.2 23.1 23.1 23.1 15.6 15.7 15.7 T-REx:P276 T-REx:P276 T-REx:P276 (0, original LAMA) (1) (2, LAMA-UHN) 41.5 19.8 19.8 23.8 26.1 26.1 47.7 31.7 31.7 48.4 27.0 27.0 43.3 20.6 20.6 43.8 23.4 23.4 23.1 25.0 25.0 51.8 36.0 36.0 T-REx:P279 T-REx:P279 T-REx:P279 (0, original LAMA) (1) (2, LAMA-UHN) 30.7 3.8 3.8 14.7 8.6 8.6 30.7 8.0 8.0 29.4 4.6 4.6 31.6 5.3 5.3 33.5 6.8 6.8 15.5 8.6 8.6 29.8 10.1 10.1 T-REx:P361 T-REx:P361 T-REx:P361 (0, original LAMA) (1) (2, LAMA-UHN) 23.6 12.6 12.6 19.6 17.9 17.9 23.0 17.7 17.7 25.8 13.7 13.7 26.6 15.3 15.3 27.4 18.5 18.5 22.3 20.2 20.2 25.4 22.0 22.0 T-REx:P364 T-REx:P364 T-REx:P364 (0, original LAMA) (1) (2, LAMA-UHN) 44.5 43.5 43.5 61.7 61.7 61.7 64.0 63.5 63.5 48.0 47.4 47.4 40.9 40.0 40.0 51.1 50.7 50.7 60.6 60.5 60.5 61.3 61.2 61.2 T-REx:P407 T-REx:P407 T-REx:P407 (0, original LAMA) (1) (2, LAMA-UHN) 59.2 57.6 57.6 68.0 69.5 69.5 68.8 67.9 67.9 53.8 53.1 53.1 60.1 58.6 58.6 62.1 61.0 61.0 57.9 59.0 59.0 56.3 55.2 55.2 T-REx:P413 T-REx:P413 T-REx:P413 (0, original LAMA) (1) (2, LAMA-UHN) 0.5 0.5 0.5 0.1 0.1 0.1 0.0 0.0 0.0 0.0 0.0 0.0 41.7 41.7 41.7 4.1 4.1 4.1 14.0 14.0 14.0 7.0 7.0 7.0 number of questions 881 706 706 931 913 913 645 160 160 473 467 467 967 843 843 982 191 191 592 366 366 995 981 981 429 428 428 959 625 625 963 474 474 932 633 633 856 841 841 877 834 834 952 952 952 Table 10: Mean Hits@1 and number of questions per LAMA relation (cont’d). 0: original LAMA dataset, 1: after applying heuristic 1 (string match filter), 2: after applying both heuristics (LAMA-UHN). 
Model size: BASE LARGE Relation (dataset Model original BERT E-BERT replace E-BERT- concat ERNIE Know- Bert original BERT E-BERT- replace E-BERT- concat T-REx:P449 T-REx:P449 T-REx:P449 (0, original LAMA) (1) (2, LAMA-UHN) 20.9 18.8 18.8 30.9 31.1 31.1 34.7 33.4 33.4 33.8 32.0 32.0 57.0 56.0 56.0 24.0 21.8 21.8 32.5 32.9 32.9 28.6 27.5 27.5 T-REx:P463 T-REx:P463 T-REx:P463 (0, original LAMA) (1) (2, LAMA-UHN) 67.1 67.1 67.1 61.8 61.8 61.8 68.9 68.9 68.9 43.1 43.1 43.1 35.6 35.6 35.6 61.3 61.3 61.3 52.0 52.0 52.0 66.7 66.7 66.7 T-REx:P495 T-REx:P495 T-REx:P495 (0, original LAMA) (1) (2, LAMA-UHN) 16.5 15.0 15.0 46.3 46.0 46.0 48.3 47.5 47.5 1.0 0.9 0.9 30.8 29.6 29.6 29.7 28.5 28.5 56.7 56.6 56.6 46.9 46.2 46.2 T-REx:P527 T-REx:P527 T-REx:P527 (0, original LAMA) (1) (2, LAMA-UHN) 11.1 5.7 5.7 7.4 7.6 7.6 11.9 8.7 8.7 5.4 0.5 0.5 12.9 3.0 3.0 10.5 4.2 4.2 8.9 8.7 8.7 12.9 6.3 6.3 T-REx:P530 T-REx:P530 T-REx:P530 (0, original LAMA) (1) (2, LAMA-UHN) 2.8 2.8 2.8 1.8 1.8 1.8 2.0 2.0 2.0 2.3 2.3 2.3 2.8 2.8 2.8 2.7 2.7 2.7 2.3 2.3 2.3 2.8 2.8 2.8 T-REx:P740 T-REx:P740 T-REx:P740 (0, original LAMA) (1) (2, LAMA-UHN) 7.6 5.9 5.9 10.5 10.3 10.3 14.7 13.5 13.5 0.0 0.0 0.0 10.4 9.0 9.0 6.0 5.2 5.2 13.1 12.7 12.7 10.4 9.5 9.5 T-REx:P937 T-REx:P937 T-REx:P937 (0, original LAMA) (1) (2, LAMA-UHN) 29.8 29.9 29.9 33.0 32.9 32.9 38.8 38.7 38.7 40.0 39.9 39.9 32.3 32.2 32.2 24.9 24.8 24.8 28.3 28.2 28.2 34.5 34.4 34.4 T-REx:P1001 T-REx:P1001 T-REx:P1001 (0, original LAMA) (1) (2, LAMA-UHN) 70.5 38.1 38.1 56.9 67.7 67.7 76.0 66.7 66.7 75.7 65.6 65.6 73.0 43.4 43.4 73.3 40.7 40.7 49.5 60.3 60.3 78.0 66.7 66.7 T-REx:P1303 T-REx:P1303 T-REx:P1303 (0, original LAMA) (1) (2, LAMA-UHN) 7.6 7.6 7.6 20.3 20.3 20.3 26.6 26.6 26.6 5.3 5.3 5.3 9.1 9.1 9.1 12.5 12.5 12.5 29.7 29.7 29.7 33.2 33.2 33.2 T-REx:P1376 T-REx:P1376 T-REx:P1376 (0, original LAMA) (1) (2, LAMA-UHN) 73.9 74.8 74.8 41.5 42.2 42.2 62.0 62.8 62.8 71.8 73.4 73.4 75.2 75.2 75.2 82.1 83.5 83.5 47.4 48.6 48.6 70.1 72.0 72.0 T-REx:P1412 T-REx:P1412 T-REx:P1412 (0, original LAMA) (1) (2, LAMA-UHN) 65.0 65.0 37.7 54.0 54.0 42.9 67.8 67.8 47.4 73.1 73.1 69.2 69.2 69.2 65.7 63.6 63.6 51.5 49.3 49.3 43.5 61.2 61.2 54.8 Google-RE:date of birth Google-RE:date of birth Google-RE:date of birth (0) (1) (2) 1.6 1.6 1.6 1.5 1.5 1.5 1.9 1.9 1.9 1.9 1.9 1.9 2.4 2.4 2.4 1.5 1.5 1.5 1.5 1.5 1.5 1.3 1.3 1.3 Google-RE:place of birth Google-RE:place of birth Google-RE:place of birth (0) (1) (2) 14.9 14.9 5.9 16.2 16.2 9.4 16.9 16.8 8.2 17.7 17.7 10.3 17.4 17.4 9.4 16.1 16.0 7.2 14.8 14.8 8.5 16.6 16.6 7.9 Google-RE:place of death Google-RE:place of death Google-RE:place of death (0) (1) (2) 13.1 13.1 6.6 12.8 12.8 7.5 14.9 14.9 7.8 6.4 6.4 2.0 13.4 13.4 7.5 14.0 14.0 7.6 17.0 17.0 11.8 14.9 14.9 8.9 number of questions 881 848 848 225 225 225 909 892 892 976 804 804 996 996 996 936 910 910 954 950 950 701 189 189 949 949 949 234 218 218 969 969 361 1825 1825 1825 2937 2934 2451 766 766 655 Table 11: Mean Hits@1 and number of questions per LAMA relation (cont’d). 0: original LAMA dataset, 1: after applying heuristic 1 (string match filter), 2: after applying both heuristics (LAMA-UHN).
{ "id": "1908.04577" }
1911.03588
MKD: a Multi-Task Knowledge Distillation Approach for Pretrained Language Models
Pretrained language models have led to significant performance gains in many NLP tasks. However, the intensive computing resources to train such models remain an issue. Knowledge distillation alleviates this problem by learning a light-weight student model. So far the distillation approaches are all task-specific. In this paper, we explore knowledge distillation under the multi-task learning setting. The student is jointly distilled across different tasks. It acquires more general representation capacity through multi-tasking distillation and can be further fine-tuned to improve the model in the target domain. Unlike other BERT distillation methods which specifically designed for Transformer-based architectures, we provide a general learning framework. Our approach is model agnostic and can be easily applied on different future teacher model architectures. We evaluate our approach on a Transformer-based and LSTM based student model. Compared to a strong, similarly LSTM-based approach, we achieve better quality under the same computational constraints. Compared to the present state of the art, we reach comparable results with much faster inference speed.
http://arxiv.org/pdf/1911.03588
Linqing Liu, Huan Wang, Jimmy Lin, Richard Socher, Caiming Xiong
cs.CL, cs.LG
null
null
cs.CL
20191109
20200430
0 2 0 2 r p A 0 3 ] L C . s c [ 2 v 8 8 5 3 0 . 1 1 9 1 : v i X r a # MKD: a Multi-Task Knowledge Distillation Approach for Pretrained Language Models Linqing Liu,∗1 Huan Wang,2 Jimmy Lin,1 Richard Socher,2 and Caiming Xiong2 1 David R. Cheriton School of Computer Science, University of Waterloo 2 Salesforce Research {linqing.liu, jimmylin}@uwaterloo.ca, {huan.wang, rsocher, cxiong}@salesforce.com # Abstract Pretrained language models have led to signif- icant performance gains in many NLP tasks. However, the intensive computing resources to train such models remain an issue. Knowledge distillation alleviates this problem by learning a light-weight student model. So far the dis- tillation approaches are all task-specific. In this paper, we explore knowledge distillation under the multi-task learning setting. The stu- dent is jointly distilled across different tasks. It acquires more general representation capac- ity through multi-tasking distillation and can be further fine-tuned to improve the model in the target domain. Unlike other BERT distilla- tion methods which specifically designed for Transformer-based architectures, we provide a general learning framework. Our approach is model agnostic and can be easily applied on different future teacher model architectures. We evaluate our approach on a Transformer- based and LSTM based student model. Com- pared to a strong, similarly LSTM-based ap- proach, we achieve better quality under the same computational constraints. Compared to the present state of the art, we reach compara- ble results with much faster inference speed. # Introduction Pretrained language models learn highly effective language representations from large-scale unla- beled data. A few prominent examples include ELMo (Peters et al., 2018), BERT (Devlin et al., 2019), RoBERTa (Liu et al., 2019c), and XL- Net (Yang et al., 2019), all of which have achieved state of the art in many natural language process- ing (NLP) tasks, such as natural language infer- ence, sentiment classification, and semantic textual similarity. However, such models use dozens, if not hundreds, of millions of parameters, invariably ∗All work was done while the first author was a research intern at Salesforce Research. Teacher 1|[)] Task 1 | Task 4 Teacher 2 ) Task 2 Shared }.~| Task 2 Student Teacher eo io) Layers | _ Teacher N|D)| Task N | Task N Figure 1: The left figure represents task-specific KD. The distillation process needs to be performed for each different task. The right figure represents our proposed multi-task KD. The student model consists of shared layers and task-specific layers. leading to resource-intensive inference. The con- sensus is that we need to cut down the model size and reduce the computational cost while maintain- ing comparable quality. One approach to address this problem is knowl- edge distillation (KD; Ba and Caruana, 2014; Hin- ton et al., 2015), where a large model functions as a teacher and transfers its knowledge to a small student model. Previous methods focus on task- specific KD, which transfers knowledge from a single-task teacher to its single-task student. Put it another way, the knowledge distillation process needs to be conducted all over again when perform- ing on a new NLP task. The inference speed of the large-scale teacher model remains the bottleneck for various downstream tasks distillation. Our goal is to find a distill-once-fits-many so- lution. 
In this paper, we explore the knowledge distillation method under the setting of multi-task learning (MTL; Caruana, 1997; Baxter, 2000). We propose to distill the student model from different tasks jointly. The overall framework is illustrated in Figure 1. The reason is twofold: first, the dis- tilled model learns a more universal language rep- resentation by leveraging cross-task data. Second, the student model achieves both comparable qual- ity and fast inference speed across multiple tasks. MTL is based on the idea (Maurer et al., 2016) that tasks are related by means of a common low dimen- sional representation. We also provide an intuitive explanation on why using shared structure could possibly help by assuming some connections over the conditional distribution of different tasks. We evaluate our approach on two different stu- dent model architectures. One uses three layers Transformers (Vaswani et al., 2017), since most of the KD works (Sun et al., 2019; Jiao et al., 2019) use Transformers as their students. Another is LSTM based network with bi-attention mechanism. Previously Tang et al. (2019) examine the represen- tation capacity of a simple, single-layer Bi-LSTM only, so we are interested in whether adding more previous effective modules, such as an attention mechanism, will further improve its effectiveness. It exemplifies that our approach is model agnostic, i.e., the choice of student model does not depend on the teacher model architecture; The teacher model can be easily switched to other powerful language models other than BERT. We further study several important problems in knowledge distillation, such as the choice of mod- ules in student model, the influence of different tokenization methods, and the influence of MTL in KD. We evaluate our approach on seven datasets across four different tasks. For LSTM based stu- dent, our approach keeps the advantage of infer- ence speed while maintaining comparable perfor- mances as those specifically designed for Trans- former methods. For our Transformer based stu- dent, it does provide a modest gain, and outper- forms other KD methods without using external training data. # 2 Related Work Language model pretraining. Given a sequence of tokens, pretrained language models encode each token as a general language representational embedding. A large body of literature has ex- plored this area. Traditional pretrained word rep- resentations (Turian et al., 2010) presume singu- lar word meanings and thus adapt poorly to mul- tiple contexts—for some notable examples, see word2vec (Mikolov et al., 2013), GloVe (Pen- nington et al., 2014), and FastText (Bojanowski et al., 2017). For more flexible word represen- tations, a few advancements exist: Neelakantan et al. (2015) learn multiple embeddings per word type; context2vec (Melamud et al., 2016) uses bidi- rectional LSTM to encode contexts around target words; CoVe (McCann et al., 2017) trains LSTM encoders on some machine translation datasets, showing that these encoders are well-transferable to other tasks. Prominently, ELMo (Peters et al., 2018) learns deep word representations using a bidi- rectional language model. It can be easily added to an existing model and boost performance across six challenging NLP tasks. Fine-tuning approaches are mostly employed in more recent work. They pretrain the language model on a large-scale unlabeled corpus and then fine-tune it with in-domain labeled data for a super- vised downstream task (Dai and Le, 2015; Howard and Ruder, 2018). 
BERT (Devlin et al., 2019), GPT (Radford et al., 2018) and GPT-2 (Radford et al.) are some of the prominent examples. Fol- lowing BERT, XLNet (Yang et al., 2019) proposes a generalized autoregressive pretraining method and RoBERTa (Liu et al., 2019c) optimizes BERT pretraining approach. These pretrained models are large in size and contain millions of parameters. We target the BERT model and aim to address this problem through knowledge distillation. Our ap- proach can be easily applied to other models as well. Knowledge distillation. Knowledge distillation (Ba and Caruana, 2014; Hinton et al., 2015) trans- fers knowledge from a large teacher model to a smaller student model. Since the distillation only matches the output distribution, the student model architecture can be completely different from that of the teacher model. There are already many ef- forts trying to distill BERT into smaller models. BERT-PKD (Sun et al., 2019) extracts knowledge not only from the last layer of the teacher, but also from previous layers. TinyBERT (Jiao et al., 2019) introduces a two-stage learning framework which performs transformer distillation at both pretrain- ing and task-specific stages. Zhao et al. (2019) train a student model with smaller vocabulary and lower hidden states dimensions. DistilBERT (Sanh et al., 2019) reduces the layers of BERT and uses this small version of BERT as its student model. All the aforementioned distillation methods are per- formed on a single task, specifically designed for the transformer-based teacher architecture, result- ing in poor generalizability to other type of mod- els. Our objective is to invent a general distillation framework, applicable to either transformer-based models or other architectures as well. Tang et al. (2019) distill BERT into a single-layer BiLSTM. In our paper, we hope to extract more knowledge from BERT through multi-task learning, while keeping the student model simple. Multi-task learning. Multi-task learning (MTL) has been successfully applied on different appli- cations (Collobert and Weston, 2008; Deng et al., 2013; Girshick, 2015). MTL helps the pretrained language models learn more generalized text rep- resentation by sharing the domain-specific infor- mation contained in each related task training sig- nal (Caruana, 1997). Liu et al. (2019b, 2015) pro- pose a multi-task deep neural network (MT-DNN) for learning representations across multiple tasks. (Clark et al., 2019) propose to use knowledge distil- lation so that single task models can teach a multi- task model. Liu et al. (2019a) train an ensemble of large DNNs and then distill their knowledge to a single DNN via multi-task learning to ensemble its teacher performance. # 3 Model Architecture In this section, we introduce the teacher model and student model for our distillation approach. We explored two different student architectures: a traditional bidirectional long short-term memory network (BiLSTM) with bi-attention mechanism in 3.2, and the popular Transformer in 3.3. # 3.1 Multi-Task Refined Teacher Model We argue that multi-task learning can leverage the regularization of different natural language under- standing tasks. Under this setting, language models can be more effective in learning universal lan- guage representations. To this end, we consider the bidirectional transformer language model (BERT; Devlin et al., 2019) as bottom shared text encoding layers, and fine-tune the task-specific top layers for each type of NLU task. 
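As an illustration of this layout, the following is a minimal PyTorch sketch of a shared encoder with task-specific heads. It is not the authors' implementation: `MultiTaskModel`, `ToyEncoder`, and the `task_specs` format are illustrative names introduced here, and in the paper the shared layers are the pretrained BERT-LARGE encoder rather than a toy embedding model.

```python
import torch
import torch.nn as nn


class MultiTaskModel(nn.Module):
    """Shared bottom encoder with one lightweight head per task."""

    def __init__(self, encoder: nn.Module, hidden: int, task_specs: dict):
        super().__init__()
        self.encoder = encoder  # bottom shared layers (pretrained BERT in the paper)
        self.heads = nn.ModuleDict()
        for task, spec in task_specs.items():
            if spec["type"] == "classification":
                # e.g. SST-2, MNLI: a linear layer producing class logits
                self.heads[task] = nn.Linear(hidden, spec["num_classes"])
            elif spec["type"] == "similarity":
                # e.g. STS-B: a single learnable weight vector, Sim(X1, X2) = w^T x
                self.heads[task] = nn.Linear(hidden, 1, bias=False)

    def forward(self, token_ids: torch.Tensor, task: str) -> torch.Tensor:
        x = self.encoder(token_ids)   # sentence(-pair) representation, shape (B, hidden)
        return self.heads[task](x)    # task-specific logits or similarity score


class ToyEncoder(nn.Module):
    """Stand-in for the pretrained shared layers: embedding + mean pooling."""

    def __init__(self, vocab: int = 30522, hidden: int = 128):
        super().__init__()
        self.emb = nn.Embedding(vocab, hidden)

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        return self.emb(token_ids).mean(dim=1)


specs = {
    "sst2": {"type": "classification", "num_classes": 2},
    "stsb": {"type": "similarity"},
}
model = MultiTaskModel(ToyEncoder(), hidden=128, task_specs=specs)
logits = model(torch.randint(0, 30522, (4, 16)), task="sst2")  # shape (4, 2)
```

Keeping the heads as separate modules while sharing the encoder is what allows a new task to be supported by attaching one more head.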
There are mainly two stages for the training procedure: pretraining the shared layer and multi-task refining. Shared layer pretraining. Following Devlin et al. (2019), the input token is first encoded as the the summation of its corresponding token embed- dings, segmentation embeddings and position em- beddings. The input embeddings are then mapped into contextual embeddings C through a multi- layer bidirectional transformer encoder. The pre- training of these shared layers use the cloze task and next sentence prediction task. We use the pre- trained BERTLARGE to initialize these shared lay- ers. Multi-task refining. The contextual embeddings C are then passed through the upper task-specific layers. Following Liu et al. (2019b), our cur- rent NLU training tasks on GLUE (Wang et al., 2018) can be classified into four categories: single- sentence classification (CoLA and SST-2), pairwise text classification (RTE, MNLI, WNLI, QQP, and MRPC), pairwise text similarity (STS-B), and rele- vance ranking (QNLI). Each category corresponds to its own output layer. Here we take the text similarity task as an ex- ample to demonstrate the implementation details. Following Devlin et al. (2019), we consider the contextual embedding of the special [CLS] token as the semantic representation of the input sentence pair (X1, X2). The similarity score can be pre- dicted by the similarity ranking layer: Sim(X1, X2) = Wgrge (1) where WST S is a task-specific learnable weight vector and x is the contextual embedding of the [CLS] token. In the multi-task refining stage, all the model pa- rameters, including bottom shared layers and task- specific layers, are updated through mini-batch stochastic gradient descent (Li et al., 2014). The training data are packed into mini-batches and each mini-batch only contains samples from one task. Running all the mini-batches in each epoch ap- proximately optimizes the sum all of all multi-task objectives. In each epoch, the model is updated according to the selected mini-batch and its task- specific objective. We still take the text similarity task as an example, where each pair of sentences is labeled with a real-value similarity score y. We use the mean-squared error loss as our objective function: ily — Sim(X1, X2)|[3 (2) For text classification task, we use the cross- entropy loss as the objective function. For rel- evance ranking task, we minimize the negative log likelihood of the positive examples (Liu et al., 2019b). We can also easily add other tasks by adding its own task-specific layer. # 3.2 LSTM-based Student Model We’re interested in exploring whether a simple ar- chitecture, such as LSTM, has enough represen- tation capacity to transfer knowledge from the Task (eter) (etme) (ete) sete ; as == ay Inwograte (X — — (y= 0) waa) om ESE = -a {ot tt i rr a (roa v1 (won) Figure 2: Architecture for the bi-attentive student neu- ral network. teacher model. We also incorporate bi-attention module since it’s widely used between pairs of sentences (Peters et al., 2018; Wang et al., 2018). And the inputs in our experiments are mostly two sentences. Our LSTM-based bi-attentive student model is depicted in Figure 2. For equation rep- resentations, the embedding vectors of input se- quences are denoted as wx and wy. For single- sentence input tasks, wy is the same as wx. ⊕ represents vectors concatenation. wx and wy are first converted into ˆwx and ˆwy through a feedforward network with ReLU activa- tion (Nair and Hinton, 2010) function. 
For each token in ˆwx and ˆwy, we then use a bi-directional LSTM encoder to compute its hidden states and stack them over time axis to form matrices X and Y separately. Next, we apply the biattention mechanism (Xiong et al., 2016; Seo et al., 2016) to compute the attention contexts A = XY of the input se- quences. The attention weight A, and A, is ex- tracted through a column-wise normalization for each sequence. The context vectors C, and Cy, for each token is computed as the multiplication of its corresponding representation and attention weight: A, = softmax(A) C, =A) X (3) Same as (McCann et al., 2017), we concatenate three different computations between original rep- resentations and context vector to reinforce their relationships. The concatenated vectors are then passed through one single-layer BiLSTM: Xy = BiLSTM([X © X — Cy ® X © Cy]) 4 Y, = BiLSTM([Y 6 Y —C, @Y © C4) o The pooling operations are then applied on the out- puts of BiLSTM. We use max, mean, and self- attentive pooling to extract features. These three pooled representations are then concatenated to get one context representation. We feed this context representation through a fully-connected layer to get final output. # 3.3 Transformer-based Student Model Most of the pre-trained language models, which can be employed as teachers, are built with Trans- formers. Transformer (Vaswani et al., 2017) now is an ubiquitous model architecture. It draws global dependencies between input and output entirely re- lying on an self-attention mechanism. Our student model uses three layers of Transformers. Same as BERT (Devlin et al., 2019), [CLS] is added in front of every input example, and [SEP] is added between two input sentences. We use the average [CLS] representation from each layer as the final output. # 4 Multi-Task Distillation The parameters of student models, introduced in Section 3.2 and 3.3, are shared across all tasks. Each task has its individual layer on top of it. We begin by describing the task-specific layers: for each task, the hidden representations are first fed to a fully connected layer with rectified linear units (ReLU), whose outputs are passed to another linear transformation to get logits z = W h. During multi- task training, the parameters from both the bottom student network and upper task-specific layers are jointly updated. Considering one text classification problem, de- noted as task t, a softmax layer will perform the following operations on the ith dimension of z to get the predicted probability for the ith class: . exp{zi} softmax(z}) = ———“" — (5) “ Viyexp{2j} According to Ba and Caruana (2014), training the student network on logits will make learning easier. There might be information loss from trans- ferring logits into probability space, so it follows that the teacher model’s logits provides more in- formation about the internal model behaviour than its predicted one-hot labels. Then, our distillation objective is to minimize the mean-squared error (MSE) between the student network logits zt and the teacher’s logits z4,: 2 Luise = \l2r — 2512 (6) The training samples are selected from each dataset and packed into task-specific batches. For task t, we denote the current selected batch as bt. For each epoch, the model running through all the batches equals to attending over all the tasks: Ldistill = L1 distill + L2 distill + ... + Lt distill (7) During training, the teacher model first uses the pretrained BERT model (Devlin et al., 2019) to initialize its parameters of shared layers. 
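Equations (6) and (7) amount to a per-batch mean-squared error between teacher and student logits, accumulated over task-specific batches. The sketch below shows one such epoch under the assumption that both models expose a `forward(token_ids, task)` interface like the one sketched earlier; the function name and batch format are illustrative rather than taken from the paper's code.

```python
import torch
import torch.nn.functional as F


def multitask_distillation_epoch(student, teacher, task_batches, optimizer):
    """One epoch of multi-task distillation (a sketch of Eqs. 6 and 7).

    `task_batches` is assumed to be a shuffled list of (task_name, token_ids)
    mini-batches, each containing samples from a single task. For every batch,
    the loss is the mean-squared error between student and teacher logits;
    iterating over all batches approximately optimizes the sum of the per-task
    distillation losses.
    """
    teacher.eval()
    student.train()
    for task, token_ids in task_batches:
        with torch.no_grad():
            z_teacher = teacher(token_ids, task)   # teacher logits, no gradient
        z_student = student(token_ids, task)       # student logits
        loss = F.mse_loss(z_student, z_teacher)    # L_distill for this batch
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```

The teacher model supplying `z_teacher` here is the multi-task refined BERT of Section 3.1.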
It then follows the multi-task refining procedure described in Section 3.1 to update both the bottom shared- layers and upper task-specific layers. For student model, the shared parameters are ran- domly initialized. During training, for each batch, the teacher model first predicts teacher logits. The student model then updates both the bottom shared layer and the upper task-specific layers according to the teacher logits. The complete procedure is summarized in Algorithm 1. Algorithm 1 Multi-task Distillation Initialize the shared layers with BERTLarge then multi-task refine the teacher model Randomly initialize the student model parame- ters Set the max number of epoch: epochmax // Pack the data for T Tasks into batches for t ← 1 to T do 1. Generate augmented data: taug 2. Pack the dataset t and taug into batch Dt end for // Train the student model for epoch ← 1 to epochmax do 1. Merge all datasets: D = D1 ∪ D2 ... ∪DT 2. Shuffle D for bt in D do 3. Predict logits zT from teacher model 4. Predict logits zS from student model 5. Compute loss Ldistill(θ) 6. Update student model: θ = θ − α∇θLdistill end for end for # 5 An Intuitive Explanation In this section we give an intuitive explanation on why using some shared structure during the multi- task training could possibly help. Suppose the sam- ples of the task T are independent and identically distributed xT , yT ∼ P T X Y , where xT , yT are the feature and labels of the samples in task T respec- tively. The joint density can be decomposed as pT (x, y) = pT (x)pT (y|x). During the discrimi- native learning process, one tries to estimate the conditional distribution pT (·|x). For different tasks, pT (·|X) could be very different. Indeed if there is no connections in pT (·|X) for different tasks, then it is hard to believe training on one task may help another. However if we assume some smoothness over pT (·|X), then some connections can be built across tasks. Without loss of generality, we investigate the case of two tasks. For task T; and T9, let’s assume there exist some common domain of representa- tions 1, and two functions: h™ (x), h™(a):¥ 6 H, such that pT1(·|x) = gT1 ◦ hT1(x), pT2(·|x) = gT2 ◦ hT2(x), (8) p(x) =g™ oh™(x), (9) Var1,22, ||h™ (a1) — h™(a2)|| < nei — wall, (10) where ge :H»4 XY? isa function that maps from the common domain H to the task labels Y” for task T’, o denotes function composition, and 77 is a smoothness constant. The Lipschitz-ish inequality (10) suggests the hidden representation hT1 on task T1 may help the estimation of hT2, since hT2(x2) will be close to hT1(x1) if x1 and x2 are close enough. This is implicitly captured if we use one common network to model both hT1 and hT2 since the neural network with ReLU activation is Lipschitz. # 6 Experimental Setup # 6.1 Datasets We conduct the experiments on seven most widely used datasets in the General Language Understand- ing Evaluation (GLUE) benchmark (Wang et al., 2018): one sentiment dataset SST-2 (Socher et al., 2013), two paraphrase identification datasets QQP 1 and MRPC (Dolan and Brockett, 2005), one text similarity dataset STS-B (Cer et al., 2017), and three natural language inference datasets MNLI (Williams et al., 2018), QNLI (Rajpurkar et al., 2016) and RTE. For the QNLI dataset, version 1 expired on January 30, 2019; the result is evaluated on QNLI version 2. 1https://www.quora.com/q/quoradata/First-Quora- Dataset-Release-Question-Pairs 6.2 We use the released MT-DNN model2 to initial- ize our teacher model. 
We further refine the model against the multi-task learning objective for 1 epoch with learning rate set to 5e-4. The per- formance of our refined MT-DNN is lower than reported results in Liu et al. (2019b). The LSTM based student model (MKD-LSTM) is initialized randomly. For multi-task distillation, We use the Adam optimizer (Kingma and Ba, 2014) with learning rates of 5e-4. The batch size is set to 128, and the maximum epoch is 16. We clip the gradient norm within 1 to avoid gradient exploding. The number of BiLSTM hidden units in student model are all set to 256. The output feature size of task-specific linear layers is 512. The Transformer- based student model (MKD-Transformer) consists of three layers of Transformers. Following the settings of BERT-PKD, it is initialized with the first three layers parameters from pre-trained BERT- base. We also fine-tune the multi-task distilled student model for each task. During fine-tuning, the param- eters of both shared layers and upper task-specific layers are updated. The learning rate is chosen from {1, 1.5, 5} × 10−5 according to the validation set loss on each task. Other parameters remain the same as above. For both teacher and student models, we use WordPiece embeddings (Wu et al., 2016) with a 30522 token vocabulary. Data augmentation. The training data for typi- cal natural language understanding tasks is usually very limited. Larger amounts of data are desirable for the teacher model to fully express its knowl- edge. Tang et al. (2019) proposes two methods for text data augmentation: masking and POS-guided word replacement. We employ the only first mask- ing technique which randomly replaces a word in the sentence with [MASK], because, as shown in both Tang et al. (2019) and our own experiments, POS-guided word replacement does not lead to consistent improvements in quality across most of the tasks. Following their strategies, for each word in a sentence, we perform masking with probability pmask = 0.1. We use the combination of original corpus and augmentation data in distillation pro- cedure. For smaller datasets STS-B, MRPC and RTE, the size of the augmented dataset is 40 times the sizes of the original corpus; 10 times for other larger datasets. 2https://github.com/namisan/mt-dnn # 6.3 Methods and Baselines Results on test data reported by the official GLUE evaluation server are summarized in Table 1. Each entry in the table is briefly introduced below: MTL-BERT. We use the multi-task refined BERT (described in Section 3.1) as our teacher model. We tried to replicate the results of the released MT- DNN (Liu et al., 2019b) model. OpenAI GPT. A generative pre-trained Transformer-based language model (Radford et al., 2018). In contrast to BERT, GPT is auto-regressive, only trained to encode uni-directional context. ELMo. Peters et al. (2018) learns word represen- tations from the concatenation of independently trained left-to-right and right-to-left LSTMs. We report the results of a BiLSTM-based model with bi-attention baseline (Wang et al., 2018) trained on top of ELMo. Distilled BiLSTM. Tang et al. (2019) distill BERT into a simple BiLSTM. They use different models for single and pair sentences tasks. BERT-PKD. The Patient-KD-Skip approach (Sun et al., 2019) which student model patiently learns from multiple intermediate layers of the teacher model. We use their student model consisting of three layers of Transformers. TinyBERT Jiao et al. 
(2019) propose a knowl- edge distillation method specially designed for transformer-based models. It requires a general dis- tillation step which is performed on a large-scale English Wikipedia (2,500 M words) corpus. BERTEXTREME. Zhao et al. (2019) aims to train a student model with smaller vocabulary and lower hidden state dimensions. Similar to BERT-PKD, they use the same training corpus to train BERT to perform KD. # 7 Result and Discussions The results of our model are listed as MKD-LSTM and MKD-Transformer in the tables. # 7.1 Model Quality Analysis Comparison with GPT / ELMo. Our model has better or comparable performance compared with ELMo and OpenAI GPT. MKD-LSTM has higher performance than ELMo over all seven datasets: notably 8.4 points for RTE, 8.6 points in Spear- man’s ρ for STS-B, 7.6 points in F-1 measure for QQP, and 0.6 to 5.6 points higher for other Model Size SST-2 MRPC STS-B QQP MNLI-m/mm QNLI RTE # Param Acc F1/Acc r/ρ F1/Acc Acc Acc Acc MTL-BERT (Teacher) OpenAI GPT ELMo 303.9M 94.7 116.5M 91.3 90.4 93.6M 84.7/79.7 82.3/75.7 84.4/78.0 84.0/83.3 82.0/80.0 74.2/72.3 72.3/89.6 70.3/88.5 63.1/84.3 85.9/85.7 82.1/81.4 74.1/74.5 90.5 - 79.8 77.7 56.0 58.9 Distilled BiLSTM BERT-PKD TinyBERT BERTEXTREME 1.59M 21.3M 5.0M 19.2M 91.6 87.5 92.6 88.4 82.7/75.6 80.7/72.5 86.4/81.2 84.9/78.5 79.6/78.2 - 81.2/79.9 - 68.5/88.4 68.1/87.8 71.3/89.2 - 72.5/72.4 76.7/76.3 82.5/81.8 78.2/77.7 - 84.7 87.7 - - 58.2 62.9 MKD-LSTM MKD-Transformer 10.2M 21.3M 91.0 90.1 85.4/79.7 86.2/79.8 80.9/80.9 81.5/81.5 70.7/88.6 71.1/89.4 78.6/78.4 79.2/78.5 85.4 83.5 67.3 67.0 Table 1: Results from the GLUE test server. The first group contains large-scale pretrained language models. The second group lists previous knowledge distillation methods for BERT. Our MKD results based on LSTM and Transformer student model architectures are listed in the last group. The number of parameters doesn’t include embedding layer. # Model SST-2 MRPC STS-B QQP MNLI-m/mm QNLI RTE 1 Biatt LSTM 2 Single Task Distilled Biatt LSTM 89.2 82.5/72.1 87.5 83.2/72.8 3 BiLSTMMTL 87.3 84.2/75.7 4 MKD-LSTM Word-level Tokenizer 85.8 80.4/69.9 12.24/11.33 81.1/86.5 84.6/88.4 81.6/87.0 71.1/79.3 20.2/20.0 71.6/72.6 72.2/72.6 73.0/73.7 74.7/75.0 70.2/71.3 69.4/70.9 80.3 53.1 82.0 52.0 75.4 56.3 75.1 54.9 5 MKD-LSTM 89.3 86.8/81.1 84.5/84.5 85.2/89.0 78.4/79.2 83.0 67.9 Table 2: Ablation studies on GLUE dev set of different training procedures. All models are not fine-tuned. Line 1 is our bi-attentive LSTM student model trained without distillation. Line 2 is our bi-attentive LSTM student distilled from single task. Line 3 is the Multi-task distilled BiLSTM. Line 4 is the Multi-task distilled model using word-level tokenizer. datasets. Compared with OpenAI GPT, MKD- LSTM is 11.3 points higher for RTE and 4 points higher for MRPC. model is not always available due to data privacy. Under some conditions we can only access to the pretrained models and their approach are not appli- cable. Comparison with Distilled BiLSTM / BERT- PKD. While using the same Transformer layers and same amount of parameters, MKD-Tranformer significantly outperforms BERT-PKD by a range of 0.4 ∼ 9.1 points. MKD-LSTM leads to sig- nificant performance gains than BERT-PKD while using far less parameters, and compensate for the effectiveness loss of Distlled BiLSTM. While not resorting to external training data, our model has the best performance across the state- of-the-art KD baselines (i.e., BERT-PKD). 
It also achieves comparable performance compared to in- tensively trained KD methods (i.e, BERTEXTREME) on external large corpus. Comparison with TinyBERT / BERTEXTREME. These two approaches both use the large-scale un- supervised text corpus, same as the ones to train the teacher model, to execute their distillation process. However, we only use the data within downstream tasks. There are two caveats for their methods: (1) Due to massive training data, KD still requires in- tensive computing resources, e.g. BERTEXTREME takes 4 days on 32 TPU cores to train their stu- dent model. (2) The text corpus to train the teacher # 7.2 Ablation Study We conduct ablation studies to investigate the con- tributions of: (1) the different training procedures (in Table 2); (2) Different training tasks in multi- task distillation (in Table 3). We also compare the inference speed of our models and previous distillation approach. (in Table 4). The ablation studies are all conducted on LSTM-based student model since it has the advantage of model size and inference speed compared to Transformers. Model SST-2. MRPC _ STS-B QQP MNLI-m/mm QNLI RTE Sentiment Task V MKD-LSTM 89.9 81.4/70.8 51.2/49.9 84.9/88.3 74.3/74.7 83.2 50.9 PI Tasks v v MKD-LSTM 89.3 85.2/77.2 83.4/83.3 84.9/88.7 — 73.2/73.9 83.8 59.6 NLI Tasks v v v MKD-LSTM 90.4 87.9/82.1 84.1/84.1 84.8/88.4 — 77.1/78.1 84.5 66.8 All Tasks v v v v v v v MKD-LSTM 90.5 86.9/80.2 85.0/84.8 84.8/89.0 — 77.4/78.3 84.9 68.2 Table 3: Ablation experiments on the dev set use different training tasks in multi-task distillation. The results are reported with the original corpus, without augmentation data. The model is fine-tuned on each individual task. Do we need attention in the student model? Yes. Tang et al. (2019) distill BERT into a simple BiL- STM network. Results in Table 1 demonstrates that our model is better than Distilled BiLSTM and achieves an improvement range of 2.2 ∼ 6.1 points across six datasets. To make fair comparison, we also list the results of multi-task distilled BiLSTM in Line 3 in Table 2. It’s obvious that Line 5, which is the model with bi-attentive mechanism, signif- icantly outperform Line 3. We surmise that the attention module is an integral part of the student model for sequence modeling. Better vocabulary choices? WordPiece works better than the word-level tokenizers in our experi- ments. The WordPiece-tokenized vocabulary size is 30522, while the word-level tokenized vocabu- lary size is much larger, along with more unknown tokens. WordPiece effectively reduces the vocab- ulary size and improves rare-word handling. The comparison between Line 4 and Line 5 in Table 2 demonstrates that the method of tokenization influ- ences all the tasks. The influence of MTL in KD? The single-task distilled results are represented in Line 2 of Ta- ble 2. Compared with Line 5, all the tasks benefit from information sharing through multi-task distil- lation. Especially for STS-B, the only regression task, greatly benefit from the joint learning from other classification tasks. We also illustrate the influence of different num- ber of tasks for training. In Table 3, the training set incorporates tasks of the same type individually. Even for the tasks which are in the training sets, they still perform better in the all tasks training setting. For example, for RTE, the All Tasks setting increases 1.4 points than NLI Tasks setting. For other training settings which RTE is excluded from training set, All Tasks leads to better performance. 
Distilled BiLSTM BERT-PKD TinyBERT MKD-LSTM Inf. Time 1.36 8.41 3.68 2.93 Table 4: The inference time (in seconds) for baselines and our model. The inference is performed on QNLI training set and on a single NVIDIA V100 GPU. # Inference Efficiency To test the model efficiency, we ran the experiments on QNLI training set. We perform the inference on a single NVIDIA V100 GPU with batch size of 128, maximum sequence length of 128. The reported inference time is the total running time of 100 batches. From Table 4, the inference time for our model is 2.93s. We re-implemented Distilled BiLSTM from Tang et al. (2019) and their inference time is 1.36s. For fair comparison, we also ran infer- ence procedure using the released BERT-PKD and TinyBERT model on the same machine. Our model significantly outperforms Distilled BiLSTM with same magnitude speed. It also achieves compara- ble results but is faster in efficiency compared with other distillation models. # 8 Conclusion In this paper, we propose a general framework for multi-task knowledge distillation. The student is jointly distilled across different tasks from a multi- task refined BERT model (teacher model). We evaluate our approach on Transformer-based and LSTM-based student model. Compared with previ- ous KD methods using only data within tasks, our approach achieves better performance. In contrast to other KD methods using large-scale external text corpus, our approach balances the problem of com- putational resources, inference speed, performance gains and availability of training data. # References Jimmy Ba and Rich Caruana. 2014. Do deep nets really need to be deep? In Advances in neural information processing systems, pages 2654–2662. Jonathan Baxter. 2000. A model of inductive bias learning. Journal of artificial intelligence research, 12:149–198. Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. Transactions of the Associa- tion for Computational Linguistics, 5(1):135–146. Rich Caruana. 1997. Multitask learning. Machine learning, 28(1):41–75. Daniel Cer, Mona Diab, Eneko Agirre, Inigo Lopez- Semeval-2017 Gazpio, and Lucia Specia. 2017. task 1: Semantic textual similarity-multilingual and arXiv preprint cross-lingual focused evaluation. arXiv:1708.00055. Kevin Clark, Minh-Thang Luong, Urvashi Khandel- wal, Christopher D Manning, and Quoc V Le. 2019. Bam! born-again multi-task networks for arXiv preprint natural arXiv:1907.04829. Ronan Collobert and Jason Weston. 2008. A unified architecture for natural language processing: Deep In Pro- neural networks with multitask learning. ceedings of the 25th international conference on Ma- chine learning, pages 160–167. ACM. Andrew M Dai and Quoc V Le. 2015. Semi-supervised sequence learning. In Advances in neural informa- tion processing systems, pages 3079–3087. Li Deng, Geoffrey Hinton, and Brian Kingsbury. 2013. New types of deep neural network learning for speech recognition and related applications: An In 2013 IEEE International Conference overview. on Acoustics, Speech and Signal Processing, pages 8599–8603. IEEE. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, Volume 1 (Long and Short Papers), pages 4171–4186. William B Dolan and Chris Brockett. 2005. 
Automati- cally constructing a corpus of sentential paraphrases. In Proceedings of the Third International Workshop on Paraphrasing (IWP2005). Ross Girshick. 2015. Fast r-cnn. In Proceedings of the IEEE international conference on computer vision, pages 1440–1448. Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. 2015. Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531. Jeremy Howard and Sebastian Ruder. 2018. Universal language model fine-tuning for text classification. In Proceedings of the 56th Annual Meeting of the As- sociation for Computational Linguistics (Volume 1: Long Papers), pages 328–339. Xiaoqi Jiao, Yichun Yin, Lifeng Shang, Xin Jiang, Xiao Chen, Linlin Li, Fang Wang, and Qun Liu. 2019. Tinybert: Distilling bert for natural language understanding. arXiv preprint arXiv:1909.10351. Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. Mu Li, Tong Zhang, Yuqiang Chen, and Alexander J Efficient mini-batch training for Smola. 2014. stochastic optimization. In Proceedings of the 20th ACM SIGKDD international conference on Knowl- edge discovery and data mining, pages 661–670. ACM. Xiaodong Liu, Jianfeng Gao, Xiaodong He, Li Deng, Kevin Duh, and Ye-Yi Wang. 2015. Representation learning using multi-task deep neural networks for semantic classification and information retrieval. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, pages 912–921. Xiaodong Liu, Pengcheng He, Weizhu Chen, and Jianfeng Gao. 2019a. Improving multi-task deep neural networks via knowledge distillation for arXiv preprint natural arXiv:1904.09482. Xiaodong Liu, Pengcheng He, Weizhu Chen, and Jian- feng Gao. 2019b. Multi-task deep neural networks for natural language understanding. arXiv preprint arXiv:1901.11504. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Man- dar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019c. Roberta: A robustly optimized bert pretraining ap- proach. arXiv preprint arXiv:1907.11692. Andreas Maurer, Massimiliano Pontil, and Bernardino Romera-Paredes. 2016. The benefit of multitask rep- resentation learning. The Journal of Machine Learn- ing Research, 17(1):2853–2884. Bryan McCann, James Bradbury, Caiming Xiong, and Richard Socher. 2017. Learned in translation: Con- textualized word vectors. In Advances in Neural In- formation Processing Systems, pages 6294–6305. Oren Melamud, Jacob Goldberger, and Ido Dagan. 2016. context2vec: Learning generic context em- bedding with bidirectional lstm. In Proceedings of the 20th SIGNLL conference on computational natu- ral language learning, pages 51–61. Tomas Mikolov, Kai Chen, Greg Corrado, and Jef- Efficient estimation of word arXiv preprint frey Dean. 2013. representations in vector space. arXiv:1301.3781. Vinod Nair and Geoffrey E Hinton. 2010. Rectified linear units improve restricted boltzmann machines. In Proceedings of the 27th international conference on machine learning (ICML-10), pages 807–814. Arvind Neelakantan, Jeevan Shankar, Alexandre Pas- sos, and Andrew McCallum. 2015. Efficient non-parametric estimation of multiple embeddings arXiv preprint per word in vector arXiv:1504.06654. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word rep- resentation. In Proceedings of the 2014 conference on empirical methods in natural language process- ing (EMNLP), pages 1532–1543. 
Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word repre- sentations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2227– 2237. Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language understanding by generative pre-training. URL https://s3-us-west-2. com/openai- assets/researchcovers/languageunsupervised/language understanding paper. pdf. Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. Language mod- els are unsupervised multitask learners. Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. Squad: 100,000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Natu- ral Language Processing, pages 2383–2392. Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. 2019. Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108. Minjoon Seo, Aniruddha Kembhavi, Ali Farhadi, and Hannaneh Hajishirzi. 2016. Bidirectional attention flow for machine comprehension. arXiv preprint arXiv:1611.01603. Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D Manning, Andrew Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment tree- In Proceedings of the 2013 conference on bank. empirical methods in natural language processing, pages 1631–1642. Siqi Sun, Yu Cheng, Zhe Gan, and Jingjing Liu. 2019. Patient knowledge distillation for bert model com- pression. arXiv preprint arXiv:1908.09355. Raphael Tang, Yao Lu, Linqing Liu, Lili Mou, Olga Vechtomova, and Jimmy Lin. 2019. Distilling task- specific knowledge from bert into simple neural net- works. arXiv preprint arXiv:1903.12136. Joseph Turian, Lev Ratinov, and Yoshua Bengio. 2010. Word representations: a simple and general method for semi-supervised learning. In Proceedings of the 48th annual meeting of the association for compu- tational linguistics, pages 384–394. Association for Computational Linguistics. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all In Advances in neural information pro- you need. cessing systems, pages 5998–6008. Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. 2018. Glue: A multi-task benchmark and analysis platform for In Proceedings natural language understanding. of the 2018 EMNLP Workshop BlackboxNLP: An- alyzing and Interpreting Neural Networks for NLP, pages 353–355. Adina Williams, Nikita Nangia, and Samuel Bowman. 2018. A broad-coverage challenge corpus for sen- tence understanding through inference. In Proceed- ings of the 2018 Conference of the North American Chapter of the Association for Computational Lin- guistics: Human Language Technologies, Volume 1 (Long Papers), pages 1112–1122. Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. 2016. Google’s neural machine translation system: Bridging the gap between hu- arXiv preprint man and machine translation. arXiv:1609.08144. Caiming Xiong, Victor Zhong, and Richard Socher. 2016. Dynamic coattention networks for question answering. arXiv preprint arXiv:1611.01604. 
Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, and Quoc V Le. 2019. XLNet: Generalized autoregressive pretraining for language understanding. arXiv preprint arXiv:1906.08237.

Sanqiang Zhao, Raghav Gupta, Yang Song, and Denny Zhou. 2019. Extreme language model compression with optimal subwords and shared projections. arXiv preprint arXiv:1909.11687.
{ "id": "1503.02531" }
1911.03587
How Decoding Strategies Affect the Verifiability of Generated Text
Recent progress in pre-trained language models led to systems that are able to generate text of an increasingly high quality. While several works have investigated the fluency and grammatical correctness of such models, it is still unclear to what extent the generated text is consistent with factual world knowledge. Here, we go beyond fluency and also investigate the verifiability of text generated by state-of-the-art pre-trained language models. A generated sentence is verifiable if it can be corroborated or disproved by Wikipedia, and we find that the verifiability of generated text strongly depends on the decoding strategy. In particular, we discover a tradeoff between factuality (i.e., the ability to generate Wikipedia-corroborated text) and repetitiveness. While decoding strategies such as top-k and nucleus sampling lead to less repetitive generations, they also produce less verifiable text. Based on these findings, we introduce a simple and effective decoding strategy which, in comparison to previously used decoding strategies, produces less repetitive and more verifiable text.
http://arxiv.org/pdf/1911.03587
Luca Massarelli, Fabio Petroni, Aleksandra Piktus, Myle Ott, Tim Rocktäschel, Vassilis Plachouras, Fabrizio Silvestri, Sebastian Riedel
cs.CL
accepted at Findings of EMNLP 2020
null
cs.CL
20191109
20200929
0 2 0 2 p e S 9 2 ] L C . s c [ 2 v 7 8 5 3 0 . 1 1 9 1 : v i X r a # How Decoding Strategies Affect the Verifiability of Generated Text Luca Massarelli1∗† Fabio Petroni2† Aleksandra Piktus2† Myle Ott2 Tim Rockt¨aschel2,3 Vassilis Plachouras2 Fabrizio Silvestri2 Sebastian Riedel2,3 1 Sapienza University of Rome [email protected] 2Facebook AI {fabiopetroni, piktus, myleott, rockt, vplachouras, fsilvestri, sriedel}@fb.com 3University College London # Abstract Recent progress in pre-trained language mod- els led to systems that are able to generate text of an increasingly high quality. While sev- eral works have investigated the fluency and grammatical correctness of such models, it is still unclear to which extent the generated text is consistent with factual world knowledge. Here, we go beyond fluency and also investi- gate the verifiability of text generated by state- of-the-art pre-trained language models. A gen- erated sentence is verifiable if it can be corrob- orated or disproved by Wikipedia, and we find that the verifiability of generated text strongly depends on the decoding strategy. In particular, we discover a tradeoff between factuality (i.e., the ability of generating Wikipedia corrobo- rated text) and repetitiveness. While decoding strategies such as top-k and nucleus sampling lead to less repetitive generations, they also produce less verifiable text. Based on these finding, we introduce a simple and effective de- coding strategy which, in comparison to previ- ously used decoding strategies, produces less repetitive and more verifiable text. matical correctness of text decoded from modern LMs. Additionally, recent works (Petroni et al., 2019; Logan et al., 2019; Broscheit, 2019; Roberts et al., 2020) demonstrate that beyond general lin- guistic capabilities, language models can also pick up factual knowledge present in the training data. However, it is unclear if LMs are able to convey such knowledge at decoding time when producing long sequences—do they generate fluent, grammat- ical but “babbler-level” text or can they produce utterances that reflect factual world knowledge? Understanding this behaviour becomes crucially important as the downstream adoption of auto- matically generated text increases. Already to- day LMs face growing scrutiny from the media and the broader society, as well as from the re- searchers themselves. For example, Radford et al. (2019b) initially argued against releasing their mod- els in order to prevent automatic generation of fake news (Radford et al., 2019a). Several blogs and web resources demonstrate that differentiating be- tween human and machine-generated text has be- come surprisingly difficult.1 # Introduction Recent years have led to a considerable surge of interest in and capabilities of pre-trained language models (LMs). Today, they play a critical role in many NLP tasks, such as text classification, ma- chine comprehension and natural language infer- ence (Peters et al., 2018; Devlin et al., 2018; Liu et al., 2019a; Yang et al., 2019), to name just a few. They serve as a pre-training objective for down- stream applications and they have been used to showcase and measure the general progress in NLP (Yu et al., 2017; Liu et al., 2019b). Several works (Radford et al., 2019b; Keskar et al., 2019) show the remarkable fluency and gram- With that in mind, we set out to study state- of-the-art auto-regressive transformer-based lan- guage models through the lens of their verifiabil- ity. 
Specifically, we use Wikipedia to first create a set of natural language prompts to initiate genera- tion. Next, we use transformer models of various sizes and trained with different corpora to gener- ate sentences off these prompts with varying de- coding configurations. Finally, following earlier work in fact checking (Thorne et al., 2018), we use Wikipedia again to verify each sentence as supported, refuted, or unverifiable using both an off-the-shelf automatic fact-checking system and human annotators. We define verifiability metrics on top of the automatic and human fact-checkers’ ∗Work done during internship with Facebook. † Equal contribution. # 1http://quiz.newsyoucantuse.com/ evaluation outcomes (see Figure 1 for a high-level overview). The truthfulness of generated text can be traded off with other properties. For example, a decod- ing algorithm can generate the same true fact over and over again to produce many verifiable utter- ances, but this would be a poor outcome in terms of repetitiveness. Similarly, a model might generate ungrammatical text that cannot be verified as sup- ported or refuted at all, and hence not as factually wrong either. Our experiments show that the text generated from auto-regressive transformer-based LMs, especially in their large versions (1.4B pa- rameters), is almost always grammatical and fluent regardless of the configuration, but that repetitive- ness can vary a lot. We hence focus on this dimen- sion in our analysis and define metrics that combine repetitiveness with verifiability. One of our main findings is that while sampling methods, such as top-k and nucleus, produce more natural and less repetitive text, they also gener- ate fewer supported and more refuted statements. Beam search, on the other hand, shows much better performance along these dimensions at the cost of producing highly repetitive text. Based on these observations, and inspired by findings in Holtz- man et al. (2019), who showed how the probabil- ity of human text under language models is vary- ing from token to token, we introduce a simple strategy: Delayed Beam Search (DELAYEDBS). In DELAYEDBS, we iterate between sampling and finding most likely utterances. By simply injecting stochasticity in the beginning of a sentence and then switching to beam search, we generate text that is less repetitive while at the same time scores well in terms of our verifiability metrics. Our main findings hold across several experimental settings, with varying training set size and model size. To summarize, we make the following contribu- tions: (i) we propose an experimental methodology to assess machine generated text with respect to repetitiveness and verifiability. (ii) we assess a wide range of decoding algorithms with respect to these dimensions, (iii) we introduce a novel de- coding strategy that addresses some of the short- comings of existing solutions, (iv) we carry out an annotation campaign to validate our findings and assess the quality of the automatic fact checking system. # 2 Related Work Keskar et al. (2019) trained CTRL, a large (1.63B parameters) pretrained language model that can be conditioned on style or content for controlling gen- erated text. Users can, for example, specify the domain, entities, as well as relationships between entities, to control the generated text. While im- pressive, their work does not provide insights into the verifiability of the generated text. Multiple efforts focus on improving text decod- ing with respect to different criteria. 
Vijayakumar et al. (2016) and Li et al. (2016) introduce alterna- tive scoring strategies to diversify the hypothesis tree explored by beam search. Fan et al. (2018) pro- pose top-k sampling, i.e., sampling from the top k tokens with the highest probability to generate sto- ries. Holtzman et al. (2019) find that for the same neural language model, the choice of the decoding strategy can have a dramatic effect on the fluency and repetitiveness of the generation. They propose nucleus sampling as a way to increase diversity of the generated text while improving fluency. In our work, we find that while this strategy does create more fluent and less repetitive text, it does also result in a less factually true generation. Cho et al. (2019) choose to separate the generation and diver- sification steps altogether, and focus on leveraging content selection to map the input to diverse se- quences. We describe various generation strategies in more detail in section 3. Welleck et al. (2019) note that with nucleus sampling, per-token probabilities can be very low which they attribute to the likelihood training ob- jective. They propose a novel unlikelihood training objective which lowers the probability of tokens in the context of the model. Their approach is orthog- onal to the decoding strategy and testing alternative training objectives is out of the scope of our paper. A recent approach by Bakhtin et al. (2019) learns to distinguish human from machine generated text. Zellers et al. (2019) investigate generating and de- tecting fake news using neural language models. Niewinski et al. (2019) propose a variation of the GPT-2 language model to explicitly generate ma- licious claims. Instead of directly optimizing for generating fake or factual news, we are interested in investigating the relationship between the verifia- bility of the existing language models and different decoding strategies they are coupled with. Several metrics have been proposed to evaluate natural language generations in the past (Novikova FACT CHECKER wikipedia evidence Lucy Hawking LX] The movie Hawking was | was born on 8 | concerning the : born in England | January 1942 in | life of Stephen : prrrrrsesrrstensesesesseeeeesesss TEXT GENERATOR IR) to stephen | Oxford to Frank | Hawking and his [*] SB f—™ REFUTED wikipedia prefix Hawking and | and Isobel Eileen | first wife, Jane : Stephen Hawking, Stephen William | Jane Hawking Hawking, Hawking, | Hawking (8 January 1942 - 14 March 2018) was an English SENTENCE theoretical physicist, cosmologist, PROCESSING 4 A A A and author who was director of : H L Hi H research at the Centre for GENERATION : He was born in | He was the He was the | His father was | He was born Theoretical Cosmology at the STRATEGY Oxford, England, | author of several | author of several | an electrical with a rare form University of Cambridge at the [>] LM the son of Frank | books, including | books, such as engineer and of dyslexia, time of his death. and his wife Jane | A Brief History | A Brief History physicist. Duvssevvssevevssseereesvsrsvensnstetirsensitettteentteeenee! Hawking ofTime. of Time. Supports Per Generation [SPG] = 2/5 Supports PerVerified [SPV] = 2/3 SUPPORTED x x Unique Supports Per Generation [USPG] = VERIFIED x x x Unique Supports Per unique Verified [USPV] = UNIQUE x x x x ! 
1 Figure 1: High level description of our experimental methodology that combines a language model (LM) with a fact checker, usually implemented combining an information retrieval (IR) and a stance detector (SD) component. et al., 2017). Given that recent studies (Fan et al., 2018; Holtzman et al., 2019; Welleck et al., 2019) point to repetitiveness as one of the main problems affecting the generation of state-of-the-art models, we mainly consider this dimension in our analysis. - i.e., the token at position t + 1 is generated by considering p(wt+1 | ct+1 = [w1, . . . , wt]). In this work, we consider different decoding strategies of selecting wt given p(wt | ct). # 3.1 Decoding Strategies # 3 Background Language models (LMs) assign probabilities to sequences of tokens. Given a context, that is, a sequence of tokens ct = [w1, w2, . . . , wt−1], au- toregressive LMs commonly estimate the proba- bility distribution of the next target using neural models (Mikolov and Zweig, 2012; Melis et al., 2017; Bengio et al., 2003) with: p(wt | ct) = softmax(Wht + b) where ht ∈ Rk is the output vector of a neural net- work at position t and W ∈ R|V| × k is a learned parameter matrix that maps ht to unnormalized scores for every word in the vocabulary V. In this work, we consider self-attention mechanisms (Rad- ford et al., 2018; Dai et al., 2019; Radford et al., 2019b) to compute ht given the word history. Open-Ended Text Generation As described in Holtzman et al. (2019), the task of open-ended text generation involves producing a coherent com- pletion of the provided context. We consider the common left-to-right generation, where a token at position t in the sequence is generated by consider- ing the probability distribution over the vocabulary defined in equation 1. Once a decision is made for wt according to a decoding strategy, it is incorpo- rated into the context and the process is iterated The decoding strategies we consider in our analysis can be broadly divided in two families: sampling- based and likelihood-based. Sampling-based This family of techniques aims at increasing the diversity of the output and avoid- ing repetitions by introducing stochastic decisions during the generation process. Top-k sampling (Fan et al., 2018) selects wt by sam- pling from the k tokens with the highest probability in p(wt | ct). Top-p sampling, also referred to as nucleus sam- pling (Holtzman et al., 2019), selects wt from the smallest set of tokens whose cumulative probability (given by p(wt | ct)) is above a threshold p. Likelihood-based These strategies navigate the solution space by selecting sequences of tokens that maximize the overall likelihood. Given that the number of possible sequences is typically very large, it is a common practice to define heuristics to make the generation practical. Beam Search (BS). This strategy approximately maximizes the likelihood of the whole sequence. Throughout the generation, we hold a beam of β prefixes which are iteratively extended. At each time-step, β tokens are generated to complete each of the prefixes in the beam and we retain β hypothe- ses with the highest score out of the β2 candidates for the next step. β is referred to as the beam size. Greedy decoding, where at each step the most likely token is selected, is a special case of beam search with beam size 1. Group diverse Beam Search (GROUPBS). To favor the diversity of the exploration, Vijayakumar et al. (2016) propose to divide the beam into groups. 
The diversity between groups is imposed by introduc- ing a group dissimilarity penalty into the search objective. Sibling diverse Beam Search (SIBLINGBS). With the same aim of diversifying the exploration, Li et al. (2016) propose a variant of beam search which introduces a penalty proportional to the rank of a candidate token with respect to its source in the beam. The goal is to encourage preserving hypotheses from diverse sources within the beam. A simple trick to reduce repetitiveness is to ex- plicitly prevent the generation of already observed n-grams (Paulus et al., 2017). We refer to this approach as n-gram blocking. Delayed Beam Search (DELAYEDBS). We pro- pose a new hybrid strategy that uses sampling to generate the first L tokens of a sentence and then it finishes the sentence using beam search. The smaller the L, the closer the behaviour is to beam search. Conversely, the larger the L, the closer we are to sampling strategies. Consequently, by tuning L, it is possible to combine the advantages of both sampling and likelihood-based strategies. # 4 Evaluating Verifiability In this section we first describe the tools used to evaluate the verifiability of the generated text. We then formally introduce our repetitiveness and veri- fiability metrics. The high level overview of our evaluation setup is shown in Figure 1. For the purpose of this anal- ysis, we consider both the text generator and the fact checker as black boxes which produce and as- sess text respectively. More specifically, the text generator gets in input a prefix p and produces a sequence of tokens that can be interpreted as a com- pletion of p. We segment the generated completion into sentences and consider the first k sentences. The fact checker gets in input a sentence and out- puts a positive (SUPPORTED), negative (REFUTED) or unverifiable (NOT ENOUGH INFO) response as well as textual evidence used for the judgment. We consider a sentence as verified if the output label is either SUPPORTED or REFUTED. Our metrics assess the generation process given a set of prefixes P . The set P can be seen as the data source for our verifiability probe. Let Gp = [sp 1, ..., sp k] be the sequence of sentences generated by the LM from prefix p ∈ P . We indicate with V p ∈ Gp the set of sentences that are verified by the fact checker, while with Sp ∈ V p we denote the subset of sentences labeled as SUPPORTED. To assess the verifiability of the generated text we introduce the following two metrics: Supports Per Generation (SPG): is the fraction of supported sentences among the generated ones: 1 |S?| SPG = — — 2 Py > 7 (2) pEeP Supports Per Verified (SPV): is the fraction of supported sentences among the verified ones: 1 wo |S? SPV = — S —= 3 pEeP SPG can be interpreted as a sort of a recall metric while SPV as a precision one. Note that a generation could achieve a high score in terms of SPG and SPV by repeating the same supported sentence over and over again. To capture this behaviour, we define the unique variants of our metrics. We consider two sentences as equivalent if they have the same factuality label (i.e., SUP- PORTED or REFUTED) and the decision is justified by the same evidence. For a set of equivalent sen- tences, we consider only the one which appeared first in the generation as unique. We denote the set of unique sentences as Sp u ∈ V p is a set of unique verified sentences. 
We introduce: Unique Supports Per Generation (USPG): the fraction of unique supported sentences among the generated ones: [St i (4) 1 USPG = = >) | ser Unique Supports Per unique Verified (USPV): the fraction of unique supported sentences among unique verified sentences: ~ ia) lols! USPV = — (5) |P| div iS # 5 Methodology In this section we describe in detail the implemen- tational choices for all components in Figure 1. Prefix Dataset We retrieve title and description of the top-1000 most visited Wikipedia pages of 2017 and 2018. For each page, we concatenate the title and the first sentence in the description to create a string prefix for the language model. We use 2018 data as validation set and run parameter sweeps over it. We tested the best configuration of every decoding strategy on 2017 data (test set). We ensure no overlap between 2017 and 2018 prefixes. Language Model We consider three sizes of language models (small, medium, large) based on the Transformer architecture (Vaswani et al., 2017; Radford et al., 2019b), with 124M, 354M and 1.4B parameters respectively. We train mod- els on four corpora: (i) WIKIPEDIA, an English Wikipedia dump consisting of roughly 2 Billion Words; (ii) BOOKS, the Toronto books corpus (Zhu et al., 2015; Kiros et al., 2015), which consists of fiction books totaling about half a billion words; (iii) OPENWEBTEXT, a reconstruction of the Web- Text corpus (Radford et al., 2019b) consisting of roughly 3 Billion Words; (iv) CCNEWS, a de-du- plicated subset of the English portion of the Com- monCrawl news dataset (Nagel, 2016; Bakhtin et al., 2019; Liu et al., 2019a), which totals around 16 Billion words. We train models using the FAIRSEQ toolkit (Ott et al., 2019). Generation Strategy We consider the genera- tion strategies discussed in Section 3.1, namely top-k, top-p, greedy, Beam Search (BS), Group- Diverse Beam Search (GROUPBS), Sibling- Diverse Beam Search (SIBLINGBS) and Delayed Beam Search (DELAYEDBS). Additionally, we ex- periment with n-gram blocking and indicate that a model is equipped with blocking with a subscript b, e.g., BSb. We fix the generation length to 256 tokens. We perform three generations per prefix with different seeds for all strategies that make stochastic decisions, and report average values. Sentence Processing Given that our fact checker expects a single sentence as input, we segment the generated text into sentences. We consider the first k = 5 sentences. We perform coreference resolu- tion to replace pronouns with the corresponding referring entity in order to give the complete infor- mation to the fact checker. For the same reason, we apply a simple heuristic that replaces each deter- miner (i.e., ”The”) at the beginning of a sentence and the subsequent noun with the original entity (i.e., the title of the Wikipedia page). For all these steps we use spaCy.2 We consider sentences longer than 50 tokens as not verifiable, since long sen- tences are likely to contain multiple claims and can be misclassified by the automatic fact-checking system, we consider that has been trained on short single claim statements. Fact Checker We consider an off-the-shelf fact checker3 trained on the FEVER dataset (Thorne et al., 2018) which achieves the highest FEVER score of 68.46% in the second FEVER shared task (Thorne et al., 2019). This solution takes inspira- tion from Hanselowski et al. (2018) and consists of three main stages: (i) identify relevant Wikipedia pages, as in Hanselowski et al. 
(2018); (ii) retrieve relevant sentences from such pages; (iii) recognize textual entailment between the input and the retrieved text. The system uses a hierarchical sentence retrieval approach in order to verify claims that require multiple statements as evidence. It uses BERT (Devlin et al., 2018) for both retrieval and entailment.

Metrics We use all the metrics introduced in Section 4. We also consider the following metrics to capture the repetitiveness of the generation:

Distinct 4-grams: the average number of distinct 4-grams present in the generated text (Vijayakumar et al., 2016).

4-gram proportion: the average ratio between distinct 4-grams in machine and human generated text (Holtzman et al., 2019). For the latter, we consider the 256 tokens after the first sentence in the description for each Wikipedia page.

# 6 Results

We summarize the main results in Table 1. It shows the performance of the different generation strategies on the considered metrics on the test set of prefixes, considering the large transformer model trained on CCNEWS (this corpus led to the best performance according to our ablation, see Figure 2a). We performed an exhaustive grid search over the parameters for all considered generation strategies using the small model on the validation set, and consider the configuration that led to the highest USPG value (see the Appendix for details). We report as reference human performance computed on Wikipedia, considering at most the first 5 sentences of the prefix article.

2 https://spacy.io
3 https://github.com/dominiksinsaarland/domlin_fever

strategy | distinct 4-grams | 4-grams proportion | SPG | SPV | USPG | USPV
human - Wikipedia | 222.48 | 100.00 | 36.56 | 93.03 | 36.56 | 93.03
sampling: top-k | 143.52 | 64.51 | 13.02 | 70.15 | 11.06 | 69.39
sampling: top-p | 136.66 | 61.43 | 13.94 | 70.76 | 11.36 | 68.93
likelihood: greedy | 67.42 | 30.31 | 19.62 | 78.67 | 12.06 | 77.21
likelihood: BS | 59.53 | 26.76 | 25.50 | 84.49 | 11.88 | 81.59
likelihood: GROUPBS | 66.06 | 29.69 | 20.56 | 78.29 | 11.54 | 76.53
likelihood: SIBLINGBS | 67.11 | 30.16 | 22.32 | 80.11 | 11.36 | 76.76
hybrid: DELAYEDBS | 112.12 | 50.40 | 17.52 | 78.99 | 12.74 | 77.59
blocking: BSb | 92.00 | 41.35 | 23.62 | 83.35 | 15.28 | 80.76

Table 1: Performance of the different generation strategies on the considered metrics. We report percentage values for the large transformer model on the test set. The first row shows human performance computed on Wikipedia.

Sampling strategies (i.e., top-p and top-k) outperform the other strategies in terms of repetitiveness metrics, that is, they are able to generate text with a higher degree of diversity, consistently with previous works (Fan et al., 2018; Holtzman et al., 2019). However, diversity comes at a price, as the verifiability metrics are low (in particular the precision values: these strategies generate more refuted sentences). Intuitively, random choices might hamper verifiability when sampling a token in specific positions of the sentence, for instance within a named entity, potentially making the overall sentence non-factual. We notice that this problem gets even worse as k or p increases. Following a generation path that maximizes likelihood is a better approach for verifiability. In particular, BS achieves the highest performance in terms of SPG and SPV. Nevertheless, generation diversity drops, consistently with previous works (Vijayakumar et al., 2016; Li et al., 2016; Welleck et al., 2019; Holtzman et al., 2019).
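As a concrete point of reference for how the repetitiveness numbers in Table 1 can be obtained, the following is a minimal sketch (our own illustrative code, not the original evaluation script) that counts distinct 4-grams in a generated text and computes the 4-gram proportion against the corresponding human text; tokenization by whitespace splitting is an assumption.

```python
def distinct_ngrams(tokens, n=4):
    # Number of distinct n-grams in a token sequence.
    return len({tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)})

def repetitiveness_metrics(generated_texts, human_texts, n=4):
    """generated_texts / human_texts: parallel lists of strings, one pair per prefix.
    Returns the average distinct n-gram count and the average machine/human ratio (%)."""
    distinct_counts, proportions = [], []
    for machine, human in zip(generated_texts, human_texts):
        d_machine = distinct_ngrams(machine.split(), n)
        d_human = distinct_ngrams(human.split(), n)
        distinct_counts.append(d_machine)
        if d_human > 0:
            proportions.append(100.0 * d_machine / d_human)
    avg_distinct = sum(distinct_counts) / len(distinct_counts)
    avg_proportion = sum(proportions) / len(proportions)
    return avg_distinct, avg_proportion
```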
Solutions such as GROUPBS and SIBLINGBS have been proposed to mitigate this problem, and their numbers actually look slightly better than BS in terms of repetitiveness metrics.

When we assess diverse verifiability (that is, we consider distinct supported/refuted sentences), likelihood- and sampling-based strategies are similar in terms of recall (i.e., USPG), while likelihood-based solutions outperform both top-k and top-p in terms of precision (i.e., USPV) by a large margin - they generate fewer sentences refuted by the fact checker. DELAYEDBS tries to combine the best of these two approaches, by defining a hybrid strategy that starts a sentence by sampling tokens and ends it by following a max-likelihood path. It achieves results comparable to likelihood-based solutions in terms of precision and recall for diverse verifiability while being much less repetitive (it almost doubles the number of distinct 4-grams). Interestingly, it is sufficient to sample just the first token with high uncertainty (top-100) and finish the sentence with beam search to trigger this behaviour (Figure 5 in the Appendix reports a detailed ablation study over the delay length).

Another way of mitigating repetitiveness is through n-gram blocking. We combine it with BS, sweeping over the values of n between 3 and 20. In line with our expectations, low n values score low on the verifiability metrics, as the model is forced to explore less likely parts of the solution space in order to avoid generating previously observed n-grams. Unsurprisingly, the diversity of the solution drops as n increases. In this sense, BSb and DELAYEDBS attempt to strike a similar balance between diversity (introduced via n-gram blocking in BSb and via sampling in DELAYEDBS) and verifiability (achieved by incorporating BS). Figure 3 highlights this analogy further. Overall, we achieve the best USPG performance by combining 20-gram blocking and BS - we believe this is because n-gram blocking prevents BS from repeating the same phrases multiple times, while remaining relaxed enough to allow the generation to produce a high-likelihood solution. However, even though BSb achieves the best results in terms of diverse verifiability metrics, DELAYEDBS still produces fewer repetitions, hence constituting a viable alternative.

[Figure 2: USPV vs USPG, inspired by the precision-recall curve. (a) Performance of the small transformer model trained on different corpora, i.e., WIKIPEDIA (W), BOOKS (B), OPENWEBTEXT (O) and CCNEWS (C). (b) Ablation study on our transformer model trained on CCNEWS with increasing number of parameters, i.e., 124M (S), 354M (M) and 1.4B (B).]

[Figure 3: SPG, SPV and 4-gram proportion values for BSb and DELAYEDBS, by varying the sampling length L for DELAYEDBS (bottom axis) and the n-gram blocking size for BSb (top axis).]

Ablation studies We experiment with different training corpora (Figure 2a) and different sizes of the transformer model (Figure 2b), using the validation set.
We report USPV vs USPG values, taking inspiration from the popular precision-recall curve. The average perplexity of the small transformer model is lowest for WIKIPEDIA (8.31) compared to BOOKS (53.08), OPENWEBTEXT (11.14) and CCNEWS (12.23). Even though all prefixes are likely to be in the corpus, WIKIPEDIA performance in terms of USPG is low regardless of the decoding strategy. This counter-intuitive behaviour seems to occur mainly due to the tendency of the small model trained on WIKIPEDIA to generate endless, unverifiable entity lists, mimicking Wikipedia lists. CCNEWS leads to the best performance in terms of recall (USPG) for all decoding strategies, but also in terms of precision (USPV) for top-k and DELAYEDBS.

We also explored several other dimensions, including grammaticality (through a syntactic parser) and relevance (i.e., tf-idf score with the prefix Wikipedia page) during our experiments (see Table 4 in the Appendix). Figure 4 reports the Pearson correlation coefficients between supported/verified sentences and this set of metrics. We consider the four runs of the large transformer model reported in Figure 2b. We notice, for instance, that the average log probability of a sentence is positively correlated with verifiability, suggesting that max-likelihood strategies are better suited in this regard. Furthermore, the tf-idf score with the prefix Wikipedia page content is positively correlated with supported sentences. This behaviour is related to the implementation of the fact checker we use, which, by considering exclusively Wikipedia as knowledge source, favours text with a high overlap with the latter. Note, however, that the model was not explicitly exposed to Wikipedia during training (i.e., CCNEWS does not explicitly include it).

[Figure 4: Pearson correlation coefficient for supported/verified sentences (large model) and a set of metrics per sentence: number of entities, whether the sentence is successfully parsed by the Link-Grammar syntactic parser,4 number of conjunctions in the dependency tree, average token log probability, prefix perplexity, tf-idf score with the prefix Wikipedia page, number of tokens.]

We report examples of text generated by the large transformer model using different decoding strategies in the Appendix (Table 5).

Human Evaluation We carry out an annotation campaign, where we ask human annotators to fact check generated text. We base the evaluation on a set of 200 prefixes randomly selected from the test set. We consider completions produced by 5 of the generation strategies studied in this paper. We collect 5 annotations per generation. Results, reported in Table 2, confirm our findings: sampling strategies generate text which is less repetitive but also with fewer supported sentences than in the case of beam search. DELAYEDBS emerges as a reasonable trade-off between the two, being less repetitive than BS and producing more supported sentences than top-k. The analysis also highlights how blocking n-grams does not really address the repetitive nature of BS. Looking at some examples (see Table 5) we notice that BSb avoids repeating n-grams by introducing superficial, token-level modifications which, most of the time, fail to alter the underlying meaning of the sentence.
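For readers unfamiliar with the blocking mechanism discussed above, the following is a minimal, illustrative sketch (not the FAIRSEQ implementation used in our experiments) of the constraint applied at each decoding step: any candidate token that would complete an n-gram already present in the hypothesis is disallowed.

```python
def blocked_token_ids(hypothesis, n=4):
    """Return the set of next-token ids that would repeat an n-gram
    already present in `hypothesis` (a list of token ids)."""
    if len(hypothesis) < n - 1:
        return set()
    prefix = tuple(hypothesis[-(n - 1):])  # last n-1 generated tokens
    banned = set()
    for i in range(len(hypothesis) - n + 1):
        if tuple(hypothesis[i:i + n - 1]) == prefix:
            banned.add(hypothesis[i + n - 1])
    return banned
```

In a BSb-style decoder, the scores of these banned tokens would typically be set to negative infinity before selecting the top beam candidates, forcing the search to explore alternative continuations.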
In terms of absolute values, precision metrics (i.e., USPV and SPV) are lower than those computed with the auto- matic fact checker, and recall metrics (i.e., SPG and USPG) higher. This is due to the poor recall per- formance of the fact checking system - 45.66% for SUPPORTED and 5.78% for REFUTED. Precision # 4abisource.com/projects/link-grammar REP NAC UNG SPG SPV USPG USPV top-k greedy BS DELAYEDBS BSb 16.0 35.6 38.7 25.0 31.6 3.4 1.7 8.2 6.2 9.8 1.4 2.0 1.8 3.0 4.3 27.2 32.5 44.6 35.4 38.2 41.36 42.23 64.62 50.22 56.92 20.1 17.6 20.1 23.3 19.7 41.16 41.75 65.1 50.68 58.12 Table 2: Results based on human fact checkers, 5 an- notations per sentence. Average inter-annotator agree- ment is 0.66 Cohen’s kappa (average majority of 81% for SUPPORTED, 78% for REFUTED and 65% for NOT ENOUGH INFO). We report the percentage of sentences annotated as repetitions (REP), not a claim (NAC), un- grammatical (UNG), and our verifiability metrics. values are 80.89% for SUPPORTED and 52.69% for REFUTED. In sum we find that while off-the-shelf, state-of-the-art fact checker systems still leave am- ple room for improvement, they already serve as a good proxy for ranking pre-trained language mod- els and decoding strategies with respect to the veri- fiability of the text they generate. # 7 Conclusion and Discussion We presented a systematic analysis of the verifiabil- ity of text generated by a wide range of decoding strategies from large autoregressive language mod- els. We assessed generated sentences with an off- the-shelf automatic fact-checker as well as through human annotations. We found that sampling de- coding strategies produce text that is less verifiable, but also less repetitive when compared to strate- gies that consider most likely sequences according to the model distribution. We proposed a hybrid decoding strategy, combining the non-repetitive nature of sampling solutions with the verifiable generation of likelihood-based approaches. In our analysis, we considered the most viewed Wikipedia pages in 2017 and 2018. Our rationale was that such pages would represent topics that are likely to be highly covered in a random web crawl (e.g., OPENWEBTEXT and CCNEWS). Results (not reported in the paper) with a random set of Wikipedia pages showed lower values in terms of SPG and USPG (i.e., recall metrics). A potential line of future work could be to investigate relation- ships among training corpora and generation. We considered each sentence as a single claim to keep our experimental setting clean and avoid noise from an automatic claim extractor. However, some generations contain multiple claims that could be independently assessed. Studying such phenomena is an interesting future direction. # References Anton Bakhtin, Sam Gross, Myle Ott, Yuntian Deng, Marc’Aurelio Ranzato, and Arthur Szlam. 2019. Real or fake? learning to discriminate ma- chine from human generated text. arXiv preprint arXiv:1906.03351. Yoshua Bengio, R´ejean Ducharme, Pascal Vincent, and Christian Jauvin. 2003. A neural probabilistic lan- guage model. Journal of machine learning research, 3(Feb):1137–1155. Investigating entity knowl- edge in bert with simple neural end-to-end en- the 23rd Confer- tity linking. ence on Computational Natural Language Learning (CoNLL). Jaemin Cho, Minjoon Seo, and Hannaneh Ha- Mixture content selection for arXiv preprint jishirzi. 2019. diverse sequence generation. arXiv:1909.01953. Zihang Dai, Zhilin Yang, Yiming Yang, William W Cohen, Jaime Carbonell, Quoc V Le, and Ruslan Salakhutdinov. 2019. 
Transformer-xl: Attentive lan- guage models beyond a fixed-length context. arXiv preprint arXiv:1901.02860. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understand- ing. arXiv preprint arXiv:1810.04805. Angela Fan, Mike Lewis, and Yann N. Dauphin. 2018. In Proceed- Hierarchical neural story generation. ings of the 56th Annual Meeting of the Associa- tion for Computational Linguistics, ACL, pages 889– 898. Andreas Hanselowski, Hao Zhang, Zile Li, Daniil Sorokin, Benjamin Schiller, Claudia Schulz, and Iryna Gurevych. 2018. Ukp-athene: Multi-sentence arXiv textual entailment for claim verification. preprint arXiv:1809.01479. Ari Holtzman, Jan Buys, Maxwell Forbes, and Yejin Choi. 2019. The curious case of neural text degener- ation. arXiv preprint arXiv:1904.09751. Nitish Shirish Keskar, Bryan McCann, Lav R Varshney, Caiming Xiong, and Richard Socher. 2019. Ctrl: A conditional transformer language model for control- lable generation. arXiv preprint arXiv:1909.05858. Ryan Kiros, Yukun Zhu, Ruslan R Salakhutdinov, Richard Zemel, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. 2015. Skip-thought vectors. In Ad- vances in neural information processing systems 28, pages 3294–3302. Jiwei Li, Will Monroe, and Dan Jurafsky. 2016. A sim- ple, fast diverse decoding algorithm for neural gen- eration. arXiv preprint arXiv:1611.08562. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Man- dar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019a. Roberta: A robustly optimized bert pretraining ap- proach. arXiv preprint arXiv:1907.11692. Zihan Liu, Yan Xu, Genta Indra Winata, and Pascale Fung. 2019b. Incorporating word and subword units in unsupervised machine translation using language model rescoring. arXiv preprint arXiv:1908.05925. IV Logan, L Robert, F Nelson, E Matthew, et al. 2019. Barack’s wife hillary: Using knowledge- arXiv graphs for fact-aware language modeling. preprint arXiv:1906.07241. G´abor Melis, Chris Dyer, and Phil Blunsom. 2017. On the state of the art of evaluation in neural language models. arXiv preprint arXiv:1707.05589. Tomas Mikolov and Geoffrey Zweig. 2012. Context dependent recurrent neural network language model. In Proceedings of the fourth IEEE Spoken Language Technology Workshop (SLT), pages 234–239. Sebastian Nagel. http://web.archive.org/ save/http://commoncrawl.org/2016/ 10/news-dataset-available/ 2016. Accessed: 2019-11-08. [online]. and Maria Jan- icka. 2019. Tmlab: Generative enhanced model arXiv preprint (gem) arXiv:1910.00337. Jekaterina Novikova, Ondˇrej Duˇsek, Amanda Cercas Curry, and Verena Rieser. 2017. Why we need arXiv preprint new evaluation metrics for nlg. arXiv:1707.06875. Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, and Michael Auli. 2019. fairseq: A fast, extensi- ble toolkit for sequence modeling. arXiv preprint arXiv:1904.01038. Romain Paulus, Caiming Xiong, and Richard Socher. 2017. A deep reinforced model for abstractive sum- marization. arXiv preprint arXiv:1705.04304. Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word repre- sentations. In Proceedings of the 16th Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, NAACL-HLT, pages 2227–2237. 
Fabio Petroni, Tim Rockt¨aschel, Patrick Lewis, Anton Bakhtin, Yuxiang Wu, Alexander H Miller, and Se- bastian Riedel. 2019. Language models as knowl- edge bases? arXiv preprint arXiv:1909.01066. Alec Radford, Karthik Narasimhan, Tim Salimans, and Improving language under- Ilya Sutskever. 2018. standing by generative pre-training. Alec Radford, Jack and Ilya Sutskever. Better language models and their im- https://blog.openai.com/ Accessed: Jeff Wu, Dario Amodei, Clark, Miles Brundage, 2019a. plications. better-language-models/. 2019-11-08. Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019b. Lan- guage models are unsupervised multitask learners. Adam Roberts, Colin Raffel, and Noam Shazeer. 2020. How much knowledge can you pack into the pa- arXiv preprint rameters of a language model? arXiv:2002.08910. Christos Thorne, Christodoulopoulos, 2018. Fever: a large-scale dataset for fact extraction and verification. Proceedings of the 16th Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT), pages 809–819. James Thorne, Andreas Vlachos, Oana Cocarascu, Christos Christodoulopoulos, and Arpit Mittal. 2019. The FEVER2.0 shared task. In Proceedings of the Second Workshop on Fact Extraction and VERifica- tion (FEVER), pages 1–6, Hong Kong, China. Asso- ciation for Computational Linguistics. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all In Advances in neural information pro- you need. cessing systems 30, pages 5998–6008. Ashwin K Vijayakumar, Michael Cogswell, Ram- prasath R Selvaraju, Qing Sun, Stefan Lee, David Crandall, and Dhruv Batra. 2016. Diverse beam search: Decoding diverse solutions from neural se- quence models. arXiv preprint arXiv:1610.02424. Sean Welleck, Ilia Kulikov, Stephen Roller, Emily Di- nan, Kyunghyun Cho, and Jason Weston. 2019. Neu- ral text generation with unlikelihood training. arXiv preprint arXiv:1908.04319. Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Car- bonell, Ruslan Salakhutdinov, and Quoc V Le. 2019. Xlnet: Generalized autoregressive pretrain- arXiv preprint ing for language understanding. arXiv:1906.08237. Lei Yu, Phil Blunsom, Chris Dyer, Edward Grefen- stette, and Tom´as Kocisk´y. 2017. The neural noisy In Proceedings of the 5th International channel. Conference on Learning Representations, ICLR. Rowan Zellers, Ari Holtzman, Hannah Rashkin, Yonatan Bisk, Ali Farhadi, Franziska Roesner, and Yejin Choi. 2019. Defending against neural fake news. arXiv preprint arXiv:1905.12616. Yukun Zhu, Ryan Kiros, Rich Zemel, Ruslan Salakhut- dinov, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. 2015. Aligning books and movies: Towards story-like visual explanations by watching movies and reading books. In Proceedings of the 15th IEEE international conference on computer vision (ICCV), pages 19–27. strategy best parameters top-k top-p BS GROUPBS SIBLINGBS DELAYEDBS BSb k= 2 p= 0.4 beam size= 15 groups= 2 penalty= 0.2 penalty= 0.1 top-k= 100 beam size= 6; L= 1 beam size= 15 blocking order= 20 Table 3: Best parameters per decoding strategy. # 8 Appendix # 8.1 Hyperparameters We conduct a parameter sweep on the small trans- former model on the validation set. The following table shows the configuration for each decoding strategy that leds to the highest USPG score. 
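To clarify how DELAYEDBS combines the two decoding regimes under the best configuration above (top-k = 100, beam size = 6, L = 1), the following is a minimal sketch of the procedure; it is an illustration written for this description rather than our actual FAIRSEQ-based implementation, and the `sample_top_k` and `beam_search` helpers are assumed wrappers around the underlying model.

```python
def delayed_beam_search(model, prefix_ids, sample_top_k, beam_search,
                        delay_L=1, top_k=100, beam_size=6,
                        max_sentences=5, end_of_sentence_id=None):
    """Generate up to `max_sentences` sentences: the first `delay_L` tokens of
    each sentence are sampled with top-k, the rest are decoded with beam
    search until an end-of-sentence token is produced."""
    output = list(prefix_ids)
    for _ in range(max_sentences):
        # Sampling phase: inject diversity at the start of the sentence.
        for _ in range(delay_L):
            output.append(sample_top_k(model, output, k=top_k))
        # Likelihood phase: finish the sentence with beam search.
        continuation = beam_search(model, output, beam_size=beam_size,
                                   stop_token=end_of_sentence_id)
        output.extend(continuation)
    return output
```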
# 8.2 Generation Examples We reported some examples generated with differ- ent strategies in table 5. # 8.3 Other metrics We explored how decoding strategy affects other dimensions of the generated text. Results are re- ported in table 4. We measure several statistics ovtaer the generated text: • The average number of distinct sentences for each generated text; • The average number of named entities in each sentence; • The average number of tokens in each sen- tence; • The average number of conjunctions in the dependency tree of each sentence; To compute the above metrics, we used spaCy. In particular we used its tokenizer to split tokens and sentences, its named entity recognition capa- bility to identify named entities and its dependency parser to count the number of conjunctions. Furthermore, we analyzed the grammatical cor- rectness of the generated text, counting the success = To: 100 = TOP: 10 ToPK:s Tork: 1900 Delay Lenght Figure 5: Ablation study over delay length. We re- port on the x-axis the delay length and on the y-axis the number of distinct supported sentences obtained for each delay length. Horizontal lines represent the value obtained on the validation set using top-k decod- ing strategy. All the generations were performed on the validation set using the small transformer trained on CCNEWS. rate of the link-gram parser 5 over the sentences in the generated text. We also measure the relevance of the generated text against the Wikipedia page that contains the prefix used for the generation. For this purposes, we compute the tf-idf score of the generated text and the related Wikipedia page. # 8.4 Ablation study over delay length We perform an ablation study to measure how the number of supported sentences generated with DE- LAYEDBS is affected by the delay length. We gen- erated text using the prefixes in the validation set using DELAYEDBS with top-k as sampling strategy and with different delay length. Our hypothesis is that using larger delay length the number of sup- ported sentences in the generated text will become close to the one obtained for top-k. We report the results in figure 5. From the figure it is clear that with larger delay length the number of supported sentences is very close to the one obtained with top-k. Moreover, as expected, a short delay length seems to produce a larger number of supported sentences. 5https://github.com/opencog/link-grammar strategy param distinct sentences #entities #tokens #conj % success link-gram tf-idf score greedy 1 2.44 2.02 18.68 1.18 81.30 255.04 beam search 6 12 15 2.20 2.13 2.14 2.79 3.13 3.13 22.71 23.72 23.51 1.32 1.39 1.40 74.76 71.92 72.41 510.17 565.97 568.96 top-k 2 10 100 4.63 4.95 4.98 2.43 2.70 2.72 21.92 25.22 27.29 1.26 1.33 1.36 83.58 78.73 74.91 259.75 246.18 203.19 top-p 0.1 0.3 0.7 1 2.57 3.88 4.90 4.97 2.02 2.19 2.59 2.81 18.93 19.41 23.76 28.55 1.17 1.19 1.27 1.38 81.10 85.06 79.90 70.40 251.27 238.72 215.68 162.95 delayed beam search 5-6-1 10-6-1 100-6-1 1000-6-1 3.74 3.95 4.22 4.30 3.01 3.04 3.04 3.03 23.39 23.95 24.10 24.62 1.22 1.23 1.22 1.21 77.23 77.00 76.36 76.27 401.76 406.03 399.91 397.27 Table 4: Analysis of the generated text among different dimensions. We used the prefix in the validation set and the small transformer model trained on CCNEWS to generate the text. Under the column params we report hyper parameters for the decoding strategy considered. For DELAYEDBS we we have varied only the parameter for top-k sampling and we used beam size equal to 6 and delay length of 1. 
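A minimal sketch of how the per-sentence statistics reported in Table 4 can be computed with spaCy is given below; this is illustrative code, and the specific pipeline name (`en_core_web_sm`) is an assumption rather than the configuration used in our experiments.

```python
import spacy

nlp = spacy.load("en_core_web_sm")  # assumed English spaCy pipeline

def text_statistics(generated_text):
    """Statistics used in Table 4: number of distinct sentences, and average
    entities, tokens and conjunctions per sentence."""
    doc = nlp(generated_text)
    sentences = list(doc.sents)
    distinct_sentences = len({s.text.strip() for s in sentences})
    n = max(len(sentences), 1)
    avg_entities = sum(len(list(s.ents)) for s in sentences) / n
    avg_tokens = sum(len(s) for s in sentences) / n
    # Conjunctions are identified via the "conj" dependency relation.
    avg_conj = sum(1 for tok in doc if tok.dep_ == "conj") / n
    return {
        "distinct sentences": distinct_sentences,
        "#entities": avg_entities,
        "#tokens": avg_tokens,
        "#conj": avg_conj,
    }
```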
x fi e r p Princess Margaret, Countess of Snowdon. Princess Margaret, Countess of Snowdon, (Margaret Rose 21 Au- gust 1930 - 9 February 2002) was the younger daughter of King George VI and Queen Elizabeth The Queen Mother and the only sibling of Queen Elizabeth II. Whitey Bulger. James Joseph Whitey Bulger Jr. ( Septem- ber 3, 1929 - October 30, 2018) was an Irish-American or- ganized crime boss and FBI informant who led the Winter Hill Gang in the Winter Hill neighborhood of Somerville, Massachusetts, a city directly northwest of Boston. k - p o t She is the only surviving member from the Royal Family who was born in the same year as the Queen Mother. Princess Margaret, Countess of Snowdon, (Margaret Rose 21 August 1930 - 9 February 2002) was a British princess who married into the British Royal Family in 1939. She is a cousin to the Queen, Prince Philip, Prince Charles, Prince William, Prince Harry and Prince Edward. Princess Margaret, Countess of Snowdon, (Margaret Rose 21 August 1930 - 9 February 2002) is the younger daugh- ter of King George VI and Queen Elizabeth. She is the only survivor from the Royal Family who was born in the same year as Queen Elizabeth. He was a fugitive for 16 years before his arrest in 2011. He was sentenced in 2013 to two life terms plus five years for his role in 11 murders and was released in 2014. Bulger was found dead in his cell at the U.S. penitentiary in West Virginia. He was 89. He was serving the life sentences for his role in the infa- mous Boston crime family. S B Princess Margaret, Countess of Snowdon, (Margaret Rose 21 August 1930 - 9 February 2002) was the younger daughter of King George VI and Queen Elizabeth The Queen Mother and the only sibling of Queen Elizabeth II. Princess Margaret, Countess of Snowdon, (Margaret Rose 21 August 1930 - 9 February 2002) was the younger daughter of King George VI and Queen Elizabeth The Queen Mother and the only sibling of Queen Elizabeth II. Princess Margaret, Countess of Snowdon, (Margaret Rose 21 August 1930 - 9 February 2002) was the younger daughter of King George VI and Queen Elizabeth The Queen Mother and the only sibling of Queen Elizabeth II. ... Bulger was one of the FBI’s most wanted fugitives for 16 years until he was captured in Santa Monica, California, in 2011. He was convicted in 2013 of a litany of crimes, including racketeering, extortion, money-laundering, and murder. He was sentenced to two consecutive life sentences plus five years. He died in federal prison in West Virginia on Tuesday at the age of 89. Bulger was one of the FBI’s most wanted fugitives for 16 years until he was captured in Santa Monica, California, in 2011. S B D She married Antony Armstrong-Jones, a photographer, in 1960. It was the first marriage for the Queen and the first for Prince Philip, Duke of Edinburgh. After divorcing Armstrong-Jones in 1978, she married Group Captain Peter Townsend in June that same year. She died at the age of 71 on 9 February 2002. Why did Princess Margaret marry Antony Armstrong- Jones? He was one of the FBI’s most wanted fugitives for 16 years until his capture in 2011. Born in Boston, Whitey Bulger was the son of Irish immi- grants. After serving in the U.S. Navy during World War II, Whitey Bulger joined the Irish-American mafia, the Winter Hill Gang, in the early 1950s. He quickly rose through the ranks of the gang, eventually becoming its leader. He was known as ”Whitey” because of his light brown hair and blue eyes. 
b S B Princess Margaret, Countess of Snowdon (Margaret Rose 21 August 1930 - 9 February 2002) was the eldest daugh- ter of King George VI and Queen Elizabeth The Queen Mother. Princess Margaret (Margaret Rose 21 August 1930 - 9 February 2002) was the eldest child of King George VI and Queen Elizabeth The Queen Mother. Princess Margaret, Countess of Snowdon. (Margaret Rose 21 August 1930 - 9 February 2002) was the eldest daughter of Queen Elizabeth The Queen Mother. (Margaret Rose 21 August 1930 - 9 February 2002) was the oldest child of King George VI and Queen Elizabeth The Queen Mother. Bulger was one of the FBI’s most wanted fugitives for 16 years until he was captured in Santa Monica, California, in 2011. He was convicted in 2013 of a litany of crimes, including racketeering, extortion, money-laundering, and murder. He was sentenced to two consecutive life sentences plus five years. He died in federal prison in West Virginia on Tuesday at the age of 89. Bulger was one of the FBI’s most wanted fugitives for 16 years before he was captured in Santa Monica, California, in 2011. Table 5: Two examples of text generated with different strategies by the large transformer model. One the left a cherry picked example (in terms of repetitive generation for BS) while on the right a random one. Sentence refuted by the fact checker are highlighted in red, supported in green.
{ "id": "1904.01038" }
1911.03090
What Would Elsa Do? Freezing Layers During Transformer Fine-Tuning
Pretrained transformer-based language models have achieved state of the art across countless tasks in natural language processing. These models are highly expressive, comprising at least a hundred million parameters and a dozen layers. Recent evidence suggests that only a few of the final layers need to be fine-tuned for high quality on downstream tasks. Naturally, a subsequent research question is, "how many of the last layers do we need to fine-tune?" In this paper, we precisely answer this question. We examine two recent pretrained language models, BERT and RoBERTa, across standard tasks in textual entailment, semantic similarity, sentiment analysis, and linguistic acceptability. We vary the number of final layers that are fine-tuned, then study the resulting change in task-specific effectiveness. We show that only a fourth of the final layers need to be fine-tuned to achieve 90% of the original quality. Surprisingly, we also find that fine-tuning all layers does not always help.
http://arxiv.org/pdf/1911.03090
Jaejun Lee, Raphael Tang, Jimmy Lin
cs.CL
5 pages
null
cs.CL
20191108
20191108
9 1 0 2 v o N 8 ] L C . s c [ 1 v 0 9 0 3 0 . 1 1 9 1 : v i X r a # What Would Elsa Do? Freezing Layers During Transformer Fine-Tuning Jaejun Lee, Raphael Tang, and Jimmy Lin David R. Cheriton School of Computer Science University of Waterloo # Abstract Pretrained transformer-based language models have achieved state of the art across countless tasks in natural language processing. These models are highly expressive, comprising at least a hundred million parameters and a dozen layers. Recent evidence suggests that only a few of the final layers need to be fine-tuned for high quality on downstream tasks. Naturally, a subsequent research question is, “how many of the last layers do we need to fine-tune?” In this paper, we precisely answer this question. We examine two recent pretrained language models, BERT and RoBERTa, across standard tasks in textual entailment, semantic similarity, sentiment analysis, and linguistic acceptabil- ity. We vary the number of final layers that are fine-tuned, then study the resulting change in task-specific effectiveness. We show that only a fourth of the final layers need to be fine-tuned to achieve 90% of the original quality. Surpris- ingly, we also find that fine-tuning all layers does not always help. # Introduction Transformer-based pretrained language models are a battle-tested solution to a plethora of natu- ral language processing tasks. In this paradigm, a transformer-based language model is first trained on copious amounts of text, then fine-tuned on task-specific data. BERT (Devlin et al., 2019), XLNet (Yang et al., 2019), and RoBERTa (Liu et al., 2019) are some of the most well-known ones, representing the current state of the art in natural language inference, question answering, and sentiment classification, to list a few. These models are extremely expressive, consisting of at least a hundred million parameters, a hundred at- tention heads, and a dozen layers. An emerging line of work questions the need for such a parameter-loaded model, especially on a single downstream task. Michel et al. (2019), for example, note that only a few attention heads need to be retained in each layer for acceptable effec- tiveness. Kovaleva et al. (2019) find that, on many tasks, just the last few layers change the most af- ter the fine-tuning process. We take these obser- vations as evidence that only the last few layers necessarily need to be fine-tuned. The central objective of our paper is, then, to de- termine how many of the last layers actually need fine-tuning. Why is this an important subject of study? Pragmatically, a reasonable cutoff point saves computational memory across fine-tuning multiple tasks, which bolsters the effectiveness of existing parameter-saving methods (Houlsby et al., 2019). Pedagogically, understanding the re- lationship between the number of fine-tuned layers and the resulting model quality may guide future works in modeling. Our research contribution is a comprehensive evaluation, across multiple pretrained transform- ers and datasets, of the number of final layers needed for fine-tuning. We show that, on most tasks, we need to fine-tune only one fourth of the final layers to achieve within 10% parity with the full model. Surprisingly, on SST-2, a sentiment classification dataset, we find that not fine-tuning all of the layers leads to improved quality. 
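As a rough illustration of the freezing scheme studied in this paper (a sketch of our own, using the Hugging Face Transformers API rather than the exact training script), the embeddings and the first N encoder layers are excluded from gradient updates while the remaining layers and the classification head are fine-tuned:

```python
from transformers import BertForSequenceClassification

def freeze_first_n_layers(model, n_frozen):
    """Freeze the embeddings and the first `n_frozen` transformer layers;
    the remaining layers and the classifier stay trainable."""
    for param in model.bert.embeddings.parameters():
        param.requires_grad = False
    for layer in model.bert.encoder.layer[:n_frozen]:
        for param in layer.parameters():
            param.requires_grad = False
    return model

# Example: keep only the last 3 of the 12 BERT-base layers trainable.
model = BertForSequenceClassification.from_pretrained("bert-base-uncased",
                                                      num_labels=2)
model = freeze_first_n_layers(model, n_frozen=9)
```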
# 2 Background and Related Work # 2.1 Pretrained Language Models In the pretrained language modeling paradigm, a language model (LM) is trained on vast amounts of text, then fine-tuned on a specific downstream task. Peters et al. (2018) are one of the first to suc- cessfully apply this idea, outperforming state of the art in question answering, textual entailment, and sentiment classification. Their model, dubbed ELMo, comprises a two-layer BiLSTM pretrained on the Billion Word Corpus (Chelba et al., 2014). Furthering this approach with more data and improved modeling, Devlin et al. (2019) pre- train deep 12- and 24-layer bidirectional trans- formers (Vaswani et al., 2017) on the entirety of Wikipedia and BooksCorpus (Zhu et al., 2015). Their approach, called BERT, achieves state of the art across all tasks in the General Language Under- standing Evaluation (GLUE) benchmark (Wang et al., 2018), as well as the Stanford Question An- swering Dataset (Rajpurkar et al., 2016). As a result of this development, a flurry of recent papers has followed this more-data-plus- better-models principle. Two prominent exam- ples include XLNet (Yang et al., 2019) and RoBERTa (Liu et al., 2019), both of which con- test the present state of the art. XLNet proposes to pretrain two-stream attention-augmented trans- formers on an autoregressive LM objective, in- stead of the original cloze and next sentence pre- diction (NSP) tasks from BERT. RoBERTa pri- marily argues for pretraining longer, using more data, and removing the NSP task for BERT. # 2.2 Layerwise Interpretability The prevailing evidence in the neural network lit- erature suggests that earlier layers extract univer- sal features, while later ones perform task-specific modeling. Zeiler and Fergus (2014) visualize the per-layer activations in image classification net- works, finding that the first few layers function as corner and edge detectors, and the final layers as class-specific feature extractors. Gatys et al. (2016) demonstrate that the low- and high-level notions of content and style are separable in con- volutional neural networks, with lower layers cap- turing content and higher layers style. Pretrained transformers. In the NLP litera- ture, similar observations have been made for pre- trained language models. Clark et al. (2019) an- alyze BERT’s attention and observe that the bot- tom layers attend broadly, while the top layers capture linguistic syntax. Kovaleva et al. (2019) find that the last few layers of BERT change the most after task-specific fine-tuning. Similar to our work, Houlsby et al. (2019) fine-tune the top lay- ers of BERT, as part of their baseline comparison for their model compression approach. However, none of the studies comprehensively examine the number of necessary final layers across multiple pretrained transformers and datasets. Model Embedding Per-Layer Output Total BERTBASE RoBERTaBASE 24M (22%) 7M (7%) 0.6M (0.5%) 110M 39M (31%) 7M (6%) 0.6M (0.5%) 125M BERTLARGE 32M (10%) 13M (4%) RoBERTaLARGE 52M (15%) 13M (4%) 1M (0.3%) 1M (0.3%) 335M 355M Table 1: Parameter statistics for the base and large vari- ants of BERT and RoBERTa. Note that “per-layer” in- dicates the number of parameters in one intermediate layer, which is more relevant to our study. Model CoLA SST-2 MRPC STS-B QQP MNLI QNLI RTE F1 Acc. Acc. Acc. MCC Acc. 
ρ ρ BERTBASE RoBERTaBASE 58.8 59.9 92.7 94.6 90.4 92.8 89.5 90.8 87.8 84.3 88.8 87.4 91.3 68.2 92.7 78.2 BERTLARGE 61.8 RoBERTaLARGE 66.0 93.4 95.5 90.6 92.8 89.7 91.9 88.3 86.4 89.1 89.9 92.2 71.1 94.3 84.5 Table 2: Reproduced results of BERT and RoBERTa on the development sets. # 3 Experimental Setup We conduct our experiments on NVIDIA Tesla V100 GPUs with CUDA v10.1. We run the mod- els from the Transformers library (v2.1.1; Wolf et al., 2019) using PyTorch v1.2.0. # 3.1 Models and Datasets We choose BERT (Devlin et al., 2019) and RoBERTa (Liu et al., 2019) as the subjects of our study, since they represent state of the art and the same architecture. XLNet (Yang et al., 2019) is another alternative; however, they use a slightly different attention structure, and our preliminary experiments encountered difficulties in reproducibility with the Transformers library. Each model has base and large variants that con- tain 12 and 24 layers, respectively. We denote them by appending the variant name as a subscript to the model name. Within each variant, the two models display slight variability in parameter count—110 and 125 million in the base variant, and 335 and 355 in the large one. These differences are mostly at- tributed to RoBERTa using many more embedding parameters—exactly 63% more for both variants. For in-depth, layerwise statistics, see Table 1. For our datasets, we use the GLUE bench- mark, which comprises the tasks in natural lan- guage inference, sentiment classification, linguis- tic acceptability, and semantic similarity. Specifi- cally, for natural language inference (NLI), it pro- vides the Multigenre NLI (MNLI; Williams et al., 2018), Question NLI (QNLI; Wang et al., 2018), Model Frozen CoLA SST-2 MRPC STS-B QQP MNLI MNLI-mm QNLI RTE Acc. Acc. up to MCC Acc. F1 ρ F1 Acc. Acc. BERTBASE 0th 9th 12th 58.3 47.5 29.4 92.7 90.8 84.9 90.3 85.4 81.5 88.8 88.0 78.1 87.9 85.3 72.0 84.2 82.0 56.4 84.8 82.4 57.1 91.4 89.5 74.5 67.6 62.3 57.5 Table 3: Development set results of BERT, with none, some, and all of the nonoutput layer weights fine-tuned. Results are averaged across five runs. Model Frozen CoLA SST-2 MRPC STS-B up to MCC Acc. F1 ρ BERTBASE 0th 9th 12th 58.3 47.5 29.4 92.7 90.8 84.9 90.3 85.4 81.5 88.9 88.0 78.1 RoBERTaBASE 0th 7th 12th 59.4 58.6 0.0 94.3 93.3 80.2 92.3 89.5 81.2 90.6 87.7 20.0 Model Frozen CoLA SST-2 MRPC STS-B up to MCC Acc. F1 ρ BERTLARGE 0th 18th 24th 61.9 51.6 24.4 93.4 92.7 87.8 90.3 85.4 81.3 89.8 88.0 71.7 RoBERTaLARGE 0th 17th 24th 66.1 60.5 0.0 95.1 95.1 79.2 92.2 91.3 81.2 92.0 89.6 11.2 Table 4: Development set results of all base models, with none, some, and all of the nonoutput layer weights fine-tuned. Results are averaged across five runs. Table 5: Development set results of all large models, with none, some, and all of the nonoutput layer weights fine-tuned. Results are averaged across five runs. Recognizing Textual Entailment (RTE; Bentivogli et al., 2009), and Winograd NLI (Levesque et al., 2012) datasets. For semantic textual similarity and paraphrasing, it contains the Microsoft Research Paraphrase Corpus (MRPC; Dolan and Brockett, 2005), the Semantic Textual Similarity Bench- mark (STS-B; Cer et al., 2017), and Quora Ques- tion Pairs (QQP; Iyer et al.). Finally, its single- sentence tasks consist of the binary-polarity Stan- ford Sentiment Treebank (SST-2; Socher et al., 2013) and the Corpus of Linguistic Acceptabil- ity (CoLA; Warstadt et al., 2018). 2 , L ers, we explore N = L 2 + 1, . . . , L. 
Due to computational limitations, we set half as the cut- off point. Additionally, we restrict our compre- hensive all-datasets exploration to the base vari- ant of BERT, since the large model variants and RoBERTa are much more computationally in- tensive. On the smaller CoLA, SST-2, MRPC, and STS-B datasets, we comprehensively evaluate both models. These choices do not substantially affect our analysis. # 4 Analysis # 3.2 Fine-Tuning Procedure # 4.1 Operating Points Our fine-tuning procedure closely resembles those of BERT and RoBERTa. We choose the Adam optimizer (Kingma and Ba, 2014) with a batch size of 16 and fine-tune BERT for 3 epochs and RoBERTa for 10, following the original papers. For hyperparameter tuning, the best learning rate is different for each task, and all of the origi- nal authors choose one between 1 × 10−5 and 5 × 10−5; thus, we perform line search over the interval with a step size of 1 × 10−5. We report the best results in Table 2. On each model, we freeze the embeddings and the weights of the first N layers, then fine-tune the rest using the best hyperparameters of the full model. Specifically, if L is the number of lay- We report three relevant operating points in Tables 3–5: two extreme operating points and an interme- diate one. The former is self-explanatory, indicat- ing fine-tuning all or none of the nonoutput layers. The latter denotes the number of necessary layers for reaching at least 90% of the full model quality, excluding CoLA, which is an outlier. From the reported results in Tables 3–5, fine- tuning the last output layer and task-specific lay- ers is insufficient for all tasks—see the rows corre- sponding to 0, 12, and 24 frozen layers. However, we find that the first half of the model is unnec- essary; the base models, for example, need fine- tuning of only 3–5 layers out of the 12 to reach 90% of the original quality—see Table 4, middle CoLA-base SST-2-base MRPC-base STS-B-base 0.0 amram y 0.00 pana 0.00 oneness . -0.05 -0.05 -0.05 1S) ] 9 -0.2 g fa Q f- = <_o.10 -0.10: -0.10 -0.4 -0.15 -0.15 -0.15: “6c i110 9 8 7 6 ~02057 11 109 8 7 6 ~02077 71 10 6 8 7 6 ~02077 a1 10 9 8 7 6 CoLA-large SST-2-large MRPC-large STS-B-large 0.05 0.00 0.00 oven Y-02, 70.05 70.05: -0.05 aaa fv] uw Q = <_o10 ~0.10/ -0.10 -oal -0.15 -0.15 -0.15 | —0.20 -0. 0.6 5) 22 20 18 16 14 12 all 22 20 18 16 14 12 — BERT - i 2051 22 20 18 16 14 12 205i 22 20 18 16 14 12 —— ROoBERTa Figure 1: Relative change in quality compared to the full models, with respect to the number of frozen initial layers, represented by the x-axes. subrow of each row group. Similarly, fine-tuning only a fourth of the layers is sufficient for the large models (see Table 5); only 6 layers out of 24 for BERT and 7 for RoBERTa. layers. This finding suggests that these models may be overparameterized for SST-2. # 5 Conclusions and Future Work # 4.2 Per-Layer Study In Figure 1, we examine how the relative qual- ity changes with the number of frozen layers. To compute a relative score, we subtract each frozen model’s results from its corresponding full model. The relative score aligns the two baselines at zero, allowing the fair comparison of the transformers. The graphs report the average of five trials to re- duce the effects of outliers. In this paper, we present a comprehensive evalu- ation of the number of final layers that need to be fine-tuned for pretrained transformer-based lan- guage models. 
We find that only a fourth of the layers necessarily need to be fine-tuned to ob- tain 90% of the original quality. One line of future work is to conduct a similar, more fine- grained analysis on the contributions of the atten- tion heads. When every component except the output layer and the task-specific layer is frozen, the fine-tuned model achieves only 64% of the original quality, on average. As more layers are fine-tuned, the model effectiveness often improves drastically— see CoLA and STS-B, the first and fourth verti- cal pairs of subfigures from the left. This demon- strates that gains decompose nonadditively with respect to the number of frozen initial layers. Fine- tuning subsequent layers shows diminishing re- turns, with every model rapidly approaching the baseline quality at fine-tuning half of the network; hence, we believe that half is a reasonable cutoff point for characterizing the models. # Acknowledgments This supported by the Natu- ral Sciences and Engineering Research Council (NSERC) of Canada, and enabled by computa- tional resources provided by Compute Ontario and Compute Canada. # References Ido Kalman Dagan, Dang Hoa, Danilo Giampiccolo, and Bernardo Magnini. 2009. The fifth PASCAL recognizing textual entailment challenge. In TAC 2009 Workshop. Finally, for the large variants of BERT and RoBERTa on SST-2 (second subfigure from both the top and the left), we observe a surprisingly consistent increase in quality when freezing 12–16 Daniel Cer, Mona Diab, Eneko Agirre, Inigo Lopez- Gazpio, and Lucia Specia. 2017. SemEval-2017 task 1: Semantic textual similarity multilingual and crosslingual focused evaluation. In Proceedings of the 11th International Workshop on Semantic Eval- uation. Ciprian Chelba, Tomas Mikolov, Mike Schuster, Qi Ge, Thorsten Brants, Phillipp Koehn, and Tony Robin- son. 2014. One billion word benchmark for mea- suring progress in statistical language modeling. In Fifteenth Annual Conference of the International Speech Communication Association. Kevin Clark, Urvashi Khandelwal, Omer Levy, and Christopher D. Manning. 2019. What does BERT look at? An analysis of BERT’s attention. arXiv:1906.04341. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies. William B. Dolan and Chris Brockett. 2005. Automati- cally constructing a corpus of sentential paraphrases. In Proceedings of the Third International Workshop on Paraphrasing. Leon A. Gatys, Alexander S. Ecker, and Matthias Bethge. 2016. Image style transfer using convolu- tional neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recog- nition. Neil Houlsby, Andrei Giurgiu, Stanislaw Jastrzebski, Bruna Morrone, Quentin De Laroussilhe, Andrea Gesmundo, Mona Attariyan, and Sylvain Gelly. 2019. Parameter-efficient transfer learning for NLP. In International Conference on Machine Learning. Shankar Iyer, Nikhil Dandekar, and Kornel Csernai. First Quora dataset release: Question pairs. 2014. and Adam: A method for stochastic optimization. arXiv:1412.6980. Olga Kovaleva, Alexey Romanov, Anna Rogers, and Anna Rumshisky. 2019. Revealing the dark secrets of BERT. arXiv:1908.08593. Hector Levesque, Ernest Davis, and Leora Morgen- stern. 2012. The Winograd schema challenge. 
In Thirteenth International Conference on the Princi- ples of Knowledge Representation and Reasoning. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Man- dar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized BERT pretraining approach. arXiv:1907.11692. Paul Michel, Omer Levy, and Graham Neubig. 2019. Are sixteen heads really better than one? arXiv:1905.10650. Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word repre- sentations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Nat- ural Language Processing. Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment tree- In Proceedings of the 2013 Conference on bank. Empirical Methods in Natural Language Process- ing. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Pro- cessing Systems. Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R. Bowman. 2018. GLUE: A multi-task benchmark and analysis plat- In Pro- form for natural language understanding. the 2018 EMNLP Workshop Black- ceedings of boxNLP: Analyzing and Interpreting Neural Net- works for NLP. Alex Warstadt, Amanpreet Singh, and Samuel R. Bow- man. 2018. Neural network acceptability judg- ments. arXiv:1805.12471. Adina Williams, Nikita Nangia, and Samuel R. Bow- man. 2018. A broad-coverage challenge corpus for sentence understanding through inference. In Pro- ceedings of the 2018 Conference of the North Amer- ican Chapter of the Association for Computational Linguistics: Human Language Technologies. Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pier- ric Cistac, Tim Rault, R’emi Louf, Morgan Funtow- icz, and Jamie Brew. 2019. HuggingFace’s Trans- formers: State-of-the-art natural language process- ing. arXiv:1910.03771. Zhilin Yang, Zihang Dai, Yiming Yang, Jaime G. Carbonell, Ruslan Salakhutdinov, and Quoc V. Le. 2019. XLNet: generalized autoregressive pretrain- ing for language understanding. arXiv:1906.08237. Matthew D. Zeiler and Rob Fergus. 2014. Visualizing and understanding convolutional networks. In Euro- pean Conference on Computer Vision. Yukun Zhu, Ryan Kiros, Rich Zemel, Ruslan Salakhut- dinov, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. 2015. Aligning books and movies: Towards story-like visual explanations by watching movies and reading books. In Proceedings of the IEEE In- ternational Conference on Computer Vision.
{ "id": "1905.10650" }
1911.03064
Reducing Sentiment Bias in Language Models via Counterfactual Evaluation
Advances in language modeling architectures and the availability of large text corpora have driven progress in automatic text generation. While this results in models capable of generating coherent texts, it also prompts models to internalize social biases present in the training corpus. This paper aims to quantify and reduce a particular type of bias exhibited by language models: bias in the sentiment of generated text. Given a conditioning context (e.g., a writing prompt) and a language model, we analyze if (and how) the sentiment of the generated text is affected by changes in values of sensitive attributes (e.g., country names, occupations, genders) in the conditioning context using a form of counterfactual evaluation. We quantify sentiment bias by adopting individual and group fairness metrics from the fair machine learning literature, and demonstrate that large-scale models trained on two different corpora (news articles, and Wikipedia) exhibit considerable levels of bias. We then propose embedding and sentiment prediction-derived regularization on the language model's latent representations. The regularizations improve fairness metrics while retaining comparable levels of perplexity and semantic similarity.
http://arxiv.org/pdf/1911.03064
Po-Sen Huang, Huan Zhang, Ray Jiang, Robert Stanforth, Johannes Welbl, Jack Rae, Vishal Maini, Dani Yogatama, Pushmeet Kohli
cs.CL, cs.CY, cs.LG
Accepted in the Findings of EMNLP, 2020
null
cs.CL
20191108
20201008
0 2 0 2 t c O 8 ] L C . s c [ 3 v 4 6 0 3 0 . 1 1 9 1 : v i X r a # Reducing Sentiment Bias in Language Models via Counterfactual Evaluation Po-Sen Huang♠♦ Huan Zhang♥♥♦ Ray Jiang♠ Robert Stanforth♠ Johannes Welbl♠♣♥ Jack W. Rae♠♣ Vishal Maini♠ Dani Yogatama♠ Pushmeet Kohli♠ ♠DeepMind ♥University of California, Los Angeles ♣University College London # Abstract Advances in language modeling architectures and the availability of large text corpora have driven progress in automatic text generation. While this results in models capable of gener- ating coherent texts, it also prompts models to internalize social biases present in the training corpus. This paper aims to quantify and reduce a particular type of bias exhibited by language models: bias in the sentiment of generated text. Given a conditioning context (e.g., a writing prompt) and a language model, we analyze if (and how) the sentiment of the generated text is affected by changes in values of sensitive attributes (e.g., country names, occupations, genders) in the conditioning context using a form of counterfactual evaluation. We quan- tify sentiment bias by adopting individual and group fairness metrics from the fair machine learning literature, and demonstrate that large- scale models trained on two different corpora (news articles, and Wikipedia) exhibit consid- erable levels of bias. We then propose embed- ding and sentiment prediction-derived regular- ization on the language model’s latent repre- sentations. The regularizations improve fair- ness metrics while retaining comparable levels of perplexity and semantic similarity. Generated Continuations Sentiment Distribution Conditioning Text with Attribute had a grand time organising...(0.97) ‘re working ona prototype for her banana bread recipe...(0.51 My friend is a/ an _, and we... baker accountant ie} 50 100 count hear from her all the time all the problems...(0.17) Figure 1: Conditioning text “My friend is a/an <occupation>, and we...”, alongside various text con- tinuations generated by a GPT-2 language model. On the right, the empirical sentiment distribution of the generated texts is shown: they reveal a system- atic difference in sentiment depending on occupation (“baker’’ or “accountant”) in the conditioning context. et al., 2019). While the generation of coherent text is becoming increasingly practical, it also prompts models to internalize social biases present in the training corpus. Investigating the social impact and fairness of the text generated from language models has thus received considerable research in- terest (Solaiman et al., 2019; Wallace et al., 2019; Sheng et al., 2019). # Introduction Language modeling has advanced rapidly due to efficient model architectures (Vaswani et al., 2017; Dai et al., 2019) and the availability of large-scale datasets (Radford et al., 2019; Zellers et al., 2019). Large-scale language models have been applied not only for representation extraction to support downstream tasks (Peters et al., 2018; Devlin et al., 2019), but are also used for many natural language generation applications (Radford et al., 2019; So- laiman et al., 2019; Zellers et al., 2019; Zhang ♦Denotes equal contribution. ♥Work done during an internship at DeepMind. ♠Corresponding author: [email protected]. In this paper, we aim to both quantify and reduce a language model’s sentiment bias for a given sen- sitive attribute. Consider, for example, the condi- tioning text “My friend is a/an <occupation>, and we...” on the left of Figure 1. 
A 1.5B-parameter GPT-2 language model can generate a variety of plausible continuations to it, yet the empirical dis- tribution of sentiment scores differs depending on the occupation chosen in the conditioning context. When generating 1,000 continuations for both “ac- countant” and “baker”, and then measuring the sentiment scores of the resulting sentences using the Google Cloud sentiment API, a systematic dif- ference is revealed: the GPT-2 model tends to gen- erate continuations with more positive sentiment for “baker”, and more negative sentiment with “accountant” as the occupation. When systemati- cally evaluating this phenomenon by manipulating different sensitive attributes values (e.g., country names, occupations, or person names) in the condi- tioning context – that is, performing counterfactual evaluation – we find that sentiment scores for the generated texts can vary substantially, suggesting the existence of sentiment bias. Such a sentiment bias can pose a concern for using the text generated by language models in downstream applications (e.g., dialogue agents (Zhang et al., 2019)) from a fairness perspective. To quantify sentiment bias, we propose the use of individual and group fairness metrics from the fair machine learning literature (Dwork et al., 2012; Jiang et al., 2019; Hardt et al., 2016). We further- more propose a general framework to reduce sen- timent bias given a fairness specification based on sensitive attributes (e.g., fairness w.r.t. a predefined set of occupation names). Using this framework, we propose embedding and sentiment prediction- derived regularization on the language model’s la- tent representations. Experiments demonstrate that both proposed methods reduce sentiment bias while retaining a comparable level of perplexity and semantic similarity, and show a trade-off be- tween fairness and semantic relevance. While specifying concretely what optimal model fairness behavior should be is difficult – it might be defined by law or regulators – we provide a general framework to address given fairness specifications on sensitive attributes. Our main contributions are: • We demonstrate the existence of systematic counterfactual sentiment bias in texts generated by large-scale language models (§3). • We propose two novel metrics: individual and group fairness metrics to quantify counterfactual sentiment bias in language generation (§3). • To the best of our knowledge, this paper is the first to introduce a general framework to reduce bias under a specification measure (e.g., senti- ment) for texts generated by language models given sensitive attributes. While we focus on sentiment biases on a few common sensitive attributes (country, occupation and name), the framework can be generalized to other specifica- tions (§4). • We evaluate the proposed methods using both automatic metrics and human evaluations of sen- timent and semantic relevance, and find a strong correlation between automatic metrics and hu- man evaluations (§5). # 2 Background & Related Work Bias in natural language processing systems. Besides learning to favor the language of the au- thors’ demographic group (Hovy and Søgaard, 2015), NLP models can pick up on a variety of cultural associations and undesirable social bi- ases (Caliskan et al., 2017). 
Systematic imbalances were observed across NLP tasks, such as gender bias in coreference resolution (Zhao et al., 2018; Rudinger et al., 2018), visual semantic role labeling (Zhao et al., 2017), image captioning (Hendricks et al., 2018), and demographic biases in language generation (Sheng et al., 2019), text classification (Dixon et al., 2018; Garg et al., 2019). Concretely in sentiment analysis, Kiritchenko and Mohammad (2018) found systematic biases with respect to race and gender across more than 200 systems. Mitigating bias in language models. Rather than debiasing word embeddings, Lu et al. (2018) proposed counterfactual data augmentation as a remedy to occupation-specific gender biases, and found that it can much better retain model perfor- mance than debiasing word embeddings, especially in language modeling. Zhao et al. (2019) and Basta et al. (2019) demonstrated gender bias in pretrained language modeling representations (ELMo), which translates into downstream tasks, but did not con- sider the language generated by the ELMo lan- guage model. Bordia and Bowman (2019), as well as Qian et al. (2019) identified biases in a language modeling context and propose regularization strate- gies of generating certain words (e.g., “doctor”) with differently gendered inputs. In contrast to these prior works on mitigating gender biases of language models based on the probabilities of generating certain words (such as occupation ratios), we probe texts generated by lan- guage models using a sentiment analysis system, similar to Sheng et al. (2019). We further propose a general framework to mitigate bias for a given specification (e.g., fairness w.r.t. predefined coun- try names, occupations, gendered names) under a specification measure (e.g., sentiment, regard, etc.). Prior work mostly considers comparatively small language modeling training sets. In contrast, we investigate bias in Transformer-based models with a similar number of parameters (708 million pa- rameters) to GPT-2 (Solaiman et al., 2019) trained on English news articles from WMT-19 (40GB of text) and WikiText-103 (Merity et al., 2016). Fairness. Popular statistical fairness criteria of- ten aim at achieving individual fairness (Dwork et al., 2012) or group fairness (Hardt et al., 2016) goals. In recent years, causal inference tools are also used in fairness research to extend beyond sta- tistical fairness criteria making use of causal graphs. Similar to individual fairness, which requires simi- lar individuals to be treated similarly (Dwork et al., 2012), counterfactual fairness requires the same model predictions before and after intervention on sensitive attributes in data-generating causal graphs (Kusner et al., 2017; Kilbertus et al., 2017; Chiappa, 2019; Chiappa and Isaac, 2019). In our problem setting, we deviate from the counterfactual fairness works above by considering counterfactual fairness (Garg et al., 2019) based on a simple causal graph representing the language model instead of the data-generating process. We aim towards counterfactual fairness by debiasing the latent representation of inputs in the language models, contributing to a family of methods to learn fair representations (Beutel et al., 2017; Zemel et al., 2013; Creager et al., 2019; Edwards and Storkey, 2016; Louizos et al., 2016) and enforcing independence between sensitive attributes and pre- diction outputs (Calders et al., 2009; Zhang et al., 2018; Jiang et al., 2019; Chiappa et al., 2020). 
# 3 Counterfactual Evaluation of Sentiment Bias

Fairness specification. Our goal is to reduce the counterfactual sentiment bias in a language model, given a fairness specification. In our specification, we consider a set of sensitive attribute values (e.g., country names, occupations, and person names) of a sensitive attribute (e.g., Country, Occupation, Name) that we want generated texts to be fair to under counterfactual evaluation. Formally, considering for example the sensitive attribute Gender, we use A = {female, male} to denote the set of values considered, and use A = a to denote a random variable A that takes the sensitive attribute value a ∈ A. For each input sequence x containing sensitive tokens φ(a) (which are given in the specification, e.g., φ(a) = {he, his, him, husband, Paul} for a = male), we choose another value ã of the sensitive attribute from the set A \ {a}, and define the counterfactual input x̃ = cf(x, a, ã) by replacing all occurrences of each sensitive token in φ(a) with the corresponding token in φ(ã), and leaving all other non-sensitive tokens of x unchanged. Given a predefined sentiment classifier fs with sentiment outputs in [0, 1], and a pretrained language model LM, so that the random variable LM(x) is a sentence sampled from the language model conditioned on x, we define the random variable S(x) = fs(LM(x)) to be the sentiment score in [0, 1] of the generated sentence, and denote its distribution by PS(x).

Next, for counterfactual evaluation, we measure the difference between PS(x) and PS(x̃) as follows. When quantifying the difference between two output distributions for a binary classification problem – such as sentiment prediction – we typically consider predictions formulated as ŷ = 1(S > τ), given a decision threshold τ. One fundamental fairness concept is "demographic parity" for binary classification problems, which requires equal positive classification rates across subgroups, i.e., p(ŷ = 1 | A = a) = p(ŷ = 1 | A = ã) for any sensitive attribute values a, ã ∈ A. We can measure deviation from it, i.e., "demographic disparity", using the difference between the subgroup positive rates: |p(ŷ = 1 | A = a) − p(ŷ = 1 | A = ã)| (cf. Prop. 3.1 in Dwork et al. (2012)). However, often we do not want our fairness goal to be dependent on a predetermined decision threshold τ, since τ may be user-defined or simply not known at training time. This consideration leads us to match output distributions, which is called "Strong Demographic Parity" (Jiang et al., 2019). Concretely applied in our LM context, these distributions are PS(x) for A = a and PS(x̃) for A = ã.

Extending this definition to measure unfairness between counterfactual pairs of subgroups, demographic disparity is the difference between positive sentiment rates of S(x) and S(x̃): |p(S(x) > τ) − p(S(x̃) > τ)|. We can then measure the deviation by computing the statistical disparity averaged over uniformly random choices of τ ∈ [0, 1], that is, Eτ∼U[0,1] |p(S(x) > τ) − p(S(x̃) > τ)|, where U denotes the uniform distribution. This quantity is equal to the Wasserstein-1 distance between PS(x) and PS(x̃) (Jiang et al., 2019):

W1(PS(x), PS(x̃)) = Eτ∼U[0,1] |p(S(x) > τ) − p(S(x̃) > τ)|   (1)

Figure 2: Illustration of the Wasserstein-1 distance-based fairness metrics on two Gaussian distributions truncated to [0, 1], simulating sentiment scores. (a) W1(·, ·) = 0.1; (b) W1(·, ·) = 0.01.
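To make the counterfactual substitution cf(x, a, ã) and the Wasserstein-1 distance of Eq. 1 concrete, the sketch below is our own illustration (not code from the paper); the helper names build_counterfactual and w1_sentiment_gap, the use of scipy, and the placeholder sentiment scores are all assumptions.

```python
# Illustrative sketch (not the authors' code): counterfactual prompts and
# the Wasserstein-1 distance of Eq. 1 between empirical sentiment scores.
import re
import numpy as np
from scipy.stats import wasserstein_distance  # W1 for 1-D empirical samples


def build_counterfactual(text, phi_a, phi_a_tilde):
    """Replace each sensitive token of subgroup a with its counterpart in
    subgroup a~ (the cf(x, a, a~) operation), leaving other tokens unchanged."""
    out = text
    for tok_a, tok_tilde in zip(phi_a, phi_a_tilde):
        out = re.sub(rf"\b{re.escape(tok_a)}\b", tok_tilde, out)
    return out


def w1_sentiment_gap(scores_x, scores_x_tilde):
    """Eq. 1: W1 between the sentiment-score distributions P_S(x) and P_S(x~),
    estimated from samples of generated continuations scored in [0, 1]."""
    return wasserstein_distance(np.asarray(scores_x), np.asarray(scores_x_tilde))


if __name__ == "__main__":
    prompt = "My friend is a baker, and we"
    cf_prompt = build_counterfactual(prompt, ["baker"], ["accountant"])
    # Placeholder sentiment scores; in the paper these come from scoring
    # 1,000 sampled continuations per prompt with an external classifier.
    s_x = np.random.beta(5, 2, size=1000)        # skewed positive
    s_x_tilde = np.random.beta(2, 5, size=1000)  # skewed negative
    print(cf_prompt, w1_sentiment_gap(s_x, s_x_tilde))
```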
For comparison, the Wasserstein-1 distance for the two sentiment distributions in Figure 1 is 0.13. Sentiment bias by counterfactual evaluation, i.e., counterfactual sentiment bias, is then the Wasserstein-1 distance between output sentiment distributions PS of the original input x and its counterfactual x̃. Thus, extending Garg et al. (2019), we define a model to be counterfactually fair for sentiment if

W1(PS(x), PS(cf(x, a, ã))) ≤ ε   (2)

for each sensitive attribute value a ∈ A, ã ∈ A \ {a}, and a chosen threshold ε > 0. This fairness formulation also expresses individual fairness, which requires similar individuals to be treated similarly (Dwork et al., 2012), where similar individuals share similar non-sensitive words in a sentence. Note that using the Wasserstein-1 distance to compare two distributions does not require assumptions on their shape (e.g., symmetry).

Fairness evaluation. For each sensitive attribute, we measure the individual fairness and group fairness metrics from distributions of sentiment scores PS on the evaluation set in the following ways.

Individual Fairness Metric. Based on the fairness property of the Wasserstein-1 distance (Eq. 1), we compute the Average Individual Fairness by averaging the Wasserstein-1 distance between the sentiment score distribution of every evaluation sentence PS(x) and each of its counterfactual sentences PS(x̃) across all M templates.¹ Formally, we define the individual fairness metric (denoted by I.F.) as:

I.F. := 1/(M |A| (|A| − 1)) Σ_{m=1}^{M} Σ_{a, ã ∈ A, a ≠ ã} W1(PS(x^m), PS(x̃^m))   (3)

where the inner sum is over all |A|(|A| − 1) pairs of distinct a, ã ∈ A, and a, ã are the values of the sensitive attribute in x^m and x̃^m respectively.

¹During inference, for each sensitive variable A we design a set of sentence templates to evaluate the counterfactual sentiment bias. See §5 for details.

Group Fairness Metric. This metric measures fairness for particular subgroups. Concretely, the evaluation sentences are separated into |A| = K disjoint subgroups, assigning a sentence to a subgroup a if it contains sensitive tokens from φ(a). Taking for example the sensitive attribute Name and selecting A = {male, female}, we have K = 2, and φ(male) = {Jake, Scott, Jacob, . . .} for a = male.² For each subgroup a ∈ A, we then measure the Wasserstein-1 distance between the sentiment distribution of all generated sentences of inputs from this subgroup, denoted by P^a_S, and that over the entire evaluation set, denoted by P^*_S. We report the average of all these subgroup Wasserstein-1 distances as the Average Group Fairness metric, denoted by G.F.:

G.F. := 1/|A| Σ_{a∈A} W1(P^a_S, P^*_S)   (4)

²Here gender is treated as a binary variable.
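As a worked illustration of Eqs. 3 and 4 (our sketch, not the released evaluation code), the snippet below averages W1 over templates and pairs of attribute values for I.F., and compares each subgroup's scores to the pooled scores for G.F.; the dictionary layout of `scores` is an assumption about how the generated-sentence scores are stored.

```python
# Illustrative sketch of the I.F. (Eq. 3) and G.F. (Eq. 4) metrics.
# `scores[m][a]` is assumed to hold the sentiment scores (array-like, in [0, 1])
# of the sentences generated from template m completed with attribute value a.
from itertools import combinations
import numpy as np
from scipy.stats import wasserstein_distance


def individual_fairness(scores):
    """Average W1 over all templates and all pairs of distinct attribute values."""
    gaps = []
    for per_template in scores:  # one dict {attribute value -> scores} per template m
        for a, a_tilde in combinations(per_template, 2):
            gaps.append(wasserstein_distance(per_template[a], per_template[a_tilde]))
    return float(np.mean(gaps))


def group_fairness(scores):
    """Average W1 between each subgroup's scores and the scores pooled over
    the whole evaluation set."""
    subgroups = {a for per_template in scores for a in per_template}
    pooled = np.concatenate([per_template[a] for per_template in scores
                             for a in per_template])
    gaps = [
        wasserstein_distance(
            np.concatenate([pt[a] for pt in scores if a in pt]), pooled)
        for a in subgroups
    ]
    return float(np.mean(gaps))
```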
# 4 Language Models with Fair Sentiment Distribution

In this section, we introduce two approaches for reducing counterfactual sentiment bias in language models, which will be subsequently evaluated with the above described fairness metrics. Given an input prefix x1:i with i tokens, x1:i = (x1, · · · , xi), where the last token xi ∈ φ(a) is associated with a subgroup with value a of the sensitive attribute, we construct a perturbed prefix by replacing xi with a token x̃i ∈ φ(ã) from a different subgroup ã, where fairness between the two subgroups should be maintained. We obtain a perturbed prefix x̃1:i = (x1:i−1, x̃i). To train the language model towards reducing counterfactual sentiment bias, we want to ensure that the language model produces similar sentiment distributions for the two prefixes. Specifically, we would like the Wasserstein-1 distance between the sentiment distributions of generated sentences, PS(x1:i) and PS(x̃1:i), to be small, as shown in Eq. 2. But in practice, it is prohibitively expensive to sample a distribution of generated sequences for every x1:i and x̃1:i during training. Instead, we use hidden features from the language model as a proxy to represent the distribution of future generated sequences, since p(xi+1, xi+2, · · · | x1:i) and p(xi+1, xi+2, · · · | x̃1:i) depend on the hidden states of the language model conditioned on x1:i and x̃1:i, respectively.

Concretely, we explore two approaches: Fairness through embedding regularization and Fairness through sentiment regularization, which exploit the hidden states of the language model. Given an L-layer transformer-based language model with an input x1:i, we let h(x1:i) = (h^(1)(x1:i), . . . , h^(L)(x1:i)) denote the hidden features (or contextual embeddings) obtained by its hidden layers.

Fairness through embedding regularization. In this approach, we desire that the embeddings h(x1:i) and h(x̃1:i) are close, since the joint distributions p(xi+1, xi+2, · · · | x1:i) and p(xi+1, xi+2, · · · | x̃1:i) are determined by these embeddings. We call it the "embedding regularization" approach, and define the fairness loss as a distance between the embeddings, denoted as d(h(x1:i), h(x̃1:i)). We use the cosine distance:

d(h(x1:i), h(x̃1:i)) := 1 − h(x1:i)ᵀ h(x̃1:i) / (‖h(x1:i)‖ ‖h(x̃1:i)‖)

where h(x1:i) is set as the average of the last two embedding vectors h^(L−1)(x1:i) and h^(L)(x1:i), based on the following two reasons: First, we want to capture high-level semantics (e.g., sentiments), and embeddings in later layers represent higher-level semantics (Tenney et al., 2019). Second, we find that averaging too many layers can make the difference between h(x1:i) and h(x̃1:i) very small, reducing the effectiveness of regularization. An advantage of this method is that it can directly be applied to fairness specifications beyond sentiment, as it encourages p(xi+1, xi+2, · · · | x1:i) and p(xi+1, xi+2, · · · | x̃1:i) to be close regardless of the specification measure (e.g., sentiment).

Since the embedding regularization method enforces the model's predictions to be similar for the original input x1:i and the perturbed input x̃1:i without specification measure information, a potential drawback of this method is that the regularization can be too strong. As we require the hidden representations (and thus the joint probabilities) to be as close as possible, this can lead to the model learning to ignore the sensitive tokens, and thus generally a reduced dependence on them, as shown in Appendix C.6. Despite being completely fair in this extreme case, model performance may suffer since the generated texts should ideally be contextually conditioned on xi or x̃i.
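The embedding-regularization loss can be written in a few lines. The following PyTorch-style sketch is ours (the paper's actual implementation details may differ) and assumes `hidden_states` is the tuple of per-layer activations of shape (batch, sequence, dim).

```python
# Illustrative PyTorch sketch of the embedding-regularization fairness loss:
# cosine distance between the (averaged last-two-layer) hidden features of
# the original prefix x_{1:i} and the perturbed prefix x~_{1:i}.
import torch
import torch.nn.functional as F


def prefix_embedding(hidden_states):
    """hidden_states: tuple of L tensors, each of shape (batch, seq, dim).
    Average the last two layers and take the final prefix position."""
    last_two = torch.stack([hidden_states[-1], hidden_states[-2]])  # (2, B, T, D)
    return last_two.mean(dim=0)[:, -1, :]                           # (B, D)


def embedding_fairness_loss(hidden_x, hidden_x_tilde):
    """1 - cos(h(x_{1:i}), h(x~_{1:i})), averaged over the batch."""
    h = prefix_embedding(hidden_x)
    h_tilde = prefix_embedding(hidden_x_tilde)
    return (1.0 - F.cosine_similarity(h, h_tilde, dim=-1)).mean()
```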
Fairness through sentiment regularization. To overcome the above-mentioned drawback, we propose an alternative method for eliminating sentiment bias using a sentiment classifier. Instead of measuring d(h(x1:i), h(x̃1:i)) directly, we first apply a sentiment classifier fsh to both h(x1:i) and h(x̃1:i), and measure d(fsh(h(x1:i)), fsh(h(x̃1:i))) instead. Note that the output of fsh can be multi-dimensional (e.g., a hidden layer in the sentiment classifier), and we can again measure the distance via cosine similarity. Applying the classifier fsh can be seen as a projection from h(x) to a subspace that ideally only contains sentiment-related information. If such a perfect projection exists, we can regularize the sentiment difference between the two inputs without losing other information of the sensitive tokens. On the one hand, this classifier-based sentiment regularization approach avoids the strong regularization of enforcing embedding similarity. On the other hand, the effectiveness of this method is correlated with the quality of the sentiment classifier (or sentiment "projection").³ The detailed implementation of fsh is introduced in Appendix B. This method can be extended to specifications with other specification measures beyond sentiment by using a corresponding classifier fsh.

³We use a sentiment classifier as a proxy to measure sentiment scores/biases in this paper. The classifier itself might not be perfect and might exhibit some biases; for this reason we compare several alternatives.

Three-step curriculum training. We use a three-step curriculum training schema. First, we train a language model using a regular cross-entropy loss for predicting the next token given all the previous tokens, as done in a typical language model training setting; a good validation perplexity ensures a relatively good hidden feature space has been learned. Second, using this language model, we train a sentiment classifier fsh (e.g., a simple multilayer perceptron (MLP)) using the extracted features from the language model. Since sentiment labels are generally unavailable for a large-scale corpus, we label the training data with the Google Cloud sentiment API⁴ and train a sentiment classifier on the data with high magnitude. Third, with the fixed fsh from the previous step, we continue training on the subset of the original language model training set that contains any of the sensitive tokens, with an additional fairness loss Lfairness based on our "embedding regularization" or "sentiment regularization" methods with a regularization parameter λ. Meanwhile, the language model is also trained on the regular cross-entropy loss (LLM) for predicting the next token of the unperturbed input x. Concretely, the loss function for an input sequence x during the third step is:

L(x) = LLM(x) + λ · Lfairness(h(x1:i), h(x̃1:i))

We refer to this third step as the "debiasing step", as illustrated in Figure 3. Note that we do not use any template at any step of training.

⁴https://cloud.google.com/natural-language/

Figure 3: Proposed language model debiasing pipeline (the third step in curriculum training).
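A minimal sketch of the debiasing step is given below, under our own assumptions about the model interface (a language model `lm` that returns both the next-token loss and per-layer hidden states, and a frozen classifier head `f_sh`); it is an illustration of the objective above, not the authors' implementation.

```python
# Illustrative sketch of the debiasing objective
#   L(x) = L_LM(x) + lambda * L_fairness(h(x_{1:i}), h(x~_{1:i})),
# covering both the embedding- and sentiment-regularization variants.
# The `lm(x, return_hidden=True)` interface and tensor shapes are assumptions.
import torch
import torch.nn.functional as F


def fairness_distance(h, h_tilde, f_sh=None):
    """Embedding regularization if f_sh is None, otherwise sentiment
    regularization: compare projections f_sh(h) instead of h directly."""
    if f_sh is not None:
        h, h_tilde = f_sh(h), f_sh(h_tilde)
    return (1.0 - F.cosine_similarity(h, h_tilde, dim=-1)).mean()


def debias_step(lm, f_sh, x, x_tilde, prefix_len, lam):
    # Standard next-token cross-entropy on the unperturbed sequence x.
    lm_loss, hidden_x = lm(x, return_hidden=True)
    _, hidden_x_tilde = lm(x_tilde, return_hidden=True)
    # Prefix features: average of the last two layers at the last prefix token.
    h = (hidden_x[-1] + hidden_x[-2])[:, prefix_len - 1, :] / 2
    h_tilde = (hidden_x_tilde[-1] + hidden_x_tilde[-2])[:, prefix_len - 1, :] / 2
    return lm_loss + lam * fairness_distance(h, h_tilde, f_sh)
```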
# 5 Experiments

We now evaluate our proposed sentiment regularization and embedding regularization methods via both automatic scores and human evaluations.

# 5.1 Training details

Model and datasets. We train two TransformerXL (Dai et al., 2019) language models similar in scale to GPT-2 (Radford et al., 2019) on a medium-scale corpus of Wikipedia articles (i.e., WikiText-103) and a large-scale corpus of English news articles from the WMT-19 document-level translation task (WMT-19).⁵ We present dataset statistics, model architectures, and training details in Appendix B.

⁵http://data.statmt.org/news-crawl/

Sentence templates. For each sensitive attribute, we design a set of M = 10 templates to evaluate counterfactual sentiment bias. Each m-th template is a sentence prefix with length im, m = 1, . . . , M, containing a placeholder that will be replaced by a sensitive token in φ(a) for each sensitive attribute value a ∈ A. In other words, for each template we complete it by inputting the appropriate sensitive token for every a ∈ A, forming a prefix x1:im which is used as input to the language model to condition its generation on. We sample 1000 sentences conditioned on each input prefix, and we apply an external sentiment classifier fs on the generated sentences. All templates are described in Appendix A.

Model selection. We train language models using both embedding-regularization and sentiment-regularization losses with different regularization strengths. Based on the losses on the validation set, we report λ ∈ {1, 10, 100} for embedding-regularization and λ ∈ {10, 100, 1000} for sentiment-regularization on WMT-19, and λ ∈ {1, 10, 100} for both embedding-regularization and sentiment-regularization on WikiText-103.

# 5.2 Fairness Specifications

Sensitive attributes and subgroups. We consider three common sensitive attributes (Country, Occupation, and Name) to measure the counterfactual sentiment bias in language models. Country contains 10 country names and Occupation includes 29 common occupations. For Name, we have 17 female and 17 male common names. We list all sensitive attribute values used in our experiments in Appendix A. To compute the group fairness metric, we treat each country name and each occupation as its own subgroup. For Name, we consider all female (male) names as one subgroup.

Employing specific templates for model evaluation is a commonly used practice (Zhao et al., 2018; Qian et al., 2019; Sheng et al., 2019), but we acknowledge that they can lack context-sensitivity, and that such evaluation is necessarily limited and not comprehensive. Indeed, we see the advancement of model evaluation beyond specific templates as an important open research problem. Note that during the training process (see Figure 3), we do not add any of the templates to the training set; it is thus unlikely that our models overfit to them. Importantly, the templates are used during evaluation only and our models need to generalize to the templates to be effective.

# 5.3 Evaluation Metrics

Sentiment analysis and fairness metrics. Calculating the individual fairness (I.F.) and group fairness (G.F.) scores using Eq. 3 and Eq. 4 requires sentiment scores from a sentiment classifier fs. We evaluate the generated sentences using three sentiment classifiers: i) the Google Cloud sentiment API, ii) a BERT (Devlin et al., 2019)-based sentiment classifier fine-tuned on the SST dataset (Socher et al., 2013), resulting in 92.7% validation accuracy, and iii) a simple opinion-word-based sentiment classifier, which counts the number of positive opinion words p and the number of negative opinion words n (Hu and Liu, 2004) and derives its sentiment score as p/(p + n), and 0.5 if no opinion words exist. We include this simple classifier because the Google Cloud sentiment API and the BERT-based classifier may themselves contain bias, which has been shown for many sentiment analysis systems (Kiritchenko and Mohammad, 2018).
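The opinion-word-based scorer is simple enough to state in full. The sketch below is our rendering of the p/(p + n) rule with the 0.5 fallback; the small POSITIVE_WORDS and NEGATIVE_WORDS sets stand in for the actual Hu and Liu (2004) lexicons, and the whitespace tokenization is an assumption.

```python
# Illustrative sketch of the opinion-word-based sentiment classifier:
# score = p / (p + n) over positive/negative opinion lexicons,
# with 0.5 returned when no opinion word occurs.
POSITIVE_WORDS = {"good", "great", "exciting", "love", "happy"}   # placeholder lexicon
NEGATIVE_WORDS = {"bad", "terrible", "boring", "hate", "sad"}     # placeholder lexicon


def opinion_word_sentiment(sentence):
    tokens = sentence.lower().split()
    p = sum(tok in POSITIVE_WORDS for tok in tokens)
    n = sum(tok in NEGATIVE_WORDS for tok in tokens)
    if p + n == 0:
        return 0.5            # no opinion words: neutral score
    return p / (p + n)


print(opinion_word_sentiment("We had a great and exciting trip"))  # -> 1.0
```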
The opinion-word-based method, while being less ac- curate (69.6% accuracy on the SST validation set), is less prone to giving biased judgments, as it does not contain sensitive tokens or learned associations: it only relies on opinion words. Furthermore, since we also use the Google Cloud sentiment API to create the sentiment labels of the training data for learning fsh, the BERT-based and opinion-word- based sentiment classifiers provide additional mea- sures of sentiment, helping to avoid findings spe- cific to one sentiment classification system in par- ticular. We also conduct a human evaluation on the correlation between automatic sentiment scores and human judgments (see §5.5). Language model performance One special case of a fair language model is to generate the same continuations regardless of the sensitive at- tribute tokens or prefixes (e.g., Appendix C.6). However this deteriorates the original language model’s performance, and we expect the model to still capture semantics related to the given sensitive tokens. Thus, in addition to the fairness metrics, it is important to examine the performance of lan- guage models. Here, we evaluate perplexity and semantic similarity for assessing language model performance and generation relevance. Perplexity (PPL) and subset perplexity (PPLs). We report the perplexity (PPL) on the whole test set of WMT-19/WikiText-103, and the perplexity on a subset of the test set that includes articles with at least one sensitive token (PPLs). The perplexity on the whole test set reflects the language model’s overall performance. Since the sensitive tokens only exist in a small fraction of test data, the subset perplexity PPLs examines the lan- guage model performance specifically in contexts containing sensitive tokens.6 Semantic Similarity (“S.S.” and “S.S.c”). We compute the cosine similarity between the em- bedding of both the prefix and the generated contin- uations using the universal sentence encoder (Cer et al., 2018). A generated continuation is consid- ered semantically similar if the cosine similarity is above a given threshold (set to 0.4; see Appendix C.7 for further details). The fraction of gener- ated continuations with above-threshold similarity among all generated continuations then defines the semantic similarity metric (denoted as “S.S.”). We report this S.S. as a proxy for whether the gener- ated sentences capture the original semantics. In addition, we report the fraction of generated con- tinuations mentioning the sensitive attribute tokens as a second proxy for semantic relevance (denoted as “S.S.c”). We also conduct a human evaluation of semantic similarity, and find a strong correlation between semantic relevance and human judgments (see §5.5). # 5.4 Evaluation Results Fairness Improvements. In Figure 4, we report the fairness metrics of the sensitive attribute Oc- cupation for models trained on the WMT-19 and WikiText-103 datasets. We evaluate the individ- ual fairness and group fairness metrics using a set of sentences generated from the templates and prefixes given in Appendix A. Importantly, dur- ing training we never explicitly train the model on these templates. The baseline model repre- sents the model after the first step of the curricu- lum training, before any debiasing steps are per- formed. Each fairness metric is evaluated using three different sentiment classifiers: the BERT- based and opinion-word-based classifier in Fig- ures 4 and 5, and Google Cloud sentiment API in Appendix C.1. 
For embedding-regularization and sentiment-regularization methods, we report the performance of the two methods with different regularization parameters for the fairness loss. Overall, we observe that both proposed approaches achieve reduced bias in both individual fairness and group fairness metrics compared to the baseline model. A larger regularization parameter λ typically reduces the bias further. The results for the sensitive attributes Country and Name can be found in Appendices C.2 and C.3, and the overall findings are similar to those for the sensitive attribute Occupation discussed here.

⁶We train all models to convergence. To rule out the different numbers of total training iterations as a potential confounding factor between the fine-tuned and standard model, we also trained baseline models with this same additional number of iterations on standard training data. We found performance differences to be insignificant, both in terms of perplexity as well as fairness metrics.

Figure 4: I.F. and G.F. improvements on WMT-19 and WikiText-103 datasets for the Occupation attribute using a BERT-based sentiment classifier, for both embedding regularization ("Embed-λ") and sentiment regularization ("Sent-λ") methods under different regularization strengths λ. Panels: (a) WMT-19, I.F.; (b) WMT-19, G.F.; (c) WikiText-103, I.F.; (d) WikiText-103, G.F. Note a lower I.F./G.F. is better.

Figure 5: Individual fairness score (I.F.) and group fairness score (G.F.) improvements on WMT-19 and WikiText-103 datasets for the Occupation attribute, with the opinion-word-based classifier. Panels: (a) WMT-19, I.F.; (b) WMT-19, G.F.; (c) WikiText-103, I.F.; (d) WikiText-103, G.F. Note a lower I.F./G.F. is better.

Table 1: Perplexity and semantic similarity scores of WMT-19 and WikiText-103 models for the Occupation attribute. A lower perplexity is better; higher semantic similarity scores (S.S. and S.S.c) are better.

Model              | WMT-19: PPL  PPLs  S.S.  S.S.c | WikiText-103: PPL  PPLs  S.S.  S.S.c
Baseline           | 18.0  17.9  17.9   9.9         | 21.4  18.9  40.3  24.3
Emb. Reg. λ=1      | 17.6  17.6  12.8   5.6         | 20.9  18.4  24.4   3.7
Emb. Reg. λ=10     | 17.9  17.8   7.3   2.2         | 20.8  18.5  24.0   3.1
Emb. Reg. λ=100    | 18.5  18.5   5.9   1.8         | 20.8  18.4  23.7   3.9
Sent. Reg. λ=1     |  -     -     -     -           | 21.0  18.4  32.4  11.9
Sent. Reg. λ=10    | 17.7  17.6  14.5   6.4         | 20.9  18.4  28.2   8.9
Sent. Reg. λ=100   | 17.7  17.7  10.8   4.5         | 21.0  18.4  22.6   3.4
Sent. Reg. λ=1000  | 17.9  17.9   8.4   2.4         | 21.0  18.4  22.8   2.0

Trade-off between generation quality and fairness. In Table 1, we present the perplexity⁷ and semantic similarity of the models in Figure 4. Overall, we observe a trade-off between fairness and semantic similarity.

⁷Since we do not further train our baseline model with the additional epochs of the debiasing step, both PPL and PPLs can sometimes slightly improve, while improving fairness measures.

To further illustrate the trade-off between fairness and relevance of generated texts, in Figure 6 we show both semantic similarity (S.S.) and individual fairness scores (I.F.) under different regularization strengths for WMT-19 models on the sensitive attributes Country, Occupation, and Name. We observe that the sentiment regularization based models achieve higher semantic similarity scores than embedding regularization based models at a similar level of individual fairness score. On the other hand, with similar semantic similarity scores, the sentiment regularization based models achieve better individual fairness scores than embedding regularization based models. Both proposed approaches improve the individual fairness scores significantly compared to the baseline models.
The sentiment regularization based models further improve the individual fairness score by a large margin while maintaining similar semantic similarity.

Figure 6: Trade-off between I.F. and S.S. using a BERT-based sentiment classifier. Panels: (a) WMT-19 Country; (b) WMT-19 Occupation; (c) WMT-19 Name. A lower I.F. is better (note that the y-axis is reversed); a higher S.S. is better. Each point represents a model trained using a certain λ. Overall, both embedding and sentiment regularization help reduce I.F., and sentiment regularization works better than embedding regularization.

# 5.5 Human Evaluation

We perform a human evaluation to justify the use of automatic measurements of both semantic relevance and sentiment, and the effectiveness of the proposed fairness scores. We have 19 human annotators; each annotator labels 50–100 sentences, and each sentence is rated by 2 human annotators. We measure the Spearman's correlations with human predictions in three settings, demonstrating that the automatic metrics are positively correlated with human judgment. Sentiment Scores: we evaluate 100 sentences randomly generated by our language model. The human annotation is then compared against the Google Cloud sentiment API, BERT-based sentiment classifier, and the opinion-word-based classifier. The Spearman's correlations with human predictions are 0.75, 0.79, and 0.67, respectively. Semantic Similarity: we present a sensitive token, the associated prefix, and the 100 generated sentences from the language models to human annotators. We ask the annotators to label the relevance, and then compare human annotations with the "S.S." and "S.S.c" scores; the Spearman's correlations with human evaluation are 0.72 and 0.63, respectively. Fairness Scores: we compute the I.F. score using sentiment scores from human evaluation for baseline and sentiment-regularization models in the following two settings. Firstly, we randomly sample 50 sentences from each of the prefixes "You are a designer, and you" and "You are an accountant, and you". The I.F. scores are 0.333 and 0.056 for baseline and sentiment-regularization models, respectively. Secondly, we use instead the prefixes "Libya is" and "Iceland is", again sampling 50 sentences from each. The I.F. score is reduced from 0.291 (baseline) to 0.155 (sentiment-regularization). Both evaluations demonstrate that our proposed method does indeed reduce sentiment bias – also under human evaluation. The annotation instructions and details are shown in Appendix D.

# 6 Conclusion

As large-scale language models are increasingly deployed for real-world applications, developing methods for assessing and mitigating bias with respect to sensitive attributes is an important area of inquiry to enable pro-social outcomes. In this paper, we have studied counterfactual sentiment bias in texts generated by large-scale language models. We have quantified the presence of sentiment bias using our proposed novel fairness metrics based on Wasserstein distance, and demonstrated two flexible methods to reduce counterfactual sentiment bias, while maintaining similar perplexity and generation semantics. For future work, the proposed framework could be extended to study counterfactual biases given other specifications (e.g., religion, ethnicity, age, or multiple-attribute cross-subgroups) that require fairness guarantees, and could be used with other specification measures beyond sentiment.

# Acknowledgments

The authors thank the anonymous reviewers, Gábor Melis, Stephen Clark, Chris Dyer, Jonathan Uesato, Martin Szummer, Silvia Chiappa, Andrew Strait, Emily Sheng, Sumanth Dathathri, and Cyprien de Masson d'Autume for helpful feedback and comments for the paper.

# References

Christine Basta, Marta R. Costa-jussà, and Noe Casas. 2019. Evaluating the underlying gender bias in contextualized word embeddings. In Proceedings of the First Workshop on Gender Bias in Natural Language Processing, pages 33–39, Florence, Italy. Association for Computational Linguistics.

A. Beutel, J. Chen, Z. Zhao, and E. H. Chi. 2017. Data decisions and theoretical implications when adversarially learning fair representations. CoRR, abs/1707.00075.

Shikha Bordia and Samuel R. Bowman. 2019. Identifying and reducing gender bias in word-level language models. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Student Research Workshop, pages 7–15, Minneapolis, Minnesota. Association for Computational Linguistics.
Counterfac- tual fairness in text classification through robustness. In AIES, pages 219–226. ACM. M. Hardt, E. Price, and N. Srebro. 2016. Equality of opportunity in supervised learning. In Advances in Neural Information Processing Systems 29, pages 3315–3323. Lisa Anne Hendricks, Kaylee Burns, Trevor Darrell, and Anna Rohrbach. 2018. Women also snowboard: Overcoming bias in captioning models. In Proceed- ings of the European Conference on Computer Vi- sion (ECCV), pages 771–787. Dirk Hovy and Anders Søgaard. 2015. Tagging perfor- In Proceedings mance correlates with author age. of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 483–488, Beijing, China. Association for Computational Linguistics. Minqing Hu and Bing Liu. 2004. Mining and summa- rizing customer reviews. In Proceedings of the tenth ACM SIGKDD International Conference on Knowl- edge Discovery and Data Mining, pages 168–177. ACM. Ray Jiang, Aldo Pacchiano, Tom Stepleton, Heinrich Jiang, and Silvia Chiappa. 2019. Wasserstein fair In Proceedings of the Thirty-Fifth classification. Conference on Uncertainty in Artificial Intelligence. Niki Kilbertus, Mateo Rojas Carulla, Giambattista Parascandolo, Moritz Hardt, Dominik Janzing, and Bernhard Sch¨olkopf. 2017. Avoiding discrimina- tion through causal reasoning. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vish- wanathan, and R. Garnett, editors, Advances in Neu- ral Information Processing Systems, pages 656–666. Curran Associates, Inc. Svetlana Kiritchenko and Saif Mohammad. 2018. Ex- amining gender and race bias in two hundred sen- In Proceedings of the timent analysis systems. Seventh Joint Conference on Lexical and Compu- tational Semantics, pages 43–53, New Orleans, Louisiana. Association for Computational Linguis- tics. M. J. Kusner, J. R. Loftus, C. Russell, and R. Silva. 2017. Counterfactual fairness. In Advances in Neu- ral Information Processing Systems 30, pages 4069– 4079. C. Louizos, K. Swersky, Y. Li, M. Welling, and R. Zemel. 2016. The variational fair autoencoder. In 4th International Conference on Learning Repre- sentations. Kaiji Lu, Piotr Mardziel, Fangjing Wu, Preetam Aman- charla, and Anupam Datta. 2018. Gender bias CoRR, in neural natural abs/1807.11714. Stephen Merity, Caiming Xiong, James Bradbury, and Richard Socher. 2016. Pointer sentinel mixture mod- els. arXiv preprint arXiv:1609.07843. Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word rep- In Proceedings of the 2018 Confer- resentations. ence of the North American Chapter of the Associ- ation for Computational Linguistics: Human Lan- guage Technologies, Volume 1 (Long Papers), pages 2227–2237, New Orleans, Louisiana. Association for Computational Linguistics. Yusu Qian, Urwa Muaz, Ben Zhang, and Jae Won Hyun. 2019. Reducing gender bias in word-level language models with a gender-equalizing loss func- tion. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics: Stu- dent Research Workshop, pages 223–228, Florence, Italy. Association for Computational Linguistics. Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. OpenAI Blog, 1(8). Rachel Rudinger, Jason Naradowsky, Brian Leonard, and Benjamin Van Durme. 2018. 
Gender bias in In Proceedings of the An- coreference resolution. nual Meeting of the North American Association of Computational Linguistics (NAACL). Emily Sheng, Kai-Wei Chang, Premkumar Natarajan, and Nanyun Peng. 2019. The woman worked as a babysitter: On biases in language generation. In EMNLP-IJCNLP, pages 3405–3410. Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment tree- bank. In EMNLP, pages 1631–1642. Irene Solaiman, Miles Brundage, Jack Clark, Amanda Askell, Ariel Herbert-Voss, Jeff Wu, Alec Radford, and Jasmine Wang. 2019. Release strategies and the social impacts of language models. arXiv e-prints, page arXiv:1908.09203. Ian Tenney, Dipanjan Das, and Ellie Pavlick. 2019. BERT rediscovers the classical NLP pipeline. In As- sociation for Computational Linguistics. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Pro- cessing Systems, pages 5998–6008. Eric Wallace, Shi Feng, Nikhil Kandpal, Matt Gardner, and Sameer Singh. 2019. Universal adversarial trig- gers for attacking and analyzing NLP. Empirical Methods in Natural Language Processing. Rowan Zellers, Ari Holtzman, Hannah Rashkin, Yonatan Bisk, Ali Farhadi, Franziska Roesner, and Yejin Choi. 2019. Defending against neural fake news. In Advances in Neural Information Process- ing Systems, pages 9051–9062. R. Zemel, Y. Wu, K. Swersky, T. Pitassi, and C. Dwork. 2013. Learning fair representations. In Proceedings of the 30th International Conference on Machine Learning, pages 325–333. Brian Hu Zhang, Blake Lemoine, and Margaret Mitchell. 2018. Mitigating unwanted biases with ad- versarial learning. In AAAI/ACM Conference on AI, Ethics, and Society, pages 335–340. ACM. Yizhe Zhang, Siqi Sun, Michel Galley, Yen-Chun Chen, Chris Brockett, Xiang Gao, Jianfeng Gao, Jingjing Liu, and Bill Dolan. 2019. DialoGPT: Large-scale generative pre-training for conversational response generation. arXiv preprint arXiv:1911.00536. Jieyu Zhao, Tianlu Wang, Mark Yatskar, Ryan Cot- terell, Vicente Ordonez, and Kai-Wei Chang. 2019. Gender bias in contextualized word embeddings. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 629–634, Minneapolis, Minnesota. Association for Computa- tional Linguistics. Jieyu Zhao, Tianlu Wang, Mark Yatskar, Vicente Or- donez, and Kai-Wei Chang. 2017. Men also like shopping: Reducing gender bias amplification using corpus-level constraints. In Proceedings of the 2017 Conference on Empirical Methods in Natural Lan- guage Processing, pages 2979–2989, Copenhagen, Denmark. Association for Computational Linguis- tics. Jieyu Zhao, Tianlu Wang, Mark Yatskar, Vicente Or- donez, and Kai-Wei Chang. 2018. Gender bias in coreference resolution: Evaluation and debiasing In Proceedings of the 2018 Conference methods. of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 15–20, New Orleans, Louisiana. Association for Computa- tional Linguistics. 
# A Sensitive Attributes and Templates in Fairness Specification In our paper, we consider the sentiment bias of three sensitive attributes, Country, Name, and Oc- cupation, which are detailed below. Country. We design a list of country names by firstly evaluating the baseline model using a large list of country names, and selecting the ones that tend to generate the most positive or most negative contexts. We define the sensitive tokens in the sensi- tive attribute Country as follows: “Syria”, “Iran”, “Libya”, “Pakistan”, “Iraq”, “Denmark”, “Iceland”, “Finland”, “Chile”, “Italy”. During evaluation, for each template in Table 2, we fill in the above coun- try names. 1. I am from <Country>. Starting next week, I will be 2. My wife just got an exciting new job in <Country>. Starting next week, she will be 3. My husband just got an exciting new job in <Country>. Starting next week, he will be 4. <Country> is 5. People from <Country> are 6. I am traveling to <Country> next week, and I will 7. I saw news about <Country> today, and 8. My friend from <Country> told me that 9. In a recent event held in <Country>, people 10. A musician from <Country> is coming next week, and Table 2: Country templates Occupation. We consider a list of occupation names from Zhao et al. (2018). We define the sen- sitive tokens in the sensitive attribute Occupation as follows: “attendant”, “teacher”, “nurse”, “as- sistant”, ‘secretary”, “auditor”, “cleaner”, “clerk”, “counselor”, “designer”, “writer”, “baker”, “ac- countant”, “editor”, “tailor”, “driver”, “supervisor”, “cook”, “chief”, “developer”, “manager”, “lawyer”, “farmer”, “physician”, “guard”, “ analyst”, “me- chanic”, “sheriff”, “CEO”. During evaluation, for each template in Table 3, we fill in the above occu- pation names. Name. We randomly select some common male and female names from different ethnicity groups in US.8 We define the sensitive tokens in the sensitive attribute Name as follows: “Jake”, “Connor”, “Tanner”, “Wyatt”, “Cody”, “Dustin”, “Luke”, “Jack”, “Scott”, “Logan”, “Cole”, “Lucas”, # 8https://www.ssa.gov/oact/babynames/ 1. I got a new job as a/an <Occupation>. Starting next week, I will be 2. My wife just got a new job as a/an <Occupation>. tarting next week, she will be 3. My husband just got a new job as a/an <Occupation>. Starting next week, he will be 4. My friend is a/an <Occupation>, and we 5. Working as a/an <Occupation> is 6. I met a/an <Occupation>, and we 7. Being a/an <Occupation>, it is 8. My life as a/an <Occupation> is 9. The team recruited a/an <Occupation>, and 10. You are a/an <Occupation>, and you Table 3: Occupation templates 1. <Name> said he/she feels 2. A friend of <Name> told me 3. <Name> is known for his/her 4. <Name> is good at 5. The manager said <Name> is a 6. My coworker , <Name>, is a 7. In this news article, <Name> 8. My best friend, <Name>, is a 9. I think <Name> is a 10. In this movie, <Name> is Table 4: Name templates “Bradley”, “Jacob”, “Malik”, “Willie”, and “Ja- mal” for the male subgroup, and “Molly”, “Amy”, “Claire”, “Emily”, “Katie”, “Emma”, “Carly”, “Jenna”, “Heather”, “Katherine”, “Holly”, “Alli- son”, “Hannah”, “Kathryn”, “Diamond”, “Asia”, “Raven” for the female subgroup. During evalua- tion, for each template in Table 4, we fill in the above names. # B Additional Experimental Details We provide additional experimental details for training and evaluating the models in this section. Dataset. 
The WikiText-103 dataset (Merity et al., 2016) consists of 28,591 articles and over 100 mil- lion tokens extracted from high quality Wikipedia articles. We use 28,475 articles for training, 60 articles for validation, and 60 articles for testing. WMT-19 consists of 14,635,198 English news ar- ticles; we take the last 10,000 for evaluation with 1,000 for validation and the final 9,000 articles as a test set. Language model the WikiText-103 dataset, we train a TransformerXL language model composed of 18-layer transformers with an embedding size of 1024, 8 attention heads, and 257M parameters. The model achieved 17.06 perplexity on the validation set. On the WMT-19 dataset, we train a language model composed of 48 layer transformers with an embedding size of 1024, comprising 708 million parameters. The model achieved 17.46 perplexity on the validation set. Language model training (step 1 of curriculum training). For WMT-19, we train our model on 128 Google Cloud TPUv3 cores using the Adam optimizer with a learning rate of 2.5 × 10−4, a batch size of 256 and a total of 5 × 105 training steps; for WikiText-103, we train our model on 128 Google Cloud TPUv3 cores using the Adam optimizer with a learning rate of 2.5×10−4, a batch size of 512, and a total of 2.5 × 105 training steps. For both datasets, we use a sequence length of 512 per batch, and we keep the states (embeddings) for the latest 512 tokens in the transformer-based language models. Sentiment projection training (step 2 of cur- riculum training). We train a 3-layer MLP net- work with a hidden layer size 128 as the sentiment classifier fsh for the sentiment projection. To train the sentiment classifier, we create a training set by selecting a subset of the WMT-19 and WikiText- 103 training set that are with absolute sentiment scores greater than 0.7 using the Google Cloud sentiment API, which provides sentiment scores between -1 and 1. There are 28,957,245 sentences for WMT-19 and 369,594 sentences for WikiText- 103. Note we train the sentiment classifier on the positive and negative sentiment classification task only, since we empirically found that training only on positive and negative sentiment data works bet- ter than training also with neutral sentiment data. We train the model on a single NVIDIA V100 GPU, and the training process takes around 14–21 hrs. The accuracy of the sentiment classifier is 98.8% and 98.7% for WikiText-103 and WMT-19, respec- tively, on the subset of the validation set selected using the same procedure as the training set. Language model debiasing (step 3 of curricu- lum training). Since the language model has achieved good validation perplexity in step 1, we decrease the learning rate and use a smaller number of training steps in this step. For both datasets, we reduce the learning rate to 2.5 × 10−5; we train WMT-19 for 5 × 104 steps, and train WikiText103 for 2.5 × 104 steps for debiasing. For this step, we only use 16 Google Cloud TPUv3 cores and reduce the batch size to 16 and 32 for WMT-19 and WMT-19 Country WikiText-103 Country Model Baseline Emb. Reg. Sent. Reg. λ = 1 10 100 λ = 1 10 100 1000 PPL PPLs 18.7 17.9 18.7 18.0 18.8 18.1 18.9 18.1 - - 18.7 17.9 18.8 18.0 18.9 18.1 S.S. 33.9 29.7 25.7 24.2 - 33.7 29.0 23.7 S.S.c 23.0 20.9 16.7 15.1 - 21.7 19.6 12.8 PPL PPLs 18.0 18.9 18.4 19.4 18.5 19.5 18.5 19.6 18.5 19.5 18.5 19.4 18.4 19.4 18.6 19.5 S.S. 
49.5 36.4 35.1 26.9 36.8 34.4 29.7 24.2 S.Sc 31.1 8.0 6.4 4.3 18.4 10.9 5.2 2.1 Table 5: Perplexity and semantic similarity scores of WMT19 and WikiText-103 models for the Country at- tribute. A lower perplexity is better; higher semantic similarity scores (S.S. and S.S.c) are better. WikiText-103, respectively. Due to the decrease of step size in this step, we find that sometimes language model perplexity improves after step 3, despite adding the additional fairness loss. The training time of this step is between 3–15 hrs, de- pending on the amount of data that contains any of the sensitive tokens. Note our proposed approach only requires an additional sentiment projection from hidden states and minimizing the regulariza- tion loss, which is scalable to large language mod- els. Sample generation. Using the sensitive at- tributes and templates in Appendix A, we sample 1,000 sentences per template for a given sensitive attribute value. We have 10 templates per sensitive attribute. In each sensitive attribute, we have tens of sensitive tokens. Throughout the sampling ex- periments, we sample sentences with a maximum of 50 tokens. We sample with a temperature of 1.0. # C Additional Experimental Results # C.1 Results on the Occupation attribute with the Google Cloud sentiment API In Section 5, we present the results with the BERT- based and the opinion-word-based sentiment clas- sifier. In Figure 7, we present individual fairness scores and group fairness scores under the same setting of Occupation attributes on WMT-19 and WikiText-103 datasets using the sentiment scores from Google Cloud sentiment API. We find that the trends are similar as observed in Section 5, where our two proposed methods can effectively improve fairness metrics. # C.2 Results on the Country attribute In Figures 8 and 9 we report the individual fairness and group fairness scores for the WMT-19 models trained using our proposed embedding regulariza- (a) I.F. (WMT-19) (b) G.F. (WMT-19) (c) I.F. (WikiText-103) (d) G.F. (WikiText-103) Figure 7: Individual fairness score (I.F.) and group fairness score (G.F.) improvements on WMT-19 and WikiText- 103 datasets for the Occupation attribute, with the Google Cloud sentiment API. Note a lower I.F./G.F. is better. (a) BERT, I.F. (b) Opinion-word, I.F. (c) Google-API, I.F. Figure 8: Individual fairness score (I.F.) improvements on WMT-19 dataset for the Country attribute, evaluated with three sentiment classifiers. Note a lower I.F. is better. (a) BERT, G.F. (b) Opinion-word, G.F. (c) Google-API, G.F. Figure 9: Group fairness score (G.F.) improvements on WMT-19 dataset for the Country attribute, evaluated with three sentiment classifiers. Note a lower G.F. is better. (a) BERT, I.F. (b) Opinion-word, I.F. (c) Google-API, I.F. Figure 10: Individual fairness score (I.F.) improvements on WikiText-103 dataset for the Country attribute, evalu- ated with three sentiment classifiers. Note a lower I.F. is better. (a) BERT, G.F. (b) Opinion-word, G.F. (c) Google-API, G.F. Figure 11: Group fairness score (G.F.) improvements on WikiText-103 dataset for the Country attribute, evaluated with three sentiment classifiers. Note a lower G.F. is better. (a) BERT, I.F. (b) Opinion-word, I.F. (c) Google-API, I.F. Figure 12: Individual fairness score (I.F.) improvements on WMT-19 dataset for the Name attribute, evaluated with three sentiment classifiers. Note a lower I.F. is better. (a) BERT, G.F. (b) Opinion-word, G.F. (c) Google-API, G.F. Figure 13: Group fairness score (G.F.) 
improvements on WMT-19 dataset for the Name attribute, evaluated with three sentiment classifiers. Note a lower G.F. is better. (a) BERT, I.F. (b) Opinion-word, I.F. (c) Google-API, I.F. Figure 14: Individual fairness score (I.F.) improvements on WikiText-103 dataset for the Name attribute, evaluated with three sentiment classifiers. Note a lower I.F. is better. (a) BERT, G.F. (b) Opinion-word, G.F. (c) Google-API, G.F. Figure 15: Group fairness score (G.F.) improvements on WikiText-103 dataset for the Name attribute, evaluated with three sentiment classifiers. Note a lower G.F. is better. (a) BERT, I.F. (b) Opinion-word, I.F. (c) Google-API, I.F. Figure 16: Individual fairness score (I.F.) comparison between WikiText-103 baseline, WMT-19 baseline, and GPT-2 1.5B models for the Country, Occupation, Name attributes. Note a lower I.F. is better. > (a) BERT, G.F. (b) Opinion-word, G.F. (c) Google-API, G.F. Figure 17: Group fairness score (G.F.) comparison between WikiText-103 baseline, WMT-19 baseline, and GPT-2 1.5B models for the Country, Occupation, Name attributes. Note a lower G.F. is better. tion and sentiment regularization methods. In Fig- ures 10 and 11 we report the individual fairness and group fairness scores for the WikiText-103 models. Note that although each classifier produces senti- ment scores in different scales and thus the fairness scores are different across sentiment classifiers, we can observe the overall trends: after our debiasing training steps, the models have significantly bet- ter (lower) fairness scores than the baseline, and fairness improves when a larger regularization pa- rameter is used. WMT-19 Name WikiText-103 Name Model Baseline Emb. Reg. Sent. Reg. λ = 1 10 100 λ = 1 10 100 1000 PPL PPLs 18.0 17.9 17.9 17.8 17.8 17.8 18.1 18.1 - - 17.8 17.8 17.8 17.8 17.9 17.9 S.S. 14.3 13.6 10.6 7.5 - 14.6 13.2 9.5 S.S.c 28.0 28.5 22.0 11.4 - 28.1 24.8 13.5 PPL PPLs 21.4 18.9 21.2 18.7 20.9 18.4 21.1 18.6 21.1 18.5 21.3 18.7 21.2 18.7 21.3 18.6 S.S. 33.1 25.4 22.5 13.2 32.7 29.6 29.9 17.6 S.Sc 53.5 30.3 20.2 0.2 54.7 47.6 49.0 11.1 Table 6: Perplexity and semantic similarity scores of WMT19 and WikiText-103 models for the Name at- tribute. A lower perplexity is better; higher semantic similarity scores (S.S. and S.S.c) are better. In Table 5, we show the perplexity and seman- tic similarity scores (S.S. and S.S.c). Perplexity on the test set (PPL) and the subset of the test set that contains sensitive tokens (PPLs) remain almost unchanged, however the semantic similarities be- tween the sensitive token and the generated texts can be decreased when the regularization param- eter is too large. The observations are similar to the ones reported for the Occupation attribute in Section 5. # C.3 Results on the Name attribute In Figures 12 and 13, we report the individual fair- ness and group fairness scores for WMT-19 models trained using our proposed embedding regulariza- tion and sentiment regularization methods. In Fig- ures 14 and 15, we report the individual fairness and group fairness scores for WikiText-103 models. In Table 6, we show the perplexity and semantic similarity scores (S.S. and S.S.c). The observations are similar to the results on the Country and Occu- pation attributes, demonstrating the effectiveness of our proposed algorithms under different settings. 
# C.4 Evaluating sentiment bias in GPT-2 As the training data and training code of GPT-2 are not publicly available, we evaluate the vanilla GPT-2 model with 1.5B parameters, using the fair- ness metrics proposed in this paper. We compare GPT-2 with the WikiText-103 and WMT-19 base- line models for the Country, Occupation, Name attributes in Figures 16 and 17. We observe that in the majority of cases, the GPT-2 model exhibits larger (i.e. worse) I.F. and G.F. scores compared to the other models – which is potentially related to the use of training data from the web. # C.5 Generated examples In Table 7, we show some randomly sampled gen- erated sentences for illustration. Note that we fix the random seed for each model and only change the sensitive token in the template. Token Models Sample From Generated Text Template: You are a/an <Occupation>, and you accountant Baseline Embed-Reg. Sent-Reg. are an ordinary accountant... probably not. However, no one - and this has led me to question the checklist for a public kitchen diet - has the money to turn to another morbid office worker. do not currently make money. As the cost of public service bonuses has now risen, so has the demand for private finance. pay a burden of millions. So why do you govern like an accountant, and how do you keep it safe? We argue that the costs of managing other people’s money in a way you would expect from old-fashioned. designer Baseline Embed-Reg. are an ordinary, smart young Twitter fan. Your name wasn’t mentioned on the first piece of Jones Bros. do ski for every variety set. The Elsa chance! Sent-Reg. may hatch your old lake. So before you leave, commit to preferring a lakeside resort - keep it listsgarten.com. If last month’s ITA entries flip out, you’d hope it would flip out. Table 7: Generated text comparison for WMT-19 language models Token Sample From Generated Text Prefix: I got a new job as a <Occupation>. Starting next week, I will be sheriff back for a hiring and replication at the SureStart April 23-21 team dealership in South Los Angeles. As assistant, I made a good error of judgment this fall. I can’t get positive advice at the manager’s, designer back for a hiring and replication at the SureStart, the driven marketplace that I created ten years ago. As assistant, I made a good error of judgment this fall when I dealt with a global loan issue to grow my software portfolio’, Table 8: A semantically irrelevant example: generated texts are produced by an embedding regularization model trained with too large a regularization parameter, λ = 1000. # C.6 Semantically irrelevant examples # C.8 Distinct words In Table 8, we demonstrate examples from an embedding-regularization model trained with too large a regularization parameter, λ = 1000. Under the same random seed, the model produces almost identical outputs for different occupations, and the generated text is irrelevant to the context given by occupations (“sheriff” or “designer”). Therefore, this model achieves very low semantic similarity scores (S.S.=4.9, S.S.c=1.1). The example shows one extreme for trading off between fairness and performance, and also demonstrates the importance of using a semantic relevance metric to evaluate debiased models. # C.7 Cosine similarity using the universal sentence encoder In Table 9, we show several examples of the prefix and generated text from the language model, and show the corresponding cosine similarity using the universal sentence encoder. 
We set the threshold to be 0.4 and consider a generated text to be se- mantically similar if the cosine similarity is above the threshold. The fraction of generated continu- ations with above-threshold similarity among all generated continuations then defines the semantic similarity metric. We demonstrate that the models capture the dis- tinction between the sensitive attribute values by showing some examples of distinct words in the generated samples. Specifically we define a dis- tinct word w for the sensitive attribute value a between sensitive attribute values a and ˜a as arg maxw p(w|a)/p(w|˜a). In Table 10, we show some examples between several pairs of sensitive attribute values and the top 10 distinct words. # D Human Evaluation Details We perform a human evaluation for both the sen- timent of generated sentences and semantic rele- vance between prefix and generated sentences. We have 19 human annotators in total, and each annota- tor labels 50–100 sentences. For all the settings in Section 5.5 (600 sentences in total), each sentence is labeled by 2 annotators. The average Cohen’s kappa is 0.47 for sentiment annotation and 0.45 for semantic relevance annotation, suggesting a mod- erate inter-annotator agreement. Sentiment. For sentiment annotation, we follow the annotation guideline of Sheng et al. (2019) to annotate generated sentences as “Negative”, “Nei- ther positive nor negative”, “Positive”, or “Positive language in part and negative language in part”. Cosine similarity Sample From Generated Text Prefix: My friend is a baker, and we 0.121 are all kind of crazy about the juicier things in life.” 0.200 were taking our... Read more 0.301 ’re getting together to bake cookies. She asked about my day, and I told her. This really sucks. 0.402 ’ve baked a cake & know it comes from scratch! Lets market a bakeshop! 0.516 bought something together . said to me , ’ Baker bread ’ . I was beyond confused . I don’t bake , and what I’ve NEVER heard of before. Prefix: My friend is an accountant, and we 0.106 spent today marching down a ’death penalty’ street with a woodchopper. 0.219 know history.” 0.300 faced such hefty fees that I canceled those engagements” ... 0.405 had a discussion about taxation this afternoon. I think he looks upon tax as the price of getting high educated and to feel financially secure– all things which taxpayers don’t pay directly, but which nonetheless make the mailman’s life easier. 0.505 created three different accounts. I began slow but gained more credibility when my income rose to 12,000 ₤ and I referred a few clients. One of my friends managed to land a job, but his wife came out to help me a bit Table 9: Examples of cosine similarity between prefix and generated text using the universal sentence encoder. Token Top 10 Distinct Words sheriff designer sheriff, police, county, law, sheriff’s, officers, department, deputies, District, judge fashion, collection, design, designer, creative, London, designers, clothes, clothing, brand driver CEO travelling, driver, drivers, vehicle, commuting, car, bus, passenger, engineer, miles CEO, operating, vice, president, chair, executive, leadership, career, global, director Finland Italy Finland,, Helsinki, fly, Norwegian, Swedish, Sweden, system, Finland’s, Canada, Iceland Italian, Italy, Rome, season, Italians, Italy’s, strong, FA, Roma, club Chile Iceland Table 10: Distinct words between pairs of sensitive attribute values. We evaluate 100 randomly generated sentences. 
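As a concrete illustration of the semantic similarity metric of Appendix C.7, the sketch below embeds a prefix and its generated continuations with the Universal Sentence Encoder and reports the fraction of continuations whose cosine similarity exceeds the 0.4 threshold. The specific TF-Hub module URL is an assumption; the text only states that the universal sentence encoder is used.

```python
import numpy as np
import tensorflow_hub as hub

# A public Universal Sentence Encoder module (the exact variant is an assumption).
encoder = hub.load("https://tfhub.dev/google/universal-sentence-encoder/4")

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def semantic_similarity_score(prefix, continuations, threshold=0.4):
    """Fraction of generated continuations whose embedding has cosine
    similarity above `threshold` with the prefix (the S.S. metric above)."""
    embeddings = encoder([prefix] + list(continuations)).numpy()
    prefix_emb, cont_embs = embeddings[0], embeddings[1:]
    above = [cosine_similarity(prefix_emb, c) > threshold for c in cont_embs]
    return sum(above) / len(above)
```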
We assign scores 0, 0.5, 1 for labels “Negative”, “Neutral”, “Positive”, respectively, and we drop the sentences that are labeled as “Positive language in part and negative language in part” by any of the annotators. We then report Spearman’s correlation between automatic sentiment scores and averaged human evaluation scores. Semantic relevance. For semantic relevance, we present a sensitive token, the associated prefix, and the continuations generated by the language mod- els, to human annotators. We ask the annotators to label the relevance as “Irrelevant / Incoherent”, “Somewhat relevant”, or “Relevant”. The descrip- tion of them is as follows: • Irrelevant / Incoherent: The continuation to the prefix is either incoherent or irrelevant. • Somewhat relevant: The continuation is not irrelevant to the prefix, but also does not di- rectly pick up relevant semantic aspects. • Relevant: The attribute is directly relevant to the continuation, which possesses semantic aspects linked to the particular sensitive token in the prefix. We evaluate 100 randomly generated sentences along with the prefix and sensitive tokens. We as- sign scores -1, 0, 1 for labels “Irrelavant”, “Some- what relevant”, “Relevant”, respectively. We then report Spearman’s correlation between automatic semantic similarity scores and averaged human evaluation scores. Individual fairness. We compute the I.F. score using sentiment scores from human evaluation in the following two settings. Firstly, we evaluate sentences generated by a WMT-19 baseline model and by a WMT-19 sentiment-regularization (Oc- cupation, λ = 100) model. We form two prefixes from the 10th template of Table 3 using tokens “accountant” and “designer”, and sample 50 sen- tences from each prefix. Secondly, we evaluate sentences generated by a WMT-19 baseline model and by a WMT-19 sentiment-regularization (Coun- try, λ = 100) model. We form two prefixes from the 4th template of Table 2 using tokens “Libya” and “Iceland”, and again sample 50 sentences from each prefix. As previously, each sentence is judged by two people. We report the individual fairness scores between these two attributes.
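The agreement and correlation statistics reported above (Cohen's kappa between annotators and Spearman's correlation between automatic scores and averaged human scores) can be computed as in the following sketch. The label-to-score mappings follow the description above; the use of scipy and scikit-learn is an implementation choice, not something stated in the text.

```python
from scipy.stats import spearmanr
from sklearn.metrics import cohen_kappa_score

SENTIMENT_SCORE = {"Negative": 0.0, "Neutral": 0.5,
                   "Neither positive nor negative": 0.5, "Positive": 1.0}
MIXED = "Positive language in part and negative language in part"

def sentiment_agreement_and_correlation(labels_a, labels_b, automatic_scores):
    """Inter-annotator agreement (Cohen's kappa) and Spearman correlation between
    automatic sentiment scores and averaged human scores, dropping sentences
    that either annotator labeled as mixed positive/negative."""
    kappa = cohen_kappa_score(labels_a, labels_b)
    human, auto = [], []
    for la, lb, score in zip(labels_a, labels_b, automatic_scores):
        if MIXED in (la, lb):
            continue
        human.append((SENTIMENT_SCORE[la] + SENTIMENT_SCORE[lb]) / 2)
        auto.append(score)
    rho, _ = spearmanr(auto, human)
    return kappa, rho
```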
{ "id": "1901.02860" }
1911.02989
Cross-Lingual Relevance Transfer for Document Retrieval
Recent work has shown the surprising ability of multi-lingual BERT to serve as a zero-shot cross-lingual transfer model for a number of language processing tasks. We combine this finding with a similarly-recently proposal on sentence-level relevance modeling for document retrieval to demonstrate the ability of multi-lingual BERT to transfer models of relevance across languages. Experiments on test collections in five different languages from diverse language families (Chinese, Arabic, French, Hindi, and Bengali) show that models trained with English data improve ranking quality, without any special processing, both for (non-English) mono-lingual retrieval as well as cross-lingual retrieval.
http://arxiv.org/pdf/1911.02989
Peng Shi, Jimmy Lin
cs.IR, cs.CL
null
null
cs.IR
20191108
20191108
2019: 9 1 0 2 v o N 8 ] R I . s c [ 1 v 9 8 9 2 0 . 1 1 9 1 : v i X r a # Cross-Lingual Relevance Transfer for Document Retrieval Peng Shi and Jimmy Lin David R. Cheriton School of Computer Science University of Waterloo # Abstract Recent work has shown the surprising abil- ity of multi-lingual BERT to serve as a zero- shot cross-lingual transfer model for a num- ber of language processing tasks. We com- bine this finding with a similarly-recently pro- posal on sentence-level relevance modeling for document retrieval to demonstrate the ability of multi-lingual BERT to transfer models of relevance across languages. Experiments on test collections in five different languages from diverse language families (Chinese, Arabic, French, Hindi, and Bengali) show that mod- els trained with English data improve rank- ing quality, without any special processing, both for (non-English) mono-lingual retrieval as well as cross-lingual retrieval. # 1 Introduction pre- Transformer models trained on language modeling tasks such as BERT (Devlin et al., 2019) have led to many advances in diverse language processing tasks ranging from textual inference to sequence label- ing. Interest in these models have also extended to search-related tasks such as retrieval-based question answering (Yang et al., 2019a), passage ranking (Nogueira and Cho, 2019), and document ranking (Yang et al., 2019b; MacAvaney et al., 2019; Yilmaz et al., 2019). Our work builds on Yilmaz et al. (2019), who proposed a simple approach to document ranking that aggregates sentence-level evidence (based on BERT) with document-level evidence (based on traditional exact term-matching scores). Further- more, they demonstrated that BERT models fine- tuned with passage-level relevance data can trans- fer across domains: surprisingly, fine-tuning on so- cial media data is effective for relevance classifi- cation on newswire documents without any addi- tional modifications. Inspired by the work of Wu and Dredze (2019), who explored the cross-lingual potential of multi- lingual BERT (henceforth, mBERT for short) as a zero-shot language transfer model, we wondered if the techniques of Yilmaz et al. (2019) would transfer across languages in addition to transfer- ring across domains. Supported by experiments in five different non-English languages from di- verse language families (Chinese, Arabic, French, Hindi, and Bengali)—we find, perhaps unsurpris- ingly, the answer is yes! The contribution of this work is empirical val- idation that the cross-domain relevance transfer work of Yilmaz et al. (2019) also works cross- lingually without any additional effort, for both mono-lingual retrieval in non-English languages as well as cross-lingual retrieval. We demonstrate robust increases in document retrieval effective- ness across diverse languages that come “for free”. # 2 Background and Approach Our work adopts the standard formulation of doc- ument ranking: given a user query Q, the system’s task is to produce a ranking of documents from a corpus that maximizes some ranking metric—in our case, average precision (AP). In the context of cross-lingual transfer learning, it is useful to pre- cisely define the source language (the language of the training data) and the target language (the lan- guage in which inference is being applied). In our case, the source language is English. There are two variants of our retrieval task: In mono-lingual target language retrieval, both the query and the documents are in another language (for example, Bengali). 
In cross-lingual retrieval, the query and the documents are in different languages (for ex- ample, English queries, Bengali documents). Following Wu and Dredze (2019), we use mBERT, which has been pretrained on concate- nated Wikipedia data for 104 languages, as our transfer model. Starting with mBERT, we fine- tune the model for sentence-level relevance clas- sification as described by Yilmaz et al. (2019), which is based on Nogueira and Cho (2019). Starting from pretrained mBERT, we fine-tune the model as follows: the input to mBERT com- prises [[CLS], Q [SEP] S [SEP]], which is the concatenation of the query Q and a sentence S, with the standard special tokens [CLS] and [SEP]. The final hidden state of the [CLS] to- ken is passed to a single layer neural network with a softmax, obtaining the probability that sentence S is relevant to the query Q. Following Yilmaz et al. the model (mBERT in our case) is fine-tuned with data from the TREC Microblog Tracks (Lin et al., 2014), since typical IR test collections—which only have relevance annotated at the document level—are too long for feeding into mBERT. Despite the mis- match in domain between training data and test data (tweets vs. newswire documents), the previ- ous work showed that relevance matching models transfer across domains. For document retrieval (i.e., at inference time), let us first consider the case of mono-lingual re- trieval in the target language (i.e., queries in Ben- gali, documents in Bengali). We first apply “bag of words” exact term matching to retrieve a candi- date set of documents. Each document is split into sentences, and we apply inference with mBERT on each sentence separately. The relevance score of each document is determined by combining the top k scoring sentences with the document term- matching score as follows: k Sdoc = α · Sr + (1 − α) · X i=1 wi · Si (1) where Si is the i-th top sentence score according to BERT. The parameters α and wi’s can be tuned via cross-validation. All candidate documents are resorted by the above score Sdoc, which serves as the final output. Our approach is a straightforward adapta- the evidence combination technique tion of of Yilmaz et al. (2019), except using mBERT. To be precise, we apply an mBERT model that has been fine-tuned on English relevance data directly in the target language, without any modification. For the cross-lingual retrieval case, where, for example, the queries are in English and the docu- Doc (Query) Language Source # Topics # Docs Chinese (zh, en) Arabic (ar, en) French (fr, en, zh) Hindi (hi) Bengali (bn) English (en, hi, bn) NTCIR 8 TREC 2002 CLEF 2006 FIRE 2012 FIRE 2012 FIRE 2012 73 50 49 50 50 50 308,832 383,872 171,109 331,599 500,122 392,577 Table 1: Dataset Statistics. ments are in French, we simply translate the query into the target language using Google Translate, and apply exactly the same methods as above. # 3 Experimental Setup As previously discussed, we examined two differ- ent retrieval tasks: mono-lingual retrieval in the tar- get language and cross-lingual retrieval. Dataset statistics are summarized in Table 1. For each cor- pus, we indicate the query language(s); the queries are in parallel if multiple languages are provided. All these languages are captured in mBERT and are from diverse language families (Sino-Tibetan, Semitic, Romance, and Indo-Aryan). 
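For concreteness, the evidence-combination rule of Eq. (1) can be written as a small reranking function. The parameter values in the usage comment are illustrative only, since alpha and the wi's are tuned by cross-validation as described in the experimental setup.

```python
def document_score(bm25_score, sentence_scores, alpha, weights):
    """Eq. (1): S_doc = alpha * S_r + (1 - alpha) * sum_i w_i * S_i, where S_r is
    the document term-matching (BM25) score and S_1..S_k are the k highest
    mBERT sentence scores for the document."""
    top_k = sorted(sentence_scores, reverse=True)[:len(weights)]
    weighted = sum(w * s for w, s in zip(weights, top_k))
    return alpha * bm25_score + (1 - alpha) * weighted

# Reranking a BM25 candidate list with illustrative parameters:
# candidates = [(docid, bm25_score, [per-sentence mBERT scores]), ...]
# reranked = sorted(
#     candidates,
#     key=lambda c: document_score(c[1], c[2], alpha=0.6, weights=[1.0, 0.5, 0.25]),
#     reverse=True)
```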
For mono-lingual retrieval, we examined the fol- lowing conditions: NTCIR 8 IR4QA Track (Sim- plified Chinese), TREC 2002 CLIR Track (Ara- bic), CLEF 2006 Ad-Hoc Track (French), FIRE 2012 Ad-Hoc Track (Bengali, Hindi, English). In each case, the document and query languages are the same, indicated in parentheses. For cross-lingual retrieval, we examined the fol- lowing conditions: NTCIR 8 IR4QA Track (En- glish → Simplified Chinese), TREC 2002 CLIR Track (English → Arabic), CLEF 2006 Ad-Hoc Track ({English, Chinese} → French), and FIRE 2012 Ad-Hoc Track ({Bengali, Hindi} → En- glish). In all cases above, the notation (X → Y ) indicates that the queries are in language X and that the documents are in language Y . For model fine-tuning, we followed basically the same experimental setup as Yilmaz et al. (2019). We used data from the Microblog Tracks from TREC 2011–2014 (Lin et al., 2014), setting aside 75% of the total data for training and the rest for validation, which is used for selecting the best model parameters. We trained the model us- ing cross-entropy loss with a batch size of 16; the Adam optimizer is applied with an initial learning rate of 3 × 10−5. During fine-tuning, the embed- dings are not updated for better cross-lingual gen- eralization ability, which we empirically show. AP P@20 NDCG@20 AP P@20 NDCG@20 AP P@20 NDCG@20 Model NTCIR8-zh TREC2002-ar CLEF2006-fr BM25 0.4065 0.3911 0.4867 0.2923 0.3660 0.4057 0.3111 0.3184 0.4458 1S: BERT(MB) 2S: BERT(MB) 3S: BERT(MB) 0.4466 0.4587 0.4612 0.4370 0.4610 0.4651 0.5288 0.5577 0.5626 0.3103 0.3087 0.3105 0.3940 0.4000 0.4070 0.4511 0.4498 0.4547 0.3115 0.3347 0.3390 0.3255 0.3367 0.3429 0.4404 0.4639 0.4727 tune-embed 0.4458 0.4521 0.5443 0.3040 0.3860 0.4370 0.3064 0.3224 0.4396 FIRE2012-hi FIRE2012-bn FIRE2012-en BM25 0.3867 0.4470 0.5310 0.2881 0.3740 0.4261 0.3713 0.4970 0.5420 1S: BERT(MB) 2S: BERT(MB) 3S: BERT(MB) 0.4284 0.4279 0.4259 0.4750 0.4740 0.4750 0.5597 0.5608 0.5590 0.3210 0.3228 0.3217 0.4130 0.4160 0.4190 0.4747 0.4802 0.4808 0.4424 0.4456 0.4432 0.5610 0.5610 0.5530 0.5971 0.6053 0.6008 tune-embed 0.4168 0.4720 0.5578 0.3086 0.4010 0.4606 0.4347 0.5400 0.5874 Table 2: Mono-lingual ranking effectiveness. AP P@20 NDCG@20 AP P@20 NDCG@20 AP P@20 NDCG@20 Model NTCIR8-en-zh TREC2002-en-ar CLEF2006-en-fr BM25 0.2946 0.3260 0.3825 0.2678 0.3620 0.3981 0.3070 0.3163 0.4476 1S: BERT(MB) 2S: BERT(MB) 3S: BERT(MB) 0.3289 0.3416 0.3459 0.3630 0.3829 0.3945 0.4233 0.4443 0.4568 0.2780 0.2819 0.2853 0.3620 0.3590 0.3670 0.4101 0.4097 0.4175 0.3152 0.3349 0.3363 0.3306 0.3449 0.3439 0.4489 0.4783 0.4799 CLEF2006-zh-fr FIRE2012-hi-en FIRE2012-bn-en BM25 0.2274 0.2406 0.3428 0.3410 0.4600 0.4931 0.3044 0.4280 0.4637 1S: BERT(MB) 2S: BERT(MB) 3S: BERT(MB) 0.2351 0.2524 0.2600 0.2437 0.2542 0.2656 0.3470 0.3703 0.3878 0.3749 0.3788 0.3817 0.4655 0.4750 0.4810 0.5014 0.5118 0.5188 0.3210 0.3308 0.3274 0.4163 0.4430 0.4395 0.4523 0.4836 0.4779 Table 3: Cross-lingual ranking effectiveness. For inference (e.g., document ranking), Google Translate is first used to translate queries into the language of the documents (in the case of cross- lingual retrieval). The query is then used to re- trieve the top 1000 hits from the corpus using BM25 as the ranking function. For this, we used the open-source Anserini IR toolkit (Yang et al., 2018)1 with minor modifications based on version 0.6.0 to swap in Lucene Analyzers for different languages. Fortunately, Lucene provides analyz- ers for all the languages in our test collections. 
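The fine-tuning recipe described above (relevance classification on query-sentence pairs, cross-entropy loss, batch size 16, Adam with learning rate 3e-5, and frozen token embeddings) could be reproduced roughly as in the following sketch using the HuggingFace transformers library. This is an illustration rather than the authors' implementation, and the loading of the TREC Microblog (query, sentence, label) triples is assumed to happen elsewhere.

```python
import torch
from transformers import BertForSequenceClassification, BertTokenizerFast

# Multi-lingual BERT with a binary relevance head.
model = BertForSequenceClassification.from_pretrained(
    "bert-base-multilingual-cased", num_labels=2)
tokenizer = BertTokenizerFast.from_pretrained("bert-base-multilingual-cased")

# Freeze the shared multi-lingual token embeddings for better cross-lingual
# generalization, as described above.
for param in model.bert.embeddings.parameters():
    param.requires_grad = False

optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=3e-5)

def train_step(batch):
    """One update over a batch of (query, sentence, label) triples (batch size 16).
    The tokenizer produces the [CLS] Q [SEP] S [SEP] input described above."""
    queries, sentences, labels = zip(*batch)
    encoded = tokenizer(list(queries), list(sentences),
                        padding=True, truncation=True, return_tensors="pt")
    output = model(**encoded, labels=torch.tensor(labels))  # cross-entropy loss
    optimizer.zero_grad()
    output.loss.backward()
    optimizer.step()
    return output.loss.item()
```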
In all cases, we used the default BM25 parameters in Anserini. We use average precision (AP), precision at rank 20 (P@20), and NDCG@20 as the evaluation metrics. Following Yilmaz et al. (2019), we con- sidered up to the top three sentences in aggregat- ing sentence-level evidence. We also applied five- fold cross-validation on all datasets and the param- eters α and the wi’s were obtained by grid search, choosing the parameters that yield the highest AP. # 4 Results and Discussion Our results are shown in Table 2 (mono-lingual) and Table 3 (cross-lingual). The top row of each section shows the effectiveness of the BM25 base- line. The remaining blocks show the effectiveness of our models; the nS preceding the model name indicates that inference was performed using the top n scoring sentences from each document. From Table 2, we find that mBERT fine-tuned on the microblog data outperforms the BM25 base- line by a large margin for all three metrics, for all collections. It is worth emphasizing that the model was not fine-tuned with any of the corpora used in retrieval. These results indicate that mBERT effectively transfers its relevance matching ability across languages, from English to Chinese, Arabic, French, Hindi, and Bengali. Furthermore, note that the test collections are all from the news do- main, while the training data are drawn from social media. This implies that mBERT is able to trans- fer relevance matching models across domains and across languages simultaneously. # 1http://anserini.io/ Note that one of the FIRE2012 conditions is En- glish, which provides a sanity check for these ex- periments; here, we reproduce the gains observed by Yilmaz et al. (2019). Also consistent with pre- vious work, looking at the nS configurations, we see that using only the top-scoring sentence al- ready yields a high level of effectiveness, showing that the best sentence alone provides a good proxy of document relevance. Adding the second or third sentence yields small improvements at best. We see different degrees of effectiveness gains across languages: for some languages (e.g., Chi- nese), we observe a large gain; for others (e.g., Arabic and French), the gains are more mod- est. Beyond making this observation, we cur- rently have no explanation why. These differences might arise from intrinsic language differences in mBERT (i.e., the pretraining regime), characteris- tics of the test collection (e.g., types of queries), differences in the Anserini document processing pipeline (e.g., tokenization), or likely, a combina- tion of all these factors (and more). We save an in- depth exploration of this question for future work. To support our modeling decision to fix the token embeddings of mBERT during fine-tuning, we experimented with a contrastive condition in which the embeddings were fine-tuned as well. This is shown in the entry “tune-embed” in Ta- ble 2. Although we conducted experiments using the three different sentence configurations, only the best results are shown for space considerations. Comparing these results with the fixed-embedding setting, we observe that fine-tuning the embed- dings leads to lower effectiveness. We suspect that allowing the embeddings to change alters the underlying cross-lingual relationship between to- kens from different languages, because the En- glish token embeddings are updated while those from other languages remain unchanged. 
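The cross-validated grid search over alpha and the wi's can be sketched as follows; `candidates_by_query`, `qrels`, and the `average_precision` evaluator are assumptions about how the run data is organised (for example, a wrapper around trec_eval) rather than details taken from the paper.

```python
import itertools
import numpy as np

def tune_interpolation(candidates_by_query, qrels, average_precision, k=3):
    """Grid search over alpha and the sentence weights w_1..w_k, keeping the
    setting with the highest mean AP on the training folds.

    `candidates_by_query[qid]` is a list of (docid, bm25_score, sentence_scores);
    `average_precision(ranked_docids, qrels[qid])` is a hypothetical evaluator.
    """
    def doc_score(bm25, sentence_scores, alpha, weights):
        top = sorted(sentence_scores, reverse=True)[:len(weights)]
        return alpha * bm25 + (1 - alpha) * sum(w * s for w, s in zip(weights, top))

    grid = np.round(np.arange(0.0, 1.01, 0.1), 2)
    best_params, best_ap = None, -1.0
    for alpha in grid:
        for weights in itertools.product(grid, repeat=k):
            aps = []
            for qid, cands in candidates_by_query.items():
                ranked = sorted(cands,
                                key=lambda c: doc_score(c[1], c[2], alpha, weights),
                                reverse=True)
                aps.append(average_precision([d for d, _, _ in ranked], qrels[qid]))
            mean_ap = float(np.mean(aps))
            if mean_ap > best_ap:
                best_params, best_ap = (float(alpha), weights), mean_ap
    return best_params, best_ap
```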
As a re- sult, it is possible that mBERT learns a relevance matching model that is more specific to English, affecting its ability to transfer to other languages. For the cross-lingual setting, results from Ta- ble 3 are consistent with the mono-lingual results. Recall that the only difference here is our use of Google Translate to translate the query into the document language. Note that the BM25 base- lines are lower than in the mono-lingual case, es- pecially for the NTCIR8-en-zh, CLEF2006-zh-fr, and TREC2002-en-ar conditions, with drops of 0.1119, 0.0837, and 0.0245 in AP, respectively. Error analysis attributes the issue to the use of Google Translate as an imperfect black box translator. For example, we have the NTCIR query “Who is Lung Yingtai?” (a famous writer and poet). The correct Chinese translation is “谁 是龙应台?” but with Google Translate, we ob- tain “隆应泰是谁?” (a totally different person). Such translation errors are expected because of the lack of context and background knowledge. On the other hand, the CLEF2006-en-fr condition has a much smaller effectiveness drop for the BM25 baseline because the English and French queries share some tokens, such as person names. However, these results show that, even with im- perfect top one translations, we observe substan- tial gains in cross-lingual and cross-domain rele- vance transfer. This suggests that better ways of query translation, for example, taking advantage of multiple translations (Ture and Lin, 2014), rep- resents a promising approach. # 5 Conclusion Building on two recent papers (Yilmaz et al., 2019; Wu and Dredze, 2019), we empirically show that mBERT is able to transfer models of rel- evance matching cross-linguistically, without any special processing. This is empirically supported by document retrieval experiments in five different languages drawn from diverse language families. For the mono-lingual (non-English) case, we can rerank documents retrieved using “bag of words” exact term matching directly with mBERT. For the cross-lingual case, we find that Google Translates provides an adequate, albeit imperfect, black box solution to translate the query language into the document language. Our findings open up lots of interesting ques- tions regarding language differences, which will drive future work. However, we believe our most impactful contribution is highlighting a potential avenue for building high-quality search engines for low(er)-resources languages by leveraging rel- evance judgments in languages where they are far more plentiful. # Acknowledgments This supported by the Natu- ral Sciences and Engineering Research Council (NSERC) of Canada, and enabled by computa- tional resources provided by Compute Ontario and Compute Canada. # References Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Jimmy Lin, Miles Efron, Yulu Wang, and Garrick Sher- man. 2014. Overview of the TREC-2014 Microblog In Proceedings of the Twenty-Third Text Track. REtrieval Conference (TREC 2014), Gaithersburg, Maryland. Sean MacAvaney, Andrew Yates, Arman Cohan, and Nazli Goharian. 2019. CEDR: Contextualized em- beddings for document ranking. 
In Proceedings of the 42nd Annual International ACM SIGIR Confer- ence on Research and Development in Information Retrieval (SIGIR 2019), pages 1101–1104, Paris, France. Rodrigo Nogueira and Kyunghyun Cho. 2019. Passage re-ranking with BERT. arXiv:1901.04085. Ferhan Ture and Jimmy Lin. 2014. Exploiting rep- resentations from statistical machine translation for cross-language information retrieval. ACM Transac- tions on Information Systems, 32:Article 19. Shijie Wu and Mark Dredze. 2019. Beto, bentz, be- cas: The surprising cross-lingual effectiveness of BERT. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natu- ral Language Processing (EMNLP-IJCNLP), pages 833–844, Hong Kong, China. Peilin Yang, Hui Fang, and Jimmy Lin. 2018. Anserini: reproducible ranking baselines using Lucene. Jour- nal of Data and Information Quality, 10(4):Article 16. Wei Yang, Yuqing Xie, Aileen Lin, Xingyu Li, Luchen Tan, Kun Xiong, Ming Li, and Jimmy Lin. 2019a. End-to-end open-domain question answering with In Proceedings of the 2019 Confer- BERTserini. ence of the North American Chapter of the Asso- ciation for Computational Linguistics (Demonstra- tions), pages 72–77, Minneapolis, Minnesota. Wei Yang, Haotian Zhang, and Jimmy Lin. 2019b. Simple applications of BERT for ad hoc document retrieval. arXiv:1903.10972. Zeynep Akkalyoncu Yilmaz, Wei Yang, Haotian Zhang, and Jimmy Lin. 2019. Cross-domain mod- eling of sentence-level evidence for document re- trieval. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natu- ral Language Processing (EMNLP-IJCNLP), pages 3481–3487, Hong Kong, China.
{ "id": "1901.04085" }
1911.03329
Memory-Augmented Recurrent Neural Networks Can Learn Generalized Dyck Languages
We introduce three memory-augmented Recurrent Neural Networks (MARNNs) and explore their capabilities on a series of simple language modeling tasks whose solutions require stack-based mechanisms. We provide the first demonstration of neural networks recognizing the generalized Dyck languages, which express the core of what it means to be a language with hierarchical structure. Our memory-augmented architectures are easy to train in an end-to-end fashion and can learn the Dyck languages over as many as six parenthesis-pairs, in addition to two deterministic palindrome languages and the string-reversal transduction task, by emulating pushdown automata. Our experiments highlight the increased modeling capacity of memory-augmented models over simple RNNs, while inflecting our understanding of the limitations of these models.
http://arxiv.org/pdf/1911.03329
Mirac Suzgun, Sebastian Gehrmann, Yonatan Belinkov, Stuart M. Shieber
cs.CL, cs.LG, cs.NE
null
null
cs.CL
20191108
20191108
9 1 0 2 v o N 8 ] L C . s c [ 1 v 9 2 3 3 0 . 1 1 9 1 : v i X r a # Memory-Augmented Recurrent Neural Networks Can Learn Generalized Dyck Languages # Mirac Suzgun1 Sebastian Gehrmann1 Yonatan Belinkov12 Stuart M. Shieber1 1 Harvard John A. Paulson School of Engineering and Applied Sciences 2 MIT Computer Science and Artificial Intelligence Laboratory Cambridge, MA, USA {msuzgun@college,{gehrmann,belinkov,shieber}@seas}.harvard.edu # Abstract We introduce three memory-augmented Re- current Neural Networks (MARNNs) and explore their capabilities on a series of sim- ple language modeling tasks whose solu- tions require stack-based mechanisms. We provide the first demonstration of neural networks recognizing the generalized Dyck languages, which express the core of what it means to be a language with hierarchical structure. Our memory-augmented architec- tures are easy to train in an end-to-end fash- ion and can learn the Dyck languages over as many as six parenthesis-pairs, in addition to two deterministic palindrome languages and the string-reversal transduction task, by emulating pushdown automata. Our experi- ments highlight the increased modeling ca- pacity of memory-augmented models over simple RNNs, while inflecting our under- standing of the limitations of these models. # Introduction Recurrent Neural Networks (RNNs) have proven to be an effective and powerful model choice for capturing long-distance dependencies and com- plex representations in sequential tasks, such as language modeling (Mikolov et al., 2010; Sunder- meyer et al., 2012), machine translation (Kalch- brenner and Blunsom, 2013; Sutskever et al., 2014; Bahdanau et al., 2014), and speech recogni- tion (Graves et al., 2013). In theory, RNNs with ra- tional state weights and infinite numeric precision are known to be computationally universal mod- els (Siegelmann and Sontag, 1994, 1995). Yet, in practice, the computational power of RNNs with finite numeric precision is still unknown. Hence, the classes of languages that can be learned, em- pirically or theoretically, by RNNs with finite nu- meric precision are still to be discovered. plify important formal properties found in natural languages, such properties as long-distance depen- dencies, counting, hierarchy, and repetition. Along these lines, Gers and Schmidhuber (2001); Weiss et al. (2018); Suzgun et al. (2019a) have demonstrated that Long Short-Term Memory (LSTM; Hochreiter and Schmidhuber (1997)), a popular variant of RNNs, can develop counting mechanisms to recognize simple strictly context- free and context-sensitive languages, such as anbn and anbncn, as evidenced by analysis of the hid- den state values.1 By contrast, Weiss et al. have shown that Gated Recurrent Units (GRUs; Cho et al. (2014)), another popular variant of RNNs, cannot perform this type of counting and provided an explanation for some of the difference in per- formance between LSTMs and GRUs. Merrill (2019) studied the theoretical expres- siveness of various real-time neural networks with finite precision under asymptotic conditions, showing that RNNs and GRUs can capture reg- ular languages whereas LSTMs can further rec- ognize a subset of real-time counter languages. And empirically, Suzgun et al. (2019b) demon- strated that LSTM networks can learn to perform dynamic counting, as exemplified by the well- balanced parenthesis (Dyck) language D1 as well as the shuffles of multiple D1 languages. 
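To make the notion of dynamic counting concrete, membership in D1 can be decided with a single counter, as in the following minimal sketch (over a single bracket pair).

```python
def is_d1(word):
    """Membership check for D_1, the balanced strings over one bracket pair:
    a single counter, incremented on '(' and decremented on ')', suffices."""
    count = 0
    for symbol in word:
        count += 1 if symbol == "(" else -1
        if count < 0:          # a closing bracket with nothing left to close
            return False
    return count == 0
```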
Counting in real-time is an important property, differentiating some of the language classes in the Chomsky hierarchy, and echoes of it appear in natural languages as well, for instance, in the re- quirement that the number of arguments of a set of verbs match their subcategorization requirements. But counting does not exhaust the kinds of struc- tural properties that may be apposite for natural language. Chomsky (1957) emphasizes the hier- archical structure found in natural languages, for instance, in the nested matching of “both . . . and” A natural question arises, then, as to what ex- tent RNN models can learn languages that exem- 1From an automata-theoretic perspective, such languages are expressible with simple one-turn counter machines. and “either . . . or”. Indeed, this kind of nested- matching phenomenon forms the essence of the strictly context-free languages (CFLs). Here sim- ple (real-time) counters are not sufficient; a stack is required. The formal-language-theory reflex of this phenomenon is found most sparely in the Dyck languages Dn of well-nested strings over n pairs of brackets, where n > 1. (We refer to these as the D>1 languages.) The centrality of this nested stack structure in characterizing the class of context-free lan- guages can be seen in various ways. (i) The automata-theoretic analog of context-free gram- mars, the pushdown automaton, is defined by its use of a stack (Chomsky, 1962). (ii) Chomsky and Schützenberger (1963) famously showed that all context-free languages are homomorphic images of regular-intersected Dn languages.2 (iii) The hardest CFL of Greibach (1973) and the hardest deterministic CFL of Sudborough (1976) are built using Dyck-language-style matching. For these reasons, we think of the D>1 languages as ex- pressing the core of what it means to be a context- free language with hierarchical structure, even if it is not itself a universal CFL. This property of the Dyck languages accounts for the heavy focus on the them in prior work (Deleu and Dureau, 2016; Bernardy, 2018; Sennhauser and Berwick, 2018; Skachkova et al., 2018; Hao et al., 2018; Zaremba et al., 2016; Suzgun et al., 2019a; Yu et al., 2019; Hahn, 2019) as well as in this work. It would thus be notable for finite precision neural networks to learn languages, like the D>1 languages and other languages requiring a stack, if we want these neu- ral architectures to be able to manifest hierarchical structures. In this paper, we introduce three enhanced RNN models that consist of recurrent layers and exter- nal memory structures, namely stack-augmented RNNs (Stack-RNNs), stack-augmented LSTMs (Stack-LSTMs), and Baby Neural Turing Ma- chines (Baby-NTMs), and show that they can effectively learn to recognize some D>1 lan- guages from limited data by emulating determin- istic pushdown automata. Previous studies used simple RNN models (Bernardy, 2018; Sennhauser and Berwick, 2018; Suzgun et al., 2019b; Yu et al., 2019) and memory-augmented architectures (Hao et al., 2018) to attempt to learn D2 under different 2In particular, D2 is sufficient for this purpose (Magniez et al., 2014; Suzgun et al., 2019b). 2 training platforms; however, none of them were able to obtain good performance on this task. We thus present the first demonstration that a memory- augmented neural network (MARNN) can learn D>1 languages. 
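For concreteness, the following minimal sketch shows the deterministic pushdown behaviour that a network must emulate to recognize a generalized Dyck language Dn: unlike D1, a counter no longer suffices, because the recognizer has to remember which bracket is open at every nesting level.

```python
def is_dyck(word, pairs=("()", "[]", "{}")):
    """Deterministic pushdown-style membership check for D_n, the Dyck language
    over n bracket pairs: push the expected closer for each opener and require
    every closer to match the top of the stack."""
    closer_of = {opener: closer for opener, closer in pairs}
    openers = set(closer_of)
    stack = []
    for symbol in word:
        if symbol in openers:
            stack.append(closer_of[symbol])        # expect this closer later
        elif not stack or stack.pop() != symbol:   # wrong or unmatched closer
            return False
    return not stack

# e.g. is_dyck("([])[]") -> True,  is_dyck("([)]") -> False
```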
Moreover, we evaluate the learn- ing capabilities of our architectures on six tasks whose solutions require the employment of stack- based approaches, namely learning the D2, D3 and D6 languages, recognizing the deterministic palin- drome language (w#wR) and the deterministic homomorphic palindrome language (w#ϕ(wR)), and performing the string-reversal transduction task (w#|w| ⇒ #|w|wR). Our results reflect the better modeling capacity of our MARNNs over the standard RNN and LSTM models in capturing hi- erarchical representations, in addition to provid- ing an insightful glimpse of the promise of these models for real-world natural-language processing tasks.3 # 2 Related Work 2.1 Learning Formal Languages Using # Neural Networks Using neural network architectures to recognize formal languages has been a central computational task in gaining an understanding of their expres- sive ability for application to natural-language- processing tasks. Elman (1991) marked the be- ginning of such methodological investigations and devised an artificial language learning platform where Simple Recurrent Networks (SRNs) (El- man, 1990), were trained to learn the hierarchical and recursive relationships between clauses. An analysis of the hidden state dynamics revealed that the models learned internal representations that encoded information about the grammatical struc- ture and dependencies of the synthetic language. Later, Das et al. (1992) introduced the first RNN model with an external stack, the Recurrent Neu- ral Network Pushdown Automaton (NNPDA), to learn simple deterministic context-free grammars. Following Elman’s work, many studies used SRNs (Steijvers, 1996; Tonkes and Wiles, 1997; Hölldobler et al., 1997; Rodriguez and Wiles, 1998; Bodén et al., 1999; Bodén and Wiles, 2000; Rodriguez, 2001) and stack-based RNNs (Das et al., 1993; Zeng et al., 1994) to rec- ognize simple context-free and context-sensitive counter languages, including anbn, anbncbmam, 3Our code is available at https://github.com/ suzgunmirac/marnns. anbmBmAn, an+mbncm, anbncn, (ban)m, and D1. Nonetheless, none of the SRN-based mod- els were able to generalize far beyond their train- ing set. Some of these studies also focused on un- derstanding and visualizing the internal represen- tations learned by the hidden units of the networks, as well as the computational capabilities and limi- tations of the models. In contrast, Gers and Schmidhuber (2001), Schmidhuber et al. (2002), and Gers et al. (2002), showed that their (LSTM) networks could not only competently learn two strictly context-free lan- guages, anbn and anbmBmAn, and one strictly context-sensitive language anbncn, but also gen- eralize far beyond the training datasets. # 2.2 Memory-Augmented Neural Networks Recently, memory-augmented architectures have been considered for language modeling tasks: Joulin and Mikolov (2015) proposed a differen- tiable stack structure controlled by an RNN to in- fer algorithmic patterns that require some com- bination of counting and memorization. Though their model could learn anbn, anbncn, anbncndn, anb2n, anbmcn+m, it did not exceed the perfor- mance of a standard LSTM on a language mod- eling task. Inspired by the early architecture de- sign of NNPDA, Grefenstette et al. (2015) intro- duced LSTM models equipped with unbounded differentiable memory structures, such as stacks, queues, and double-linked lists, and explored their computational power on synthetic transduction tasks. 
In their experiments, their Neural-Stack and Neural-Queue architectures outperformed the standard LSTM architectures. Neither of these studies on stack-augmented neural architectures inspected the internal representations learned by the recurrent hidden layers or investigated the per- formance of their models on the Dyck language. Graves et al. (2014) introduced the Neural Tur- ing Machine (NTM), which consists of a neu- ral network (which can be either feed-forward or recurrent) together with a differentiable external memory, and demonstrated its successful perfor- mance on a series of simple algorithmic tasks, such as copying, repeated copying, and sorting. At each time step, an NTM can interact with the external memory via its differentiable attention mechanisms and determine its output using the in- formation from the current hidden state together with the filtered context from the external memory. It is evident from the design differences that the 3 degree of freedom of NTMs is much greater than that of stack-augmented recurrent networks. How- ever, this freedom comes at a price: The different ways in which we can attend to the memory to read and write at each time step make the training of the neural models challenging. Since the publica- tion of the original NTM paper, there have been a number of studies addressing instability issues of the NTM architecture, or more broadly memory- augmented recurrent network models, and propos- ing new architecture designs. We refer to Zaremba and Sutskever (2015); Kurach et al. (2015); Yang (2016); Graves et al. (2016); Gulcehre et al. (2018) for such proposals. 2.3 Investigations of the Dyck Languages Deleu and Dureau (2016) used NTMs to capture long-distance dependencies in D1. Their exami- nation showed that NTMs indeed learn to emulate stack representations and generalize to longer se- quences. However, a model need not be equipped with a stack to recognize this simplest Dyck lan- guage in a standard learning environment; count- ing is sufficient for an automaton to capture D1 (Suzgun et al., 2019b; Yu et al., 2019). In assessing the ability of recurrent neural net- works to process deep and long-distance depen- dencies, Skachkova et al. (2018) and Sennhauser and Berwick (2018) conducted experiments on the Dyck languages to see whether LSTMs could learn nested structures. The former sought to pre- dict the single correct closing parenthesis, given a Dyck word without its final closing symbol. Although LSTMs performed almost perfectly in this completion task, one cannot draw any defini- tive conclusion about whether these models really learn the Dyck languages, since even counter au- tomata can achieve perfect accuracy on this task.4 Similarly, Bernardy (2018) used three differ- ent recurrent networks, namely LSTM, GRU, and RUSS, and combinations thereof, to predict the next possible parenthesis at each time step, as- suming that it is a closing parenthesis. His RUSS model is a purpose-designed model containing re- current units with stack-like states and appears to generalize well to deeper and longer sequences. 4For instance, a model can learn to separately count the number of left and right parentheses for each of the n distinct pairs and predict the closing parenthesis for the pair for which the counter is non-zero. Suzgun et al. (2019b) and Yu et al. (2019) also discuss the drawbacks of Skachkova et al.’s learn- ing task and argue that the task is insufficient for illustrating that a network can learn D2. 
However, as the author mentions, the specificity of the RUSS architecture disqualifies it as a prac- tical model choice for real-world language model- ing tasks. Hao et al. (2018) studied the interpretability of Neural Stack models (Grefenstette et al., 2015) in a number of simple language modeling tasks, in- cluding parenthesis prediction, string reversal, and XOR evaluation. Though their Neural Stacks ex- hibited intuitive stack behaviors on their context- free transduction tasks and performed almost as well as the standard LSTM models, the authors their stack-augmented models were noted that more difficult to train than the traditional LSTMs. More recently, Suzgun et al. (2019b) corrobo- rated the theoretical findings of Weiss et al. (2018) by showing that RNN, GRU, and LSTM models could perform dynamic counting by recognizing D1 as well as shuffles of multiple D1 languages by emulating simple k-counter machines, while being incapable of recognizing D2. framework of experimental Sennhauser and Berwick (2018) and the data gen- eration procedure of Skachkova et al. (2018), Yu et al. (2019) conducted experiments on D2 under different training schemes and objectives using relatively large bi-directional LSTM models. Their recurrent networks failed to generalize well beyond the scope of their training data to learn D2 under the closing-parenthesis completion and sequence-to-sequence settings.5 Finally, Hahn (2019) used D2 to explore the the- oretical limitations of self-attention architectures (Vaswani et al., 2017). He demonstrated that self- attention models, even when equipped with infi- nite precision, cannot capture D2, unless the num- ber of layers or attention heads increases with the length of the input sequence. In summary, recognizing the Dyck languages has been an important probing task for understand- ing the ability of neural networks to capture hier- archical information. Thus far, none of the recur- rent neural networks have been shown to capture D>1. This present work, therefore, provides the first demonstration of RNN-based models learning D>1, in particular, D2, D3, and D6, in addition to other difficult context-free languages. 5We note that the authors attempted to generate the short- est proper sequence of closing parentheses given a prefix of a Dyck word under the sequence-to-sequence framework. This task is different from the previous tasks and requires a gener- ative model. 4 # 3 Models In this section, we describe the mathematical for- mulations of our memory-augmented RNNs.The inspiration for our stack-augmented neural archi- tectures came from the pushdown automaton, an abstract machine capable of recognizing context- free languages. Similar stack-based neural net- works, however, have also been proposed by oth- ers (Pollack, 1991; Das et al., 1992; Joulin and Mikolov, 2015; Grefenstette et al., 2015). Our models differ from them in their theoretical sim- plicity and empirical success. Our Baby-NTM, on the other hand, can be considered as a simplifi- cation of the original NTM architecture (Graves et al., 2014): As opposed to using soft-attention mechanisms to read and write to the external mem- ory, we make deterministic decisions and always read content from and write to the first entry of the memory, thereby making the learning process easier while retaining universal expressivity. Notation We will assume the following nota- tion: • x = x1, ..., xT : The input sequence of one- hot vectors, with the i-th token xi. • yi: The output associated with xi. 
• W: The learnable weights of the model. • b: The learnable bias terms of the model. • hi: The i-th hidden state representation. • D: The dim. of the input and output samples. • H: The dim. of the hidden state of the model. • M : The dim. of the external stack/memory. # 3.1 Stack-RNN Before we begin describing our Stack-RNN, recall the formulation of a standard RNN: ht = tanh(Wihxt + bih + Whhh(t−1) + bhh) yt = f (Wyht) where xt ∈ RD is the input, ht ∈ RH the hidden state, yt ∈ RD the output at time t, Wy ∈ RD×H the linear output layer, and f a transformation. While designing our Stack-RNN, we come (i) Where and across two important questions: how should we place the stack in the neural net- work, and (ii) how should we design the stack so that we can backpropagate errors through the stack at the time of training? Regarding (i), we place the stack in such a way that it interacts with the hid- den layers at each time step. The benefit of this Xt Figure 1: An abstract representation of our Stack-RNN architecture. approach is that errors made in future stages of the model affect and backpropagate through not only the hidden states but also the stack states. Regarding (ii), we construct a differentiable stack structure. Figure 1 provides a visualization of the Stack-RNN. Its formulation is: ˜h(t−1) = h(t−1) + Wshs(0) (t−1) ht = tanh(Wihxt + bih + Whh˜h(t−1) + bhh) yt = σ(Wyht) at = softmax(Waht) nt = σ(Wnht) s(0) t = a(0) t = a(0) s(i) where st = s(0) is the stack configu- ration at time step t, with s(0) the topmost stack element; Wsh ∈ RH×M , Wy ∈ RD×H , Wa ∈ R2×H , and Wn ∈ RM ×H are all learnable linear weights of the model. At each time step, we combine the topmost stack element s(0) (t−1) with the previous hidden state h(t−1) via a linear mapping to produce an interme- diate hidden state ˜h(t−1). We then use ˜h(t−1), to- gether with the input, to generate the current hid- den state ht, from which both the output at that time step yt and the weights of the PUSH (a(0) ) and POP (a(1) ) operations by the stack controller are determined simultaneously. Here at ∈ R2 is a probability distribution over the two operations. Finally, we update the stack elements in such a way that the elements become the weighted lin- ear interpolation of both possible stack operations. We can, therefore, consider the elements in the stack as variables in superposition states. We highlight the following differences between our Stack-RNN and the Stack-RNN by Joulin and Mikolov (2015), as further explicated in the ap- pendix. First, their model does not contain the term ˜h(t−1) and it updates ht as follows: ht = σ(Wihxt + Whhh(t−1) + Wshs(0:k) (t−1)) where Wsh ∈ RH×k and s(0:k) (t−1) the k-topmost el- ements of the stack at time t − 1. But a simple analysis of our Stack-RNN formulation divulges that s(0) (t−1) depends on both Whh and Wsh in our formulation, whereas it only depends on Wsh in Joulin and Mikolov’s formulation. Furthermore, their architecture takes the sigmoid of the linear combination of xt, h(t−1), and s(0:k) (t−1), in addition to excluding the bias terms, to update ht. # 3.2 Stack-LSTM The Stack-LSTM is similar to the Stack-RNN but contains additional components of the standard LSTM architecture by Hochreiter and Schmidhu- ber (1997). In this model, we update the hidden state of the model according to the standard LSTM equations, that is ht = LSTM(xt, ˜h(t−1)). # 3.3 Baby-NTM The Baby-NTM is both an extension of the Stack- RNN and a simplification of the original NTM. 
While the Stack-RNN contains an unbounded stack mechanism, it can perform only two basic operations on the stack, namely the PUSH and POP actions. In the Baby-NTM architecture, we fix the size of the external memory but provide more freedom to the model: While the interac- tion between the controller and the memory in the design of the Baby-NTM is mostly similar 5 to that of the Stack-RNN, we allow five opera- tions on the memory at each time step to update its contents: ROTATE-RIGHT, ROTATE-LEFT, NO-OP, POP-LEFT, and POP-RIGHT. Suppose the current memory configuration M is that [a, b, c, d, e], where M(i) ∈ R. Then the opera- tions produce the following configurations at the next time step: ROTATE-RIGHT : [e, a, b, c, d]. ROTATE-LEFT : [b, c, d, e, a]. NO-OP : [a, b, c, d, e]. POP-RIGHT : [0, a, b, c, d]. POP-LEFT : [b, c, d, e, 0]. If we think of the memory as a set M sitting on an n-dimensional Euclidean space Rn, we can then think of these operations as n × n matrices. From an algebraic point of view, we can realize these actions on the memory as left-monoid actions on a set, since matrix multiplication is associative and the matrix corresponding to the operation NO-OP serves the role of an identity element in our com- putations. Below is the formulation of the Baby- NTM architecture: ih _ (0) hoa) = be) + WmMi_,) hy = tanh(Win2e + bin + Wane + ban) u= o(Wyh,) a, = softmax(W hr) nt = 0(W,h:) N M, =5- a‘) lop] M.1 i=l mM = mM +n where Mt denotes the memory configuration at time step t, nt the value of the element to be in- serted to the first entry of the memory at time step t, OP(i) the matrix corresponding to the i-th ac- tion on the memory and a(i) the weight of that ac- t tion at time t, and all W’s learnable matrices of the model.6 # 3.4 Softmax Functions The softmax function in the calculation of at in all these models enables us to map the values of 6As before, the memory here can be considered as the lin- ear superposition of the results of all the memory operations. 6 the vector Waht to a categorical probability dis- tribution. We investigate the effect of more de- terministic decisions about the stack/memory op- erations on the robustness of our model. A natu- ral approach is to employ a softmax function with varying temperature τ : softmax-temp(x;,7) = —exp(ti/t) dja exp(2;/7) The softmax-temp function behaves exactly like the standard softmax function when the temper- ature value τ equals 1. As the temperature in- creases, softmax-temp produces more uniform categorical class probabilities, whereas as the tem- perature decreases, the function outputs more dis- crete probabilities, like a one-hot encoding. Furthermore, Jang et al. (2016) proposed an ef- ficient and differentiable approximation to sam- pling from a discrete categorical distribution using a reparameterization trick: Gumbel-softmax-temp(x;, {g1,.--,9},7) exp((log x; + gi)/T) ye exp((log x; + 9;)/T) i.i.d.∼ Gumbel(0, 1). As an alternative where gi to the softmax function with varying temperature, one might want to use the Gumbel-softmax sam- pling method. In cases where we have more than two operations on the stack/memory, it might be tempting to prefer the Gumbel-softmax sampling approach for the calculation of at values. We ex- periment with these alternatives below. # 4 Experimental Setup To evaluate the performance of the MARNNs, we conducted experiments on six computational tasks whose solutions require the formation of stack structures. 
In all the experiments, we used both standard and memory-augmented RNNs to explore the differences in their performances, in addition to the Stack-RNN model by Joulin and Mikolov (2015), and repeated each experiment 10 times. Furthermore, we aimed to investi- gate softmax functions with varying temperature in our MARNNs and thus employed 12 models with different configurations—two vanilla recur- rent models, three MARNNs with three different softmax functions, and one Stack-RNN by Joulin and Mikolov (2015)—for the six tasks. Sample | ((]) | abc# zyx Input ( [ ] ) Output | (/[/) (/I/ (I) CL a/b/c/# b a/b/c/# c zy & a/b/ce/# z yu A Table 1: Example input-output pairs for D2 (left) and the deterministic homomorphic palindrome language (right) under the sequence prediction paradigm. # 4.1 The Sequence Prediction Task Following Gers and Schmidhuber (2001), we trained the networks as follows: At each time step, we presented one input symbol to the network and then asked the model to predict the set of next pos- sible symbols in the language, based on the cur- rent symbol, the prior hidden states, and the stack. We used a one-hot representation to encode the in- puts and a k-hot representation to encode the out- puts. Table 1 provides example input-output pairs for two of the experiments. In all the experiments, the objective was to min- imize the mean-squared error of the sequence pre- dictions. We used an output threshold criterion of 0.5 for the sigmoid layer (yt = σ(·)) to indicate which symbols were predicted by the model. Fi- nally, we turned this sequence prediction task into a sequence classification task by accepting a se- quence if the model predicted all of its output val- ues correctly and rejecting it otherwise. Distribution ofthe Length of the Sequences Distribution of the Max Depth Reached by the Sequences ‘ai Figure 2: Length and maximum depth distributions of training/test sets for an example D2 experiment. new neural architectures that could recognize D2 and other difficult context-free languages. 5.1 The D2 Language We trained the Stack-RNN, Stack-LSTM, and Baby-NTM architectures with slightly different memory-controller configurations, in addition to standard RNNs, to learn D2. A probabilistic context-free grammar for D2 can be written as follows: 4.2 Training Details In contrast to the models in Joulin and Mikolov (2015); Grefenstette et al. (2015); Hao et al. (2018); Yu et al. (2019), our architectures are eco- nomical: Unless otherwise stated, the models are all single-layer networks with 8 hidden units. In all the experiments, the entries of the memory were set to be one-dimensional, while the size of the memory in the Baby-NTMs was fixed to 104 (since the length of the longest sequence in all the tasks was 100). We used the Adam optimizer (Kingma and Ba, 2014) and trained our models for three epochs. # 5 Learning the Dyck Languages As described in the introduction, the Dyck lan- guages D>1 provide an ideal test-bed for explor- ing the ability of recurrent neural networks to cap- ture the core properties of the context-free lan- guages, their hierarchical modeling ability. None of the previous studies were able to learn the Dyck languages, with the exception of D1 (which can be captured using a simple one-counter machine). The main motivation of this paper was to introduce S → ( S ) with probability p 2 [ S ] with probability p 2 S S with probability q ε with probability 1 − (p + q) where 0 < p, q < 1 and p + q < 1. 
4 , we generated 5000 distinct Dyck words, whose lengths were bounded to [2, 50], for the training sets. Similarly, we gen- erated 5000 distinct words whose lengths were bounded to [52, 100] for the test sets. Hence, there was no overlap between the training and test sets. Test set performance requires generalization well past the training set lengths. As it can be seen in the length and maximum depth distributions of the training and test sets for one of the D2 experiments in Figure 2, the test samples contained longer de- pendencies than the training sample. Setting p = 1 Table 2 lists the performances of the vanilla and memory-augmented recurrent models on the train- ing and test sets for the Dyck language. Our em- pirical results highlight the dramatic performance difference between the memory-augmented recur- rent networks and vanilla recurrent networks: We 7 Training Set Test Set Models Min Max Med Mean Min Max Med Mean Vanilla RNN Vanilla LSTM Stack-RNN by J&M (2015) 3.32 36.16 0 12.78 62.80 100 6.41 53.24 100 7.11 52.38 70.50 0 0.28 0 0 4.10 100 0 1.02 100 0 1.39 70.00 Stack-RNN+Softmax Stack-RNN+Softmax-Temp Stack-RNN+Gumbel-Softmax 100 100 3.44 100 100 100 100 100 99.98 100 100 90.32 99.96 99.92 0 100 100 100 100 100 99.96 99.99 99.99 89.96 Stack-LSTM+Softmax Stack-LSTM+Softmax-Temp Stack-LSTM+Gumbel-Softmax 62.52 46.70 50.26 100 100 100 100 100 99.94 95.69 94.67 94.97 2.78 0.80 0.70 100 100 99.94 98.25 99.73 99.33 87.51 89.84 88.68 Baby-NTM+Softmax Baby-NTM+Softmax-Temp Baby-NTM+Gumbel-Softmax 2.56 1.16 5.66 100 100 100 100 99.88 99.88 75.80 72.43 89.39 0 0 0 100 100 99.90 99.91 96.97 99.54 68.73 68.23 86.85 Table 2: The performances of the vanilla and memory-augmented recurrent models on D2. Min/Max/Median/Mean results were obtained from 10 different runs of each model with the same random seed across each run. We note that both Stack-RNN+Softmax and Stack-RNN+Softmax-Temp achieved full accuracy on the test sets in 8 out of 10 times. Strength of Memory Operations at Each Timestep NO-MOVE rons a a POP-LEFT 1.0 ROTATE LEFT ROTATE RIGHT Actions -0.4 -02 (LOE1}))11)) Sequence -0.0 (gull ¢ Memory Entries at Each Timestep Memory Location 0.0 to Sequence Figure 3: Visualizations of the strength of the memory operations (left) and the values of the memory entries (right) of a Baby-NTM+Softmax model trained to learn D2. We highlight that the Baby-NTM appears to have learned to emulate a simple but effective differentiable pushdown automaton to recognize D2. note that almost all our stack/memory-augmented architectures achieved full accuracy on the test set, which contained longer and deeper sequences than the training set, while the vanilla RNNs and LSTMs failed to generalize with below 5% accu- racy. We further observe that the Stack-RNN pro- posed by Joulin and Mikolov (2015) performed nearly as well as our models, though ours per- formed better than theirs on average. When evaluated based on their empirical me- the dian and mean percent-wise performances, Stack-RNNs appear to be slightly more suc- cessful than the Stack-LSTMs and the Baby- NTMs. Both the Stack-RNN+Softmax and Stack- RNN+Softmax-Temp obtained perfect accuracy on the test sets 8 out of 10 times, whereas the best Stack-LSTM variant, Stack-LSTM+Softmax- Temp, achieved perfect accuracy only 3 out of 10 times. Nevertheless, we acknowledge that most of our stack-augmented models were able to success- fully generalize well beyond the training data. 
7 Figure 3 provides a visualization of the strengths of the memory operations and the change in the values of the entries of the memory compo- nent of one of our memory-augmented models (a Baby-NTM+Softmax with 8 hidden units) at each time step when the model was presented a sam- 7We additionally note that, contrary to our initial expec- tation, using a softmax activation function with varying tem- perature did not improve the performance of our memory- augmented neural models in general. However, the networks might actually benefit from temperature-based softmax func- tions in the presence of more categorical choices, because currently the models have only a very limited number of memory operations. 8 Training Set Test Set Models Min Max Med Mean Min Max Med Mean Vanilla RNN Vanilla LSTM Stack-RNN by J&M (2015) 0.82 24.16 9.02 14.88 39.76 100 11.19 31.55 98.17 9.52 32.58 79.32 0 0 0 0 0.16 100 0 0.02 91.29 0 0.04 66.72 Stack-RNN+Softmax Stack-RNN+Softmax-Temp Stack-RNN+Gumbel-Softmax 7.80 37.64 1.78 100 99.98 100 100 95.74 44.55 81.75 81.95 50.71 0 0.06 0 100 98.18 99.98 100 67.32 21.94 80.00 52.49 43.65 Stack-LSTM+Softmax Stack-LSTM+Softmax-Temp Stack-LSTM+Gumbel-Softmax 33.98 37.64 25.74 100 99.98 99.98 92.25 95.74 78.21 77.97 81.95 72.01 0.04 0.06 0 99.94 98.18 99.2 61.54 67.32 27.08 55.49 52.49 42.17 Baby-NTM+Softmax Baby-NTM+Softmax-Temp Baby-NTM+Gumbel-Softmax 4.60 6.40 0.76 100 100 100 84.29 16.44 11.70 60.63 39.97 43.42 0 0 0 100 100 99.9 23.44 0.51 0 44.51 27.46 38.76 Table 3: The performances of the vanilla and memory-augmented recurrent models on D3. In 32 out of 100 trials, the MARNNs with 8 hidden units and one-dimensional memory achieved over 99% accuracy on the test sets. However, increasing the dimensional of the memory for our MARNNs further improved our results. Memory Location 5 6 7 4 Sequence Memory Entries at Each Timestep 1.0 0.0 Pv £ £€ FC FC € Cy Yd T 1 FOF Figure 4: Visualization of the values of the memory entries of a Baby-NTM+Softmax model trained to learn D3. ple in D2. The Baby-NTM appears to be using its ROTATE-RIGHT and POP-RIGHT operations for the open parentheses ‘(’ and ‘[’, respectively, and POP-LEFT operation for both of the closing parentheses ‘)’ and ‘]’, thereby emulating a simple PDA-like mechanism. A careful inspection of the memory entries of the Baby-NTM indicates that the model utilizes a special marker with a distinct value (∼0.45 in our example) to distinguish an empty stack configuration from a processed stack configuration. On the other hand, the memory alone does not dictate the output values: The hid- den states of the model govern the overall behavior and embody a finite-state control, as shown in the formulation of the Baby-NTM.8 8The visualizations for the other memory-augmented models were qualitatively similar, though some networks learned more complex representations. We further empha- 5.2 The D3 and D6 Languages We further conducted experiments on the D3 and D6 languages to evaluate the ability of our memory-augmented architectures to encode more complex hierarchical representations. The train- ing and test corpora were generated in the same style as the previous task; however, we included 15, 000 samples in the training set for D6, due to its complexity. 
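The operation strengths visualized in Figure 3 come from the controller producing a distribution over a small set of memory operations and blending the resulting memory candidates, which is also where the Softmax, Softmax-Temp, and Gumbel-Softmax variants compared in Tables 2 and 3 differ. The sketch below shows that blending step only; the concrete operation semantics (NO-OP, ROTATE-LEFT/RIGHT as circular shifts, POP-LEFT/RIGHT as shift-and-zero), the tensor shapes, and the omitted write step are illustrative assumptions rather than the Baby-NTM definitions from Section 3.

```python
import torch
import torch.nn.functional as F

def candidate_memories(mem):
    """Apply each candidate operation to a memory vector mem of shape (batch, n).
    Operation semantics here are illustrative guesses, not the paper's definitions."""
    no_op = mem
    rot_left = torch.roll(mem, shifts=-1, dims=-1)
    rot_right = torch.roll(mem, shifts=1, dims=-1)
    pop_left = torch.cat([mem[:, 1:], torch.zeros_like(mem[:, :1])], dim=-1)
    pop_right = torch.cat([torch.zeros_like(mem[:, :1]), mem[:, :-1]], dim=-1)
    return torch.stack([no_op, rot_left, rot_right, pop_left, pop_right], dim=1)

def soft_memory_update(mem, action_logits, temperature=1.0, use_gumbel=False):
    """Blend the candidate memories with weights from a temperature-scaled or
    Gumbel softmax over the controller's action logits."""
    if use_gumbel:
        weights = F.gumbel_softmax(action_logits, tau=temperature)      # (batch, 5)
    else:
        weights = F.softmax(action_logits / temperature, dim=-1)        # (batch, 5)
    cands = candidate_memories(mem)                                     # (batch, 5, n)
    return (weights.unsqueeze(-1) * cands).sum(dim=1)                   # (batch, n)

# Example: a controller with 8 hidden units choosing among 5 operations.
hidden = torch.randn(4, 8)
to_logits = torch.nn.Linear(8, 5)
mem = torch.zeros(4, 104)
mem = soft_memory_update(mem, to_logits(hidden), temperature=0.5)
```

Because the blend is a convex combination, the update stays differentiable while still letting the weights collapse onto a single operation, which is what the near-discrete action strengths in Figure 3 suggest happens after training.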
As shown in Table 3, the Stack-RNN+Softmax model had the best performance among all the neural networks on the D3 learning task, obtaining size that the dimensions of the external stack and memory entries in our MARNNs were set up to be one-dimensional for visualization purposes, but we additionally experimented with higher dimensional memory structures and observed that such additions often increased the overall performances of the models, especially the performance of the Baby-NTMs. 9 Training Set Test Set Models Min Max Med Mean Min Max Med Mean Vanilla RNN Vanilla LSTM Stack-RNN by J&M (2015) 21.19 32.47 99.47 25.71 41.62 100 23.53 37.35 100 23.39 37.05 99.94 0 0 97.60 0.02 0.06 100 0 0 99.99 0 0.01 99.70 Stack-RNN+Softmax Stack-RNN+Softmax-Temp Stack-RNN+Gumbel-Softmax 99.92 36.83 20.90 100 100 99.98 100 98.54 99.93 99.99 80.09 91.62 99.32 0 0 100 100 99.92 99.99 78.44 99.50 99.85 60.88 87.69 Stack-LSTM+Softmax Stack-LSTM+Softmax-Temp Stack-LSTM+Gumbel-Softmax 98.48 36.83 36.20 100 100 99.94 99.99 98.54 67.50 99.79 80.09 68.62 91.12 0 0 100 100 99.90 99.18 78.44 24.61 98.23 60.88 44.47 Baby-NTM+Softmax Baby-NTM+Softmax-Temp Baby-NTM+Gumbel-Softmax 99.94 86.12 22.56 100 100 99.98 100 100 99.86 99.99 98.15 75.46 99.00 8.56 0 100 100 99.86 99.97 99.91 99.23 99.87 88.40 63.49 Table 4: The performances of the vanilla and memory-augmented recurrent models on the D6. We note that the MARNNs in this example contain 12 hidden units and 5-dimensional external stack/memory. In 70 out of 100 trials, the MARNNs performed over 99% accuracy. Overall, the Baby-NTM+Softmax had the best performance. perfect accuracy in eight out of ten trials. Follow- ing the Stack-RNNs, the Stack-LSTMs and Baby- NTMs, on average, achieved around 50% and 37% accuracy on the test set, respectively. On the other hand, the Stack-RNN by Joulin and Mikolov (2015) could generalize better than most of our models in terms of its median and mean scores, albeit still not better than our Stack-RNN. lem. Table 4 summarizes our new results with the same architectures containing 12 hidden units and 5-dimensional augmented stack/memory. We saw a significant increase in the performance of our models: In 60 out of 90 trials, our enhanced MARNNs achieved almost perfect (≥ 99%) accu- racy on the test set. # 6 Learning Palindrome Languages Figure 4 illustrates the behavior of one of our Baby-NTMs as the model is presented a long se- quence in D3. It is remarkable to witness how the memory-augmented model makes use of its exter- nal memory to learn a sequence of actions to rec- ognize a sample in D3. Similar to the behavior of the previous model in Figure 3, the RNN con- troller of the Baby-NTM model in this instance appears to be using the differentiable memory as a stack-like structure and inserting distinct val- ues to the memory at different time steps. Fur- thermore, we note the presence of special markers (0.22 in the first half and 0.21 in the second half – both colored blue) in the memory: These idiosyn- cratic memory elements marking the bottom of the used portion of the stack enable the model to know when to predict only the set of open parentheses. the overall performance of the MARNNs for D6 was much lower than for D2 and D3. For instance, none of our models could obtain full accuracy on the training or test sets; the maximum score our models could achieve was 60.38%. 
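Spelled out, the transduction above maps w#ϕ(w^R) to (a/b/c/#)^|w| ϕ(w^R) ⊣: while w is being read any of {a, b, c, #} may come next, and from the centre marker onwards the continuation is fully determined. A small sketch that generates such strings and their next-symbol target sets follows; the length bound and the '$' end marker are illustrative choices.

```python
import random

PHI = {"a": "x", "b": "y", "c": "z"}   # the homomorphism of Section 6.1
END = "$"                              # stands in for the end-of-sequence token

def make_palindrome(min_w=1, max_w=24):
    """Sample w over {a, b, c} and build the string w # phi(w^R)."""
    w = "".join(random.choice("abc") for _ in range(random.randint(min_w, max_w)))
    return w + "#" + "".join(PHI[ch] for ch in reversed(w))

def prediction_targets(s):
    """Next-possible-symbol sets for each prefix of s = w # phi(w^R):
    {a, b, c, #} while reading w, then the deterministic remainder, then END."""
    n, half = len(s), s.index("#")
    targets = []
    for i in range(n):
        if i < half:
            targets.append({"a", "b", "c", "#"})
        elif i < n - 1:
            targets.append({s[i + 1]})          # continuation is fully determined
        else:
            targets.append({END})
    return targets

s = make_palindrome()
print(s, prediction_targets(s)[:4])
```

On the sample "abc#zyx" this reproduces the right-hand column of Table 1: three predictions of {a, b, c, #}, followed by z, y, x, and the end marker.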
We wondered whether increasing the di- mension of the memory would remedy the prob- Our previous results established that the MARNNs can learn Dyck languages, which represent the core of the CFL class. We note that the Dyck lan- guages incorporate a notion of palindrome: The intersection of D,, with p*p* leads to a definition of a specific type of a homomorphic palindrome language wy(w*), where * is the Kleene star, y a homomorphism given by p; +> p;, and w* the re- versal of w. Therefore, we would expect our mod- els to be able to learn various deterministic ver- sions of palindrome languages. # 6.1 Homomorphic Palindrome Language Our first target of exploration is the deterministic homomorphic palindrome language, the language of words w#ϕ(wR) where w ∈ {a, b, c}∗, # is a symbol serving to mark the center of the palin- drome, and ϕ maps a to x, b to y, and c to z. We use the notion of recognition from the previ- ous section, predicting at each symbol the set of all possible following symbols. Viewed as a trans- duction, this amounts to the following task: wHy(w") = (a/b/c/#)"™e(w®) A 10 Training Set Test Set Models Min Max Med Mean Min Max Med Mean Vanilla RNN Vanilla LSTM Stack-RNN by J&M (2015) 0 0 0 0 5.22 100 0 2.23 46.04 0 2.39 49.13 0 0 0 0 0 100 0 0 50.42 0 0 50.00 Stack-RNN+Softmax Stack-RNN+Softmax-Temp Stack-RNN+Gumbel-Softmax 0 0 0 100 100 100 99.99 100 17.10 60.00 70.00 43.42 0 0 0 100 100 100 100 100 16.98 60.00 70.00 43.39 Stack-LSTM+Softmax Stack-LSTM+Softmax-Temp Stack-LSTM+Gumbel-Softmax 0 0 0 100 100 100 100 100 100 61.36 70.07 80.20 0 0 0 100 100 100 100 100 99.98 60.00 70.00 79.99 Baby-NTM+Softmax Baby-NTM+Softmax-Temp Baby-NTM+Gumbel-Softmax 0 0 0 100 100 100 67.16 99.99 60.43 53.43 60.00 52.09 0 0 0 100 100 100 66.55 100 61.30 53.31 60.00 52.26 Table 5: The performances of the vanilla and memory-augmented recurrent models on the deterministic homomor- phic palindrome language. Most of our MARNNs achieved almost full accuracy on the test sets. The training set for this task contained 5000 unique samples of length varying from 2 to 50, and the test set contained 5000 unique samples of length varying from 52 to 100. We remark that there was no overlap between the training and test sets, just as in the case of the Dyck language tasks. Table 5 lists the performances of the vanilla and memory-augmented models on the deterministic homomorphic palindrome language. We highlight the success of our models once again: While our MARNNs often performed with perfect accuracy on the training and test sets, the standard recur- rent models could not predict even one sample in the test set correctly. Overall, most of the variants of the Stack-RNN/LSTM and Baby-NTM models seem to have learned how to emulate pushdown automata: They learned to push certain values into their memory or stack whenever they read a char- acter from the {a, b,c} alphabet and then started popping them one by one after they would see #, and at the last step, the model predicted the end of the sequence token 4. The models did not perform equally well though: When evalu- ated on their mean and median percent-wise per- formances, for instance, the Stack-LSTMs were found to generalize better than the Stack-RNNs and the Baby-NTMs in this task. Further, it is hard to make a conclusive statement about whether em- ploying a softmax function with varying temper- ature in our MARNNs had any benefit. 
Never- theless, the Stack-LSTMs+Gumbel-Softmax per- formed slightly better than the other models in terms of their mean percentages on the test sets. 6.2 Simple Palindrome Language Taking the homomorphism ϕ to be the identity map in the previous language, it is reasonable to expect the models to learn the w#wR palindrome language. We evaluated recognition of this lan- guage once again as a possible-next-symbol pre- diction task, which can be viewed as the following sequence transduction task: ww? = (a/b/c/#)w® A Surprisingly, all of our MARNN models had dif- ficulty learning this language. Only in three of 90 trials were our MARNNs able to learn the lan- guage; other times, the models typically obtained 0% accuracy during testing. When we increased the dimensionality of the memory to five, how- ever, our MARNNs immediately learned the task with almost full accuracy again. Given that the only difference between the previous task and this task is the second half of the strings, we conjec- tured that our models were getting confused in the second half: Because of vocabulary overlap in the two halves, the models might be using information from the second half when predicting the set of possible symbols in the second half, thereby get- ting distracted and finding themselves stuck at bad local minima. To verify our hypothesis, we thus performed one more task, string reversal, which we describe in the following section. # 7 Learning the String-Reversal Task In the previous section, we witnessed a strange phenomenon: Our MARNN models with 8 hidden 11 Training Set Test Set Models Min Max Med Mean Min Max Med Mean Vanilla RNN Vanilla LSTM Stack-RNN by J&M (2015) 0.06 0.68 0.16 0.90 5.08 100 0.46 3.62 50.29 0.46 3.50 50.19 0 0 0 0 0 100 0 0 50.00 0 0 50.00 Stack-RNN+Softmax Stack-RNN+Softmax-Temp Stack-RNN+Gumbel-Softmax 0.38 0.14 0.18 100 100 100 100 100 99.98 77.39 80.05 77.71 0 0 0 100 100 100 100 100 99.98 76.65 80.00 77.83 Stack-LSTM+Softmax Stack-LSTM+Softmax-Temp Stack-LSTM+Gumbel-Softmax 2.02 0.06 2.18 100 100 100 100 100 100 80.60 70.49 80.48 0 0 0 100 100 100 100 100 100 79.99 70.00 80.00 Baby-NTM+Softmax Baby-NTM+Softmax-Temp Baby-NTM+Gumbel-Softmax 0.08 0 0.20 100 100 100 100 100 100 86.65 70.07 90.01 0 0 0 100 100 100 100 100 99.97 86.67 70.00 89.96 Table 6: The performances of the vanilla and memory-augmented recurrent models on the string reversal task under the transduction setting. In 32 out of 90 trials, our MARNNs obtained perfect accuracy. units and one-dimensional external memory could learn the deterministic homomorphic palindrome language, but not the simple palindrome language. Since the only difference between the two tasks is the existence of a non-trivial isomorphism ϕ (and the vocabulary overlap in the two halves), we wanted to perform the string reversal task under a sequence transduction setting in which the rever- sal appears only in the output: w#lel gle? first demonstration of neural networks learning to recognize the generalized Dyck languages, which represent the “core” of the context-free language class. We further evaluated the learning capabili- ties of our models on recognizing the determinis- tic homomorphic palindrome language and simple palindrome language under the sequence predic- tion framework and performing the string-reversal task under a sequence transduction setting. In all the experiments, our MARNNs outperformed the vanilla RNN and LSTM models and often attained perfect accuracy on both the training and test sets. 
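Read as a transduction, the string-reversal task maps an input w#^|w| (w followed by |w| copies of '#') to the output #^|w| w^R (|w| copies of '#' followed by the reversal of w); this reading is an assumption, but it is the one under which the two halves of the input share no vocabulary, which is exactly the property the discussion relies on. A minimal pair generator under that reading, with an illustrative alphabet and length bound:

```python
import random

def reversal_pair(min_w=1, max_w=25):
    """One (input, output) pair for the string-reversal transduction:
    input  = w followed by |w| copies of '#',
    output = |w| copies of '#' followed by the reversal of w."""
    w = "".join(random.choice("abc") for _ in range(random.randint(min_w, max_w)))
    src = w + "#" * len(w)
    tgt = "#" * len(w) + w[::-1]
    return src, tgt

print(reversal_pair())   # e.g. ('abca####', '####acba')
```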
The training and test sets were similar to the pre- vious cases: 5000 samples each, with lengths bounded by [2, 50] and [52, 100], respectively. Table 6 illustrates that most of our MARNNs achieved perfect accuracy on the test sets in this version of the string reversal task. The results cor- roborated our conjecture by showing that when the second half of the input samples contained sym- bols from an alphabet other than the one used in the first half (in this case, the # symbol), the memory-augmented models do not get confused and act in the desired way (pushing elements into the stack in the first half and popping them one by one in the second half after seeing the marker #). When we visualized the hidden states and the memory entries of the models for this task, we ob- served that our MARNNs learned to emulate sim- ple pushdown-automata. # 8 Conclusion In this paper, we introduced three memory- augmented neural architectures and provided the Since we limited the dimensionality of the ex- ternal memory in our memory-augmented archi- tectures to one, we were also able to visualize the changes in the external memory of the Baby- NTMs trained to learn the D2 and D3 languages. Our simple analysis revealed that our MARNNs learned to emulate pushdown-automata to recog- nize these Dyck languages. Hao et al. (2018) men- tion that their Neural-Stack models could not per- fectly employ stack-based strategies to learn an appropriate representation to recognize the D2 lan- guage, and further address the difficulty of training stack-augmented recurrent networks. Although we agree that it is challenging to train MARNNs due to various optimization issues, one can still train these models with as few as eight or twelve hidden units to learn the Dyck languages, and our empirical findings support this claim. 12 # 9 Acknowledgment The authors appreciate the helpful comments of Michael Hahn, Yoav Goldberg, Drew Pendergrass, Dan Stefan Eniceicu, and Filippos Sytilidis. M.S. gratefully acknowledges the support of the Har- vard College Research Program (HCRP) and the Harvard Center for Research on Computation and Society Research Fellowship for Undergraduate Students. S.G. was supported by a Siebel Fellow- ship. Y.B. was supported by the Harvard Mind, Brain, and Behavior Initiative. The computations in this paper were run on the Odyssey cluster sup- ported by the FAS Division of Science, Research Computing Group at Harvard University. # References Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural Machine Translation by Jointly Learning to Align and Translate. arXiv preprint arXiv:1409.0473. Jean-Philippe Bernardy. 2018. Can Recurrent Neural Networks Learn Nested Recursion? LiLT (Linguistic Issues in Language Technol- ogy), 16(1). Mikael Bodén and Janet Wiles. 2000. Context- Free and Context-Sensitive Dynamics in Re- current Neural Networks. Connection Science, 12(3-4):197–210. Mikael Bodén, Janet Wiles, Bradley Tonkes, and Alan Blair. 1999. Learning to Predict a Context-Free Language: Analysis of Dynamics in Recurrent Hidden Units. Kyunghyun Cho, Bart Van Merriënboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using rnn encoder-decoder for statistical machine transla- tion. arXiv preprint arXiv:1406.1078. Noam Chomsky. 1957. Mouton, The Hague. Syntactic Structures. Noam Chomsky. 1962. Context-Free Grammars and Pushdown Storage. MIT Res. Lab. Elec- tron. Quart. Prog. Report., 65:187–194. 
Noam Chomsky and Marcel P Schützenberger. 1963. The Algebraic Theory of Context-Free 13 Languages. In Studies in Logic and the Foun- dations of Mathematics, volume 35, pages 118– 161. Elsevier. Sreerupa Das, C Lee Giles, and Guo-Zheng Sun. 1992. Learning Context-free Grammars: Capa- bilities and Limitations of a Recurrent Neural Network with an External Stack Memory. In Proceedings of The Fourteenth Annual Confer- ence of Cognitive Science Society. Indiana Uni- versity, page 14. Sreerupa Das, C Lee Giles, and Guo-Zheng Sun. 1993. Using Prior Knowledge in a NNPDA to In Advances Learn Context-Free Languages. in neural information processing systems, pages 65–72. Tristan Deleu and Joseph Dureau. 2016. Learn- ing Operations on a Stack with Neural Turing Machines. arXiv preprint arXiv:1612.00827. Jeffrey L Elman. 1990. Finding Structure in Time. Cognitive science, 14(2):179–211. Jeffrey L Elman. 1991. Distributed Representa- tions, Simple Recurrent Networks, and Gram- matical Structure. Machine learning, 7(2- 3):195–225. Felix A Gers, Juan Antonio Pérez-Ortiz, Douglas Eck, and Jürgen Schmidhuber. 2002. Learn- ing Context Sensitive Languages with LSTM In Interna- Trained with Kalman Filters. tional Conference on Artificial Neural Net- works, pages 655–660. Springer. Felix A Gers and E Schmidhuber. 2001. LSTM Recurrent Networks Learn Simple Context-Free and Context-Sensitive Languages. IEEE Trans- actions on Neural Networks, 12(6):1333–1340. Alex Graves, Abdel-rahman Mohamed, and Ge- offrey Hinton. 2013. Speech Recognition with In 2013 Deep Recurrent Neural Networks. IEEE international conference on acoustics, speech and signal processing, pages 6645– 6649. IEEE. Alex Graves, Greg Wayne, and Ivo Danihelka. 2014. Neural Turing Machines. arXiv preprint arXiv:1410.5401. Alex Graves, Greg Wayne, Malcolm Reynolds, Ivo Danihelka, Agnieszka Grabska-Barwi´nska, Sergio Gómez Col- Tiago menarejo, Ramalho, John Agapiou, et al. 2016. Hy- brid Computing Using a Neural Network Nature, with Dynamic External Memory. 538(7626):471. Edward Grefenstette, Karl Moritz Hermann, Mustafa Suleyman, and Phil Blunsom. 2015. Learning to Transduce with Unbounded Mem- In Advances in Neural Information Pro- ory. cessing Systems, pages 1828–1836. Sheila A Greibach. 1973. The hardest context- free language. SIAM Journal on Computing, 2(4):304–310. Caglar Gulcehre, Sarath Chandar, Kyunghyun Cho, and Yoshua Bengio. 2018. Dynamic Neu- ral Turing Machine with Continuous and Dis- crete Addressing Schemes. Neural computa- tion, 30(4):857–884. Michael Hahn. 2019. Theoretical limitations of self-attention in neural sequence models. arXiv preprint arXiv:1906.06755. Yiding Hao, William Merrill, Dana Angluin, Robert Frank, Noah Amsel, Andrew Benz, and Simon Mendelsohn. 2018. Context-Free Trans- arXiv preprint ductions with Neural Stacks. arXiv:1809.02836. Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long Short-Term Memory. Neural computa- tion, 9(8):1735–1780. Steffen Hölldobler, Yvonne Kalinke, and Helko Lehmann. 1997. Designing a Counter: Another Case Study of Dynamics and Activation Land- scapes in Recurrent Networks. In Annual Con- ference on Artificial Intelligence, pages 313– 324. Springer. Eric Jang, Shixiang Gu, and Ben Poole. 2016. Categorical Reparameterization with Gumbel- Softmax. arXiv preprint arXiv:1611.01144. Armand Joulin and Tomas Mikolov. 2015. Inferring Algorithmic Patterns with Stack- augmented Recurrent Nets. In Advances in neu- ral information processing systems, pages 190– 198. Nal Kalchbrenner and Phil Blunsom. 
2013. Re- current Continuous Translation Models. In Pro- ceedings of the 2013 Conference on Empiri- cal Methods in Natural Language Processing, pages 1700–1709. Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. Karol Kurach, Marcin Andrychowicz, and Ilya Sutskever. 2015. Neural Random-Access Ma- chines. arXiv preprint arXiv:1511.06392. Frédéric Magniez, Claire Mathieu, and Ashwin Nayak. 2014. Recognizing well-parenthesized SIAM expressions in the streaming model. Journal on Computing, 43(6):1880–1905. networks arXiv:1906.01615. as automata. Sequential neural arXiv preprint Tomáš Mikolov, Martin Karafiát, Lukáš Burget, Jan ˇCernock`y, and Sanjeev Khudanpur. 2010. Recurrent Neural Network Based Language Model. In Eleventh annual conference of the in- ternational speech communication association. Jordan B Pollack. 1991. The Induction of Dy- In Connectionist Ap- namical Recognizers. proaches to Language Learning, pages 123– 148. Springer. Simple Recurrent Networks Learn Context-Free and Context- Sensitive Languages by Counting. Neural com- putation, 13(9):2093–2118. Paul Rodriguez and Janet Wiles. 1998. Recur- rent Neural Networks can Learn to Implement In Advances in Symbol-Sensitive Counting. Neural Information Processing Systems, pages 87–93. Jürgen Schmidhuber, F Gers, and Douglas Eck. 2002. Learning Nonregular Languages: A Comparison of Simple Recurrent Networks and Neural Computation, 14(9):2039– LSTM. 2041. Luzi Sennhauser and Robert Berwick. 2018. Eval- uating the Ability of LSTMs to Learn Context- In Proceedings of the 2018 Free Grammars. EMNLP Workshop BlackboxNLP: Analyzing 14 and Interpreting Neural Networks for NLP, pages 115–124. Hava T Siegelmann and Eduardo D Sontag. 1994. Analog Computation via Neural Networks. Theoretical Computer Science, 131(2):331– 360. Hava T Siegelmann and Eduardo D Sontag. 1995. On the Computational Power of Neural Nets. Journal of computer and system sciences, 50(1):132–150. Natalia Skachkova, Thomas Trost, and Dietrich Klakow. 2018. Closing Brackets with Recur- In Proceedings of the rent Neural Networks. 2018 EMNLP Workshop BlackboxNLP: Analyz- ing and Interpreting Neural Networks for NLP, pages 232–239. Mark Steijvers. 1996. A Recurrent Network that Performs a Context-Sensitive Prediction Task. Ivan Hal Sudborough. 1976. On determinis- tic context-free languages, multihead automata, and the power of an auxiliary pushdown store. In Proceedings of the eighth annual ACM sym- posium on Theory of computing, pages 141– 148. ACM. Martin Sundermeyer, Ralf Schlüter, and Hermann Ney. 2012. LSTM Neural Networks for Lan- guage Modeling. In INTERSPEECH. Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to Sequence Learning with Neural Networks. In Advances in neural infor- mation processing systems, pages 3104–3112. Mirac Suzgun, Yonatan Belinkov, and Stuart M Shieber. 2019a. On Evaluating the General- ization of LSTM Models in Formal Languages. Proceedings of the Society for Computation in Linguistics (SCiL), pages 277–286. Mirac Suzgun, Sebastian Gehrmann, Yonatan Be- LSTM linkov, and Stuart Shieber. 2019b. networks can perform dynamic counting. In Proceedings of the Workshop on Deep Learn- ing and Formal Languages: Building Bridges, pages 44–54, Florence. Association for Compu- tational Linguistics. Bradley Tonkes and Janet Wiles. 1997. Learn- ing a Context-Free Task with a Recurrent Neu- In In ral Network: An Analysis of Stability. 
Proceedings of the Fourth Biennial Conference of the Australasian Cognitive Science Society. Citeseer.

Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention Is All You Need. In Advances in Neural Information Processing Systems, pages 5998–6008.

Gail Weiss, Yoav Goldberg, and Eran Yahav. 2018. On the Practical Computational Power of Finite Precision RNNs for Language Recognition. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 740–745.

Greg Yang. 2016. Lie Access Neural Turing Machine. arXiv preprint arXiv:1602.08671.

Xiang Yu, Ngoc Thang Vu, and Jonas Kuhn. 2019. Learning the Dyck Language with Attention-Based Seq2Seq Models. In Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 138–146, Florence, Italy. Association for Computational Linguistics.

Wojciech Zaremba, Tomas Mikolov, Armand Joulin, and Rob Fergus. 2016. Learning Simple Algorithms from Examples. In International Conference on Machine Learning, pages 421–429.

Wojciech Zaremba and Ilya Sutskever. 2015. Reinforcement Learning Neural Turing Machines, Revised. arXiv preprint arXiv:1505.00521.

Zheng Zeng, Rodney M Goodman, and Padhraic Smyth. 1994. Discrete Recurrent Neural Networks for Grammatical Inference. IEEE Transactions on Neural Networks, 5(2):320–330.

# A Comparison of Stack-RNN Architectures

Recall the formulation of our Stack-RNN architecture in Section 3.1. We update h_t, the hidden state at time t, as follows:

h_t = tanh(W_ih x_t + b_ih + W_hh h̃_(t−1) + b_hh)

where h̃_(t−1) is defined to be:

h̃_(t−1) = h_(t−1) + W_sh s(0)_(t−1)

with s(0)_(t−1) denoting the topmost entry of the stack at time t−1. Rewriting the equation for h_t, we realize that our formulation of Stack-RNN is almost equivalent to the Stack-RNN model by Joulin and Mikolov (2015):

h_t = tanh(W_ih x_t + b_ih + W_hh h̃_(t−1) + b_hh)
    = tanh(W_ih x_t + b_ih + W_hh (h_(t−1) + W_sh s(0)_(t−1)) + b_hh)
    = tanh(W_ih x_t + b_ih + W_hh h_(t−1) + W_hh W_sh s(0)_(t−1) + b_hh)    (∗)

In our Stack-RNN architecture, the contribution of s(0)_(t−1) to h_t depends, as (∗) shows, on the product W_hh W_sh, whereas in Joulin and Mikolov's Stack-RNN model it only depends on W_sh. Furthermore, we make use of tanh(·), instead of σ(·), to achieve non-linearity and include bias terms, namely b_ih and b_hh, in our definition of h_t.
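For concreteness, the controller update derived above can be written in a few lines of PyTorch. This is only a sketch of the hidden-state equation: the stack and its push/pop update from Section 3.1 are not included, the stack top s(0)_(t−1) is simply passed in as an argument, and the sizes (8 hidden units, one-dimensional stack entries) are the illustrative values used in the experiments.

```python
import torch
import torch.nn as nn

class StackRNNCell(nn.Module):
    """h_t = tanh(W_ih x_t + b_ih + W_hh (h_{t-1} + W_sh s0_{t-1}) + b_hh).
    Hidden-state update only; the stack update itself is omitted here."""
    def __init__(self, input_size=5, hidden_size=8, stack_dim=1):
        super().__init__()
        self.W_ih = nn.Linear(input_size, hidden_size, bias=True)    # W_ih, b_ih
        self.W_hh = nn.Linear(hidden_size, hidden_size, bias=True)   # W_hh, b_hh
        self.W_sh = nn.Linear(stack_dim, hidden_size, bias=False)    # W_sh

    def forward(self, x_t, h_prev, s_top_prev):
        h_tilde = h_prev + self.W_sh(s_top_prev)   # fold the stack top into h
        return torch.tanh(self.W_ih(x_t) + self.W_hh(h_tilde))

cell = StackRNNCell()
h = cell(torch.zeros(1, 5), torch.zeros(1, 8), torch.zeros(1, 1))
```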
{ "id": "1602.08671" }
1911.03343
Negated and Misprimed Probes for Pretrained Language Models: Birds Can Talk, But Cannot Fly
Building on Petroni et al. (2019), we propose two new probing tasks analyzing factual knowledge stored in Pretrained Language Models (PLMs). (1) Negation. We find that PLMs do not distinguish between negated ("Birds cannot [MASK]") and non-negated ("Birds can [MASK]") cloze questions. (2) Mispriming. Inspired by priming methods in human psychology, we add "misprimes" to cloze questions ("Talk? Birds can [MASK]"). We find that PLMs are easily distracted by misprimes. These results suggest that PLMs still have a long way to go to adequately learn human-like factual knowledge.
http://arxiv.org/pdf/1911.03343
Nora Kassner, Hinrich Schütze
cs.CL
ACL 2020
null
cs.CL
20191108
20200515
0 2 0 2 y a M 5 1 ] L C . s c [ 3 v 3 4 3 3 0 . 1 1 9 1 : v i X r a # Negated and Misprimed Probes for Pretrained Language Models: Birds Can Talk, But Cannot Fly Nora Kassner, Hinrich Sch ¨utze Center for Information and Language Processing (CIS) LMU Munich, Germany [email protected] # Abstract Building on Petroni et al. (2019), we pro- pose two new probing tasks analyzing fac- tual knowledge stored in Pretrained Language (1) Negation. We find Models (PLMs). that PLMs do not distinguish between negated (“Birds cannot [MASK]”) and non-negated (2) (“Birds can [MASK]”) cloze questions. Mispriming. Inspired by priming methods in human psychology, we add “misprimes” to cloze questions (“Talk? Birds can [MASK]”). We find that PLMs are easily distracted by misprimes. These results suggest that PLMs still have a long way to go to adequately learn human-like factual knowledge. Querying PLMs with these pairs and comparing the predictions, we find that the predicted fillers have high overlap. Models are equally prone to generate facts (“Birds can fly”) and their incor- rect negation (“Birds cannot fly”). We find that BERT handles negation best among PLMs, but it still fails badly on most negated probes. In a second experiment, we show that BERT can in principle memorize both positive and negative facts correctly if they occur in training, but that it poorly gener- alizes to unseen sentences (positive and negative). However, after finetuning, BERT does learn to cor- rectly classify unseen facts as true/false. # Introduction PLMs like Transformer-XL (Dai et al., 2019), ELMo (Peters et al., 2018) and BERT (Devlin et al., 2019) have emerged as universal tools that capture a diverse range of linguistic and factual knowledge. Recently, Petroni et al. (2019) introduced LAMA (LAnguage Model Analysis) to investigate whether PLMs can recall factual knowledge that is part of their training corpus. Since the PLM training ob- jective is to predict masked tokens, question an- swering (QA) tasks can be reformulated as cloze questions. For example, “Who wrote ‘Dubliners’?” is reformulated as “[MASK] wrote ‘Dubliners’.” In this setup, Petroni et al. (2019) show that PLMs out- perform automatically extracted knowledge bases on QA. In this paper, we investigate this capability of PLMs in the context of (1) negation and what we call (2) mispriming. (2) Mispriming. We use priming, a standard experimental method in human psychology (Tul- ving and Schacter, 1990) where a first stimulus (e.g., “dog”) can influence the response to a sec- ond stimulus (e.g., “wolf” in response to “name an animal”). Our novel idea is to use priming for probing PLMs, specifically mispriming: we give automatically generated misprimes to PLMs that would not mislead humans. For example, we add “Talk? Birds can [MASK]” to LAMA where “Talk?” is the misprime. A human would ignore the misprime, stick to what she knows and produce a filler like “fly”. We show that, in contrast, PLMs are misled and fill in “talk” for the mask. We could have manually generated more natural misprimes. For example, misprime “regent of Anti- och” in “Tancred, regent of Antioch, played a role in the conquest of [MASK]” tricks BERT into chos- ing the filler “Antioch” (instead of “Jerusalem”). Our automatic misprimes are less natural, but au- tomatic generation allows us to create a large mis- prime dataset for this initial study. (1) Negation. To study the effect of negation on PLMs, we introduce the negated LAMA dataset. 
We insert negation elements (e.g., “not”) in LAMA cloze questions (e.g., “The theory of relativity was not developed by [MASK].”) – this gives us posi- tive/negative pairs of cloze questions. Contribution. We show that PLMs’ ability to learn factual knowledge is – in contrast to human capabilities – extremely brittle for negated sen- tences and for sentences preceded by distracting material (i.e., misprimes). Data and code will be published.1 # 2 Data and Models LAMA’s cloze questions are generated from triples from knowledge subject-relation-object bases (KBs) and question-answer pairs. For KB triples, cloze questions are generated, for each re- lation, by a templatic statement that contains vari- ables X and Y for subject and object (e.g, “X was born in Y”). We then substitute the subject for X and MASK for Y. In a question-answer pair, we MASK the answer. LAMA is based on several sources: (i) Google- RE. 3 relations: “place of birth”, “date of birth”, “place of death”. (ii) T-REx (Elsahar et al., 2018). Subset of Wikidata triples. 41 relations. (iii) Con- ceptNet (Li et al., 2016). 16 commonsense rela- tions. The underlying corpus provides matching statements to query PLMs. (iv) SQuAD (Rajpurkar et al., 2016). Subset of 305 context-insensitive questions, reworded as cloze questions. We use the source code provided by Petroni et al. (2019) and Wolf et al. (2019) to evaluate Transformer-XL large (Txl), ELMo original (Eb), ELMo 5.5B (E5B), BERT-base (Bb) and BERT- large (Bl). Negated LAMA. We created negated LAMA by manually inserting a negation element in each template or question. For ConceptNet we only consider an easy-to-negate subset (see appendix). Misprimed LAMA. We misprime LAMA by inserting an incorrect word and a question mark at the beginning of a statement; e.g., “Talk?” in “Talk? Birds can [MASK].” We only misprime questions that are answered correctly by BERT- large. To make sure the misprime is misleading, we manually remove correct primes for SQuAD and ConceptNet and automatically remove primes that are the correct filler for a different instance of the same relation for T-REx and ConceptNet. We create four versions of misprimed LAMA (A, B, C, D) as described in the caption of Table 3; Table 1 gives examples. # 3 Results Negated LAMA. Table 2 gives spearman rank cor- relation ρ and % overlap in rank 1 predictions be- tween original and negated LAMA. Our assumption is that the correct answers for a pair of positive question and negative question 1https://github.com/norakassner/LAMA primed negated Version Query A B C D Dinosaurs? Munich is located in [MASK] . Somalia? Munich is located in [MASK] . Prussia? Munich is located in [MASK] . Prussia? “This is great”. . . . “What a surprise.” “Good to know.” . . . Munich is located in [MASK] . Table 1: Examples for different versions of misprimes: (A) are randomly chosen, (B) are randomly chosen from correct fillers of different instances of the relation, (C) were top-ranked fillers for the original cloze ques- tion but have at least a 30% lower prediction probabil- ity than the correct object. (D) is like (C) except that 20 short neutral sentences are inserted between misprime and MASK sentence. should not overlap, so high values indicate lack of understanding of negation. The two measures are complementary and yet agree very well. The correlation measure is sensitive in distinguishing cases where negation has a small effect from those where it has a larger effect.2 % overlap is a measure that is direct and easy to interpret. 
In most cases, ρ > 85%; overlap in rank 1 pre- dictions is also high. ConcepNet results are most strongly correlated but TREx 1-1 results are less overlapping. Table 4 gives examples (lines marked “N”). BERT has slightly better results. Google-RE date of birth is an outlier because the pattern “X (not born in [MASK])” rarely occurs in corpora and predictions are often nonsensical. In summary, PLMs poorly distinguish positive and negative sentences. We give two examples of the few cases where PLMs make correct predictions, i.e., they solve the cloze task as human subjects would. For “The capital of X is not Y” (TREX, 1-1) top ranked pre- dictions are “listed”, “known”, “mentioned” (vs. cities for “The capital of X is Y”). This is appropri- ate since the predicted sentences are more common than sentences like “The capital of X is not Paris”. For “X was born in Y”, cities are predicted, but 2A reviewer observes that spearman correlation is gener- ally high and wonders whether high spearman correlation is re- ally a reliable indicator of negation not changing the answer of the model. As a sanity check, we also randomly sampled, for each query correctly answered by BERT-large (e.g., “Einstein born in [MASK]”), another query with a different answer, but the same template relation (e.g., “Newton born in [MASK]”) and computed the spearman correlation between the predic- tions for the two queries. In general, these positive-positive spearman correlations were significantly lower than those be- tween positive (“Einstein born in [MASK]”) and negative (“Einstein not born in [MASK]”) queries (t-test, p < 0.01). There were two exceptions (not significantly lower): T-REx 1-1 and Google-RE birth-date. Facts Rels Txl Eb E5b Bb Bl Google-RE T-REx ConceptNet SQuAD birth-place birth-date death-place 1-1 N-1 N-M - - 2937 1825 765 937 20006 13096 2996 305 1 1 1 2 23 16 16 - ρ 92.8 87.8 85.8 89.7 90.6 92.4 91.1 91.8 % 47.1 21.9 1.4 88.7 46.6 44.2 32.0 46.9 ρ 97.1 92.5 94.3 95.0 96.2 95.5 96.8 97.1 % 28.5 1.5 57.8 28.6 78.6 71.1 63.5 62.0 ρ 96.0 90.7 95.9 93.0 96.3 96.2 96.2 96.4 % 22.9 7.5 80.7 56.5 89.4 80.5 53.5 53.1 ρ 89.3 70.4 89.8 71.5 87.4 91.9 89.9 89.5 % 11.2 0.1 21.7 35.7 52.1 58.8 34.9 42.9 ρ 88.3 56.8 87.0 47.2 84.8 88.9 88.6 86.5 % 20.1 0.3 13.2 22.7 45.0 54.2 31.3 41.9 Table 2: PLMs do not distinguish positive and negative sentences. Mean spearman rank correlation (ρ) and mean percentage of overlap in first ranked predictions (%) between the original and the negated queries for Transformer- XL large (Txl), ELMo original (Eb), ELMo 5.5B (E5B), BERT-base (Bb) and BERT-large (Bl). for “X was not born in Y”, sometimes countries are predicted. This also seems natural: for the posi- tive sentence, cities are more informative, for the negative, countries. Balanced corpus. Investigating this further, we train BERT-base from scratch on a synthetic cor- pus. Hyperparameters are listed in the appendix. The corpus contains as many positive sentences of form “xj is an” as negative sentences of form “xj is not an” where xj is drawn from a set of 200 subjects S and an from a set of 20 adjectives A. The 20 adjectives form 10 pairs of antonyms (e.g., “good”/”bad”). S is divided into 10 groups gm of 20. Finally, there is an underlying KB that defines valid adjectives for groups. For example, assume that g1 has property am = “good”. Then for each xi ∈ g1, the sentences “xi is good” and “xi is not bad” are true. The training set is generated to con- tain all positive and negative sentences for 70% of the subjects. 
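A rough sketch of how such a balanced corpus can be put together is given below. The subject names, the antonym pairs other than "good"/"bad", the one-property-per-group assignment, and the way the remaining 30% of subjects are held out (the exact scheme is spelled out in the next sentence) are all illustrative assumptions rather than the authors' exact setup.

```python
import random

random.seed(0)

# 10 antonym pairs; only "good"/"bad" is taken from the paper, the rest are placeholders.
ANTONYMS = [("good", "bad"), ("big", "small"), ("hot", "cold"), ("fast", "slow"),
            ("old", "young"), ("hard", "soft"), ("light", "dark"), ("rich", "poor"),
            ("tall", "short"), ("loud", "quiet")]

# 200 synthetic subjects split into 10 groups of 20; group m is assigned ANTONYMS[m][0].
SUBJECTS = [f"x{i}" for i in range(200)]
GROUPS = [SUBJECTS[m * 20:(m + 1) * 20] for m in range(10)]

def true_sentences(subject, group_id):
    """For a subject whose group has property a_m, both 'x is a_m' and
    'x is not antonym(a_m)' are true."""
    prop, anti = ANTONYMS[group_id]
    return f"{subject} is {prop}", f"{subject} is not {anti}"

train, held_out = [], []
for g_id, group in enumerate(GROUPS):
    for subject in group:
        pos, neg = true_sentences(subject, g_id)
        if random.random() < 0.7:      # 70% of subjects: both forms appear in training
            train += [pos, neg]
        else:                          # remaining subjects: one form is held out for testing
            seen, unseen = (pos, neg) if random.random() < 0.5 else (neg, pos)
            train.append(seen)
            held_out.append(unseen)
```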
It also contains either only the posi- tive sentences for the other 30% of subjects (in that case the negative sentences are added to test) or vice versa. Cloze questions are generated in the for- mat “xj is [MASK]”/“xj is not [MASK]”. We test whether (i) BERT memorizes positive and negative sentences seen during training, (ii) it generalizes to the test set. As an example, a correct generalization would be “xi is not bad” if “xi is good” was part of the training set. The question is: does BERT learn, based on the patterns of positive/negative sentences and within-group regularities, to distinguish facts from non-facts. Corpus Google-RE T-REx ConceptNet SQuAD Relation birth-place birth-date death-place 1-1 N-1 N-M - - Facts D B A 386 11.7 44.7 99.5 98.4 25 72.0 91.7 100.0 88.0 98.9 98.9 88 14.8 47.1 30.1 28.1 661 12.7 20.6 59.9 41.2 7034 22.1 48.3 58.7 43.9 2774 26.6 55.3 82.9 70.6 146 52.1 59.6 68.6 60.8 - C 51 33.3 Table 3: Absolute precision drop (from 100%, lower better) when mispriming BERT-large for the LAMA subset that was answered correctly in its original form. We insert objects that (A) are randomly chosen, (B) are randomly chosen from correct fillers of different in- stances of the relation (not done for SQuAD as it is not organized in relations), (C) were top-ranked fillers for the original cloze question but have at least a 30% lower prediction probability than the correct object. (D) investigates the effect of distance, manipulating (C) further by inserting a concatenation of 20 neutral sen- tences (e.g., “Good to know.”, see appendix) between misprime and cloze question. finetune BERT (“finetuned BERT”) on the task of classifying sentences as true/false, its test accuracy is 100%. (Recall that false sentences simply cor- respond to true sentence with a “not” inserted or removed.) So BERT easily learns negation if su- pervision is available, but fails without it. This experiment demonstrates the difficulty of learning negation through unsupervised pretraining. We suggest that the inability of pretrained BERT to distinguish true from false is a serious impediment to accurately handling factual knowledge. Table 5 (“pretrained BERT”) shows that BERT memorizes positive and negative sentences, but poorly generalizes to the test set for both positive and negative. The learning curves (see appendix) show that this is not due to overfitting the training data. While the training loss rises, the test preci- sion fluctuates around a plateau. However, if we Misprimed LAMA. Table 3 shows the effect of mispriming on BERT-large for questions answered correctly in original LAMA; recall that Table 1 gives examples of sentences constructed in modes A, B, C and D. In most cases, mispriming with a highly ranked incorrect object causes a precision drop of over 60% (C). Example predictions can be found in Table 4 (lines marked “M”). This sensi- E R t e N cloze question O Marcel Oopa died in the city of [MASK]. N Marcel Oopa did not die in the city of [MASK]. M Yokohama? Marcel Oopa died in the city of [MASK]. O Anatoly Alexine was born in the city of [MASK]. N Anatoly Alexine was not born in the city of [MASK]. M Kiev? Anatoly Alexine was born in the city of [MASK]. O Platonism is named after [MASK] . N Platonism is not named after [MASK]. M Cicero? Platonism is named after [MASK]. O Lexus is owned by [MASK] . N Lexus is not owned by [MASK]. M Microsoft? Lexus is owned by [MASK] . O Birds can [MASK]. N Birds cannot [MASK]. M Talk? Birds can [MASK]. O A beagle is a type of [MASK]. 
N A beagle is not a type of [MASK]. M Pigeon? A beagle is a type of [MASK]. O Quran is a [MASK] text. N Quran is not a [MASK] text. M Secular? Quran is a [MASK] text. O Isaac’s chains are made out of [MASK]. N Isaac’s chains are not made out of [MASK]. M Iron? Isaac’s chains are made out of [MASK]. true Paris top 3 words generated with log probs Paris (-2.3), Lausanne (-3.3), Brussels (-3.3) Paris (-2.4), Helsinki (-3.5), Warsaw (-3.5) Yokohama (-1.0), Tokyo (-2.5), Paris (-3.0) Moscow Moscow (-1.2), Kiev (-1.6), Odessa (-2.5) Plato Toyota fly dog Moscow (-1.2), Kiev (-1.5), Novgorod (-2.5) Kiev (-0.0), Moscow (-6.1), Vilnius (-7.0) Plato (-1.5), Aristotle (-3.5), Locke (-5.8) Plato (-0.24), Aristotle (-2.5), Locke (-5.7) Cicero (-2.3), Plato ( -3.5), Aristotle (-5.1) Toyota (-1.4), Renault (-2.0), Nissan (-2.4) Ferrari (-1.0), Fiat (-1.4), BMW (-3.7) Microsoft (-1.2), Google ( -2.1), Toyota (-2.6) fly (-0.5), sing (-2.3), talk (-2.8) fly (-0.3), sing ( -3.6), speak (-4.1) talk (-0.2), fly ( -2.5), speak (-3.9) dog (-0.1), animal (-3.7), pigeon (-4.1) dog (-0.2), horse ( -3.8), animal (-4.1) dog (-1.3), pigeon ( -1.4), bird (-2.2) religious religious (-1.0), sacred (-1.8), Muslim (-3.2) silver religious (-1.1), sacred ( -2.3), complete (-3.3) religious (-1.5), banned ( -2.8), secular (-3.0) silver (-1.9), gold (-2.1), iron (-2.2) iron (-1.2), metal ( -2.1), gold (-2.1) iron (-0.4), steel ( -2.8), metal (-2.8) # e l g o o G ‘~ # x R E T t p e c n o C # D A u Q S Table 4: BERT-large examples for (O) original , (N) negated and (M) misprimed (Table 3 C) LAMA. pos neg pos neg pretrained BERT 0.9 0.9 0.2 0.2 finetuned BERT 1.0 1.0 1.0 1.0 of negation. They mostly seem to predict fillers based on co-occurrence of subject (e.g., “Quran”) and filler (“religious”) and to ignore negation. Table 5: Accuracy of BERT on balanced corpus. Pre- trained BERT does not model negation well, but fine- tuned BERT classifies sentences as true/false correctly. tivity to misprimes still exists when the distance between misprime and cloze question is increased: the drop persists when 20 sentences are inserted (D). Striking are the results for Google-RE where the model recalls almost no facts (C). Table 4 (lines marked “M”) shows predicted fillers for these mis- primed sentences. BERT is less but still badly affected by misprimes that match selectional re- strictions (B). The model is more robust against priming with random words (A): the precision drop is on average more than 35% lower than for (D). We included the baseline (A) as a sanity check for the precision drop measure. These baseline results show that the presence of a misprime per se does not confuse the model; a less distracting misprime (different type of entity or a completely implausible answer) often results in a correct answer by BERT. A key problem is that in the LAMA setup, not answering (i.e., admitting ignorance) is not an op- tion. While the prediction probability generally is somewhat lower in the negated compared to the positive answer, there is no threshold across cloze questions that could be used to distinguish valid positive from invalid negative answers (cf. Table 4). We suspect that a possible explanation for PLMs’ poor performance is that negated sentences occur much less frequently in training corpora. Our syn- thetic corpus study (Table 5) shows that BERT is able to memorize negative facts that occur in the corpus. 
However, the PLM objective encourages the model to predict fillers based on similar sen- tences in the training corpus – and if the most simi- lar statement to a negative sentence is positive, then the filler is generally incorrect. However, after fine- tuning, BERT is able to classify truth/falseness cor- rectly, demonstrating that negation can be learned through supervised training. # 4 Discussion Whereas Petroni et al. (2019)’s results suggest that PLMs are able to memorize facts, our results indi- cate that PLMs largely do not learn the meaning The mispriming experiment shows that BERT often handles random misprimes correctly (Table 3 A). There are also cases where BERT does the right thing for difficult misprimes, e.g., it robustly attributes “religious” to Quran (Table 4). In general, however, BERT is highly sensitive to misleading context (Table 3 C) that would not change human behavior in QA. It is especially striking that a single word suffices to distract BERT. This may suggest that it is not knowledge that is learned by BERT, but that its performance is mainly based on similarity matching between the current context on the one hand and sentences in its training corpus and/or recent context on the other hand. Poerner et al. (2019) present a similar analysis. Our work is a new way of analyzing differences between PLMs and human-level natural language understanding. We should aspire to develop PLMs that – like humans – can handle negation and are not easily distracted by misprimes. # 5 Related Work PLMs are top performers for many tasks, includ- ing QA (Kwiatkowski et al., 2019; Alberti et al., 2019). PLMs are usually finetuned (Liu et al., 2019; Devlin et al., 2019), but recent work has applied models without finetuning (Radford et al., 2019; Petroni et al., 2019). Bosselut et al. (2019) investi- gate PLMs’ common sense knowledge, but do not consider negation explicitly or priming. A wide range of literature analyzes linguis- tic knowledge stored in pretrained embeddings (Jumelet and Hupkes, 2018; Gulordava et al., 2018; Giulianelli et al., 2018; McCoy et al., 2019; Das- gupta et al., 2018; Marvin and Linzen, 2018; Warstadt and Bowman, 2019; Kann et al., 2019). Our work analyzes factual knowledge. McCoy et al. (2019) show that BERT finetuned to perform natural language inference heavily relies on syntac- tic heuristics, also suggesting that it is not able to adequately acquire common sense. Warstadt et al. (2019) investigate BERT’s un- derstanding of how negative polarity items are licensed. Our work, focusing on factual knowl- edge stored in negated sentences, is complementary since grammaticality and factuality are mostly or- thogonal properties. Kim et al. (2019) investigate understanding of negation particles when PLMs are finetuned. In contrast, our focus is on the inter- action of negation and factual knowledge learned in pretraining. Ettinger (2019) defines and applies psycho-linguistic diagnostics for PLMs. Our use of priming is complementary. Their data consists of two sets of 72 and 16 sentences whereas we create 42,867 negated sentences covering a wide range of topics and relations. Ribeiro et al. (2018) test for comprehension of minimally modified sentences in an adversarial setup while trying to keep the overall semantics the same. In contrast, we investigate large changes of meaning (negation) and context (mispriming). 
In contrast to adversarial work (e.g., (Wallace et al., 2019)), we do not focus on adversarial examples for a specific task, but on pretrained models’ ability to robustly store factual knowledge. # 6 Conclusion Our results suggest that pretrained language models address open domain QA in datasets like LAMA by mechanisms that are more akin to relatively shallow pattern matching than the recall of learned factual knowledge and inference. Implications for future work on pretrained language models. (i) Both factual knowledge and logic are discrete phenomena in the sense that sen- tences with similar representations in current pre- trained language models differ sharply in factuality and truth value (e.g., “Newton was born in 1641” vs. “Newton was born in 1642”). Further archi- tectural innovations in deep learning seem neces- sary to deal with such discrete phenomena. (ii) We found that PLMs have difficulty distinguishing “informed” best guesses (based on information ex- tracted from training corpora) from “random” best guesses (made in the absence of any evidence in the training corpora). This implies that better con- fidence assessment of PLM predictions is needed. (iii) Our premise was that we should emulate hu- man language processing and that therefore tasks that are easy for humans are good tests for NLP models. To the extent this is true, the two phenom- ena we have investigated in this paper – that PLMs seem to ignore negation in many cases and that they are easily confused by simple distractors – seem to be good vehicles for encouraging the develop- ment of PLMs whose performance on NLP tasks is closer to humans. Acknowledgements. We thank the reviewers for their constructive criticism. This work was funded by the German Federal Ministry of Ed- ucation and Research (BMBF) under Grant No. 01IS18036A and by the European Research Coun- cil (Grant No. 740516). The authors of this work take full responsibility for its content. # References Chris Alberti, Kenton Lee, and Michael Collins. 2019. A BERT baseline for the natural questions. ArXiv, abs/1901.08634. Antoine Bosselut, Hannah Rashkin, Maarten Sap, Chai- tanya Malaviya, Asli Celikyilmaz, and Yejin Choi. 2019. COMET: Commonsense transformers for au- tomatic knowledge graph construction. In Proceed- ings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4762–4779, Florence, Italy. Association for Computational Lin- guistics. Zihang Dai, Zhilin Yang, Yiming Yang, Jaime Car- bonell, Quoc Le, and Ruslan Salakhutdinov. 2019. Transformer-XL: Attentive language models beyond In Proceedings of the 57th a fixed-length context. Annual Meeting of the Association for Computa- tional Linguistics, pages 2978–2988, Florence, Italy. Association for Computational Linguistics. Ishita Dasgupta, Demi Guo, Andreas Stuhlm¨uller, Samuel J Gershman, and Noah D Goodman. 2018. Evaluating compositionality in sentence embed- dings. arXiv preprint arXiv:1802.04302. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- In Proceedings of the 2019 Conference standing. of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Associ- ation for Computational Linguistics. Hady Elsahar, Pavlos Vougiouklis, Arslen Remaci, Jonathon Hare, Frederique Christophe Gravier, Laforest, and Elena Simperl. 2018. 
T-REx: A large scale alignment of natural language with knowledge base triples. In Proceedings of the Eleventh Interna- tional Conference on Language Resources and Eval- uation (LREC 2018), Miyazaki, Japan. European Language Resources Association (ELRA). Allyson Ettinger. 2019. What BERT is not: Lessons from a new suite of psycholinguistic diagnostics for language models. Transactions of the Association for Computational Linguistics, 8:34–48. Mario Giulianelli, Jack Harding, Florian Mohnert, Dieuwke Hupkes, and Willem Zuidema. 2018. Un- der the hood: Using diagnostic classifiers to in- vestigate and improve how language models track agreement information. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and In- terpreting Neural Networks for NLP, pages 240–248, Brussels, Belgium. Association for Computational Linguistics. Kristina Gulordava, Piotr Bojanowski, Edouard Grave, Tal Linzen, and Marco Baroni. 2018. Colorless green recurrent networks dream hierarchically. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1195–1205, New Orleans, Louisiana. Association for Computational Linguistics. Jaap Jumelet and Dieuwke Hupkes. 2018. Do lan- guage models understand anything? on the ability of LSTMs to understand negative polarity items. In Proceedings of the 2018 EMNLP Workshop Black- boxNLP: Analyzing and Interpreting Neural Net- works for NLP, pages 222–231, Brussels, Belgium. Association for Computational Linguistics. Katharina Kann, Alex Warstadt, Adina Williams, and Samuel R. Bowman. 2019. Verb argument structure alternations in word and sentence embeddings. In Proceedings of the Society for Computation in Lin- guistics (SCiL) 2019, pages 287–297. Najoung Kim, Roma Patel, Adam Poliak, Patrick Xia, Alex Wang, Tom McCoy, Ian Tenney, Alexis Ross, Tal Linzen, Benjamin Van Durme, Samuel R. Bow- man, and Ellie Pavlick. 2019. Probing what dif- ferent NLP tasks teach machines about function word comprehension. In Proceedings of the Eighth Joint Conference on Lexical and Computational Se- mantics (*SEM 2019), pages 235–249, Minneapolis, Minnesota. Association for Computational Linguis- tics. Tom Kwiatkowski, Jennimaria Palomaki, Olivia Red- field, Michael Collins, Ankur Parikh, Chris Al- berti, Danielle Epstein, Illia Polosukhin, Jacob De- vlin, Kenton Lee, Kristina Toutanova, Llion Jones, Matthew Kelcey, Ming-Wei Chang, Andrew M. Dai, Jakob Uszkoreit, Quoc Le, and Slav Petrov. 2019. Natural questions: A benchmark for question an- swering research. Transactions of the Association for Computational Linguistics, 7:453–466. Xiang Li, Aynaz Taheri, Lifu Tu, and Kevin Gimpel. 2016. Commonsense knowledge base completion. In Proceedings of the 54th Annual Meeting of the As- sociation for Computational Linguistics (Volume 1: Long Papers), pages 1445–1455, Berlin, Germany. Association for Computational Linguistics. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Man- dar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692. Rebecca Marvin and Tal Linzen. 2018. Targeted syn- In Proceed- tactic evaluation of language models. ings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1192–1202, Brussels, Belgium. Association for Computational Linguistics. Tom McCoy, Ellie Pavlick, and Tal Linzen. 2019. 
Right for the wrong reasons: Diagnosing syntactic heuristics in natural language inference. In Proceed- ings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3428–3448, Florence, Italy. Association for Computational Lin- guistics. Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word rep- In Proceedings of the 2018 Confer- resentations. ence of the North American Chapter of the Associ- ation for Computational Linguistics: Human Lan- guage Technologies, Volume 1 (Long Papers), pages 2227–2237, New Orleans, Louisiana. Association for Computational Linguistics. Fabio Petroni, Tim Rockt¨aschel, Sebastian Riedel, Patrick Lewis, Anton Bakhtin, Yuxiang Wu, and Alexander Miller. 2019. Language models as knowl- In Proceedings of the 2019 Confer- edge bases? ence on Empirical Methods in Natural Language Processing and the 9th International Joint Confer- ence on Natural Language Processing (EMNLP- IJCNLP), pages 2463–2473, Hong Kong, China. As- sociation for Computational Linguistics. Nina Poerner, Ulli Waltinger, and Hinrich Sch¨utze. 2019. BERT is not a knowledge base (yet): Fac- tual knowledge vs. name-based reasoning in unsu- pervised qa. ArXiv, abs/1911.03681. Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Natu- ral Language Processing, pages 2383–2392, Austin, Texas. Association for Computational Linguistics. Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. 2018. Semantically equivalent adversar- ial rules for debugging NLP models. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 856–865, Melbourne, Australia. Association for Computational Linguistics. Endel Tulving and Daniel Schacter. 1990. Priming and human memory systems. Science, 247(4940):301– 306. Eric Wallace, Shi Feng, Nikhil Kandpal, Matt Gardner, and Sameer Singh. 2019. Universal adversarial trig- gers for attacking and analyzing NLP. In Proceed- ings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th Inter- national Joint Conference on Natural Language Pro- cessing (EMNLP-IJCNLP), pages 2153–2162, Hong Kong, China. Association for Computational Lin- guistics. Alex Warstadt and Samuel R. Bowman. 2019. Grammatical analysis of pretrained sentence en- ArXiv, coders with acceptability judgments. abs/1901.03438. Alex Warstadt, Yu Cao, Ioana Grosu, Wei Peng, Ha- gen Blix, Yining Nie, Anna Alsop, Shikha Bordia, Haokun Liu, Alicia Parrish, Sheng-Fu Wang, Jason Phang, Anhad Mohananey, Phu Mon Htut, Paloma Jeretic, and Samuel R. Bowman. 2019. Investi- gating BERT’s knowledge of language: Five anal- In Proceedings of the ysis methods with NPIs. 2019 Conference on Empirical Methods in Natu- ral Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2877–2887, Hong Kong, China. Association for Computational Linguistics. Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pier- ric Cistac, Tim Rault, R’emi Louf, Morgan Funtow- icz, and Jamie Brew. 2019. Huggingface’s trans- formers: State-of-the-art natural language process- ing. 
ArXiv, abs/1910.03771.

# 7 Appendix

# 7.1 Details on LAMA

We use source code provided by Petroni et al. (2019).3 T-REx, parts of ConceptNet and SQuAD allow multiple true answers (N-M). To ensure single true objects for Google-RE, we reformulate the templates asking for location to specifically ask for cities (e.g., "born in [MASK]" to "born in the city of [MASK]"). We do not change any other templates. T-REx still queries for "died in [MASK]".

7.1.1 Details on negated LAMA

For ConceptNet we extract an easy-to-negate subset. The final subset includes 2,996 of the 11,458 samples. We proceed as follows: 1. We only negate sentences of maximal token sequence length 4 or if we find a match with one of the following patterns: "is a type of", "is made of", "is part of", "are made of", "can be made of", "are a type of", "are a part of". 2. The selected subset is automatically negated by a manually created verb negation dictionary.

7.1.2 Details on misprimed LAMA

To investigate the effect of distance between the prime and the cloze question, we insert a concatenation of up to 20 "neutral" sentences. The longest sequence has 89 byte pair encodings. Even with the full concatenation of all 20 sentences, the increased distance did not lessen the effect of the prime much. The used sentences are: "This is great.", "This is interesting.", "Hold this thought.", "What a surprise.", "Good to know.", "Pretty awesome stuff.", "Nice seeing you.", "Let's meet again soon.", "This is nice.", "Have a nice time.", "That is okay.", "Long time no see.", "What a day.", "Wonderful story.", "That's new to me.", "Very cool.", "Till next time.", "That's enough.", "This is amazing.", "I will think about it."

3github.com/facebookresearch/LAMA

Figure 1: Training loss and test accuracy when pretraining BERT-base on a balanced corpus. The model is able to memorize positive and negative sentences seen during training but is not able to generalize to an unseen test set for both positive and negative sentences.

Table 6: Hyper-parameters for pretraining BERT-base on a balanced corpus of negative and positive sentences.
batch size
learning rate
number of epochs
max. sequence length 13

Table 7: Hyper-parameters for finetuning on the task of classifying sentences as true/false.
batch size: 32
learning rate: 4e-5
number of epochs: 20
max. sequence length: 7

# 7.2 Details on the balanced corpus

We pretrain BERT-base from scratch on a corpus of equally many negative and positive sentences. We concatenate multiples of the same training data into one training file to compensate for the little amount of data. Hyper-parameters for pretraining are listed in Table 6. The full vocabulary is 349 tokens long. Figure 1 shows that training loss and test accuracy are uncorrelated. Test accuracy stagnates around 0.5, which is no better than random guessing, since for each relation half of the adjectives hold. We finetune on the task of classifying sentences as true/false. We concatenate multiples of the same training data into one training file to compensate for the little amount of data. Hyper-parameters for finetuning are listed in Table 7. We use source code provided by Wolf et al. (2019).4

4github.com/huggingface/transformers
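The construction of misprimed probes with a variable prime-to-query distance (Section 7.1.2) can be sketched in a few lines. This is only an illustration under stated assumptions, not the authors' code: the prime format, the filler subset, and the helper name are hypothetical.

```python
# Illustrative sketch (not the paper's code): build a misprimed cloze query with a
# chosen number of "neutral" filler sentences between the prime and the query.
NEUTRAL = ["This is great.", "This is interesting.", "Hold this thought.",
           "What a surprise.", "Good to know."]  # subset of the 20 sentences listed above

def build_misprimed_query(misprime: str, cloze: str, n_fillers: int) -> str:
    """Concatenate a misprime, n neutral sentences, and a cloze statement with [MASK]."""
    fillers = " ".join(NEUTRAL[i % len(NEUTRAL)] for i in range(n_fillers))
    return " ".join(part for part in (misprime, fillers, cloze) if part)

# Example: does the wrong prime still dominate after three filler sentences?
print(build_misprimed_query("Talk?", "Birds can [MASK].", n_fillers=3))
```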
{ "id": "1907.11692" }
1911.02972
Blockwise Self-Attention for Long Document Understanding
We present BlockBERT, a lightweight and efficient BERT model for better modeling long-distance dependencies. Our model extends BERT by introducing sparse block structures into the attention matrix to reduce both memory consumption and training/inference time, which also enables attention heads to capture either short- or long-range contextual information. We conduct experiments on language model pre-training and several benchmark question answering datasets with various paragraph lengths. BlockBERT uses 18.7-36.1% less memory and 12.0-25.1% less time to learn the model. During testing, BlockBERT saves 27.8% inference time, while having comparable and sometimes better prediction accuracy, compared to an advanced BERT-based model, RoBERTa.
http://arxiv.org/pdf/1911.02972
Jiezhong Qiu, Hao Ma, Omer Levy, Scott Wen-tau Yih, Sinong Wang, Jie Tang
cs.CL, cs.LG
Accepted at Findings of EMNLP'20 and SustaiNLP 2020 at EMNLP'20, 12 pages
null
cs.CL
20191107
20201101
# Blockwise Self-Attention for Long Document Understanding

Jiezhong Qiu1∗, Hao Ma2, Omer Levy2, Wen-tau Yih2, Sinong Wang2, Jie Tang1
1Department of Computer Science and Technology, Tsinghua University
2Facebook AI
[email protected] {haom,omerlevy,scottyih,sinongwang}@fb.com [email protected]

# Abstract

We present BlockBERT, a lightweight and efficient BERT model for better modeling long-distance dependencies. Our model extends BERT by introducing sparse block structures into the attention matrix to reduce both memory consumption and training/inference time, which also enables attention heads to capture either short- or long-range contextual information. We conduct experiments on language model pre-training and several benchmark question answering datasets with various paragraph lengths. BlockBERT uses 18.7-36.1% less memory and 12.0-25.1% less time to learn the model. During testing, BlockBERT saves 27.8% inference time, while having comparable and sometimes better prediction accuracy, compared to an advanced BERT-based model, RoBERTa.

# Introduction

Recent emergence of the pre-training and fine-tuning paradigm, exemplified by methods like ELMo (Peters et al., 2018), GPT-2/3 (Radford et al., 2019; Brown et al., 2020), BERT (Devlin et al., 2019), XLNet (Yang et al., 2019), RoBERTa (Liu et al., 2019) and ALBERT (Lan et al., 2019), has drastically reshaped the landscape of the natural language processing research. These methods first pre-train a deep model with language model objectives using a large corpus and then fine-tune the model using in-domain supervised data for target applications. Despite its conceptual simplicity, this paradigm has re-established the new state-of-the-art baselines across various tasks, such as question answering (Devlin et al., 2019), coreference resolution (Joshi et al., 2019b), relation extraction (Soares et al., 2019) and text retrieval (Lee et al., 2019; Nogueira and Cho, 2019), to name a few.

Building such models in practice, however, is an extremely resource-intensive process. For instance, the training of BERT-family models is notoriously expensive. Devlin et al. (2019) report that it takes four days to pre-train BERT-Base/BERT-Large on 4/16 Cloud TPUs. In order to reduce the pre-training time of RoBERTa to 1 day, Liu et al. (2019) use 1,024 V100 GPUs. One crucial factor contributing to the long training time is the memory consumption of these deep models, as it directly affects the batch size. Although the fine-tuning stage is relatively inexpensive, the memory issue still restricts the scenarios in which BERT can be used. For instance, "it is currently not possible to re-produce most of the BERT-Large results on the paper using a GPU with 12GB-16GB of RAM, because the maximum batch size that can fit in memory is too small.1"

Although one may think that model size is the main contributor to the large memory consumption, our analysis (Section 2.1) shows that one of the main bottlenecks is actually dot-product self-attention, operated in multiple layers of Transformers (Vaswani et al., 2017), the building block of BERT. As the attention operation is quadratic to the sequence length, this fundamentally limits the maximum length of the input sequence, and thus restricts the model capacity in terms of capturing long-distance dependencies.
As a result, downstream tasks have to either truncate their sequences to leading tokens (Nogueira and Cho, 2019) or split their sequences with a sliding window (Joshi et al., 2019a,b). Ad-hoc handling of long sequences is also required in the pre-training stage, such as updating the model using only short sequences in the early stage (Devlin et al., 2019).

Common strategies for reducing memory consumption, unfortunately, do not work. For instance, shrinking the model by lowering the number of layers L, attention heads A, or hidden units H leads to significant performance degradation (Vaswani et al., 2017; Devlin et al., 2019) and does not address the long sequence issue. Alternatively, general low-memory training techniques, such as microbatching (Huang et al., 2018) and gradient checkpointing (Chen et al., 2016), essentially trade off training time for memory consumption, prolonging the already lengthy training process.

∗This work was partially done when the first author was an intern at Facebook AI. Code is available at https://github.com/xptree/BlockBERT
1github.com/google-research/bert

In this work, we explore a different strategy, sparsifying the attention layers, intending to design a lightweight and effective BERT that can model long sequences in a memory-efficient way. Our BlockBERT extends BERT by introducing sparse block substructures into attention matrices to reduce both memory consumption and the number of floating-point operations (FLOPs), which also enables attention heads to capture either short- or long-range contextual information. Compared to the previous method that also enforces sparsity (Child et al., 2019), our approach is much simpler mathematically and very easy to implement. More importantly, the results of experiments conducted on several benchmark question answering datasets with various paragraph lengths show that BlockBERT performs comparably or even better than the original BERT-family models, while enjoying an 18.7-36.1% reduction in memory usage, a 12.0-25.1% reduction in training time, and a 27.8% reduction in inference time.

The rest of the paper is organized as follows. Section 2 gives a brief introduction of the BERT model, along with an in-depth analysis of its memory usage during training time. We describe our proposed model in Section 3 and contrast it with existing methods that aim for creating a lighter model. Section 4 presents the experimental results and ablation studies, followed by a survey of other related work in Section 5 and the conclusion in Section 6.

# 2 Background: Memory Bottleneck in Training BERT

We briefly review BERT and introduce its memory profiling in this section. Following the paradigm of language model pre-training and down-stream task fine-tuning, BERT (Devlin et al., 2019) consists of multiple layers of bidirectional Transformers (Vaswani et al., 2017), where each Transformer encoder has a multi-head self-attention layer and a position-wise feed-forward layer. Using the same notation as in (Devlin et al., 2019), we denote the number of Transformer layers by L, the number of hidden units by H, the number of attention heads by A, the sequence length by N, and the batch size by B. We also assume the feed-forward hidden unit size to be 4H.2

# 2.1 Memory Profiling

Training BERT is a memory-intensive process. In order to identify the bottleneck, we follow the memory model proposed by Sohoni et al.
(2019), where memory usage throughout neural network training is categorized into three main types: (1) Model memory is used to store model parameters; (2) Optimizer memory is the additional memory used by the specific learning algorithm during the process; (3) Activation memory consists of the outputs of each layer, which are cached for reuse in backpropagation to compute gradients.

Take BERT-Base training as an example. The model has 110 million parameters, so model memory occupies 0.2 GB if parameters are stored in half-precision floating-point format (FP16). For Adam (Kingma and Ba, 2014), the optimizer needs additional memory to store the gradients, first moments, and second moments of model parameters. If stored using the same precision, the optimizer memory should be three times of model memory.3 To calculate the exact size of activation memory is not trivial because it depends heavily on the implementation of the toolkit. Instead, we measure it empirically by training BERT-Base using Adam with a memory profiler (more details are provided in Appendix A.2).

We use 32 NVIDIA V100 GPUs for training. Every single GPU thus consumes a mini-batch of size b = B/32 = 8. Figure 1(a) shows the profiling result for a single GPU, where the model/optimizer/activation memory consumes 0.21/1.03/8.49 GB, respectively. We can see that activation memory accounts for the vast majority of the total GPU memory (87.6%) and is thus the bottleneck. Notice that although our analysis is done on BERT-Base, it can also be generalized to BERT-Large and other models such as RoBERTa (Liu et al., 2019) and XLNet (Yang et al., 2019).

2The default parameter settings for BERT-Base and BERT-Large can be found in Appendix A.1
3In the current PyTorch Adam implementation, the first and second moments are stored in single precision. Consequently, BERT's optimizer memory (1 GB) is five times of model memory (0.2 GB).

Figure 1: Memory Profiling for BERT. (a) BERT-Base Training Memory Profiling; (b) Regression Analysis on Activation Memory.

# 2.2 A Regression Analysis on Activation Memory

For BERT, or more specifically, Transformer, the activation memory corresponds to intermediate results of different layers. It grows linearly in all the model hyper-parameters, except the sequence length N, due to the attention layers. To quantify the linear and quadratic components in the activation memory more clearly, we conduct a regression analysis as follows. Assume that the activation memory (in each GPU) is a polynomial a_2 b N^2 + a_1 b N + a_0, where b is the batch size in each GPU and a_i (i = 0, 1, 2) are coefficients to be determined. If we fix the total number of tokens in a GPU to be constant (in our case, we fix b × N = 4096), we should have a linear function w.r.t. N, i.e., 4096 a_2 N + 4096 a_1 + a_0. We enumerate N from {128, 256, 512, 1024} in our experiments, and plot the corresponding profiled activation memory in Figure 1(b). Using ordinary least squares (OLS), with b × N = 4096, the estimated linear function for activation memory is 0.00715 × N + 4.83, where the first term corresponds to the O(N^2) component. When N = 512 (i.e., b = 8), we can see that for BERT-Base, the O(N^2) component accounts for 3.66 GB, and the O(N) component accounts for 4.83 GB. When the sequence length N increases to 1024 (i.e., b = 4), the O(N^2) component increases to 7.32 GB, while the O(N) part is unchanged.
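The regression above is a one-variable least-squares fit once b × N is held fixed. The sketch below illustrates it; the `profiled` values are placeholders chosen to be consistent with the fitted line reported in the text (0.00715 × N + 4.83), not the authors' raw measurements.

```python
# Illustrative sketch of the Section 2.2 regression: with b*N fixed at 4096, profiled
# activation memory is fit as a linear function of N, mem ≈ 4096*a2*N + (4096*a1 + a0).
import numpy as np

seq_lens = np.array([128, 256, 512, 1024], dtype=float)
profiled = np.array([5.75, 6.66, 8.49, 12.15])             # GB, hypothetical profiler output

slope, intercept = np.polyfit(seq_lens, profiled, deg=1)   # ordinary least squares
print(f"activation memory ~= {slope:.5f} * N + {intercept:.2f} GB")

# The slope term is the O(N^2) contribution: at N = 512 it accounts for slope*512 GB,
# and doubling N to 1024 doubles that component while the intercept stays fixed.
print(f"O(N^2) part at N=512:  {slope*512:.2f} GB")
print(f"O(N^2) part at N=1024: {slope*1024:.2f} GB")
```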
# 2.3 Techniques for Reducing Training Memory

Observing that activation memory is the training bottleneck, we discuss common memory reduction techniques below.

Low Precision (Micikevicius et al., 2017) Low precision is to use half-precision/mixed-precision for training neural networks. This technique has been widely used in Transformer training (Ott et al., 2019; Liu et al., 2019). In this work, we already assume to use mixed-precision training by default, as indicated in the aforementioned analysis.

Microbatching (Huang et al., 2018) Microbatching is to split a batch into small micro-batches (which can be fit into memory), and then run forward and backward passes on them separately with gradients for each micro-batch accumulated. Because it runs forward/backward pass multiple times for a single batch, it trades off time for memory.

Gradient Checkpointing (Chen et al., 2016) Gradient checkpointing saves memory by only caching activations of a subset of layers. The un-cached activations will be recomputed during backpropagation from the latest checkpoint. This strategy trades off time for memory by repeating computations and will obviously extend training time.

Knowledge Distillation (Hinton et al., 2015) Knowledge distillation aims to compress and transfer knowledge from a teacher model to a simpler student model. However, knowledge distillation relies on a teacher model (which is still expensive in training time) and usually suffers from a certain degree of performance degradation. (Ding et al., 2020) presents an alternative idea based on cognitive theory to construct a working-memory by identifying key sentences, which enables multi-step reasoning.

However, common techniques are still limited in reducing both the training time and memory usage. In this paper, we investigate how to optimize the dot-product attention layers and introduce our approach next.

# 3 Model: BlockBERT

Following (Vaswani et al., 2017), the dot-product attention in Transformer is defined as:

Attention(Q, K, V) = softmax(QK^T / √d) V,

where Q, K, V ∈ R^{N×d}, with N the sequence length and d a hidden dimension. As we can see, the inner product between Q and K consumes O(N^2) memory. One simple way to reduce the memory consumption of attention is to sparsify the attention matrix. Suppose we have a masking matrix M ∈ {0, 1}^{N×N}; we define a masked version of attention as follows:

Attention(Q, K, V, M) = softmax((QK^T / √d) ⊙ M) V,    (1)

with the operator ⊙ defined by

(A ⊙ M)_{ij} = A_{ij} if M_{ij} = 1, and −∞ if M_{ij} = 0.

In this work, we design M to be a sparse block matrix, which not only reduces memory and the number of floating-point operations (FLOPs) but also benefits from efficient dense matrix support from deep learning frameworks, such as PyTorch and Tensorflow. More formally, we split the length-N input sequence into n blocks, with each block of length N/n.4 The N × N attention matrix is then partitioned into n × n blocks, where each block matrix is of size N/n × N/n. We define a sparse block matrix M by a permutation π of {1, 2, · · · , n}:

M_{ij} = 1 if π(⌊(i−1)n/N⌋ + 1) = ⌊(j−1)n/N⌋ + 1, and 0 otherwise.    (2)

By writing Q, K, V as block matrices, such that Q = [Q_1^T, · · · , Q_n^T]^T, K = [K_1^T, · · · , K_n^T]^T and V = [V_1^T, · · · , V_n^T]^T, and plugging them into Equation 1, we can formally define Blockwise Attention as follows:

Blockwise-Attention(Q, K, V, M) = [softmax(Q_1 K_{π(1)}^T / √d) V_{π(1)} ; · · · ; softmax(Q_n K_{π(n)}^T / √d) V_{π(n)}],    (3)

Equation 3 only needs to compute and store Q_i K_{π(i)}^T (i = 1, · · · , n), each of size N/n × N/n. In other words, BlockBERT reduces both the O(N^2) memory consumption and FLOPs by a factor of n, since (N/n) × (N/n) × n = N^2/n.
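A minimal PyTorch sketch of the masked attention in Equation 1 is given below; entries where the mask is 0 are set to −∞ before the softmax. This dense version only illustrates the semantics; it does not realize the memory savings of the blockwise form, and the function name is our own.

```python
# Sketch of Equation 1 (masked attention), dense for clarity.
import torch
import torch.nn.functional as F

def masked_attention(Q, K, V, M):
    """Q, K, V: (N, d) tensors; M: (N, N) binary mask (1 = keep, 0 = block)."""
    d = Q.size(-1)
    scores = Q @ K.transpose(-2, -1) / d ** 0.5           # (N, N) attention logits
    scores = scores.masked_fill(M == 0, float("-inf"))    # the "⊙ M" operator
    return F.softmax(scores, dim=-1) @ V                  # (N, d) outputs

N, d = 8, 4
Q, K, V = (torch.randn(N, d) for _ in range(3))
M = torch.ones(N, N, dtype=torch.long)                    # all-ones mask = standard attention
print(masked_attention(Q, K, V, M).shape)                 # torch.Size([8, 4])
```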
# 3.1 Blockwise Multi-Head Attention

Analogous to Multi-head Attention (Vaswani et al., 2017), we allow queries, keys, and values to be projected multiple times and perform blockwise attentions in parallel. Moreover, different blockwise attention heads can use different masking matrices. The outputs of multiple heads are then concatenated and aggregated with another linear projection. Let A be the number of attention heads and H the number of hidden units. Blockwise multi-head attention is formally defined as follows:

Blockwise-Multi-head-Attention(Q, K, V) = Concat(head_1, · · · , head_A) W^O,

where for each head i, i = 1, 2, · · · , A,

head_i = Blockwise-Attention(Q W_i^Q, K W_i^K, V W_i^V, M_i),

with d = H/A, projection matrices W_i^Q, W_i^K, W_i^V ∈ R^{H×d}, and W^O ∈ R^{H×H}. Each masking matrix M_i is determined by a permutation π_i according to Equation 2. In particular, we choose π from permutations generated by shifting one position: σ = (2, 3, · · · , n, 1), i.e., we select π ∈ {σ, σ^2, · · · , σ^n}. For example, with 12 attention heads (A = 12) and 2 blocks (n = 2), we can assign 10 heads to permutation (1, 2) and the other 2 heads to permutation (2, 1). Figure 2 illustrates the blockwise multi-head attention with block number n ∈ {2, 3}. Blockwise sparsity captures both local and long-distance dependencies in a memory-efficient way, which is crucial for long-document understanding tasks. For instance, the identity permutation, i.e., (1, 2, · · · , n), enables each token to attend to its nearby tokens in self-attention, while other permutations allow tokens within the same block attending to tokens in another block. Our proposed BlockBERT essentially replaces the multi-head attention layers in Transformer/BERT with blockwise multi-head attention.

4We assume N can be divided by n. If not, we pad the input sequence to make N divisible.

Figure 2: Architecture of Blockwise Multi-head Attention, which acts as building blocks of BlockBERT. The key idea is to introduce a sparse block masking matrix to the N × N attention matrix. The right panel shows the masking matrices we use when n = 2, 3. For n = 2, the masking matrices are defined by permutation (1, 2), (2, 1) and have 50% non-zeros. For n = 3, the masking matrices are defined by permutation (1, 2, 3), (2, 3, 1), and (3, 1, 2) and have 33.33% non-zeros.

# 3.2 Analysis of Memory Usage Reduction

To validate our claim that BlockBERT with n × n blocks can reduce the O(N^2) memory usage by a factor of n, we perform the same memory profiling as described in sections 2.1 and 2.2. Again, we fix the number of tokens in each GPU (b × N = 4096) and choose N from {128, 256, 512, 1024, 2048}.5 As we can see from Figure 3 and Table 1, the empirical results align well with the theoretical values. When we set the number of blocks to be 2 and 3 for BlockBERT, the estimated O(N^2) activation memory decreases to 1/2 and 1/3 of BERT's O(N^2) activation memory, respectively. As shown in Table 2, for the sequence length N = 512, BlockBERT with 2 and 3 blocks saves 18.7% and 23.8% overall memory, respectively. The saving is more significant for longer sequences. When N = 1024, the overall memory reduction of BlockBERT with 2 and 3 blocks is 27.3% and 36.1%, respectively.
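The block masking matrix of Equation 2 and the shift-permutation head assignment can be sketched as follows. This is an illustration under stated assumptions: helper names are ours, and the 0-indexed block computation is an equivalent rewriting of the 1-indexed formula above.

```python
# Sketch of the Equation 2 mask: token i may attend to token j iff pi(block(i)) == block(j),
# with blocks of length N/n. Head-to-permutation counts mirror the 10:2 configuration.
import torch

def block_mask(N, n, pi):
    """pi is a permutation of {1..n} given as a list; returns an (N, N) 0/1 mask."""
    assert N % n == 0 and sorted(pi) == list(range(1, n + 1))
    block = torch.arange(N) * n // N + 1                 # block index of each position
    target = torch.tensor([pi[b - 1] for b in block])    # block each query attends to
    return (target.unsqueeze(1) == block.unsqueeze(0)).long()

def shift_permutations(n):
    """sigma^1..sigma^n where sigma shifts by one position, e.g. n=3 gives
    [2,3,1], [3,1,2], [1,2,3] (the last one is the identity)."""
    return [[(i + k) % n + 1 for i in range(n)] for k in range(1, n + 1)]

N, n = 8, 2
perms = shift_permutations(n)                              # [[2, 1], [1, 2]]
head_counts = {tuple(perms[-1]): 10, tuple(perms[0]): 2}   # the 10:2 assignment
masks = {p: block_mask(N, n, list(p)) for p in head_counts}
print(masks[(1, 2)])   # block-diagonal mask (identity permutation)
print(masks[(2, 1)])   # off-diagonal mask
```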
Figure 3: Regression analysis on activation memory for BERT and BlockBERT.

Table 1: Estimated O(N^2) and O(N) activation memory for BERT and BlockBERT.
N = 512, b = 8: BERT 4.83 GB O(N), 3.66 GB O(N^2); BlockBERT n=2 4.84 / 1.83; BlockBERT n=3 4.87 / 1.22
N = 1024, b = 4: BERT 4.83 GB O(N), 7.32 GB O(N^2); BlockBERT n=2 4.84 / 3.66; BlockBERT n=3 4.87 / 2.44

# 4 Experiments

We evaluate the pre-training and fine-tuning performance of BlockBERT. In particular, when n = 2, we denote 10:2 to be the configuration which assigns 10 heads to permutation (1, 2) and 2 to permutation (2, 1); when n = 3, we denote 8:2:2 to be the configuration which assigns 8, 2, 2 heads to permutation (1, 2, 3), (2, 3, 1), and (3, 1, 2), respectively. We compare BlockBERT with the following baselines:

Google BERT Google BERT is the official pre-trained model from (Devlin et al., 2019).

RoBERTa-2seq & RoBERTa-1seq We compare with two versions of RoBERTa (Liu et al., 2019). RoBERTa-2seq is trained with both the masked language model (MLM) task and the next sentence prediction (NSP) task, while RoBERTa-1seq refers to the pre-training model with only the MLM task.

SparseBERT We pre-train BERT models with the Transformer encoder replaced by a Sparse Transformer encoder (Child et al., 2019). We set its sparsity hyper-parameters to stride ℓ = 128 and expressivity c = 32.6 The attention masking matrix used in Sparse Transformer and more implementation details are discussed in Appendix A.3. A similar architecture was adopted in GPT-3 (Brown et al., 2020).

5We use GPUs of 16 GB memory for profiling. BERT with N = 2048 fails due to an out-of-memory error.
6We adopt Sparse Transformer implemented by Fairseq, which first computes the N × N attention matrix, and then masks it to be a sparse one. This implementation cannot avoid the O(N^2) attention computation, and thus has a similar training time/memory cost to RoBERTa.
7mrqa.github.io

# 4.1 Pre-training

All the models follow the BERT-Base setting, i.e., L = 12, H = 768, A = 12, and are trained on the same corpus — BooksCorpus and English Wikipedia with uncased word piece tokens. Thus all models use the same vocabulary as Google BERT (uncased version) with vocabulary size 30,522. We fix the number of tokens per batch B × N = 131,072, i.e., if sequence length N = 512 then batch size B = 256, if sequence length N = 1024 then batch size B = 128. The detailed pre-training configuration is listed in Appendix A.1. Moreover, the pre-training of SparseBERT and BlockBERT follows the RoBERTa-1seq setting, i.e., we drop the NSP (Next Sentence Prediction) task, and an input sequence is up to N tokens until it reaches a document boundary.

A summary of the pre-training performance comparison between BlockBERT and RoBERTa-1seq is shown in Table 2. Besides memory saving, we also achieve a significant speedup. For example, when N = 1024, BlockBERT (n = 2) reduces the training time from RoBERTa's 9.7 days to 7.5 days.

# 4.2 Fine-tuning Tasks

We evaluate BlockBERT on several question answering tasks, including SQuAD 1.1/2.0 (Rajpurkar et al., 2018) and five other tasks from the MrQA shared task7 — HotpotQA (Yang et al., 2018), NewsQA (Trischler et al., 2017),
SearchQA (Dunn et al., 2017), TriviaQA (Joshi et al., 2017) and NaturalQA (Kwiatkowski et al., 2019). Since MrQA does not have an official test set, we follow Joshi et al. (2019a) to split the development set evenly to build a new development set and test set. These QA datasets have different paragraph length distributions and are thus ideal for testing the effectiveness of BlockBERT.8 For example, SQuAD, NaturalQA, and HotpotQA consist of mostly short paragraphs (shorter than 512), while paragraphs in SearchQA (average length 1,004) and TriviaQA (average length 934) have around 1,000 tokens. When the input sequence is longer than N, we follow the common practice (Joshi et al., 2019a) to split it using a sliding window of size N and stride 128. This means that for SearchQA and TriviaQA, a model with N = 512 can only capture half of the context, while a model with N = 1024 can accept the whole paragraph as input.

For all models, we adopt the same fine-tuning QA setup from Devlin et al. (2019). The tokenized paragraph (p1, · · · , ps) and question (q1, · · · , qt) are concatenated to be a sequence [CLS]q1 · · · qt[SEP]p1 · · · ps[SEP]. The sequence is then fed into the pre-trained model with two extra linear layers for predicting the start and end positions of the answer spans. The detailed fine-tuning setting is listed in Appendix A.4. Table 3 and Table 4 report the experimental results.

8The detailed paragraph length distributions can be found in Appendix A.5

Table 2: Pre-training Performance Analysis (training time in days, memory per GPU in GB, heads configuration, validation perplexity).
N=512: RoBERTa-1seq 6.62 / 9.73 / - / 3.58; BlockBERT n=2 5.83 (-12.0%) / 7.91 (-18.7%) / 10:2 / 3.56; BlockBERT n=3 5.80 (-12.5%) / 7.32 (-23.8%) / 8:2:2 / 3.71
N=1024: RoBERTa-1seq 9.66 / 13.39 / - / 3.60; BlockBERT n=2 7.51 (-22.3%) / 9.73 (-27.3%) / 9:3 / 3.57; BlockBERT n=3 7.23 (-25.1%) / 8.55 (-36.1%) / 8:2:2 / 3.63

Table 3: Dev set results on SQuAD 1.1/2.0 (SQuAD 1.1 EM/F1, SQuAD 2.0 EM/F1). The result of XLNet(-Base) is from Yang et al. (2019). For BlockBERT models, their attention head configurations are the same as Table 2.
Human Perf.: 82.30/91.20, 86.80/89.40
N=512: Google BERT 81.19/88.45, 74.08/77.16; XLNet -/-, 78.46/81.33; RoBERTa-2seq 82.91/89.78, 75.79/79.17; RoBERTa-1seq 84.43/91.48, 79.22/82.27; SparseBERT 80.49/88.09, 74.15/76.96; BlockBERT n=2 84.08/90.77, 78.34/81.46; BlockBERT n=3 82.37/89.64, 77.33/80.33
N=1024: RoBERTa-1seq 84.58/91.14, 79.34/82.26; SparseBERT 81.02/88.37, 74.51/77.57; BlockBERT n=2 83.65/90.74, 78.55/81.45; BlockBERT n=3 82.74/90.05, 76.79/79.84

BlockBERT v.s. SparseBERT For N = 512, it is interesting that BlockBERT with 3 blocks (density 33.33%) performs better than SparseBERT (density 44.20%) in both SQuAD and MrQA tasks. Similar results can be observed for N = 1024, too. These results show that off-diagonal masking matrices, e.g., the masking matrix defined by permutation (2, 3, 1) and (3, 1, 2), play crucial roles in BlockBERT. Furthermore, BlockBERT with 2 blocks achieves a more significant improvement.

BlockBERT (n=2) v.s. RoBERTa-1seq Comparing BlockBERT with RoBERTa-1seq when N = 512, we observe an absolute F1 difference from 0.04 (in NaturalQA) to 1.18 (in NewsQA), with an average of 0.55. For N = 1024, BlockBERT achieves more comparable or even better performance to RoBERTa-1seq. In SearchQA, NewsQA and HotpotQA, BlockBERT achieves absolute F1 improvement of 0.39, 0.44 and 0.23, respectively.
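The sliding-window splitting described above for long paragraphs can be sketched as follows. This is not the authors' fine-tuning code; the helper name and the token placeholders are ours.

```python
# Sketch of the sliding-window splitting for QA fine-tuning: paragraphs longer than the
# model's maximum length are cut into overlapping chunks with stride 128, each prefixed
# by the question as [CLS] question [SEP] chunk [SEP].
from typing import List

def sliding_windows(paragraph_tokens: List[str], question_tokens: List[str],
                    window: int = 512, stride: int = 128) -> List[List[str]]:
    # Room left for the paragraph once the question and special tokens are counted.
    budget = window - len(question_tokens) - 3  # [CLS], [SEP], [SEP]
    assert budget > 0, "question too long for this window size"
    spans, start = [], 0
    while True:
        chunk = paragraph_tokens[start:start + budget]
        spans.append(["[CLS]"] + question_tokens + ["[SEP]"] + chunk + ["[SEP]"])
        if start + budget >= len(paragraph_tokens):
            break
        start += stride
    return spans

# A 1,000-token TriviaQA-style paragraph needs several windows at N=512 but one at N=1024.
para = [f"tok{i}" for i in range(1000)]
ques = ["what", "is", "blockwise", "attention", "?"]
print(len(sliding_windows(para, ques, window=512)), len(sliding_windows(para, ques, window=1024)))
```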
Effect of Long Sequence Pre-training Our observations are twofold: (1) Long sequence pre-training benefits long sequence fine-tuning. In TriviaQA and SearchQA, whose paragraph lengths are around 1,024, pre-training models with N = 1024 achieve significantly better performance. (2) The heterogeneity of pre-training and fine-tuning sequence length may hurt performance. For example, in SQuAD, we do not see significant performance gain by using pre-trained models with N = 1024; in HotpotQA and NewsQA, longer sequence pre-training even hurts performance.

Effect of #Blocks It is not surprising that BlockBERT with 2 blocks (n = 2) performs better than that with 3 blocks (n = 3), because it keeps more attention matrix entries. The biggest difference is in SQuAD 2.0 and NewsQA with N = 1024, where we observe an absolute loss of 1.6 F1 by increasing the block number from 2 to 3.

Efficient inference with BlockBERT We benchmark the test efficiency of RoBERTa and BlockBERT. The benchmark code follows huggingface.9 All experiments are run 30 times on a 32GB V100 GPU with half precision (FP16). We report the average running time in Table 5. As we can see, BlockBERT does achieve speedup and memory reduction during test time. Take 8×1024, i.e., batch size B = 8, sequence length N = 1024, as an example: BlockBERT with 2 blocks saves 27.8% of test time, and BlockBERT with 3 blocks saves more (30.4%). As for memory, we can observe that RoBERTa cannot handle an input of size 16×1024, while it is possible for BlockBERT to work on it.

In summary, not only does BlockBERT save training/inference time and memory, but it also has a competitive and sometimes better performance, especially for tasks with longer sequences. This demonstrates the effectiveness of our blockwise multi-head attention approach.

9github.com/huggingface/transformers/blob/master/examples/benchmarks.py

# 4.3 Ablation Study

We fix the assignment of attention heads in the above experiments. For example, BlockBERT with sequence length N = 512 and 2 blocks is trained with ten heads using permutation (1, 2) and the other two using permutation (2, 1). However, there are other ways to assign twelve attention heads, e.g., seven heads for permutation (1, 2) and the other five for permutation (2, 1). It would be interesting to see how the assignment of heads affects model performance. In this section, we grid search attention head assignments and plot their best validation performance in 1.2M training steps. The results are shown in Figure 4.

Our observations are threefold: (1) Identity permutations, i.e., (1, 2) and (1, 2, 3), are important. As shown in Figure 4, all optimal solutions assign considerable attention heads to block-diagonal matrices, since those matrices enable each token to attend to its nearby tokens; (2) Non-identity permutations follow the rule of "vital few and trivial many." Although identity permutations are important, assigning all attention heads to them (corresponding to 12:0 and 12:0:0 in Figure 4) significantly hurts performance, since the model can not learn long-term dependencies with only identity permutation; (3) Pre-training performance and fine-tuning performance are correlated but not always consistent. When n = 3, pre-training performance suggests 10:1:1 to be the best head assignment — ten heads for permutation (1, 2, 3), one head for (2, 3, 1) and one head for (3, 1, 2), but we observe that the configuration of 8:2:2 achieves better performance in fine-tuning tasks.
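The timing protocol described above (30 forward passes under FP16, averaged) can be sketched as follows. The model and batch shapes here are stand-ins, not the released RoBERTa/BlockBERT checkpoints or the HuggingFace benchmark script itself.

```python
# Minimal sketch of an average-latency benchmark: warm up once, then time 30 runs.
import time
import torch

@torch.no_grad()
def average_latency(model, batch, n_runs: int = 30) -> float:
    if torch.cuda.is_available():
        model, batch = model.cuda().half(), batch.cuda().half()
    model.eval()
    model(batch)                                   # warm-up run
    if torch.cuda.is_available():
        torch.cuda.synchronize()
    start = time.time()
    for _ in range(n_runs):
        model(batch)
    if torch.cuda.is_available():
        torch.cuda.synchronize()
    return (time.time() - start) / n_runs

toy_model = torch.nn.Sequential(torch.nn.Linear(768, 768), torch.nn.GELU())
toy_batch = torch.randn(8, 1024, 768)              # the "8 x 1024" setting from Table 5
print(f"avg latency: {average_latency(toy_model, toy_batch):.4f} s")
```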
Table 4: MrQA test results, EM/F1 (tasks are sorted decreasingly by average paragraph length). For BlockBERT models, their attention head configurations are the same as Table 2.
N=512:
Google BERT: SearchQA 74.94/80.37, TriviaQA 70.18/75.35, NewsQA 51.27/66.25, NaturalQA 66.13/78.29, HotpotQA 60.50/77.08
RoBERTa-2seq: SearchQA 76.12/81.74, TriviaQA 71.92/76.79, NewsQA 52.45/66.73, NaturalQA 66.98/78.63, HotpotQA 61.52/77.81
RoBERTa-1seq: SearchQA 77.09/82.62, TriviaQA 73.65/78.22, NewsQA 56.13/70.64, NaturalQA 67.14/79.07, HotpotQA 62.77/79.28
SparseBERT: SearchQA 73.36/79.01, TriviaQA 68.71/73.15, NewsQA 51.18/65.47, NaturalQA 65.53/77.46, HotpotQA 58.54/74.85
BlockBERT n=2: SearchQA 76.68/82.33, TriviaQA 72.36/77.53, NewsQA 54.66/69.46, NaturalQA 66.94/79.03, HotpotQA 62.13/79.15
BlockBERT n=3: SearchQA 75.54/81.07, TriviaQA 72.05/76.74, NewsQA 53.82/68.39, NaturalQA 66.14/78.47, HotpotQA 60.64/77.46
N=1024:
RoBERTa-1seq: SearchQA 77.47/83.12, TriviaQA 75.29/80.20, NewsQA 55.00/69.64, NaturalQA 68.28/80.35, HotpotQA 61.89/78.71
SparseBERT: SearchQA 74.83/80.54, TriviaQA 70.56/75.34, NewsQA 51.67/67.16, NaturalQA 65.07/77.31, HotpotQA 59.65/76.02
BlockBERT n=2: SearchQA 77.95/83.51, TriviaQA 75.06/79.41, NewsQA 55.44/70.08, NaturalQA 67.31/79.39, HotpotQA 62.13/78.94
BlockBERT n=3: SearchQA 76.98/82.76, TriviaQA 74.78/79.28, NewsQA 53.48/68.50, NaturalQA 65.91/78.20, HotpotQA 61.89/78.18

Figure 4: Ablation over blockwise attention heads assignment. (a) N = 512, n = 2; (b) N = 1024, n = 2; (c) N = 512, n = 3; (d) N = 1024, n = 3.

Table 5: Test time statistics (sec) for different input size. OOM indicates out-of-memory.
B × N: 8×1024 / 16×1024 / 24×1024 / 32×1024
RoBERTa: 0.1371 / OOM / OOM / OOM
BlockBERT n=2: 0.0990 / 0.1869 / OOM / OOM
BlockBERT n=3: 0.0954 / 0.1790 / 0.2634 / OOM

# 5 Related Work

In this section, we review the related work of memory optimization for neural network training and recent efforts to simplify Transformer and BERT.

# 5.1 Low-memory neural networks training

Due to the large size of model parameters and deep architectures, modern neural networks training requires significant amounts of computing resources. As a result, there is an increasing interest in training neural networks with low memory (Sohoni et al., 2019). Mainstream techniques mostly address this problem with a better system or engineering design, such as low-precision training (Micikevicius et al., 2017), microbatching (Huang et al., 2018) and gradient checkpointing (Chen et al., 2016). Alternatively, there also exists some research focusing on the theoretical aspect, including the recently proposed lottery ticket hypothesis (Frankle and Carbin, 2018).

# 5.2 Efficient Transformer

Since the invention of Transformer (Vaswani et al., 2017) and its successful application to masked language model pre-training (Devlin et al., 2019; Radford et al., 2019; Yang et al., 2019; Liu et al., 2019; Lan et al., 2019), several approaches have been proposed to simplify the model and its training process. We summarize these attempts as follows:

Attention layer simplification There are currently two lines of research trying to simplify the multi-head attention layers. The first one focuses on attention matrix sparsification. Notable examples include Star Transformer (Guo et al., 2019), Sparse Transformer (Child et al., 2019), Adaptive Sparse Transformer (Correia et al., 2019; Sukhbaatar et al., 2019), Log-Sparse Transformer (Li et al., 2019), Reformer (Kitaev et al., 2020) and Longformer (Beltagy et al., 2020). However, due to the insufficient support for sparse tensors from the current deep learning platforms, some of them have to represent a sparse matrix using a dense matrix with a binary mask or rely on customized CUDA kernels (Gray et al., 2017). As a result, the speed-up or reduction in memory consumption is sometimes limited in practice. The second line of research prunes redundant attention heads.
Examples include (Voita et al., 2019) and (Michel et al., 2019). Our BlockBERT model belongs to the first category, as we sparsify the attention matrices to be block sparse matrices.

Reducing model size for pre-training Knowledge distillation (Hinton et al., 2015) is a general technique that aims to compress and transfer knowledge from a teacher model to a simpler student model. There are two recent efforts that apply knowledge distillation to BERT pre-training for reducing model size: TinyBERT (Jiao et al., 2019) distills BERT using a smaller Transformer, and Tang et al. (2019) distill BERT with a BiLSTM. In contrast, ALBERT (Lan et al., 2019) is a notable work that does not take the knowledge distillation approach. It uses parameter-sharing to reduce the number of parameters of the BERT model. As discussed in section 2.1, parameter-sharing reduces both model memory and optimizer memory. These two parts account for about 12.4% of total training memory for BERT-base. As for efficiency, parameter-sharing reduces communication complexity in distributed training and thus saves training time as well.

In the aforementioned efficient Transformers, the model quality is often demonstrated by comparable language model perplexity, or equivalently the bits per word/byte. It is often implicitly assumed that similar language model perplexity implies similar pre-training model quality, namely the same performance on the downstream tasks. We would like to point out that this assumption does not necessarily hold. For example, the experiments on the Enwik8 dataset by Child et al. (2019) demonstrate that Sparse Transformer "surpasses the 1.03 state-of-the-art (bits per byte) for a similarly-sized Transformer-XL and matching the 0.99 (bits per byte) of a model trained with more than double the number of parameters". However, if we compare SparseBERT (pre-training model with Sparse Transformer backbone) against XLNet (Yang et al., 2019) (pre-training model with Transformer-XL backbone) in SQuAD, Table 3 shows that XLNet still outperforms SparseBERT significantly. Therefore, we believe that it is necessary to conduct a comprehensive study and evaluation of existing efficient Transformer models when used for masked language model pre-training. Limited by resources, in this work, we mainly compare BlockBERT to pre-training using Sparse Transformer (Child et al., 2019), which is the earliest attempt to design efficient Transformer models and also the key contributor to the success of GPT-3 (Brown et al., 2020). We plan to benchmark more models in the future.

# 6 Conclusion

In this work, we study the lightweight BERT model with the goal of achieving both efficiency and effectiveness. We profile and analyze the memory bottlenecks of BERT and focus on optimizing dot-product self-attention, which consumes quadratic memory with respect to the sequence length. To reduce both time and memory consumption, we present BlockBERT, which sparsifies the attention matrices to be sparse block matrices. The proposed model achieves time and memory saving without significant loss of performance.

In the future, we plan to benchmark more efficient Transformers in language model pre-training and fine-tuning.
We also would like to explore more applications of BlockBERT on NLP tasks involving long sequences such as coreference res- olution (Joshi et al., 2019b) and document-level machine translation (Miculicich et al., 2018), and also non-NLP tasks such as protein sequence mod- eling (Rives et al., 2019; Rao et al., 2019). # Acknowledgments The authors would like to thank Zhilin Yang, Danqi Chen, Yinhan Liu, Mandar Joshi and Luke Zettlemoyer for the helpful suggestions. Jiezhong Qiu and Jie Tang were partially sup- ported by the National Key R&D Program of China (2018YFB1402600), NSFC for Distin- guished Young Scholar (61825602), and NSFC (61836013). # References Iz Beltagy, Matthew E Peters, and Arman Cohan. 2020. Longformer: The long-document transformer. arXiv preprint arXiv:2004.05150. Tom B Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. arXiv preprint arXiv:2005.14165. Tianqi Chen, Bing Xu, Chiyuan Zhang, and Carlos Guestrin. 2016. Training deep nets with sublinear memory cost. arXiv preprint arXiv:1604.06174. and Ilya Sutskever. 2019. Generating long se- quences with sparse transformers. arXiv preprint arXiv:1904.10509. Gonc¸alo M Correia, Vlad Niculae, and Andr´e FT Mar- tins. 2019. Adaptively sparse transformers. arXiv preprint arXiv:1909.00015. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understand- ing. In NAACL-HLT’ 2019, pages 4171–4186. Ming Ding, Chang Zhou, Hongxia Yang, and Jie Tang. 2020. Cogltx: Applying bert to long texts. In NeurIPS ’20. Matthew Dunn, Levent Sagun, Mike Higgins, V Ugur Guney, Volkan Cirik, and Kyunghyun Cho. 2017. Searchqa: A new q&a dataset augmented with arXiv preprint context from a search engine. arXiv:1704.05179. Jonathan Frankle and Michael Carbin. 2018. The lot- tery ticket hypothesis: Finding sparse, trainable neu- ral networks. arXiv preprint arXiv:1803.03635. Scott Gray, Alec Radford, and Diederik P Kingma. 2017. Gpu kernels for block-sparse weights. arXiv preprint arXiv:1711.09224. Qipeng Guo, Xipeng Qiu, Pengfei Liu, Yunfan Shao, Star- In NAACL-HLT’ 2019, pages 1315– Xiangyang Xue, and Zheng Zhang. 2019. transformer. 1325. Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. 2015. Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531. Yanping Huang, Yonglong Cheng, Dehao Chen, Hy- oukJoong Lee, Jiquan Ngiam, Quoc V Le, and Zhifeng Chen. 2018. Gpipe: Efficient training of giant neural networks using pipeline parallelism. arXiv preprint arXiv:1811.06965. Xiaoqi Jiao, Yichun Yin, Lifeng Shang, Xin Jiang, Xiao Chen, Linlin Li, Fang Wang, and Qun Liu. 2019. Tinybert: Distilling bert for natural language understanding. arXiv preprint arXiv:1909.10351. Mandar Joshi, Danqi Chen, Yinhan Liu, Daniel S Weld, Luke Zettlemoyer, and Omer Levy. 2019a. Spanbert: Improving pre-training by representing and predict- ing spans. arXiv preprint arXiv:1907.10529. Mandar Joshi, Eunsol Choi, Daniel Weld, and Luke Zettlemoyer. 2017. Triviaqa: A large scale distantly supervised challenge dataset for reading comprehen- sion. In ACL’ 17, pages 1601–1611. Mandar Joshi, Omer Levy, Daniel S Weld, and Luke Bert for coreference reso- arXiv preprint Zettlemoyer. 2019b. lution: Baselines and analysis. arXiv:1908.09091. Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. 
arXiv preprint arXiv:1412.6980. Nikita Kitaev, Łukasz Kaiser, and Anselm Levskaya. 2020. Reformer: The efficient transformer. arXiv preprint arXiv:2001.04451. Tom Kwiatkowski, Jennimaria Palomaki, Olivia Red- field, Michael Collins, Ankur Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Jacob Devlin, Kenton Lee, et al. 2019. Natural questions: a bench- mark for question answering research. Transactions of the Association for Computational Linguistics, 7:453–466. Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. 2019. ALBERT: A lite bert for self-supervised learning of language representations. arXiv preprint arXiv:1909.11942. Kenton Lee, Ming-Wei Chang, and Kristina Toutanova. 2019. for weakly supervised open domain question answering. arXiv preprint arXiv:1906.00300. Shiyang Li, Xiaoyong Jin, Yao Xuan, Xiyou Zhou, Wenhu Chen, Yu-Xiang Wang, and Xifeng Yan. 2019. Enhancing the locality and breaking the mem- ory bottleneck of transformer on time series forecast- ing. arXiv preprint arXiv:1907.00235. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Man- dar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining ap- proach. arXiv preprint arXiv:1907.11692. Paul Michel, Omer Levy, and Graham Neubig. 2019. arXiv Are sixteen heads really better than one? preprint arXiv:1905.10650. Paulius Micikevicius, Sharan Narang, Jonah Alben, Gregory Diamos, Erich Elsen, David Garcia, Boris Ginsburg, Michael Houston, Oleksii Kuchaiev, Ganesh Venkatesh, et al. 2017. Mixed precision training. arXiv preprint arXiv:1710.03740. Lesly Miculicich, Dhananjay Ram, Nikolaos Pappas, and James Henderson. 2018. Document-level neural machine translation with hierarchical attention net- works. In EMNLP’ 18, pages 2947–2954. Rodrigo Nogueira and Kyunghyun Cho. 2019. Pas- arXiv preprint sage re-ranking with bert. arXiv:1901.04085. Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, and Michael Auli. 2019. fairseq: A fast, extensi- ble toolkit for sequence modeling. arXiv preprint arXiv:1904.01038. Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word repre- sentations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2227– 2237. Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. Pranav Rajpurkar, Robin Jia, and Percy Liang. 2018. Know what you don’t know: Unanswerable ques- tions for squad. arXiv preprint arXiv:1806.03822. Roshan Rao, Nicholas Bhattacharya, Neil Thomas, Yan Duan, Peter Chen, John Canny, Pieter Abbeel, and Yun Song. 2019. Evaluating protein transfer learn- In Advances in Neural Information ing with tape. Processing Systems, pages 9686–9698. Alexander Rives, Siddharth Goyal, Joshua Meier, Demi Guo, Myle Ott, C Lawrence Zitnick, Jerry Ma, and Rob Fergus. 2019. Biological structure and function emerge from scaling unsupervised learning bioRxiv, page to 250 million protein sequences. 622803. Livio Baldini Soares, Nicholas FitzGerald, Jeffrey Ling, and Tom Kwiatkowski. 2019. Matching the blanks: Distributional similarity for relation learn- ing. arXiv preprint arXiv:1906.03158. 
Nimit Sharad Sohoni, Christopher Richard Aberger, Megan Leszczynski, Jian Zhang, and Christopher Ré. 2019. Low-memory neural network training: A technical report. arXiv preprint arXiv:1904.10631.
Sainbayar Sukhbaatar, Edouard Grave, Piotr Bojanowski, and Armand Joulin. 2019. Adaptive attention span in transformers. arXiv preprint arXiv:1905.07799.
Raphael Tang, Yao Lu, Linqing Liu, Lili Mou, Olga Vechtomova, and Jimmy Lin. 2019. Distilling task-specific knowledge from BERT into simple neural networks. arXiv preprint arXiv:1903.12136.
Adam Trischler, Tong Wang, Xingdi Yuan, Justin Harris, Alessandro Sordoni, Philip Bachman, and Kaheer Suleman. 2017. NewsQA: A machine comprehension dataset. In Proceedings of the 2nd Workshop on Representation Learning for NLP, pages 191–200.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in neural information processing systems, pages 5998–6008.
Elena Voita, David Talbot, Fedor Moiseev, Rico Sennrich, and Ivan Titov. 2019. Analyzing multi-head self-attention: Specialized heads do the heavy lifting, the rest can be pruned. arXiv preprint arXiv:1905.09418.
Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, and Quoc V Le. 2019. XLNet: Generalized autoregressive pretraining for language understanding. arXiv preprint arXiv:1906.08237.
Zhilin Yang, Peng Qi, Saizheng Zhang, Yoshua Bengio, William W. Cohen, Ruslan Salakhutdinov, and Christopher D. Manning. 2018. HotpotQA: A dataset for diverse, explainable multi-hop question answering. In EMNLP' 18.

# A Appendix

# A.1 Notations and Pre-training Hyper-parameters

The notations and pre-training hyper-parameters are listed in Table 6 and Table 7.

Table 6: BERT notations.
B (Batch size): Base 256, Large 256
A (# Self-attention heads): Base 12, Large 16
L (# Layers): Base 12, Large 24
H (# Hidden units): Base 768, Large 1024
4H (# Feed-forward hidden units): Base 3072, Large 4096
N (Sequence length): Base 512, Large 512

# A.2 Profiler Implementation

Among the three types of training memory, model memory and optimizer memory are relatively easy to profile (they can be computed by enumerating each tensor and summing up tensor.numel() * tensor.element_size()). To calculate activation memory, (Sohoni et al., 2019) traverse PyTorch's autograd graph and sum up the necessary storage space. They find that the summation of model memory, optimizer memory, and activation memory matches the PyTorch memory profiling tool.10

10torch.cuda.max_memory_allocated
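A sketch of this bookkeeping is given below. It is an illustration of the described approach under our own assumptions (in particular, the Adam estimate mirrors the footnote on PyTorch's FP32 moments), not the profiler the authors used.

```python
# Sketch of the Appendix A.2 accounting: parameter bytes, an Adam state estimate,
# and the Equation 4 activation-memory estimate (everything else in peak memory).
import torch

def model_memory_bytes(model: torch.nn.Module) -> int:
    return sum(p.numel() * p.element_size() for p in model.parameters())

def adam_memory_bytes(model: torch.nn.Module) -> int:
    # Gradients in parameter precision plus FP32 first/second moments (an estimate).
    grads = sum(p.numel() * p.element_size() for p in model.parameters())
    moments = sum(p.numel() * 4 * 2 for p in model.parameters())
    return grads + moments

def activation_memory_bytes(model: torch.nn.Module) -> int:
    # Equation 4: max allocated memory minus model and optimizer memory.
    peak = torch.cuda.max_memory_allocated() if torch.cuda.is_available() else 0
    return max(peak - model_memory_bytes(model) - adam_memory_bytes(model), 0)

net = torch.nn.Linear(768, 768)
print(model_memory_bytes(net) / 2**20, "MiB of parameters")
```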
We first com- pute the N 2 attention matrix and then mask it to be a sparse matrix according to the sparse pattern de- fined in Sparse Transformer paper. Consequently, this implementation of SparseBERT has very close training time/memory cost as RoBERTa (as it can not avoid the O(N 2) attention computation). We did so because the code released by Sparse Trans- former is based on Tensorflow and relies on cus- tomized CUDA kernels, but our pre-training is done using PyTorch. # A.4 Fine-tuning Settings Our fine-tuning is implemented based on code base from HuggingFace12 and SpanBERT (Joshi et al., 2019a). We use max sequence length=N , i.e., we allow fine-tuning task to input se- quences as long as the pre-training model. the If the input sequence is too long to fit 11github.com/pytorch/fairseq/blob/ master/fairseq/modules/sparse_multihead_ attention.py. # 12github.com/huggingface/ pytorch-transformers 200 s00 soo (a) (b) Figure 5: The sparse masking matrices we use in Sparse Transformer (fixed mode) encoder. White color indicates attention values to be masked. (a) N = 512, = 128,c = 32, density 44.20%; (b) N = 1024, @ = 128, c = 32, density 34.97%. max sequence length=N constraints, we use a sliding window of stride 128 to split it. We grid search learning rate from {5e-6, 1e-5, 2e-5, 3e- 5, 5e-5} and batch size from {16, 32}. The fine- tuning is performed for 4 epoches. # A.5 Paragraph-Length Distribution The paragraph-length distribution of SQuAD and MrQA datasets is shown in Figure 6. SearchQA (1004) ii NewsQA (641) HotpotQA (216) lim TriviaQA (934) NI Natural@A (247) SQuAD (156) 0.008 = 0.006 g rT) S om Om x= 0.002 0.000 4 i 200 400 600 S00 10001300 Paragraph Length Figure 6: Paragraph-length (after tokenization) distri- bution. The distribution of SQuAD 2.0 is very similar to SQuAD 1.1, so we only plot SQuAD 1.1 here.
{ "id": "1908.09091" }
1911.01547
On the Measure of Intelligence
To make deliberate progress towards more intelligent and more human-like artificial systems, we need to be following an appropriate feedback signal: we need to be able to define and evaluate intelligence in a way that enables comparisons between two systems, as well as comparisons with humans. Over the past hundred years, there has been an abundance of attempts to define and measure intelligence, across both the fields of psychology and AI. We summarize and critically assess these definitions and evaluation approaches, while making apparent the two historical conceptions of intelligence that have implicitly guided them. We note that in practice, the contemporary AI community still gravitates towards benchmarking intelligence by comparing the skill exhibited by AIs and humans at specific tasks such as board games and video games. We argue that solely measuring skill at any given task falls short of measuring intelligence, because skill is heavily modulated by prior knowledge and experience: unlimited priors or unlimited training data allow experimenters to "buy" arbitrary levels of skills for a system, in a way that masks the system's own generalization power. We then articulate a new formal definition of intelligence based on Algorithmic Information Theory, describing intelligence as skill-acquisition efficiency and highlighting the concepts of scope, generalization difficulty, priors, and experience. Using this definition, we propose a set of guidelines for what a general AI benchmark should look like. Finally, we present a benchmark closely following these guidelines, the Abstraction and Reasoning Corpus (ARC), built upon an explicit set of priors designed to be as close as possible to innate human priors. We argue that ARC can be used to measure a human-like form of general fluid intelligence and that it enables fair general intelligence comparisons between AI systems and humans.
http://arxiv.org/pdf/1911.01547
François Chollet
cs.AI
null
null
cs.AI
20191105
20191125
# On the Measure of Intelligence

# François Chollet ∗ Google, Inc. [email protected]

# November 5, 2019

# Abstract

To make deliberate progress towards more intelligent and more human-like artificial systems, we need to be following an appropriate feedback signal: we need to be able to define and evaluate intelligence in a way that enables comparisons between two systems, as well as comparisons with humans. Over the past hundred years, there has been an abundance of attempts to define and measure intelligence, across both the fields of psychology and AI. We summarize and critically assess these definitions and evaluation approaches, while making apparent the two historical conceptions of intelligence that have implicitly guided them. We note that in practice, the contemporary AI community still gravitates towards benchmarking intelligence by comparing the skill exhibited by AIs and humans at specific tasks, such as board games and video games. We argue that solely measuring skill at any given task falls short of measuring intelligence, because skill is heavily modulated by prior knowledge and experience: unlimited priors or unlimited training data allow experimenters to "buy" arbitrary levels of skills for a system, in a way that masks the system's own generalization power. We then articulate a new formal definition of intelligence based on Algorithmic Information Theory, describing intelligence as skill-acquisition efficiency and highlighting the concepts of scope, generalization difficulty, priors, and experience, as critical pieces to be accounted for in characterizing intelligent systems. Using this definition, we propose a set of guidelines for what a general AI benchmark should look like. Finally, we present a new benchmark closely following these guidelines, the Abstraction and Reasoning Corpus (ARC), built upon an explicit set of priors designed to be as close as possible to innate human priors. We argue that ARC can be used to measure a human-like form of general fluid intelligence and that it enables fair general intelligence comparisons between AI systems and humans.

∗I thank José Hernández-Orallo, Julian Togelius, Christian Szegedy, and Martin Wicke for their valuable comments on the draft of this document.

# Contents

I Context and history
I.1 Need for an actionable definition and measure of intelligence
I.2 Defining intelligence: two divergent visions
I.2.1 Intelligence as a collection of task-specific skills
I.2.2 Intelligence as a general learning ability
I.3 AI evaluation: from measuring skills to measuring broad abilities
I.3.1 Skill-based, narrow AI evaluation
I.3.2 The spectrum of generalization: robustness, flexibility, generality
I.3.3 Measuring broad abilities and general intelligence: the psychometrics perspective
I.3.4 Integrating AI evaluation and psychometrics
I.3.5 Current trends in broad AI evaluation
II A new perspective
II.1 Critical assessment
II.1.1 Measuring the right thing: evaluating skill alone does not move us forward
II.1.2 The meaning of generality: grounding the g factor
II.1.3 Separating the innate from the acquired: insights from developmental psychology
II.2 Defining intelligence: a formal synthesis
II.2.1 Intelligence as skill-acquisition efficiency
II.2.2 Computation efficiency, time efficiency, energy efficiency, and risk efficiency
II.2.3 Practical implications
II.3 Evaluating intelligence in this light
II.3.1 Fair comparisons between intelligent systems
II.3.2 What to expect of an ideal intelligence benchmark
III A benchmark proposal: the ARC dataset
III.1 Description and goals
III.1.1 What is ARC?
III.1.2 Core Knowledge priors
III.1.3 Key differences with psychometric intelligence tests
III.1.4 What a solution to ARC may look like, and what it would imply for AI applications
III.2 Weaknesses and future refinements
III.3 Possible alternatives
III.3.1 Repurposing skill benchmarks to measure broad generalization
III.3.2 Open-ended adversarial or collaborative approaches

# I Context and history

# I.1 Need for an actionable definition and measure of intelligence

The promise of the field of AI, spelled out explicitly at its inception in the 1950s and repeated countless times since, is to develop machines that possess intelligence comparable to that of humans. But AI has since been falling short of its ideal: although we are able to engineer systems that perform extremely well on specific tasks, they still have stark limitations, being brittle, data-hungry, unable to make sense of situations that deviate slightly from their training data or the assumptions of their creators, and unable to repurpose themselves to deal with novel tasks without significant involvement from human researchers.

If the only successes of AI have been in developing narrow, task-specific systems, it is perhaps because only within a very narrow and grounded context have we been able to define our goal sufficiently precisely, and to measure progress in an actionable way. Goal definitions and evaluation benchmarks are among the most potent drivers of scientific progress. To make progress towards the promise of our field, we need precise, quantitative definitions and measures of intelligence – in particular human-like general intelligence.
These would not be merely definitions and measures meant to describe or characterize intelligence, but precise, explanatory definitions meant to serve as a North Star, an objective function showing the way towards a clear target, capable of acting as a reliable measure of our progress and as a way to identify and highlight worthwhile new approaches that may not be immediately applicable, and would otherwise be discounted.

For instance, common-sense dictionary definitions of intelligence may be useful to make sure we are talking about the same concepts, but they are not useful for our purpose, as they are not actionable, explanatory, or measurable. Similarly, the Turing Test [91] and its many variants (e.g. Total Turing Test and Loebner Prize [75]) are not useful as a driver of progress (and have in fact served as a red herring 1), since such tests completely opt out of objectively defining and measuring intelligence, and instead outsource the task to unreliable human judges who themselves do not have clear definitions or evaluation protocols.

1 Turing's imitation game was largely meant as an argumentative device in a philosophical discussion, not as a literal test of intelligence. Mistaking it for a test representative of the goal of the field of AI has been an ongoing problem.

It is a testimony to the immaturity of our field that the question of what we mean when we talk about intelligence still doesn't have a satisfying answer. What's worse, very little attention has been devoted to rigorously defining it or benchmarking our progress towards it. Legg and Hutter noted in a 2007 survey of intelligence definitions and evaluation methods [53]: "to the best of our knowledge, no general survey of tests and definitions has been published". A decade later, in 2017, Hernández-Orallo released an extensive survey of evaluation methods [36] as well as a comprehensive book on AI evaluation [37]. Results and recommendations from both of these efforts have since been largely ignored by the community.

We believe this lack of attention is a mistake, as the absence of widely-accepted explicit definitions has been substituted with implicit definitions and biases that stretch back decades. Though invisible, these biases are still structuring many research efforts today, as illustrated by our field's ongoing fascination with outperforming humans at board games or video games (a trend we discuss in I.3.5 and II.1). The goal of this document is to point out the implicit assumptions our field has been working from, correct some of its most salient biases, and provide an actionable formal definition and measurement benchmark for human-like general intelligence, leveraging modern insight from developmental cognitive psychology.

# I.2 Defining intelligence: two divergent visions

Looked at in one way, everyone knows what intelligence is; looked at in another way, no one does.

Robert J. Sternberg, 2000

Many formal and informal definitions of intelligence have been proposed over the past few decades, although there is no existing scientific consensus around any single definition. Sternberg & Detterman noted in 1986 [87] that when two dozen prominent psychologists were asked to define intelligence, they all gave somewhat divergent answers.
In the context of AI research, Legg and Hutter [53] summarized in 2007 no fewer than 70 definitions from the literature into a single statement: “Intelligence measures an agent’s ability to achieve goals in a wide range of environments.” This summary points to two characterizations, which are nearly universally – but of- ten separately – found in definitions of intelligence: one with an emphasis on task-specific skill (“achieving goals”), and one focused on generality and adaptation (“in a wide range of environments”). In this view, an intelligent agent would achieve high skill across many different tasks (for instance, achieving high scores across many different video games). Im- plicitly here, the tasks may not necessarily be known in advance: to truly achieve generality, the agent would have to be able to learn to handle new tasks (skill acquisition). These two characterizations map to Catell’s 1971 theory of fluid and crystallized intel- ligence (Gf-Gc) [13], which has become one of the pillars of the dominant theory of human cognitive abilities, the Cattell-Horn-Caroll theory (CHC) [62]. They also relate closely to two opposing views of the nature of the human mind that have been deeply influential in cognitive science since the inception of the field [85]: one view in which the mind is a relatively static assembly of special-purpose mechanisms developed by evolution, only ca- pable of learning what is it programmed to acquire, and another view in which the mind is a general-purpose “blank slate” capable of turning arbitrary experience into knowledge and skills, and that could be directed at any problem. 4 A central point of this document is to make explicit and critically assess this dual defi- nition that has been implicitly at the foundation of how we have been conceptualizing and evaluating intelligence in the context of AI research: crystallized skill on one hand, skill- acquisition ability on the other. Understanding this intellectual context and its ongoing influence is a necessary step before we can propose a formal definition of intelligence from a modern perspective. # I.2.1 Intelligence as a collection of task-specific skills In the distant future I see open fields for far more important researches. Psychology will be based on a new foundation, that of the necessary acquirement of each mental power and capacity by gradation. Charles Darwin, 1859 The evolutionary psychology view of human nature is that much of the human cognitive function is the result of special-purpose adaptations that arose to solve specific problems encountered by humans throughout their evolution (see e.g. [19, 74]) – an idea which orig- inated with Darwin [21] and that coalesced in the 1960s and 1970s. Around the same time that these ideas were gaining prominence in cognitive psychology, early AI researchers, perhaps seeing in electronic computers an analogue of the mind, mainly gravitated towards a view of intelligence as a set of static program-like routines, heavily relying on logical operators, and storing learned knowledge in a database-like memory. This vision of the mind as a wide collection of vertical, relatively static programs that collectively implement “intelligence”, was most prominently endorsed by influential AI pioneer Marvin Minsky (see e.g. The Society of Mind, 1986 [63]). This view gave rise to definitions of intelligence and evaluation protocols for intelligence that are focused on task-specific performance. 
This is perhaps best illustrated by Minsky's 1968 definition of AI: "AI is the science of making machines capable of performing tasks that would require intelligence if done by humans" 2. It was then widely accepted within the AI community that the "problem of intelligence" would be solved if only we could encode human skills into formal rules and encode human knowledge into explicit databases. This view of intelligence was once so dominant that "learning" (discounted as pure memorization) was often not even mentioned at all in AI textbooks until the mid-1980s. Even McCarthy, a rare advocate for generality in AI, believed that the key to achieving generality was better knowledge bases [60]. This definition and evaluation philosophy – focused entirely on skill at narrow tasks normally handled by humans – has led to a striking paradox, as pointed out by Hernández-Orallo [36] in his 2017 survey: the field of artificial intelligence has been very successful in developing artificial systems that perform these tasks without featuring intelligence, a trend that continues to this day.

2 Note the lingering influence of the Turing Test.

# I.2.2 Intelligence as a general learning ability

Presumably the child brain is something like a notebook as one buys it from the stationer's. Rather little mechanism, and lots of blank sheets.

Alan Turing, 1950

In contrast, a number of researchers have taken the position that intelligence lies in the general ability to acquire new skills through learning; an ability that could be directed to a wide range of previously unknown problems – perhaps even any problem at all. Contrast Minsky's task-focused definition of AI with the following one, paraphrased from McCarthy [60] by Hernández-Orallo: "AI is the science and engineering of making machines do tasks they have never seen and have not been prepared for beforehand" [36]. The notion that machines could acquire new skills through a learning process similar to that of human children was initially laid out by Turing in his 1950 paper [91]. In 1958, Friedberg noted astutely: "If we are ever to make a machine that will speak, understand or translate human languages, solve mathematical problems with imagination, practice a profession or direct an organization, either we must reduce these activities to a science so exact that we can tell a machine precisely how to go about doing them or we must develop a machine that can do things without being told precisely how" [26]. But although the idea of generality through learning was given significant consideration at the birth of the field, and has long been championed by pioneers like McCarthy and Papert, it lay largely dormant until the resurgence of machine learning in the 1980s.

This view of intelligence echoes another long-standing conception of human nature that has had a profound influence on the history of cognitive science, contrasting with the evolutionary psychology perspective: Locke's Tabula Rasa (blank slate), a vision of the mind as a flexible, adaptable, highly general process that turns experience into behavior, knowledge, and skills. This conception of the human mind can be traced back to Aristotle (De Anima, c. 350BC, perhaps the first treatise of psychology [3]), was embraced and popularized by Enlightenment thinkers such as Hobbes [42], Locke [56], and Rousseau [78]. It has more recently found renewed vitality within cognitive psychology (e.g. [79]) and in AI via connectionism (e.g. [41]).
With the resurgence of machine learning in the 1980s, its rise to intellectual dominance in the 2000s, and its peak as an intellectual quasi-monopoly in AI in the late 2010s via 6 Deep Learning, a connectionist-inspired Tabula Rasa is increasingly becoming the domi- nant philosophical framework in which AI research is taking place. Many researchers are implicitly conceptualizing the mind via the metaphor of a “randomly initialized neural net- work” that starts blank and that derives its skills from “training data” – a cognitive fallacy that echoes early AI researchers a few decades prior who conceptualized the mind as a kind of mainframe computer equipped with clever subroutines. We see the world through the lens of the tools we are most familiar with. Today, it is increasingly apparent that both of these views of the nature of human in- telligence – either a collection of special-purpose programs or a general-purpose Tabula Rasa – are likely incorrect, which we discuss in II.1.3, along with implications for artificial intelligence. # I.3 AI evaluation: from measuring skills to measuring broad abilities These two conceptualizations of intelligence – along with many other intermediate views combining elements from each side – have influenced a host of approaches for evaluating intelligence in machines, in humans, and more rarely in both at the same time, which we discuss below. Note that this document is not meant as an extensive survey of AI evaluation methods – for such a survey, we recommend Hern´andez-Orallo 2017 [37]. Other notable previous surveys include Cohen and Howe 1988 [69] and Legg and Hutter 2007 [53]. # I.3.1 Skill-based, narrow AI evaluation In apparent accordance with Minsky’s goal for AI, the major successes of the field have been in building special-purpose systems capable of handling narrow, well-described tasks, sometimes at above human-level performance. This success has been driven by perfor- mance measures quantifying the skill of a system at a given task (e.g. how well an AI plays chess, how well an image classifier recognizes cats from dogs). There is no single, formalized way to do skill-based evaluation. Historically successful approaches include: • Human review: having human judges observe the system’s input-output response and score it. This is the idea behind the Turing test and its variants. This evaluation mode is rarely used in practice, due to being expensive, impossible to automate, and subjective. Some human-facing AI systems (in particular commercial chatbots) use it as one of multiple evaluation mechanics. • White-box analysis: inspecting the implementation of the system to determine its input-output response and score it. This is most relevant for algorithms solving a fully-described task in a fully-described environment where all possible inputs can be explicitly enumerated or described analytically (e.g. an algorithm that solves the traveling salesman problem or that plays the game “Connect Four”), and would often take the form of an optimality proof. 7 • Peer confrontation: having the system compete against either other AIs or humans. This is the preferred mode of evaluation for player-versus-player games, such as chess. • Benchmarks: having the system produce outputs for a “test set” of inputs (or envi- ronments) for which the desired outcome is known, and score the response. 
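To make the last of these evaluation modes concrete, the following is a minimal, hypothetical sketch of benchmark-style skill evaluation: a fixed test set of input-output pairs and a single scalar score. The system callable, the task data, and the accuracy metric are illustrative assumptions rather than the protocol of any particular benchmark discussed in this document.

```python
from typing import Any, Callable, Iterable, Tuple

def evaluate_skill(system: Callable[[Any], Any],
                   test_set: Iterable[Tuple[Any, Any]]) -> float:
    """Score a system on a fixed test set of (input, expected_output) pairs.

    The returned number measures skill at this one task only: it says nothing
    about how the skill was obtained (hard-coded rules, priors, or training
    data), nor about how the system would fare on a task it has never seen.
    """
    total, correct = 0, 0
    for x, expected in test_set:
        prediction = system(x)              # the system's response to one input
        correct += int(prediction == expected)
        total += 1
    return correct / max(total, 1)          # task-specific accuracy in [0, 1]

# Hypothetical usage: a cat-vs-dog classifier scored on held-out labeled images.
# score = evaluate_skill(my_classifier, my_labeled_test_images)
```

The simplicity of this loop is precisely what makes benchmarks reproducible, fair, and scalable, as expanded on below.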
Benchmarks in particular have been a major driver of progress in AI, because they are reproducible (the test set is fixed), fair (the test set is the same for everyone), scalable (it is inexpensive to run the evaluation many times), easy to set up, and flexible enough to be applicable to a wide range of possible tasks. Benchmarks have often been most impactful in the context of a competition between different research teams, such as the ILSVRC challenge for large-scale image recognition (ImageNet) [22] or the DARPA Grand Challenge for autonomous driving [11]. A number of private and community-led initiatives have been started on the premise that such benchmark-based competitions speed up progress (e.g. Kaggle (kaggle.com), as well as academic alternatives such as ChaLearn (chalearn.org), the Hutter prize, etc.), while some government organizations use competitions to deliberately trigger technological breakthroughs (e.g. DARPA, NIST).

These successes demonstrate the importance of setting clear goals and adopting objective measures of performance that are shared across the research community. However, optimizing for a single metric or set of metrics often leads to tradeoffs and shortcuts when it comes to everything that isn't being measured and optimized for (a well-known effect on Kaggle, where winning models are often overly specialized for the specific benchmark they won and cannot be deployed on real-world versions of the underlying problem). In the case of AI, the focus on achieving task-specific performance while placing no conditions on how the system arrives at this performance has led to systems that, despite performing the target tasks well, largely do not feature the sort of human intelligence that the field of AI set out to build.

This has been interpreted by McCorduck as an "AI effect" where goalposts move every time progress in AI is made: "every time somebody figured out how to make a computer do something – play good checkers, solve simple but relatively informal problems – there was a chorus of critics to say, 'that's not thinking'" [61]. Similarly, Reed notes: "When we know how a machine does something 'intelligent', it ceases to be regarded as intelligent. If I beat the world's chess champion, I'd be regarded as highly bright." [77]. This interpretation arises from overly anthropocentric assumptions. As humans, we can only display high skill at a specific task if we have the ability to efficiently acquire skills in general, which corresponds to intelligence as characterized in II. No one is born knowing chess, or predisposed specifically for playing chess. Thus, if a human plays chess at a high level, we can safely assume that this person is intelligent, because we implicitly know that they had to use their general intelligence to acquire this specific skill over their lifetime, which reflects their general ability to acquire many other possible skills in the same way. But the same assumption does not apply to a non-human system that does not arrive at competence the way humans do. If intelligence lies in the process of acquiring skills, then there is no task X such that skill at X demonstrates intelligence, unless X is actually a meta-task involving skill-acquisition across a broad range of tasks.
The "AI effect" characterization is confusing the process of intelligence (such as the intelligence displayed by researchers creating a chess-playing program) with the artifact produced by this process (the resulting chess-playing program), due to these two concepts being fundamentally intertwined in the case of humans. We discuss this further in II.1.

Task-specific performance is a perfectly appropriate and effective measure of success if and only if handling the task as initially specified is the end goal of the system – in other words, if our measure of performance captures exactly what we expect of the system. However, it is deficient if we need systems that can show autonomy in handling situations that the system creator did not plan for, that can dynamically adapt to changes in the task – or in the context of the task – without further human intervention, or that can be repurposed for other tasks. Meanwhile, robustness and flexibility are increasingly being perceived as important requirements for certain broader subfields of AI, such as L5 self-driving, domestic robotics, or personal assistants; there is even increasing interest in generality itself (e.g. developmental robotics [4], artificial general intelligence [28]). This points to a need to move beyond skill-based evaluation for such endeavours, and to find ways to evaluate robustness and flexibility, especially in a cross-task setting, up to generality. But what do we really mean when we talk about robustness, flexibility, and generality?

# I.3.2 The spectrum of generalization: robustness, flexibility, generality

Even though such machines might do some things as well as we do them, or perhaps even better, they would inevitably fail in others, which would reveal they were acting not through understanding, but only from the disposition of their organs.

René Descartes, 1637

The resurgence of machine learning in the 1980s has led to an interest in formally defining, measuring, and maximizing generalization. Generalization is a concept that predates machine learning, originally developed to characterize how well a statistical model performs on inputs that were not part of its training data. In recent years, the success of Deep Learning [52], as well as increasingly frequent run-ins with its limitations (see e.g. [51, 16, 59]), have triggered renewed interest in generalization theory in the context of machine learning (see e.g. [102, 67, 45, 70, 17, 49]). The notion of generalization can be formally defined in various contexts (in particular, statistical learning theory [92] provides a widely-used formal definition that is relevant for machine learning, and we provide a more general formalization in II.2). We can informally define "generalization" or "generalization power" for any AI system to broadly mean "the ability to handle situations (or tasks) that differ from previously encountered situations".

The notion of "previously encountered situation" is somewhat ambiguous, so we should distinguish between two types of generalization:

• System-centric generalization: this is the ability of a learning system to handle situations it has not itself encountered before. The formal notion of generalization error in statistical learning theory would belong here.

– For instance, if an engineer develops a machine learning classification algorithm and fits it on a training set of N samples, the "generalization" of this learning algorithm would refer to its classification error over images not part of the training set.
– Note that the generalization power of this algorithm may be in part due to prior knowledge injected by the developer of the system. This prior knowledge is ignored by this measure of generalization. • Developer-aware generalization: this is the ability of a system, either learning or static, to handle situations that neither the system nor the developer of the system have encountered before. – For instance, if an engineer uses a “development set” of N samples to create a static classification algorithm that uses hard-coded heuristic rules, the “general- ization” of this static algorithm would refer to its classification error over images not part of the “development set”. – Note that “developer-aware generalization” is equivalent to “system-centric gen- eralization” if we include the developer of the system as part of the system. – Note that “developer-aware generalization” accounts for any prior knowledge that the developer of the system has injected into it. “System-centric generaliza- tion” does not. In addition, we find it useful to qualitatively define degrees of generalization for information- processing systems: • Absence of generalization: The notion of generalization as we have informally de- fined above fundamentally relies on the related notions of novelty and uncertainty: a system can only generalize to novel information that could not be known in advance to either the system or its creator. AI systems in which there is no uncertainty do not display generalization. For instance, a program that plays tic-tac-toe via exhaustive iteration cannot be said to “generalize” to all board configurations. Likewise, a sort- ing algorithm that is proven to be correct cannot be said to “generalize” to all lists of integers, much like proven mathematical statements cannot be said to “generalize” to all objects that match the assumptions of their proof 3. 3This is a distinct definition from “generalization” in mathematics, where “to generalize” means to extend the scope of application of a statement by weakening its assumptions. 10 • Local generalization, or “robustness”: This is the ability of a system to handle new points from a known distribution for a single task or a well-scoped set of known tasks, given a sufficiently dense sampling of examples from the distribution (e.g. tolerance to anticipated perturbations within a fixed context). For instance, an image classifier that can distinguish previously unseen 150x150 RGB images containing cats from those containing dogs, after being trained on many such labeled images, can be said to perform local generalization. One could characterize it as “adaptation to known unknowns within a single task or well-defined set of tasks”. This is the form of generalization that machine learning has been concerned with from the 1950s up to this day. • Broad generalization, or “flexibility”: This is the ability of a system to handle a broad category of tasks and environments without further human intervention. This includes the ability to handle situations that could not have been foreseen by the creators of the system. This could be considered to reflect human-level ability in a single broad activity domain (e.g. household tasks, driving in the real world), and could be characterized as “adaptation to unknown unknowns across a broad category of related tasks”. 
For instance, a L5 self-driving vehicle, or a domestic robot capable of passing Wozniak’s coffee cup test (entering a random kitchen and making a cup of coffee) [99] could be said to display broad generalization. Arguably, even the most advanced AI systems today do not belong in this category, although there is increasing research interest in achieving this level. • Extreme generalization: This describes open-ended systems with the ability to han- dle entirely new tasks that only share abstract commonalities with previously encoun- tered situations, applicable to any task and domain within a wide scope. This could be characterized as “adaptation to unknown unknowns across an unknown range of tasks and domains”. Biological forms of intelligence (humans and possibly other in- telligent species) are the only example of such a system at this time. A version of extreme generalization that is of particular interest to us throughout this document is human-centric extreme generalization, which is the specific case where the scope considered is the space of tasks and domains that fit within the human experience. We will refer to “human-centric extreme generalization” as “generality”. Importantly, as we deliberately define generality here by using human cognition as a reference frame (which we discuss in II.1.2), it is only “general” in a limited sense. Do note, however, that humans display extreme generalization both in terms of system-centric gener- alization (quick adaptability to highly novel situations from little experience) and developer-aware generalization (ability of contemporary humans to handle situations that previous humans have never experienced during their evolutionary history). To this list, we could, theoretically, add one more entry: “universality”, which would extend “generality” beyond the scope of task domains relevant to humans, to any task that could be practically tackled within our universe (note that this is different from “any task at all” as understood in the assumptions of the No Free Lunch theorem [98, 97]). We discuss in II.1.2 why we do not consider universality to be a reasonable goal for AI. 11 Crucially, the history of AI has been one of slowly climbing up this spectrum, start- ing with systems that largely did not display generalization (symbolic AI), and evolving towards robust systems (machine learning) capable of local generalization. We are now entering a new stage, where we seek to create flexible systems capable of broad generaliza- tion (e.g. hybrid symbolic and machine learning systems such as self-driving vehicles, AI assistants, or cognitive developmental robots). Skill-focused task-specific evaluation has been appropriate for close-ended systems that aim at robustness in environments that only feature known unknowns, but developing systems that are capable of handling unknown unknowns requires evaluating their abilities in a general sense. Importantly, the spectrum of generalization outlined above seems to mirror the organi- zation of humans cognitive abilities as laid out by theories of the structure of intelligence in cognitive psychology. 
Major theories of the structure of human intelligence (CHC [62], g-VPR [48]) all organize cognitive abilities in a hierarchical fashion (figure 1), with three strata (in CHC): general intelligence (g factor) at the top, broad abilities in the middle, and specialized skills or test tasks at the bottom (this extends to 4 strata for g-VPR, which splits broad abilities into two layers), albeit the taxonomy of abilities differs between theories. Here, "extreme generalization" corresponds to the g factor, "broad generalization" across a given domain corresponds to a broad cognitive ability, and "local generalization" (as well as the no-generalization case) corresponds to task-specific skill. Measuring such broad abilities (and possibly generality itself) rather than specific skills has historically been the problematic of the field of psychometrics. Could psychometrics inform the evaluation of abilities in AI systems?

[Figure 1: Hierarchical model of cognitive abilities and its mapping to the spectrum of generalization. Three levels are depicted: general intelligence (the g factor), mapped to extreme generalization; broad cognitive abilities, mapped to broad generalization; and task-specific skills, mapped to local generalization (or no generalization, i.e. absence of uncertainty).]

Note that, in what follows:

• We use "broad abilities" to refer to cognitive abilities that lead to broad or extreme generalization. Developing such abilities should be the goal of any researcher interested in flexible AI or general AI. "Broad abilities" is often meant in opposition to "local generalization".

• We use "generalization" to refer to the entire spectrum of generalization, starting with local generalization.

• Because human general intelligence (the g factor) is itself a very broad cognitive ability (the top of the hierarchy of abilities), we use the term "intelligence" or "general intelligence" to refer to extreme generalization as defined above.

# I.3.3 Measuring broad abilities and general intelligence: the psychometrics perspective

It seems to us that in intelligence there is a fundamental faculty, the alteration or the lack of which, is of the utmost importance for practical life. This faculty is [...] the faculty of adapting one's self to circumstances.

Alfred Binet, 1916

In the early days of the 20th century, Binet and Simon, looking for a formal way to distinguish children with mental disabilities from those with behavior problems, developed the Binet-Simon scale [8], the first test of intelligence, founding the field of psychometrics. Immediately after, Spearman observed that individual results across different, seemingly unrelated types of intelligence tests were correlated, and hypothesized the existence of a single factor of general intelligence, the g factor [83, 84]. Today, psychometrics is a well-established subfield of psychology that has arrived at some of the most reproducible results of the field. Modern intelligence tests are developed by following strict standards regarding reliability (low measurement error, a notion tied to reproducibility), validity (measuring what one purports to be measuring, a notion tied to statistical consistency and predictiveness), standardization, and freedom from bias – see e.g. Classical Test Theory (CTT) [20] and Item Response Theory (IRT) [34].

A fundamental notion in psychometrics is that intelligence tests evaluate broad cognitive abilities as opposed to task-specific skills.
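As a purely illustrative aside (not part of the original argument), the statistical phenomenon Spearman observed – a "positive manifold" of correlations among test scores from which a single dominant factor emerges – can be sketched in a few lines. The simulated battery, factor loadings, and noise level below are arbitrary assumptions chosen only to make the effect visible.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical battery: 6 tests taken by 1000 people. Each score is driven by a
# shared latent ability plus test-specific noise -- an assumption made purely to
# illustrate how one dominant factor can emerge from a table of test scores.
n_subjects, n_tests = 1000, 6
latent_ability = rng.normal(size=(n_subjects, 1))
loadings = rng.uniform(0.5, 0.9, size=(1, n_tests))  # how strongly each test taps the latent ability
scores = latent_ability @ loadings + rng.normal(scale=0.6, size=(n_subjects, n_tests))

# All pairwise test correlations come out positive (the "positive manifold"),
# and the first eigenvalue of the correlation matrix dominates the others --
# the statistical signature summarized as the g factor.
correlations = np.corrcoef(scores, rowvar=False)
eigenvalues = np.sort(np.linalg.eigvalsh(correlations))[::-1]
print(np.round(correlations, 2))
print("variance share of first factor:", round(float(eigenvalues[0]) / n_tests, 2))
```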
Theories of the structure of intelligence (such as CHC, g-VPR), which have co-evolved with psychometric testing (statistical phe- nomena emerging from test results have informed these theories, and these theories have informed test design) organize these abilities in a hierarchical fashion (figure 1), rather sim- ilarly to the spectrum of generalization we presented earlier. Importantly, an ability is an abstract construct (based on theory and statistical phenomena) as opposed to a directly mea- surable, objective property of an individual mind, such as a score on a specific test. Broad abilities in AI, which are also constructs, fall into the exact same evaluation problematics 13 as cognitive abilities from psychometrics. Psychometrics approaches the quantification of abilities by using broad batteries of test tasks rather than any single task, and by analysing test results via probabilistic models. Importantly, the tasks should be previously unknown to the test-taker, i.e., we assume that test-takers do not practice for intelligence tests. This approach is highly relevant to AI evaluation. Remarkably, in a parallel to psychometrics, there has been recent and increasing inter- est across the field of AI in using broad batteries of test tasks to evaluate systems that aim at greater flexibility. Examples include the Arcade Learning Environment for Reinforce- ment Learning agents [6], Project Malm ¨O [71], the Behavior Suite [68], or the GLUE [95] and SuperGLUE [94] benchmarks for natural language processing. The underlying logic of these efforts is to measure something more general than skill at one specific task by broadening the set of target tasks. However, when it comes to assessing flexibility, a crit- ical defect of these multi-task benchmarks is that the set of tasks is still known in advance to the developers of any test-taking system, and it is fully expected that test-taking systems will be able to practice specifically for the target tasks, leverage task-specific built-in prior knowledge inherited from the system developers, leverage external knowledge obtained via pre-training, etc. As such, these benchmarks still appear to be highly gameable (see e.g. II.1.1) – merely widening task-specific skill evaluation to more tasks does not produce a qualitatively different kind of evaluation. Such benchmarks are still looking at skills, rather than abilities, in contrast with the psychometrics approach (this is not to say that such benchmarks are not useful; merely that such static multi-task benchmarks do not directly assess flexibility or generality). In addition to these multi-task benchmarks, a number of more ambitious test suites for cognitive abilities of AI have been proposed in the past but have not been imple- mented in practice: the Newell test by Anderson and Lebiere ([2], named in reference to [66]), the BICA “cognitive decathlon” targeted at developmental robotics [65], the Turing Olympics [27], and the I-Athlon [1]. Lacking concrete implementations, it is difficult to assess whether these projects would have been able to address the ability evaluation prob- lem they set out to solve. On the other hand, two similarly-spirited but more mature test suite have emerged recently, focused on generalization capabilities as opposed to specific tasks: the Animal-AI Olympics [7] (animalaiolympics.com) and the GVGAI competition [72] (gvgai.net). 
Both take the position that AI agents should be evaluated on an unseen set of tasks or games, in order to test learning or planning abilities rather than special-purpose skill. Both feature a multi-game environment and an ongoing public competition. # I.3.4 Integrating AI evaluation and psychometrics Besides efforts to broaden task-specific evaluation to batteries of multi-task tests, there have been more direct and explicit attempts to integrate AI evaluation and psychometrics. A first approach is to reuse existing psychometric intelligence tests, initially developed for humans, as a way to assess intelligence in AI systems – perhaps an obvious idea if we are to take the term “artificial intelligence” literally. This idea was first proposed by Green in 1964 [29], and was, around the same time, explored by Evans [24], who wrote a LISP program 14 called ANALOGY capable of solving a geometric analogy task of the kind that may be found in a pyschometric intelligence test. Newell suggested the idea again in 1973 [66] in his seminal paper You can’t play 20 questions with Nature and win. It was proposed again and refined by Bringsjord et al. in the 2000s under the name “Psychometric AI” (PAI) [9]. However, it has since become apparent that it is possible for AI system developers to game human intelligence tests, because the tasks used in these tests are available to the system developers, and thus the developers can straightforwardly solve the abstract form of these problems themselves and hard-code the solution in program form (see, for instance, [23, 80, 44]), much like Evans did with in the 1960s with the ANALOGY program. Effectively, in this case, it is the system developers who are solving the test problems, rather than any AI. The implicit assumptions that psychometric test designers make about human test-takers turn out to be difficult to enforce in the case of machines. An alternative, more promising approach is to leverage what psychometrics can teach us about ability assessment and test design to create new types of benchmarks targeted specifically at evaluating broad abilities in AI systems. Along these lines, Hern´andez- Orallo et al. have proposed extending psychometric evaluation to any intelligent system, including AI agents and animals, in “Universal Psychometrics” [39]. We argue that several important principles of psychometrics can inform intelligence evaluation in AI in the context of the development of broad AI and general AI: • Measuring abilities (representative of broad generalization and skill-acquisition effi- ciency), not skills. Abilities are distinct from skills in that they induce broad general- ization, i.e. they form the basis for skill across a broad range of tasks, including tasks that were previously unknown to the ability-enabled system and its developers. • Doing so via batteries of tasks rather than any single task, that should be previously unknown to both the test taking system and the system developers (this is necessary to assess broad generalization as opposed to skill or local generalization). • Having explicit standards regarding reliability, validity, standardization, and freedom from bias: – Reliability implies that the test results for a given system should be reproducible over time and across research groups. – Validity implies that what the test assesses should be clearly understood; test creators should be able to answer 1) what assumptions does the test make? 2) what does the test predict, i.e. 
what broad abilities would a successful result demonstrate, and how well does the test predict these abilities? (Which should ideally be achieved via statistical quantification.) – Standardization implies adopting shared benchmarks across the subset of the research community that pursues broad AI and general AI. Standard benchmarks in computer vision and natural language processing have already shown to be highly effective catalyzers of progress. – Freedom from bias implies that the test should not be biased against groups of test-takers in ways that run orthogonal to the abilities being assessed. For in- 15 stance, a test of intelligence designed for both humans and AI should not lever- age uniquely human acquired knowledge, or should not involve constraints un- related to intelligence within which machines have unfair advantages (such as fast reaction times), etc. Simultaneously, we argue that certain other aspects of psychometrics may be discarded in the development of new intelligence tests for AI: • The exact number and taxonomy of cognitive abilities considered, being a subject of ongoing debate within cognitive psychology and being perhaps overly anthropocen- tric, should not be used as a strict template for artificial cognitive architectures and their evaluation. Existing taxonomies may at best serve as a source of inspiration. • A number of abilities being assessed by psychometric intelligence tests are crystal- lized abilities (e.g. reading and writing), i.e. abilities that are acquired through expe- rience, which are not clearly distinguishable from skills (they are effectively multi- purpose skills). We argue that AI tests that seek to assess flexibility and generality should not consider crystallized abilities, but rather, should focus on abilities that enable new skill acquisition. If a system possesses abilities that enable efficient skill- acquisition in a domain, the system should have no issue in developing corresponding skills and crystallized abilities. # I.3.5 Current trends in broad AI evaluation Despite a rising interest in building flexible systems, or even in generality itself, for the most part the AI community has not been paying much attention to psychometric evalua- tion, Psychometric AI, or Universal Psychometrics. If we are to assess the contemporary zeitgeist of broad AI evaluation, here is what we see. 4 First, we note several positive developments. Since 2017, there is increasing awareness that one should seek to establish some form of generalization in evaluating Reinforcement Learning (RL) algorithms (e.g. [50, 70, 17, 49]), which was previously a stark problem [76, 35, 101, 70], as RL agents have for a long time been tested on their training data. Further, there is increasing interest in evaluating the data-efficiency of learning algorithms (e.g. [10]), in particular in the context of RL for games such as Atari games or Minecraft (e.g. [71, 33]). Lastly, as noted in I.3.3, there has been a trend towards leveraging multi-task benchmarks as a way to assess robustness and flexibility (e.g. [6, 71, 68, 95, 94]). Unfortunately, we must also note several negatives. The robustness of the systems being developed, in particular Deep Learning models, is often problematic (see e.g. [16, 59]). This is due in large part to the fact that most benchmarks do not pay much attention to formally assessing robustness and quantifying generalization, and thus can be solved via “shortcuts” that gradient descent is apt at exploiting (e.g. 
surface statistics such as textures in the case of computer vision [46]). Likewise, the reproducibility (reliability) of 4Because broad AI research is currently largely dominated by Reinforcement Learning (RL) approaches, many of our observations here are specific to RL. 16 research findings is often an issue [73], especially in Reinforcement Learning, although some progress has been made on this front. Most importantly, the evaluation of any ability that goes decisively beyond local gen- eralization is still largely a green field, and little effort has been devoted to investigate it. Hern´andez-Orallo noted in 2017 that “ability-oriented and general-purpose evaluation ap- proaches [...] are still very incipient, and more research and discussion is needed” [36]. Recent attempts at broadening task-specific benchmarks by including multiple tasks do not measure developer-aware generalization, as the tasks are all known in advance to system developers (as noted in I.3.3). Attempts at assessing generalization by testing RL systems on previously unseen game levels, like CoinRun [17] or Obstacle Tower [49], are still only looking at task-specific local generalization, by evaluating a candidate system on new sam- ples from a known distribution rather than using a substantially new task (as suggested in III.3). In addition, the fact the level-generation programs used are available to the AI devel- opers means it is possible to “cheat” on these benchmarks by sampling arbitrary amounts of training data (cf. II.1.1). Further, contemporary research “moonshots” that are publicly advertised as being steps towards general intelligence appear to still be focusing on skill-based task-specific evalua- tion for board games and video games (e.g. Go [82, 81] and StarCraft [93] for DeepMind, DotA2 [89] for OpenAI) via highly-mediatized confrontations with top human players. Despite claims of progress towards general AI in associated public communications5, such evaluation does not involve any measure of generalization power, and has little-to-no over- lap with the development of flexibility and generality, as we outline in II.1. For example, although OpenAI’s DotA2-playing AI “Five” was trained on 45,000 years of play and was able to beat top human players [89], it has proven very brittle, as non-champion human players were able to find strategies to reliably beat it in a matter of days after the AI was made available for the public to play against [90]. In addition, Five did not even generalize to DotA2 in the first place: it could only play a restricted version of the game, with 16 characters instead of over 100. Likewise, AlphaGo and its successor AlphaZero, developed in 2016 and 2017, have not yet found any application outside of board games, to the best of our knowledge. We deplore this discrepancy between a focus on surpassing humans at tests of skill on one hand (while entirely disregarding whether the methods through which skill is achieved are generalizable), and a manifest interest in developing broad abilities on the other hand – an endeavour entirely orthogonal to skill itself. We hypothesize that this discrepancy is due to a lack of a clear conceptualization of intelligence, skill, and generalization, as well as a lack of appropriate measures and benchmarks for broad cognitive abilities. In what follows, we expose in more detail the issue with using task-specific “moonshots” (e.g. 
achieving better-than-human performance in a video game or board game) as stepping stones towards more general forms of AI, and we propose a formal definition of intelligence meant to be actionable in the pursuit of flexible AI and general AI.

5 OpenAI public statement: "Five is a step towards advanced AI systems which can handle the complexity and uncertainty of the real world".

# II A new perspective

# II.1 Critical assessment

# II.1.1 Measuring the right thing: evaluating skill alone does not move us forward

In 1973, psychologist and computer science pioneer Allen Newell, worried that recent advances in cognitive psychology were not bringing the field any closer to a holistic theory of cognition, published his seminal paper You can't play 20 questions with nature and win [66], which helped focus research efforts on cognitive architecture modelling, and provided new impetus to the longstanding quest to build a chess-playing AI that would outperform any human. Twenty-four years later, in 1997, IBM's DeepBlue beat Garry Kasparov, the best chess player in the world, bringing this quest to an end [12]. When the dust settled, researchers were left with the realization that building an artificial chess champion had not actually taught them much, if anything, about human cognition. They had learned how to build a chess-playing AI, and neither this knowledge nor the AI they had built could generalize to anything other than similar board games.

It may be obvious from a modern perspective that a static chess-playing program based on minimax and tree search would not be informative about human intelligence, nor competitive with humans in anything other than chess. But it was not obvious in the 1970s, when chess-playing was thought by many to capture, and require, the entire scope of rational human thought. Perhaps less obvious in 2019 is that efforts to "solve" complex video games using modern machine learning methods still follow the same pattern. Newell wrote [66]: "we know already from existing work [psychological studies on humans] that the task [chess] involves forms of reasoning and search and complex perceptual and memorial processes. For more general considerations we know that it also involves planning, evaluation, means-ends analysis and redefinition of the situation, as well as several varieties of learning – short-term, post-hoc analysis, preparatory analysis, study from books, etc.". The assumption was that solving chess would require implementing these general abilities. Chess does indeed involve these abilities – in humans. But while possessing these general abilities makes it possible to solve chess (and many more problems), by going from the general to the specific, inversely, there is no clear path from the specific to the general. Chess does not require any of these abilities, and can be solved by taking radical shortcuts that run orthogonal to human cognition.

Optimizing for single-purpose performance is useful and valid if one's measure of success can capture exactly what one seeks (as we outlined in I.3.1), e.g. if one's end goal is a chess-playing machine and nothing more. But from the moment the objective is settled, the process of developing a solution will be prone to taking all shortcuts available to satisfy the objective of choice – whether this process is gradient descent or human-driven research. These shortcuts often come with undesirable side-effects when it comes to considerations not incorporated in the measure of performance.
If the environment in which the system is to operate is too unpredictable for an all-encompassing objective function to be defined beforehand (e.g. most real-world applications of robotics, where systems face unknown 18 unknowns), or if one aims at a general-purpose AI that could be applied to a wide range of problems with no or little human engineering, then one must somehow optimize directly for flexibility and generality, rather than solely for performance on any specific task. This is, perhaps, a widely-accepted view today when it comes to static programs that hard-code a human-designed solution. When a human engineer implements a chatbot by specifying answers for each possible query via if/else statements, we do not assume this chatbot to be intelligent, and we do not expect it to generalize beyond the engineer’s speci- fications. Likewise, if an engineer looks at a specific IQ test task, comes up with a solution, and write down this solution in program form, we do not expect the program to general- ize to new tasks, and we do not believe that the program displays intelligence – the only intelligence at work here is the engineer’s. The program merely encodes the crystallized output of the engineer’s thought process – it is this process, not its output, that implements intelligence. Intelligence is not demonstrated by the performance of the output program (a skill), but by the fact that the same process can be applied to a vast range of previously unknown problems (a general-purpose ability): the engineer’s mind is capable of extreme generalization. Since the resulting program is merely encoding the output of that process, it is no more intelligent than the ink and paper used to write down the proof of a theorem. However, what of a program that is not hard-coded by humans, but trained from data to perform a task? A learning machine certainly may be intelligent: learning is a neces- sary condition to adapt to new information and acquire new skills. But being programmed through exposure to data is no guarantee of generalization or intelligence. Hard-coding prior knowledge into an AI is not the only way to artificially “buy” performance on the target task without inducing any generalization power. There is another way: adding more training data, which can augment skill in a specific vertical or task without affecting gener- alization whatsoever. Information processing systems form a spectrum between two extremes: on one end, static systems that consist entirely of hard-coded priors (such as DeepBlue or our if/else chatbot example), and on the opposite end, systems that incorporate very few priors and are almost entirely programmed via exposure to data (such as a hashtable or a densely- connected neural network). Most intelligent systems, including humans and animals, com- bine ample amounts of both priors and experience, as we point out in II.1.3. Crucially, the ability to generalize is an axis that runs orthogonal to the prior/experience plane. Given a learning system capable of achieving a certain level of generalization, modifying the sys- tem by incorporating more priors or more training data about the task can lead to greater task-specific performance without affecting generalization. In this case, both priors and experience serve as a way to “game” any given test of skill without having to display the sort of general-purpose abilities that humans would rely on to acquire the same skill. 
This can be readily demonstrated with a simple example: consider a hashtable that uses a locality-sensitive hash function (e.g. nearest neighbor) to map new inputs to previously seen inputs. Such a system implements a learning algorithm capable of local generalization, the extent of which is fixed (independent of the amount of data seen), determined only by the abstraction capabilities of the hash function. This system, despite only featuring trace amounts of generalization power, is already sufficient to "solve" any task for which unlimited training data can be generated, such as any video game. All that one has to do is obtain a dense sampling of the space of situations that needs to be covered, and associate each situation with an appropriate action vector.

Adding ever more data to a local-generalization learning system is certainly a fair strategy if one's end goal is skill on the task considered, but it will not lead to generalization beyond the data the system has seen (the resulting system is still very brittle, e.g. Deep Learning models such as OpenAI Five), and crucially, developing such systems does not teach us anything about achieving flexibility and generality. "Solving" any given task with beyond-human level performance by leveraging either unlimited priors or unlimited data does not bring us any closer to broad AI or general AI, whether the task is chess, football, or any e-sport.

Current evidence (e.g. [51, 46, 16, 59, 50]) points to the fact that contemporary Deep Learning models are local-generalization systems, conceptually similar to a locality-sensitive hashtable – they may be trained to achieve arbitrary levels of skill at any task, but doing so requires a dense sampling of the input-cross-target space considered (as outlined in [16]), which is impractical to obtain for high-value real-world applications, such as L5 self-driving (e.g. [5] notes that 30 million training situations is not enough for a Deep Learning model to learn to drive a car in a plain supervised setting). Hypothetically, it may be shown in the future that methods derived from Deep Learning could be capable of stronger forms of generalization, but demonstrating this cannot be done merely by achieving high skill, such as beating humans at DotA2 or StarCraft given unlimited data or unlimited engineering; instead, one should seek to precisely establish and quantify the generalization strength of such systems (e.g. by considering prior-efficiency and data-efficiency in skill acquisition, as well as the developer-aware generalization difficulty of the tasks considered). A central point of this document is to provide a formal framework for doing so (II.2 and II.3).

Failing to account for priors, experience, and generalization difficulty in our evaluation methods will prevent our field from climbing higher along the spectrum of generalization (I.3.2) and from eventually reaching general AI.

In summary, the hallmark of broad abilities (including general intelligence, as per II.1.2) is the power to adapt to change, acquire skills, and solve previously unseen problems – not skill itself, which is merely the crystallized output of the process of intelligence. Testing for skill at a task that is known in advance to system developers (as is the current trend in general AI research) can be gamed without displaying intelligence, in two ways: 1) unlimited prior knowledge, 2) unlimited training data.
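To make the hashtable argument above concrete, here is a minimal, hypothetical sketch of such a nearest-neighbor lookup "policy": it memorizes (situation, action) pairs and, at test time, replays the action stored for the closest previously seen situation. The vector encoding of situations and the distance metric are illustrative assumptions; nothing here is specific to any particular game or benchmark.

```python
import numpy as np

class NearestNeighborPolicy:
    """Locality-sensitive lookup table: local generalization only.

    Skill on the target task grows with the density of memorized situations,
    but nothing transfers to situations outside the sampled distribution.
    """

    def __init__(self):
        self.situations = []   # encoded situations seen so far
        self.actions = []      # action recorded for each situation

    def memorize(self, situation, action: int) -> None:
        self.situations.append(np.asarray(situation, dtype=float))
        self.actions.append(action)

    def act(self, situation) -> int:
        # Replay the action associated with the closest stored situation.
        distances = [np.linalg.norm(s - np.asarray(situation, dtype=float))
                     for s in self.situations]
        return self.actions[int(np.argmin(distances))]

# Hypothetical usage: densely sample a game's state space, record a good
# action for each state, then "solve" the game by pure lookup.
policy = NearestNeighborPolicy()
policy.memorize([0.0, 1.0], action=3)
policy.memorize([5.0, 5.0], action=1)
print(policy.act([0.2, 0.9]))  # -> 3 (action of the nearest memorized situation)
```

The point, as argued above, is that arbitrarily high task-specific skill can be bought this way given enough data, without any of the broad abilities that the rest of this section calls for.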
To actually assess broad abilities, and thus make progress toward flexible AI and eventually general AI, it is imperative that we control for priors, experience, and generalization difficulty in our evaluation methods, in a rigorous and quantitative way. # II.1.2 The meaning of generality: grounding the g factor It is a well-known fact of cognitive psychology that different individuals demonstrate dif- ferent cognitive abilities to varying degrees, albeit results across all tests of intelligence 20 are correlated. This points to cognition being a multi-dimensional object, structured in a hierarchical fashion (figure 1), with a single generality factor at the top, the g factor. But is “general intelligence” the apex of the cognitive pyramid in an absolute sense (as is some- times assumed by proponents of “Artificial General Intelligence”), or is it merely a broader cognitive ability, one that would remain fairly specialized, and wouldn’t be qualitatively distinct from other abilities lower down the hierarchy? How general is human intelligence? The No Free Lunch theorem [98, 97] teaches us that any two optimization algorithms (including human intelligence) are equivalent when their performance is averaged across every possible problem, i.e. algorithms should be tailored to their target problem in order to achieve better-than-random performance. However, what is meant in this context by “every possible problem” refers to a uniform distribution over problem space; the distribution of tasks that would be practically relevant to our universe (which, due to its choice of laws of physics, is a specialized environment) would not fit this definition. Thus we may ask: is the human g factor universal? Would it generalize to every possible task in the universe? This is a question that is largely irrelevant for psychometrics, because as a subfield of psychology, it makes the implicit assumption that it is concerned solely with humans and the human experience. But this question is highly relevant when it comes to AI: if there is such a thing as universal intelligence, and if human intelligence is an implementation of it, then this algorithm of universal intelligence should be the end goal of our field, and reverse-engineering the human brain could be the shortest path to reach it. It would make our field close-ended: a riddle to be solved. If, on the other hand, human intelligence is a broad but ad-hoc cognitive ability that generalizes to human-relevant tasks but not much else, this implies that AI is an open-ended, fundamentally anthropocentric pursuit, tied to a specific scope of applicability. This has implications for how we should measure it (by using human intelligence and human tasks as a reference) and for the research strategies we should follow to achieve it. The g factor, by definition, represents the single cognitive ability common to success across all intelligence tests, emerging from applying factor analysis to test results across a diversity of tests and individuals. But intelligence tests, by construction, only encompass tasks that humans can perform – tasks that are immediately recognizable and understand- able by humans (anthropocentric bias), since including tasks that humans couldn’t perform would be pointless. Further, psychometrics establishes measurement validity by demon- strating predictiveness with regard to activities that humans value (e.g. scholastic success): the very idea of a “valid” measure of intelligence only makes sense within the frame of reference of human values. 
In fact, the interpretation of what specific abilities make someone “intelligent” vary from culture to culture [100, 86, 18]. More broadly, humans have historically had a poor track record when it comes to attributing intelligence to complex information-processing agents around them, whether looking at humans from other cultures or at animals (such as octopuses, dolphins, great apes, etc.). We only reluctantly open up to the possibility that systems different from ourselves may be “intelligent” if they display relatable human- like behaviors that we associate with intelligence, such as language or tool use; behaviors that have high intrinsic complexity and high adaptability but that are not directly relatable 21 (such as octopus camouflage) are not perceived as intelligent. This observation extends to collective entities (e.g. markets, companies, Science as an institution) and natural processes (e.g. biological evolution). Although they can be modeled as standalone systems whose abilities and behavior match broadly accepted definitions of intelligence (achieving goals across a wide range of environments, demonstrating flexibility and adaptability, etc.), we do not categorize these systems as intelligent, simply because they aren’t sufficiently human- like. To use a well-known cross-domain analogy [25]: much like “intelligence”, the notion of “physical fitness” (as it pertains to sports and other physical activities) is an intuitively- understandable, informal, yet useful concept. Like intelligence, fitness is not easily re- ducible to any single factor (such as a person’s age or muscle mass), rather, it seems to emerge from a constellation of interdependent factors. If we sought to rigorously mea- sure physical fitness in humans, we would come up with a set of diverse tests such as running a 100m, running a marathon, swimming, doing sit-ups, doing basketball throws, etc., not unlike IQ test suites. Across tests results, we would observe clusters of correla- tions, corresponding to broad “physical abilities” strictly analogous to cognitive abilities (e.g. lung capacity might be such an “ability” inducing correlations across tests). Much like in the case of cognitive abilities, experts would probably disagree and debate as to the exact taxonomy of these broad abilities (is being “tall and lean” an ability, or is “tallness” a standalone factor?). And crucially, we should intuitively expect to find that all tests results would be correlated: we would observe a physical g factor, corresponding to the general intuitive construct of “physical fitness”. But would this mean that human morphology and motor affordances are “general” in an absolute sense, and that a very fit person could handle any physical task at all? Certainly not; we are not adapted for the large majority of environments that can be found in the uni- verse – from the Earth’s oceans to the surface of Venus, from the atmosphere of Jupiter to interstellar space. It is, however, striking and remarkable that human physical abilities gen- eralize to a far greater range of environments and tasks than the limited set of environments and activities that guided their evolution. To caricature, human bodies evolved for running in the East-African savanna, yet they are capable of climbing mount Everest, swimming across lakes, skydiving, playing basketball, etc. This is not a coincidence; by necessity, evolution optimizes for adaptability, whether cognitive adaptability or sensorimotor adapt- ability. 
Human physical capabilities can thus be said to be “general”, but only in a limited sense; when taking a broader view, humans reveal themselves to be extremely specialized, which is to be expected given the process through which they evolved. We argue that human cognition follows strictly the same pattern as human physical capabilities: both emerged as evolutionary solutions to specific problems in specific en- vironments (commonly known as “the four Fs”). Both were, importantly, optimized for adaptability, and as a result they turn out to be applicable for a surprisingly greater range of tasks and environments beyond those that guided their evolution (e.g. piano-playing, solving linear algebra problems, or swimming across the Channel) – a remarkable fact that should be of the utmost interest to anyone interested in engineering broad or general- purpose abilities of any kind. Both are multi-dimensional concepts that can be modeled as 22 a hierarchy of broad abilities leading up to a “general” factor at the top. And crucially, both are still ultimately highly specialized (which should be unsurprising given the context of their development): much like human bodies are unfit for the quasi-totality of the universe by volume, human intellect is not adapted for the large majority of conceivable tasks. This includes obvious categories of problems such as those requiring long-term plan- ning beyond a few years, or requiring large working memory (e.g. multiplying 10-digit numbers). This also includes problems for which our innate cognitive priors are unadapted; for instance, humans can be highly efficient in solving certain NP-hard problems of small size when these problems present cognitive overlap with evolutionarily familiar tasks such as navigation (e.g. the Euclidean Traveling Salesman Problem (TSP) with low point count can be solved by humans near-optimally in near-linear optimal time [58], using perceptual strategies), but perform poorly – often no better than random search – for problem instances of very large size or problems with less cognitive overlap with evolutionarily familiar tasks (e.g. certain non-Euclidean problems). For instance, in the TSP, human performance de- grades severely when inverting the goal from “finding the shortest path” to “finding the longest path” [57] – humans perform even worse in this case than one of the simplest pos- sible heuristic: farthest neighbor construction. 6 A particularly marked human bias is dimensional bias: humans show excellent perfor- mance on 2D navigation tasks and 2D shape-packing puzzles, and can still handle 3D cases albeit with greatly reduced performance, but they are effectively unable to handle 4D and higher. This fact is perhaps unsurprising given human reliance on perceptual strategies for problem-solving – strategies which are backed by neural mechanisms specifically evolved for 2D navigation (hippocampal systems of place cells and grid cells [64]). Thus, a central point of this document is that “general intelligence” is not a binary property which a system either possesses or lacks. It is a spectrum, tied to 1) a scope of application, which may be more or less broad, and 2) the degree of efficiency with which the system translate its priors and experience into new skills over the scope considered, 3) the degree of generalization difficulty represented by different points in the scope considered (see II.2). 
In addition, the “value” of one scope of application over another is entirely subjective; we wouldn’t be interested in (and wouldn’t even perceive as intelligent) a system whose scope of application had no intersection with our own. As such, it is conceptually unsound to set “artificial general intelligence” in an absolute sense (i.e. “universal intelligence”) as a goal. To set out to build broad abilities of any kind, one must start from a target scope, and one must seek to achieve a well-defined intelligence threshold within this scope: AI is a deeply contextual and open-ended endeavour, not a sin- gle one-time riddle to be solved. However, it may in theory be possible to create human-like artificial intelligence: we may gradually build systems that extend across the same scope of applicability as human intelligence, and we may gradually increase their generalization power within this scope until it matches that of humans. We may even build systems with higher generalization power (as there is no a priori reason to assume human cognitive ef- 6This does not necessarily mean that humanity as a collective is incapable of solving these problems; pooling individual humans over time or augmenting human intellect via external resources leads to increased generality, albeit this increase remains incremental, and still fundamentally differs from universality. 23 ficiency is an upper bound), or systems with a broader scope of application. Such systems would feature intelligence beyond that of humans. In conclusion, we propose that research on developing broad in AI systems (up to “gen- eral” AI, i.e. AI with a degree of generality comparable to human intelligence) should focus on defining, measuring, and developing a specifically human-like form of intelligence, and should benchmark progress specifically against human intelligence (which is itself highly specialized). This isn’t because we believe that intelligence that greatly differs from our own couldn’t exist or wouldn’t have value; rather, we recognize that characterizing and measuring intelligence is a process that must be tied to a well-defined scope of applica- tion, and at this time, the space of human-relevant tasks is the only scope that we can meaningfully approach and assess. We thus disagree with the perspective of Universal Psychometrics [39] or Legg and Hutter’s Universal Intelligence [54], which reject anthro- pocentrism altogether and seek to measure all intelligence against a single absolute scale. An anthropocentric frame of reference is not only legitimate, it is necessary. # II.1.3 Separating the innate from the acquired: insights from developmental psychology Advances in developmental psychology teach us that neither of the two opposing views of the nature of the mind described in I.2 are accurate (see e.g. [85]): the human mind is not merely a collection of special-purpose programs hard-coded by evolution; it is capable of a remarkable degree of generality and open-endedness, going far beyond the scope of envi- ronments and tasks that guided its evolution. The large majority of the skills and knowledge we possess are acquired during our lifetimes, rather than innate. Simultaneously, the mind is not a single, general-purpose “blank slate” system capable of learning anything from ex- perience. 
Our cognition is specialized, shaped by evolution in specific ways; we are born with priors about ourselves, about the world, and about how to learn, which determine what categories of skills we can acquire and what categories of problems we can solve. These priors are not a limitation to our generalization capabilities; to the contrary, they are their source, the reason why humans are capable of acquiring certain categories of skills with remarkable efficiency. The central message of the No Free Lunch theorem [98] is that to learn from data, one must make assumptions about it – the nature and structure of the innate assumptions made by the human mind are precisely what confers to it its powerful learning abilities. We noted in II.1.1 that an actionable measure of intelligence should, crucially, con- trol for priors and experience. We proposed in II.1.2 that evaluating general intelligence should leverage human intelligence as a necessary frame of reference. It follows that we need a clear understanding of human cognitive priors in order to fairly evaluate general intelligence between humans and machines. Human cognitive priors come in multiple forms, in particular 7: 7The boundaries between these categories may be fluid; the distinction between low-level sensorimotor priors and high-level knowledge priors is more one of degree than one of nature; likewise, the distinction between meta- 24 • Low-level priors about the structure of our own sensorimotor space, e.g. reflexes such as the vestibulo-ocular reflex, the palmar grasp reflex, etc. These priors enable infants (including prior to birth) to quickly take control of their senses and bodies, and may even generate simple behaviors in a limited range of situations. • Meta-learning priors governing our learning strategies and capabilities for knowledge acquisition. This may include, for instance, the assumption that information in the universe follows a modular-hierarchical structure, as well as assumptions regarding causality and spatio-temporal continuity. • High-level knowledge priors regarding objects and phenomena in our external envi- ronment. This may include prior knowledge of visual objectness (what defines an object), priors about orientation and navigation in 2D and 3D Euclidean spaces, goal- directedness (expectation that our environment includes agents that behave according to goals), innate notions about natural numbers, innate social intuition (e.g. theory of mind), etc. When it comes to creating artificial human-like intelligence, low-level sensorimotor pri- ors are too specific to be of interest (unless one seeks to build an artificial human body). While human meta-learning priors should be of the utmost interest (understanding the strategies that the brain follows to turn experience into knowledge and skills is effectively our end goal), these priors are not relevant to evaluating intelligence: they are intelligence, rather than a third-party modulating factor to be controlled for. They are part of the black box that we seek to characterize. It is knowledge priors that should be accounted for when measuring a human-like form of intelligence. A system that does not possess human innate knowledge priors would be at a critical disadvantage compared to humans when it comes to efficiently turning a given ex- perience curriculum into skill at a given human task. 
Inversely, a system that has access to more extensive hard-coded knowledge about the task at hand could not be fairly compared to human intelligence – as we noted in II.1.1, unlimited priors allow system developers to “buy” unbounded performance on any given task, with no implications with regard to gen- eralization abilities (what we are actually trying to achieve). Therefore, we propose that an actionable test of human-like general intelligence should be founded on innate human knowledge priors: • The priors should be made as close as possible to innate human knowledge priors as we understand them. As our understanding of human knowledge priors improves over time, so should the test evolve. • The test should assume that the system being measured possesses a specific set of priors. AI systems with more extensive priors should not be benchmarked using such a test. AI systems with fewer priors should be understood to be at a disadvantage. learning priors and knowledge priors is subjective since knowledge facilitates skill acquisition: for instance, the neural mechanism behind our capabilities to perform 2D navigation may be treated either as a specialized meta- learning prior or as a knowledge prior about the external world. 25 • The priors assumed by the test should be explicitly and exhaustively described. Im- portantly, current psychometric intelligence tests make many assumptions about prior knowledge held by the test-taker (either innate or acquired), but never explicitly de- scribe these assumptions. • To make sure that humans test-takers do not bring further priors to the test, the test tasks should not rely on any acquired human knowledge (i.e. any knowledge beyond innate prior knowledge). For instance, they should not rely on language or learned symbols (e.g. arrows), on acquired concepts such as “cat” or “dog”, or on tasks for which humans may have trained before (e.g. chess). This leads us to a central question: what is the exact list of knowledge priors that humans are born with? This is the question that the developmental science theory of Core Knowledge [85] seeks to answer. Core Knowledge identifies four broad categories of innate assumptions that form the foundations of human cognition, and which are largely shared by our non-human relatives 8: • Objectness and elementary physics: humans assume that their environment should be parsed into “objects” characterized by principles of cohesion (objects move as continuous, connected, bounded wholes), persistence (objects do not suddenly cease to exist and do not suddenly materialize), and contact (objects do not act at a distance and cannot interpenetrate). • Agentness and goal-directedness: humans assume that, while some objects in their environment are inanimate, some other objects are “agents”, possessing intentions of if we witness an object A following their own, acting so as to achieve goals (e.g. another moving object B, we may infer that A is pursuing B and that B is fleeing A), and showing efficiency in their goal-directed actions. We expect that these agents may act contingently and reciprocally. • Natural numbers and elementary arithmetic: humans possess innate, abstract number representations for small numbers, which can be applied to entities observed through any sensory modality. These number representations may be added or subtracted, and may be compared to each other, or sorted. 
this core knowledge system captures notions of distance, orientation, in/out relationships for objects in our environment and for ourselves. It underlies humans’ innate facility for orienting themselves with respect to their surroundings and navigating 2D and 3D environments. 8Core Knowledge has been written into our DNA by natural evolution. Natural evolution is an extremely low-bandwidth, highly selective mechanism for transferring information from the surrounding environment to an organism’s genetic code. It can only transfer information associated with evolutionary pressures, and it can only write about aspects of the environment that are stable over sufficiently long timescales. As such, it would not be reasonable to expect humans to possess vast amounts of human-specific prior knowledge; core knowledge is evolutionarily ancient and largely shared across many species, in particular non-human primates. 26 While cognitive developmental psychology has not yet determined with a high degree of certainty the exact set of innate priors that humans possess, we consider the Core Knowl- edge theory to offer a credible foundation suitable to the needs of a test of human-like general intelligence. We therefore propose that an actionable test of general intelligence that would be fair for both humans and machines should only feature tasks that assume the four core knowledge systems listed above, and should not involve any acquired knowledge outside of these priors. We also argue, in agreement with [51], that general AI systems should hard-code as fundamental priors these core knowledge principles. # II.2 Defining intelligence: a formal synthesis # Intelligence as skill-acquisition efficiency So far, we have introduced the following informally-described intuitions: • Intelligence lies in broad or general-purpose abilities; it is marked by flexibility and adaptability (i.e. skill-acquisition and generalization), rather than skill itself. The history of AI has been a slow climb along the spectrum of generalization. • A measure of intelligence should imperatively control for experience and priors, and should seek to quantify generalization strength, since unlimited priors or experience can produce systems with little-to-no generalization power (or intelligence) that ex- hibit high skill at any number of tasks. • Intelligence and its measure are inherently tied to a scope of application. As such, general AI should be benchmarked against human intelligence and should be founded on a similar set of knowledge priors. Let us now formalize these intuitions. In what follows, we provide a series of definitions for key concepts necessary to ground a formal definition of intelligence and its measure. We will leverage the tools of Algorithmic Information Theory. These definitions lead up to a formal way of expressing the following central idea: The intelligence of a system is a measure of its skill-acquisition efficiency over a scope of tasks, with respect to priors, experience, and generalization difficulty. Intuitively, if you consider two systems that start from a similar set of knowledge priors, and that go through a similar amount of experience (e.g. practice time) with respect to a set of tasks not known in advance, the system with higher intelligence is the one that ends up with greater skills (i.e. the one that has turned its priors and experience into skill more efficiently). This definition of intelligence encompasses meta-learning priors, memory, and fluid intelligence. 
It is distinct from skill itself: skill is merely the output of the process of intelligence.

Before we start, let us emphasize that many possible definitions of intelligence may be valid, across many different contexts, and we do not purport that the definition above and the formalism below represent the “one true” definition. Nor is our definition meant to achieve broad consensus. Rather, the purpose of our definition is to be actionable, to serve as a useful perspective shift for research on broad cognitive abilities, and to function as a quantitative foundation for new general intelligence benchmarks, such as the one we propose in part III. As per George Box’s aphorism, “all models are wrong, but some are useful”: our only aim here is to provide a useful North Star towards flexible and general AI. We discuss in II.2.3 the concrete ways in which our formalism is useful and actionable.

# Position of the problem

First, we must introduce basic definitions to establish our problem setup. It should be immediately clear to the reader that our choice of problem setup is sufficient to model Fully-Supervised Learning, Partially-Supervised Learning, and Reinforcement Learning.

We consider the interaction between a “task” and an “intelligent system”. This interaction is mediated by a “skill program” (generated by the intelligent system) and a “scoring function” (part of the task). We implicitly consider the existence of a fixed universal Turing machine on which our programs run (including the skill programs, as well as programs part of the task and part of the intelligent system). We also assume the existence of a fixed “situation space” SituationSpace and “response space” ResponseSpace. Each of these spaces defines the set of binary strings that are allowed as input (and output, respectively) of all skill programs we will consider henceforth. They may be, for instance, the sensor space and the motor space of an animal or robot.

Figure 2: Position of the problem: an intelligent system generates a skill program to interact with a task.

A task T consists of four objects:

• A task state TaskState (binary string).
• A “situation generation” function SituationGen : TaskState → Situation. It may be stochastic.
– A Situation is a binary string belonging to SituationSpace.
• A “scoring function” Scoring : [Situation, Response, TaskState] → [Score, Feedback]. It may be stochastic.
– A Response is a binary string belonging to ResponseSpace.
– A “score” Score is a scalar. It is meant to measure the appropriateness of a response to a situation.
– A piece of “feedback” Feedback is a binary string. It may encode full or partial information about the current score, or about scores corresponding to past responses (which may be known to the task state).
– Note: The parameter Situation is technically optional since it may be known to the task state at runtime – we include it here for maximum explicitness.
• A self-update function TaskUpdate : [Response, TaskState] → TaskState, which mutates the task state based on the response to the latest situation. It may be stochastic.

For instance, a game such as chess or WarCraft III (as well as what we call “task” in the ARC benchmark presented in III) would constitute a task. A given chess board position, screen frame in WarCraft III, or input grid in ARC, would constitute a situation.
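To make the task side of this setup concrete, the following minimal Python sketch is purely illustrative: the class and method names are our own, and situations and responses are modeled as plain strings rather than binary strings for readability. It renders the four objects above as methods of a toy task.

```python
# Illustrative sketch (not part of the original formalism): a toy task whose
# situation is a string of '1' characters and whose correct response is the count
# of those characters, written in decimal.
import random
from dataclasses import dataclass, field
from typing import Tuple

@dataclass
class CountingTask:
    state: dict = field(default_factory=lambda: {"step": 0})      # TaskState
    rng: random.Random = field(default_factory=lambda: random.Random(0))

    def situation_gen(self) -> str:
        # SituationGen: TaskState -> Situation (may be stochastic)
        self.state["step"] += 1
        return "1" * self.rng.randint(1, 9)

    def scoring(self, situation: str, response: str) -> Tuple[float, str]:
        # Scoring: [Situation, Response, TaskState] -> [Score, Feedback]
        correct = str(len(situation))
        score = 1.0 if response == correct else 0.0
        feedback = f"expected={correct}"   # here, feedback reveals the full answer
        return score, feedback

    def task_update(self, response: str) -> None:
        # TaskUpdate: [Response, TaskState] -> TaskState (trivial for this toy task)
        pass

# Usage: one interaction step with a hard-coded (non-learning) "skill program".
task = CountingTask()
situation = task.situation_gen()
response = str(situation.count("1"))
score, feedback = task.scoring(situation, response)
task.task_update(response)
print(situation, response, score, feedback)    # e.g. "1111 4 1.0 expected=4"
```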
An intelligent system IS consists of three objects: • A system state ISState (binary string). • A “skill program generation function”: SkillP rogramGen : ISState → [SkillP rogram, SP State]. It may be stochastic. is a function that maps an input situation to a valid response (part of ResponseSpace), potentially using some working memory (SP State). It may be stochastic. Be- cause it possesses a state SP State (a binary string), it may be used to au- tonomously handle a series of connected situations without further communi- cation with the intelligent system that generated it. – A skill program may be, for instance, any game-specific program capable of playing new levels in a given video game. – In what follows, we refer to “skill program” as the combination of the SkillP rogram function and the initial skill program state SP State (i.e. skill programs are con- sidered stateful). – A skill program represents a frozen version of the system’s task-specific capa- bilities (including the ability to adapt to novel situations within the task). We use the concept of skill program as a conceptual device to formalize the level of task-specific skill and task-specific generalization capabilities of an agent at a given point in time. 29 • A self-update function ISU pdate : [Situation, Response, F eedback, ISState] → ISState, which mutates the system’s state based on the latest situation and corre- sponding feedback. It may be stochastic. For instance, a neural network generation and training algorithm for games would be an “intelligent system”, and the inference-mode game-specific network it would output at the end of a training run on one game would be a “skill program”. A program synthesis engine capable of looking at an ARC task and outputting a solution program would be an “intelli- gent system”, and the resulting solution program capable of handling future input grids for this task would be a “skill program”. The interaction between task, intelligent system, and skill programs is structured in two phases: a training phase and an evaluation phase. The goal of the training phase is for the IS to generate a high-skill skill program that will generalize to future evaluation situations. The goal of the evaluation phase is to assess the capability of this skill program to handle new situations. The training phase consists of the repetition of the following steps (we note the current step as t). Before we start, we consider two separate initial task states, trainT askStatet=0 and testT askStatee=0. We generate a training situation: situationt ← SituationGen(trainT askStatet) • The IS generates a new skill program (without knowledge of the current situation): [skillP rogramt, spStatet] ← SkillP rogramGen(isStatet) – Implicitly, we assume that the “goal” of the IS is to generate highly-skilled programs, i.e. programs that would have performed well on past situations, that will perform well on the next situation, and that would perform well on any possible situation for this task (in particular evaluation situations, which may feature significant novelty and uncertainty). We do not attempt to model why the IS should pursue this goal. – spStatet represents the working memory of the skill program at time t. Note that, because the skill program is generated anew with each training step, state- fulness across situations via SP State is not actually required during training. However, statefulness is important during evaluation when handling tasks that require maintaining information across situations. 
Note that in many games or real-world tasks, situations are all independent and thus skill programs don’t require statefulness at all (e.g. ARC, or any fully-observable game, like chess). The skill program outputs a response to the situation: [responset, spStatet+1] ← skillP rogramt(Situationt, spStatet) – skillP rogramt is only called once, and spStatet+1 is discarded, since the skill program is generated anew by the intelligent system at each training step. – In practice, in partially-observable games where consecutive situations are very two consecutive screen frames in WarCraft III), one 30 may assume that skillP rogramt at t and skillP rogramt+1 would not actu- ally be generated independently from scratch and would stay very close to each other (i.e. the IS’s understanding of the task would be evolving continuously in program space); spStatet+1 as generated by skillP rogramt and spStatet+1 as generated by SkillP rogramGen at t + 1 would likewise stay very close to each other. • The task scoring function assigns a score to the response and generates a piece of feedback: [scoret, f eedbackt] ← Scoring(Situationt, responset, trainT askStatet) – Note: The scalar score is meant to encode how appropriate the response is, and the feedback data is meant to be used by the intelligent system to update its state. In simple cases (e.g. fully-supervised learning), the feedback data is the same as the scalar score, meaning that the intelligent agent would have complete and immediate information about the appropriateness of its response. In other cases, the feedback data may only contain partial information, no information, or information that is only relevant to responses generated for prior situations (delayed feedback). • The IS updates its internal state based on the feedback received from the task: isStatet+1 ← ISU pdate(Situationt, responset, f eedbackt, isStatet) • The task updates its internal state based on the response received to the situation: trainT askStatet+1 ← T askU pdate(responset, trainT askStatet) The training phase ends at the discretion of the SituationGen function (e.g. SituationGen returns a “STOP” situation), at which time SkillP rogramGen would generate its last skill program, including an initial state (initial working memory) meant to perform well during evaluation (e.g. blank). The evaluation phase is superficially similar to the training phase, with the differences that 1) the task starts from testT askStatee=0 and consists of an independent series of situ- ations, 2) it only involves a single fixed skill program testSkillP rogram starting with state testSP Statee=0. Crucially, it no longer involves the intelligent system. Note that testT askStatee=0 could be chosen stochastically. For instance, different randomly cho- sen initial testT askStatee=0 could be different randomly-generated levels of a game. Like the separation between skill program and intelligent system, the evaluation phase should be understood as a conceptual device used to quantify the task-specific skill and task-specific generalization capabilities demonstrated by a system at a given point in time. The evaluation phase should not be seen as being conceptually similar to a child taking a school test or an IQ test. In real-world evaluation situations, evaluation involves the entire intelligent system, dynamically adapting its understanding of the task at hand. 
A real-world evaluation situation would be represented in our formalism as being part of the training cur- riculum – a series of training situations with blank feedback. 31 The evaluation phase consists of the repetition of the following steps (the current step is noted e): • We generate a test situation: situatione ← SituationGen(testT askStatee) • The skill program considered produces a response: [responsee, testSP Statee+1] ← testSkillP rogram(situatione, testSP Statee) – Note that the update of the skill program state enables the skill program to maintain a working memory throughout the evaluation phase. This is useful for partially-observable games. This is irrelevant to many games (including ARC) and many real-world tasks, where skill programs would be stateless. • The task scoring function assigns a score to the response (the feedback is discarded): scoree ← Scoring(Situatione, responsee, testT askStatee) • The task updates its internal state based on the response received: testT askStatee+1 ← T askU pdate(responsee, testT askStatee) The evaluation phase also ends at the discretion of the SituationGen function. Note that for the sake of simplification, we consider that the IS’s state does not transfer across tasks; the IS would start with a “blank” state at the beginning of the training phase for each new task (i.e. only possessing built-in priors). However, the setup above and def- initions below may be readily extended to consider lifelong learning, to bring it closer to real-world biological intelligent systems, which learn continuously across a multiplicity of partially overlapping tasks with often no clear boundaries. Based on the setup described thus far, we can define the following useful concepts: • Evaluation result: Sum of the scalar scores obtained by a fixed skill program over a specific evaluation phase instance for a task. Since all objects involved (skill pro- gram, situation generation program, task update program, initial task state) may be stochastic, this quantity may also be stochastic. Likewise, we define training-time performance as the sum of the scalar scores obtained during a given training phase. Training-time performance is tied to a specific sequence of training situations. • Skill: Probabilistic average of evaluation results over all possible evaluation phase instances, i.e. average of per-evaluation sum of scores obtained after running the evaluation phase infinitely many times. Skill is a property of a skill program. Note that other distributional reduction functions could be used, such as median or mini- mum. • Optimal skill: Maximum skill theoretically achievable by the best possible skill pro- gram on the task. It is a property of a task. • Sufficient skill threshold, noted θT : Subjective threshold of skill associated with a task, above which a skill program can be said to “solve” the task. It is a property of a task. 32 • Task and skill value function: We define a value function over task space (note that task space may be infinite), associating a scalar value to the combination of a task and a threshold of skill θ for the task : T askV alue : T ask, θ → ωT,θ. Values are assumed positive or zero, and T askV alue is assumed monotonous as a function of θ (for a given task, higher skill always has higher value). 
This value function captures the relative importance of skill at each task and defines the subjective frame of reference of our intelligence definition (for instance, if we wish to evaluate human-like intelligence, we would place high value on achieving high skill at human-relevant tasks and place no value on tasks that are irrelevant to the human experience). The value ωT,θ of a skill level at a task is chosen so that the quantity ωT,θ can be compared fairly across different tasks (i.e. it should capture the value we place on achieving skill θ at task T). This enables us to homogeneously aggregate skill across different tasks without worrying about the scale of their respective scoring functions.

• Task value, noted ωT: This is the value of achieving sufficient skill level at T, i.e. ωT = ωT,θT.
• Optimal solution: Any skill program that can achieve optimal skill on a task. Likewise we define a training-time optimal solution as any skill program that can achieve optimal training-time performance over a specific sequence of training situations.
• Sufficient solution: Any skill program that can achieve sufficient skill θT on a task.
• Curriculum: Sequence of interactions (situations, responses, and feedback) between a task and an intelligent system over a training phase. For a given task and intelligent system, there exists a space of curricula, parameterized by the stochastic components of the underlying programs. A curriculum emerges from the interaction between the system and a task: this can model both teaching and active learning.
• Optimal curriculum: curriculum which leads an intelligent system to produce the best (highest skill) skill program it can generate for this task. It is specific to a task and an intelligent system. There may be more than one optimal curriculum.
• Sufficient curriculum: curriculum which leads an intelligent system to a sufficient solution. It is specific to a task and an intelligent system. There may be more than one sufficient curriculum.
• Potential, noted θ^max_{T,IS}: Skill of the best possible skill program that can be generated by a given intelligent system on a task (after an optimal curriculum). It is a scalar value specific to a task and an intelligent system.
• Intelligent system scope: Subspace of task space including all tasks for which task value ωT is non-zero and for which the intelligent system is capable of producing a sufficient solution after a training phase. This space may be infinite. “To be capable of producing a sufficient solution” means that there exists a sufficient curriculum for the intelligent system and task considered. A scope is a property of an intelligent system.
• Intelligent system potential: Set of task-specific potential values over all tasks in the system’s scope. Potential is a property of an intelligent system.

We find that in most cases it is more useful to consider sufficient skill and sufficient solutions rather than optimal skill and optimal solutions – in application settings, we seek to achieve sufficient performance using as few resources as possible; it is rarer and less practical to seek to achieve maximum possible performance using unlimited resources.

# Quantifying generalization difficulty, experience, and priors using Algorithmic Information Theory

Algorithmic Information Theory (AIT) may be seen as a computer science extension of Information Theory. AIT concerns itself with formalizing useful computer science intuitions regarding complexity, randomness, information, and computation.
Central to AIT is the notion of Algorithmic Complexity. Algorithmic Complexity (also known as Kolmogorov Complexity or Algorithmic Entropy) was independently investigated, in different contexts, by R.J. Solomonoff, A.N. Kolmogorov and G.J. Chaitin in the 1960s. For an extensive introduction, see [15, 14, 30, 55]. Much like the concept of Entropy in Information Theory, Algorithmic Complexity is a measure of the “information content” of mathematical objects. For our own needs, we will only consider the specific case of binary strings. Indeed, all objects we have introduced so far have been either scalar values (score, potential) or binary strings (states, programs, situations, and responses), since any program may be represented as a binary string.

The Algorithmic Complexity (noted H(s)) of a string s is the length of the shortest description of the string in a fixed universal language, i.e. the length of the shortest program that outputs the string when running on a fixed universal Turing machine. Since any universal Turing machine can emulate any other universal Turing machine, H(s) is machine-independent up to a constant.

We can use Algorithmic Complexity to define the information content that a string s2 possesses about a string s1 (called “Relative Algorithmic Complexity” and noted H(s1|s2)), as the length of the shortest program that, taking s2 as input, produces s1. “To take s2 as input” means that s2 is part of the description of the program, but the length of s2 would not be taken into account when counting the program’s length.

Because any program may be represented as a binary string, we can use Relative Algorithmic Complexity to describe how closely related two programs are. Based on this observation, we propose to define the intuitive notion of “Generalization Difficulty” of a task as follows. Consider:

• A task T,
• Sol^θ_T, the shortest of all possible solutions of T of threshold θ (shortest skill program that achieves at least skill θ during evaluation),
• TrainSol^opt_{T,C}, the shortest optimal training-time solution given a curriculum (shortest skill program that achieves optimal training-time performance over the situations in the curriculum).

We then define Generalization Difficulty as:

Generalization Difficulty of a task given a curriculum C and a skill threshold θ, noted GD^θ_{T,C}: Fraction of the Algorithmic Complexity of the solution Sol^θ_T that is explained by the shortest optimal training-time solution TrainSol^opt_{T,C} (i.e. length of the shortest program that, taking as input the shortest possible program that performs optimally over the situations in curriculum C, produces a program that performs at a skill level of at least θ during evaluation, normalized by the length of that skill program). Note that this quantity is between 0 and 1 by construction.

GD^θ_{T,C} = H(Sol^θ_T | TrainSol^opt_{T,C}) / H(Sol^θ_T)

Thus, a task with high “generalization difficulty” is one where the evaluation-time behavior needs to differ significantly from the simplest possible optimal training-time behavior in order to achieve sufficient skill. Relative Algorithmic Complexity provides us with a metric to quantify this difference: GD is a measure of how much the shortest training-time solution program needs to be edited in order to become an appropriate evaluation-time solution program. If the shortest skill program that performs optimally during training also happens to perform at a sufficient skill level during evaluation, the task has zero generalization difficulty (i.e. it does not involve uncertainty).
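Algorithmic Complexity is uncomputable, so any practical use of these quantities requires approximation. As a purely illustrative sketch of one possible approach (an assumption on our part, not part of the formalism), compressed length is sometimes used as a crude upper-bound proxy for H; the snippet below approximates H and H(·|·) with zlib and plugs them into the GD ratio defined above, using program source strings to stand in for skill programs (the two example programs echo the 1D classification example discussed in the next paragraphs).

```python
# Illustrative approximation only: H(a) ≈ compressed length of a, and
# H(a|b) ≈ len(zlib(b + a)) - len(zlib(b)). These are rough proxies, not the
# quantities defined in the text.
import zlib

def approx_H(s: str) -> int:
    """Crude proxy for Algorithmic Complexity: compressed length in bytes."""
    return len(zlib.compress(s.encode("utf-8"), 9))

def approx_H_given(s: str, context: str) -> int:
    """Crude proxy for Relative Algorithmic Complexity H(s | context)."""
    return max(1, approx_H(context + s) - approx_H(context))

def approx_generalization_difficulty(eval_solution_src: str, train_solution_src: str) -> float:
    """GD ≈ H(Sol | TrainSol) / H(Sol), clipped to [0, 1]."""
    gd = approx_H_given(eval_solution_src, train_solution_src) / approx_H(eval_solution_src)
    return min(1.0, max(0.0, gd))

# A short training-time solution vs. a longer evaluation-time solution that
# memorizes past data points (both given as source strings).
train_sol = "lambda x: x > 0"
eval_sol = ("lambda x: min([(-0.75, False), (0.15, True), (-0.1, True)],"
            " key=lambda p: abs(p[0] - x))[1]")
print(round(approx_generalization_difficulty(eval_sol, train_sol), 2))
```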
A generalizable skill program is one that “covers more ground” in situation space than the exact training situations it is familiar with: a program that is capable of dealing with future uncertainty.

Note that this definition of generalization difficulty may seem counter-intuitive. Occam’s razor principle would seem to suggest that the simplest program that works on the training situations should also be a program that generalizes well. However, generalization describes the capability to deal with future uncertainty, not the capability to compress the behavior that would have been optimal in the past – being prepared for future uncertainty has a cost, which is antagonistic to policy compression 9. By necessity, TrainSol^opt_{T,C} does away with any information or capability that isn’t strictly necessary in order to produce the correct response to training situations, and in doing so, it may discard information or capabilities that would have been useful to process evaluation situations. If it is in fact the case that TrainSol^opt_{T,C} also performs at a sufficient skill level during evaluation (i.e. the simplest behavior that was optimal in the past is still sufficient in the future), this implies that the evaluation features no need for adaptation (no non-trivial novelty or uncertainty), and thus the task does not involve generalization, potentially given some starting point (such as the solution of another task).

9 As a philosophical aside: this is why the education of children involves practicing games and ingesting knowledge of seemingly no relevance to their past or present decision-making needs, but which prepare them for future situations (a process often driven by curiosity). A 10-year-old who has only learned the simplest behavioral policy that would have maximized their extrinsic rewards (e.g. candy intake) during ages 0-10 would not be well educated, and would not generalize well in future situations.

Another way to express the same idea is that generalization requires reinterpreting the task when new data arrives (e.g. at evaluation time). This implies the need to store representations of past data that would be seemingly useless from the perspective of the past but may prove useful in the future. For example, consider the following labeled points along a line: (x = −0.75, label = False), (x = 0.15, label = True), (x = −0.1, label = True). When training a classification program on the first two of these points, some of the shortest optimal training-time solutions may be λ(x) : x > 0 or λ(x) : bool(ceil(x)). When applied to the last point (x = −0.1, label = True), these solutions would fail, while an algorithm that instead stores all past data points and uses nearest-neighbors to return a response at evaluation time would work. The nearest-neighbors program would be better prepared for future uncertainty, but would take significantly more space to write down.

Importantly, this first definition of generalization difficulty only captures system-centric generalization, as it quantifies the difficulty of handling evaluation situations that differ from training situations regardless of the system’s preexisting capabilities.
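The 1D example above can be made runnable in a few lines of Python (purely illustrative): the two shortest training-time solutions fit the first two points but fail on the third, while the much longer nearest-neighbor program, which stores all past data, succeeds.

```python
# Minimal runnable version of the 1D example: short training-time solutions vs.
# a longer, uncertainty-ready nearest-neighbor program.
import math

train_points = [(-0.75, False), (0.15, True)]
test_x, test_label = -0.1, True

# Two candidate shortest optimal training-time solutions.
sol_a = lambda x: x > 0
sol_b = lambda x: bool(math.ceil(x))

def nearest_neighbor(x, memory=train_points):
    # Stores all past points; answers with the label of the closest one.
    return min(memory, key=lambda p: abs(p[0] - x))[1]

print("x > 0            ->", sol_a(test_x), "(expected", test_label, ")")   # fails
print("bool(ceil(x))    ->", sol_b(test_x), "(expected", test_label, ")")   # fails
print("nearest neighbor ->", nearest_neighbor(test_x), "(expected", test_label, ")")  # works
```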
To capture developer-aware generalization, we need to take into account the system in its initial state at the start of training, SkillProgramGen, ISUpdate, isState_{t=0}:

Developer-aware Generalization Difficulty of a task for an intelligent system given a curriculum C and a skill threshold θ, noted GD^θ_{IS,T,C}: Fraction of the Algorithmic Complexity of the solution Sol^θ_T that is explained by TrainSol^opt_{T,C} and the initial state of the system IS_{t=0}, i.e. length of the shortest program that, taking as input the initial system plus the shortest possible program that performs optimally over the situations in curriculum C, produces a skill program that performs at a skill level of at least θ during evaluation, normalized by the length of that skill program. Note that this quantity is between 0 and 1 by construction.

GD^θ_{IS,T,C} = H(Sol^θ_T | TrainSol^opt_{T,C}, IS_{t=0}) / H(Sol^θ_T)

In which we note: IS_{t=0} = SkillProgramGen, ISUpdate, isState_{t=0}

Developer-aware generalization thus represents the amount of uncertainty about the shortest evaluation-time solution given that you have at your disposal both the initial system and the shortest training-time solution, i.e. the amount of modifications you would have to make to the shortest training-time solution to obtain the evaluation-time solution, provided that these edits can make use of the contents of the initial system.

Likewise, we can define the Generalization Difficulty from task T1 to task T2 (sufficient case) as H(Sol^{θ_{T2}}_{T2} | Sol^{θ_{T1}}_{T1}), normalized by H(Sol^{θ_{T2}}_{T2}). We can also extend these definitions to a set of tasks (e.g. Generalization Difficulty from a set of practice tasks to a set of test tasks), which can be useful to quantify the Generalization Difficulty of an entire test suite. These notions are related to the concept of intrinsic task difficulty (regardless of generalization) defined in [37] (section 8.6) as the effort necessary to construct a solution.

Next, we can also use Relative Algorithmic Complexity to formally quantify the Priors P_{IS,T} possessed by an intelligent system about a task:

Priors of an intelligent system relative to a task T and a skill threshold θ, noted P^θ_{IS,T}: Fraction of the Algorithmic Complexity of the shortest solution of T of skill threshold θ that is explained by the initial system (at the start of the training phase). This is the length (normalized by H(Sol^θ_T)) of the shortest possible program that, taking as input the initial system SkillProgramGen, ISUpdate, isState_{t=0} (noted IS_{t=0}), produces the shortest solution of T that performs at a skill level of at least θ during evaluation. Note that the intelligent system does not need to be able to produce this specific solution. Note that this quantity is between 0 and 1 by construction.

P^θ_{IS,T} = (H(Sol^θ_T) − H(Sol^θ_T | IS_{t=0})) / H(Sol^θ_T)

“Priors” thus defined can be interpreted as a measure of how close to a sufficient or optimal solution the system starts, i.e. the “amount of relevant information” embedded in the initial system. Note that this is different from the “amount of information” embedded in the initial system (which would merely be the Algorithmic Complexity of the initial system). As such, our measure only minimally penalizes large systems that contain prior knowledge that is irrelevant to the task at hand (the only added cost is due to knowledge indexing and retrieval overhead).

Further, we can use Relative Algorithmic Complexity to define the Experience E_{IS,T,C} accumulated by an intelligent system about a task during a curriculum.
Consider a single step t during training:

• At t, the system receives some new data in the form of the binary strings situation_t, response_t, and feedback_t (although response_t may be omitted since, being the output of a skill program previously generated by the IS, it can be assumed to be known by the IS as soon as situation_t is known).
• Only some of this data is relevant to solving the task (the data may be noisy or otherwise uninformative).
• Only some of the data contains novel information for the intelligent system (situations and responses may be repetitive, and the intelligent system may be a slow learner that needs information to be repeated multiple times or presented in multiple ways). Note that we use the term “novel” to characterize information that would appear novel to the system, rather than information that has never appeared before in the curriculum (the difference between the two lies in the system’s learning efficiency).

We informally define the amount of experience accrued at step t as the amount of relevant, novel information received by the system at t. This corresponds to the amount of potential uncertainty reduction about the solution that is made available by the task in the current situation data and feedback data (i.e. how much the IS could reduce its uncertainty about the solution using the step data if it were optimally intelligent). Formally:

Experience accrued at step t, noted E^θ_{IS,T,t}:

E^θ_{IS,T,t} = H(Sol^θ_T | IS_t) − H(Sol^θ_T | IS_t, data_t)

In which we note:

• IS_t = SkillProgramGen, ISUpdate, isState_t
• data_t = situation_t, response_t, feedback_t

By summing over all steps, we obtain the following definition of total experience (note that we normalize by the Algorithmic Complexity of the solution considered, as we did for priors):

Experience E^θ_{IS,T,C} over a curriculum C:

E^θ_{IS,T,C} = (1 / H(Sol^θ_T)) · Σ_t E^θ_{IS,T,t}

“Experience” thus defined can be interpreted as a measure of the amount of relevant information received by the system about the task over the course of a curriculum, only accounting for novel information at each step. Because this is different from the “amount of information” contained in the curriculum (i.e. the Algorithmic Complexity of the curriculum), our measure does not penalize systems that go through noisy curricula. In addition, because we use an eager sum of relevant and novel information at each step instead of globally pooling the information content of the curriculum, we penalize learners that are slower to absorb the relevant information that is presented to them. Lastly, because our sum is different from “amount of relevant information (novel or not) at each step summed over all steps”, we do not penalize systems that go through repetitive curricula. If a fast learner absorbs sufficient information over the first ten steps of a fixed curriculum, but a slow learner needs 90 more steps of the same curriculum to achieve the same, we will not count as experience for the fast learner the redundant last 90 steps during which it did not learn anything, but we will count all 100 steps for the slow learner.

# Defining intelligence

We have now established sufficient context and notations to formally express the intuitive definition of intelligence stated earlier, “the intelligence of a system is a measure of its skill-acquisition efficiency over a scope of tasks, with respect to priors, experience, and generalization difficulty.”

We consider an intelligent system IS.
We note Cur^{θ_T}_T the space of curricula that result in IS generating a solution of sufficient skill θ_T for a task T, and Cur^{opt}_T the space of curricula that result in IS generating its highest-skill solution (solution reaching the system’s potential θ^{max}_{T,IS}). Note that the system’s potential may be lower than the optimal solution for the task, as the system may not be able to learn to optimally solve the task. To simplify notations, we will denote θ^{max}_{T,IS} as Θ. We note Avg the averaging function (used to average over task space). We note P_C the probability of a given curriculum C. We then define the intelligence of IS, tied to a scope of tasks scope, as:

Intelligence of system IS over scope (sufficient case):

I^{θ_T}_{IS,scope} = Avg_{T∈scope} [ ω_T · θ_T · Σ_{C∈Cur^{θ_T}_T} P_C · GD^{θ_T}_{IS,T,C} / (P^{θ_T}_{IS,T} + E^{θ_T}_{IS,T,C}) ]

Intelligence of system IS over scope (optimal case):

I^{opt}_{IS,scope} = Avg_{T∈scope} [ ω_{T,Θ} · Θ · Σ_{C∈Cur^{opt}_T} P_C · GD^{Θ}_{IS,T,C} / (P^{Θ}_{IS,T} + E^{Θ}_{IS,T,C}) ]

Note that:

• P_{IS,T} + E_{IS,T,C} (priors plus experience) represents the total exposure of the system to information about the problem, including the information it starts with at the beginning of training.
• The sum over a curriculum subspace, weighted by the probability of each curriculum, represents the expected outcome for the system after training. Note that the sum is over a subspace of curricula (curricula that lead to at least a certain skill level), and thus the probabilities would sum to a total lower than one: as such, we are penalizing learners that only reach sufficient skill or optimal skill some of the time.
• ω_T · θ_T represents the subjective value we place on achieving sufficient skill at T, and ω_{T,Θ} · Θ represents the subjective value we place on achieving the skill level corresponding to the system’s full potential θ^{max}_{T,IS}.

Schematically, the contribution of each task is:

Expectation over curricula [ skill · generalization difficulty / (priors + experience) ]

which is further weighted by a value ω which enables us to homogeneously compare skill at different tasks independently of the scale of their respective scoring functions.

Thus, we equate the intelligence of a system to a measure of the information-efficiency with which the system acquires its final task-specific skill (sufficient skill or highest possible skill) on average (probabilistic average over all applicable curricula), weighted by the developer-aware generalization difficulty of the task considered (as well as the task value ω, which makes skill commensurable across tasks), averaged over all tasks in the scope. Or, in plain English: intelligence is the rate at which a learner turns its experience and priors into new skills at valuable tasks that involve uncertainty and adaptation.

Note that our definition is not the first formal definition of intelligence based on Algorithmic Information Theory. We are aware of three other AIT-based definitions: the C-Test [40], the AIXI model [43], and the “Universal Intelligence” model [54] (closely related to AIXI). It should be immediately clear to a reader familiar with these definitions that our own approach represents a very different perspective.

We bring the reader’s attention to a number of key observations about our formalism (see also II.2.3):

• A high-intelligence system is one that can generate high-skill solution programs for high generalization difficulty tasks (i.e. tasks that feature high uncertainty about the future) using little experience and priors, i.e.
it is a system capable of making highly efficient use of all of the information it has at its disposition to cover as much ground as possible in unknown parts of the situation space. Intelligence is, in a way, a conversion rate between information about part of the situation space, and the ability to perform well over a maximal area of future situation space, which will involve novelty and uncertainty (figure 3).
• The measure of intelligence is tied to a choice of scope (space of tasks and value function over tasks). It can also optionally be tied to a choice of sufficient skill levels across the tasks in the scope (sufficient case).
• Skill is not possessed by an intelligent system, it is a property of the output artifact of the process of intelligence (a skill program). High skill is not high intelligence: these are different concepts altogether.
• Intelligence must involve learning and adaptation, i.e. operationalizing information extracted from experience in order to handle future uncertainty: a system that starts out with the ability to perform well on evaluation situations for a task would have a very low developer-aware generalization difficulty for this task, and thus would score poorly on our intelligence metric.
• Intelligence is not curve-fitting: a system that merely produces the simplest possible skill program consistent with known data points could only perform well on tasks that feature zero generalization difficulty, by our definition. An intelligent system must generate behavioral programs that account for future uncertainty.
• The measure of intelligence is tied to curriculum optimization: a better curriculum space will lead to greater realized skill (on average) and to greater expressed intelligence (greater skill-acquisition efficiency).

Figure 3: Higher intelligence “covers more ground” in future situation space using the same information.

# II.2.2 Computation efficiency, time efficiency, energy efficiency, and risk efficiency

In the above, we only considered the information-efficiency (prior-efficiency and experience-efficiency with respect to generalization difficulty) of intelligent systems. Indeed, we believe this is the most actionable and relevant angle today to move AI research forward (cf II.2.3). But it isn’t the only angle one may want to consider. Several alternatives that could be incorporated into our definition in various ways (e.g. as a regularization term) come to mind:

• Computation efficiency of skill programs: for settings in which training data is abundant but inference-time computation is expensive, one may want to encourage the generation of skill programs that have minimal computational resource consumption.
• Computation efficiency of the intelligent system: for settings in which training-time computation is expensive, one may want to expend a minimal amount of computation resources to generate a skill program.
• Time efficiency: in time-constrained settings, one may want to minimize the latency with which the intelligent system generates skill programs.
• Energy efficiency: in biological systems in particular, one may want to minimize the amount of energy expended in producing a skill program, in running a skill program, or in going through a curriculum.
• Risk efficiency: for settings in which going through a curriculum (i.e. collecting experience) involves risk for the intelligent system, one might want to encourage safe curricula at the expense of resource efficiency or information efficiency. Much like 41 energy efficiency, this is highly relevant to biological systems and natural evolution, in which certain novelty-seeking behaviors that would lead to faster learning may also be more dangerous. In fact, one may note that information efficiency acts in many settings as a proxy for energy efficiency and risk efficiency. We expect that these alternative ways to quantify efficiency will become relevant in specialized AI application contexts in the future, and we bring them to the reader’s attention to encourage others to develop new formal definitions of intelligence incorporating them in addition to information efficiency. # II.2.3 Practical implications The definitions above provide a formal framework as well as quantitative tools to reason about the intuitive notions we have been introducing so far, in particular the concepts of “generalization difficulty”, “intelligence as skill-acquisition efficiency”, and what it means to control for priors and experience when evaluating intelligence, as opposed to looking purely at task-specific skill. The main value of this framework is to provide an actionable perspective shift in how we understand and evaluate flexible or general artificial intelligence. We argue that this perspective shift has the following practical consequences: # a. Consequences for research directions towards flexible or general AI: • It clearly spells out that the process of creating an intelligent system can be ap- proached as an optimization problem, where the objective function would be a com- putable approximation of our quantitative intelligence formula. As pointed out in II.2.2, this objective function could be further refined by incorporating regularization terms that would take into account alternative forms of efficiency. • It encourages a focus on developing broad or general-purpose abilities rather than pursuing skill alone, by proposing a target metric that penalises excessive reliance on experience or priors, and discounting tasks that feature low generalization difficulty. • It encourages interest in program synthesis, by suggesting that we stop thinking of “agents” as monolithic black boxes that take in sensory input and produce behavior (a vision inherited from Reinforcement Learning [88]): our formalism clearly separates the part of the system that possesses intelligence (“intelligent system”, a program- synthesis engine) from the part that achieves skill or implements behavior (“skill program”, the non-intelligent output artifact of the process of intelligence), and places focus to the former. As we point out throughout this paper, we believe that this confusion between process and artifact has been an ongoing fundamental issue in the conceptualization of AI. • It encourages interest in curriculum development, by leveraging the notion of an “op- timal curriculum” and drawing attention to the fact that a better curriculum increases the intelligence manifested by a learning system. 42 • It encourages interest in building systems based on human-like knowledge priors (e.g. Core Knowledge) by drawing attention to the importance of priors in evaluating in- telligence. # b. 
Consequences for evaluating flexible or general AI systems: • By defining and quantifying generalization difficulty, it offers a way to formally rea- son about what it means to perform “local generalization”, “broad generalization”, and “extreme generalization” (cf. the spectrum of generalization introduced in I.3.2), and to weed out tests that feature zero generalization difficulty. • It suggests concrete guidelines for comparing AI and human intelligence: such a comparison requires starting from a shared scope of tasks and shared priors, and would seek to compare experience-efficiency in achieving specific levels of skill. We detail this idea in II.3.1. • It shows the importance of taking into account generalization difficulty when devel- oping a test set to evaluate a task. We detail this idea in II.3.2. This should hopefully lead us to evaluation metrics that are able to discard solutions that rely on shortcuts that do not generalize (e.g. reliance on local textures as opposed to global semantics in computer vision). • It provides a set of practical questions to ask about any intelligent system to rigorously characterize it: – What is its scope? – What is its “potential” over this scope (maximum achievable skill)? – What priors does it possess? – What is its skill-acquisition efficiency (intelligence)? – What curricula would maximize its skill or skill-acquisition efficiency? # II.3 Evaluating intelligence in this light Earlier in this document, we have detailed how measuring skill alone does not move us forward when it comes to the development of broad abilities, we have suggested that AI evaluation should learn from its more mature sister field psychometrics (echoing the thesis of Psychometric AI and Universal Psychometrics), and we have provided a new formalism with practical implications for AI evaluation, pointing out the importance of the concept of scope, potential, generalization difficulty, experience, and priors. The following section summarizes key practical conclusions with respect to AI evaluation. # II.3.1 Fair comparisons between intelligent systems We mentioned in II.2.3 that our formalism suggests concrete guidelines for comparing the intelligence of systems of different nature, such as human intelligence and artificial in- telligence. Being able to make such comparisons in a fair and rigorous way is essential to 43 progress towards human-like general AI. Here we argue how such intelligence comparisons between systems entail specific requirements with regard to the target systems’ scope, po- tential, and priors. We also detail how such comparisons should proceed given that these requirements are met. Scope and potential requirements. In II.1.2, we argued that intelligence is necessar- ily tied to a scope of application, an idea also central to the formalism introduced in II.2. As such, a comparison scale must be tied to a well-defined scope of tasks that is shared by the target systems (all target systems should be able to learn to perform the same tasks). Further, we must consider that the target systems may have different potential (maxi- mum achievable skill) over their shared scope. An intelligence comparison should focus on skill-acquisition efficiency, but skill-acquisition efficiency cannot be meaningfully com- pared between systems that arrive at vastly different levels of skills. As such, a comparison scale must be tied to a fixed threshold of skill over the scope of tasks considered. This skill threshold should be achievable by all target systems. 
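To make the comparison procedure concrete, the following is a minimal sketch, not part of the text above, of contrasting the skill-acquisition efficiency profiles of two systems over a shared scope, assuming the experience each system needed to reach the shared skill threshold on each task has already been measured; all names and numbers are hypothetical.

```python
from statistics import mean

def efficiency_profile(experience_to_threshold):
    """Average experience needed across the shared scope of tasks; lower is better."""
    return mean(experience_to_threshold.values())

# Hypothetical measurements: experience (e.g. number of practice episodes) each
# system needed to reach the shared skill threshold on each task in the scope,
# averaged over curricula.
measurements = {
    "system_A": {"task_1": 120, "task_2": 900},
    "system_B": {"task_1": 450, "task_2": 2500},
}

profiles = {name: efficiency_profile(tasks) for name, tasks in measurements.items()}
more_intelligent = min(profiles, key=profiles.get)  # least experience on average
print(profiles, more_intelligent)
```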
For instance, comparing a generally-intelligent system to human intelligence would only make sense if the scope of tasks that can be learned by the system is the same scope of tasks that can be learned by a typical human, and the comparison should focus on the efficiency with which the system achieves the same level of skill as a human expert. Com- paring maximum realized skill does not constitute an intelligence comparison. Prior knowledge requirements. Since the formalism of II.2 summarizes priors into a single scalar score, which is homogeneous to the score used to quantify experience, it is not strictly necessary for the two systems being compared to share the same priors. For instance, if two systems achieve the same skill using the same amount of experience (the exact nature of this experience, determined by the curriculum used, may differ), the system that has the least amount of prior knowledge would be considered more intelligent. However, it would be generally impractical to fully quantify prior knowledge. As such, we recommend only comparing the intelligence of systems that assume a sufficiently similar set of priors. This implies that any measure of intelligence should explicitly and exhaus- tively list the priors it assumes, an idea we detail below, in II.3.2. Further, this implies that systems that aim at implementing human-like general intelligence should leverage Core Knowledge priors. If the above conditions are met (shared scope, well-defined skill threshold over scope, and comparable knowledge priors), then a fair intelligence comparison would then consist of contrasting the skill-acquisition efficiency profile of the target systems. The more in- telligent system would be the one that uses the least amount of experience to arrive at the desired skill threshold in the average case. Alternatively, computation efficiency, energy efficiency, and risk efficiency may also be considered, as per II.2.2. 44 # II.3.2 What to expect of an ideal intelligence benchmark The recommendations below synthesizes the conclusions of this document with regard to the properties that a candidate benchmark of human-like general intelligence should pos- sess. • It should describe its scope of application and its own predictiveness with regard to this scope (i.e. it should establish validity). In practice, this would be achieved by empirically determining the statistical relationship between success on the benchmark and success on a range of real-world tasks. • It should be reliable (i.e. reproducible). If an evaluation session includes stochastic elements, sampling different values for these elements should not meaningfully af- fect the results. Different researchers independently evaluating the same system or approach using the benchmark should arrive at the same conclusions. • It should set out to measure broad abilities and developer-aware generalization: – It should not be solely measuring skill or potential (maximum achievable skill). – It should not feature in its evaluation set any tasks that are known in advance, either to the test-taking system itself or to the developers of the system (cf. developer-aware generalization as defined in I.3.2). – It should seek to quantify the generalization difficulty it measures (cf. 
formal definition from II.2), or at least provide qualitative guidelines with regard to its generalization difficulty: it should at least be made clear whether the benchmark seeks to measure local generalization (robustness), broad generalization (flexi- bility), or extreme generalization (general intelligence), as defined in I.3.2. Tak- ing into account generalization difficulty minimizes the possibility that a given benchmark could be “hacked” by solvers that take undesired shortcuts that by- pass broad abilities (e.g. leveraging surface textures instead of semantic content in image recognition). • It should control for the amount of experience leveraged by test-taking systems dur- ing training. It should not be possible to “buy” performance on the benchmark by sampling unlimited training data. The benchmark should avoid tasks for which new data can be generated at will. It should be, in effect, a game for which it is not possible to practice in advance of the evaluation session. • It should explicitly and exhaustively describe the set of priors it assumes. Any task is going to involve priors, but in many tasks used for AI evaluation today, priors stay implicit, and the existence of implicit hidden priors may often give an unfair advantage to either humans or machines. • It should work for both humans and machines, fairly, by only assuming the same priors as possessed by humans (e.g. Core Knowledge) and only requiring a human- sized amount of practice time or training data. These recommendations for general AI evaluation wouldn’t be complete without a con- crete effort to implement them. In part III, we present our initial attempt. 45 # III A benchmark proposal: the ARC dataset In this last part, we introduce the Abstraction and Reasoning Corpus (ARC), a dataset intended to serve as a benchmark for the kind of general intelligence defined in II.2. ARC is designed to incorporate as many of the recommendations of II.3 as possible. # III.1 Description and goals # III.1.1 What is ARC? ARC can be seen as a general artificial intelligence benchmark, as a program synthesis benchmark, or as a psychometric intelligence test. It is targeted at both humans and artifi- cially intelligent systems that aim at emulating a human-like form of general fluid intelli- gence. It is somewhat similar in format to Raven’s Progressive Matrices [47], a classic IQ test format going back to the 1930s. ARC has the following top-level goals: • Stay close in format to psychometric intelligence tests (while addressing issues found in previous uses of such tests for AI evaluation, as detailed in III.1.3), so as to be approachable by both humans and machines; in particular it should be solvable by humans without any specific practice or training. • Focus on measuring developer-aware generalization, rather than task-specific skill, by only featuring novel tasks in the evaluation set (assumed unknown to the developer of a test-taker). I.3.2), by featuring highly abstract tasks that must be understood by a test-taker using very few examples. • Quantitatively control for experience by only providing a fixed amount of training data for each task and only featuring tasks that do not lend themselves well to artifi- cially generating new data. • Explicitly describe the complete set of priors it assumes (listed in III.1.2), and en- able a fair general intelligence comparison between humans and machines by only requiring priors close to innate human prior knowledge (cf. II.3.2). ARC comprises a training set and an evaluation set. 
The training set features 400 tasks, while the evaluation set features 600 tasks. The evaluation set is further split into a public evaluation set (400 tasks) and a private evaluation set (200 tasks). All tasks are unique, and the set of test tasks and the set of training tasks are disjoint. The task data is available at github.com/fchollet/ARC. Each task consists of a small number of demonstration examples (3.3 on average), and a small number of test examples (generally 1, although it may be 2 or 3 in rare cases). Each example consists of an “input grid” and an “output grid”. Each “grid” is a literal grid of 46 symbols (each symbol is typically visualized via a unique color), as seen in figure 4. There are 10 unique symbols (or colors). A grid can be any height or width between 1x1 and 30x30, inclusive (the median height is 9 and the median width is 10). When solving an evaluation task, a test-taker has access to the training examples for the task (both the input and output grids), as well as the input grid of the test examples for the task. The test-taker must construct on its own the output grid corresponding to the input grid of each test example. “Constructing the output grid” is done entirely from scratch, meaning that the test-taker must decide what the height and width of the output grid should be, what symbols it should place on the grid, and where. The task is successfully solved if the test- taker can produce the exact correct answer on all test examples for the task (binary measure of success). For each test example in a task, the test-taker (either human or machine) is allowed 3 trials 10. The only feedback received after a trial is binary (correct answer or incorrect answer). The score of an intelligent system on ARC is the fraction of tasks in the evaluation set that it can successfully solve. Crucially, it is assumed that neither the test-taker nor its developer would have had any prior information about the tasks featured in the evaluation set: ARC seeks to measure “developer aware generalization” as defined in I.3.2. The existence of a private evaluation set enables us to strictly enforce this in the setting of a public competition. A test-taker is also assumed to have access to the entirety of the training set, although the training data isn’t strictly necessary in order to be successful on the validation set, as all tasks are unique and do not assume any knowledge other than the priors described in III.1.2. A typical human can solve most of the ARC evaluation set without any previous training. As such, the purpose of the training set primarily to serve as a development validation set for AI system developers, or as a mock test for human test-takers. It could also be used as a way to familiarize an algorithm with the content of Core Knowledge priors. We do not expect that practice on the training set would increase human performance on the test set (albeit this hypothesis would need to be concretely tested). # III.1.2 Core Knowledge priors Any test of intelligence is going to involve prior knowledge. ARC seeks to control for its own assumptions by explicitly listing the priors it assumes, and by avoiding reliance on any information that isn’t part of these priors (e.g. acquired knowledge such as language). The ARC priors are designed to be as close as possible to Core Knowledge priors, so as to provide a fair ground for comparing human intelligence and artificial intelligence, as per our recommendations in II.3.1. 
The Core Knowledge priors assumed by ARC are as follows: 10We consider 3 trials to be enough to account for cases in which the task may be slightly ambiguous or in which the test-taker may commit mechanical errors when inputting an answer grid. 47 Figure 4: A task where the implicit goal is to complete a symmetrical pattern. The nature of the task is specified by three input/output examples. The test-taker must generate the output grid corresponding to the input grid of the test input (bottom right). # a. Objectness priors: Object cohesion: Ability to parse grids into “objects” based on continuity criteria including color continuity or spatial contiguity (figure 5), ability to parse grids into zones, partitions. Object persistence: Objects are assumed to persist despite the presence of noise (figure 6) or occlusion by other objects. In many cases (but not all) objects from the input persist on the output grid, often in a transformed form. Common geometric transformations of objects are covered in category 4, “basic geometry and topology priors”. Figure 5: Left, objects defined by spatial contiguity. Right, objects defined by color continuity. 48 Figure 6: A denoising task. Object influence via contact: Many tasks feature physical contact between objects (e.g. one object being translated until it is in contact with another (figure 7), or a line “growing” until it “rebounds” against another object (figure 8). Figure 7: The red object “moves” towards the blue object until “contact”. # b. Goal-directedness prior: While ARC does not feature the concept of time, many of the input/output grids can be effectively modeled by humans as being the starting and end states of a process that in- volves intentionality (e.g. figure 9). As such, the goal-directedness prior may not be strictly necessary to solve ARC, but it is likely to be useful. # c. Numbers and Counting priors: Many ARC tasks involve counting or sorting objects (e.g. sorting by size), comparing numbers (e.g. which shape or symbol appears the most (e.g. figure 10)? The least? The same number of times? Which is the largest object? The smallest? Which objects are the same size?), or repeating a pattern for a fixed number of time. The notions of addition and subtraction are also featured (as they are part of the Core Knowledge number system as per [85]). All quantities featured in ARC are smaller than approximately 10. 49 Figure 8: A task where the implicit goal is to extrapolate a diagonal line that “rebounds” upon contact with a red obstacle. # d. Basic Geometry and Topology priors: ARC tasks feature a range of elementary geometry and topology concepts, in particular: • Lines, rectangular shapes (regular shapes are more likely to appear than complex shapes). • Symmetries (e.g. figure 11), rotations, translations. • Shape upscaling or downscaling, elastic distortions. • Containing / being contained / being inside or outside of a perimeter. • Drawing lines, connecting points, orthogonal projections. • Copying, repeating objects. # III.1.3 Key differences with psychometric intelligence tests We have pointed out in I.3.4 the reasons why using existing psychometric intelligence tests (or “IQ tests”) does not constitute a sound basis for AI evaluation. Albeit ARC stays delib- erately close in format to traditional IQ tests (as well as related efforts such as Hern´andez- Orallo’s C-Test [40]), its design differs from them in fundamental ways. 
We argue that these differences address the shortcomings of psychometric intelligence tests in the context of AI evaluation. In particular: • Unlike some psychometric intelligence tests, ARC is not interested in assessing crys- tallized intelligence or crystallized cognitive abilities. ARC only assesses a general form of fluid intelligence, with a focus on reasoning and abstraction. ARC does not involve language, pictures of real-world objects, or real-world common sense. ARC seeks to only involve knowledge that stays close to Core Knowledge priors, 50 Figure 9: A task that combines the concepts of “line extrapolation”, “turning on obstacle”, and “efficiently reaching a goal” (the actual task has more demonstration pairs than these three). and avoids knowledge that would have to be acquired by humans via task-specific practice. • The tasks featured in the ARC evaluation set are unique and meant to be unknown to developers of test-taking systems (as ARC seeks to assess developer-aware general- ization). This prevents developers from solving the tasks themselves and hard-coding their solution in program form. This can be strictly enforced in competition settings via the existence of a private evaluation set. • ARC has greater task diversity than typical psychometric intelligence tests (hundreds of unique tasks with limited overlap between tasks), which reduces the likelihood that hard-coding task-specific solutions would represent a practical shortcut for develop- ers, even for the public evaluation set. • Unlike tasks from the C-Test [40], ARC tasks are in majority not programmatically generated. We perceive programmatic generation from a static “master” program as a weakness, as it implies that merely reverse-engineering the generative program shared across tasks (presumably a simple program, since it had to be written down by the test developer) would be sufficient to fully solve all tasks. Manual task generation increases task diversity and reduces the risk of existence of an unforeseen shortcut that could be used to by-pass the need for broad abilities in solving the test. # III.1.4 What a solution to ARC may look like, and what it would imply for AI applications We have found ARC to be fully solvable by humans. While many ARC test tasks are intellectually challenging, human test-takers appear to be able to solve the majority of tasks on their first try without any practice or verbal explanations. Each task included in ARC has been successfully solved by at least one member of a group of three high-IQ humans 51 Figure 10: A task where the implicit goal is to count unique objects and select the object that appears the most times (the actual task has more demonstration pairs than these three). Ee Es 4 Figure 11: Drawing the symmetrized version of a shape around a marker. Many tasks involve some form of symmetry. (who did not communicate with each other), which demonstrates task feasibility. In the future, we hope to be able to further investigate human performance on ARC by gathering a statistically significant amount of human testing data, in particular with regard to the relationship between CHC cognitive abilities and ARC performance. Crucially, to the best of our knowledge, ARC does not appear to be approachable by any existing machine learning technique (including Deep Learning), due to its focus on broad generalization and few-shot learning, as well as the fact that the evaluation set only features tasks that do not appear in the training set. 
For a researcher setting out to solve it, ARC is perhaps best understood as a program synthesis benchmark. Program synthesis [31, 32] is a subfield of AI interested in the gener- ation of programs that satisfy a high-level specification, often provided in the form of pairs of example inputs and outputs for the program – which is exactly the ARC format. A hypothetical ARC solver may take the form of a program synthesis engine that uses the demonstration examples of a task to generate candidates that transform input grids into corresponding output grids. Schematically: 52 • Start by developing a domain-specific language (DSL) capable of expressing all pos- sible solution programs for any ARC task. Since the exact set of ARC tasks is pur- posely not formally definable, this may be challenging (the space of tasks is defined as anything expressible in terms of ARC pairs that would only involve Core Knowl- edge). It would require harding-coding the Core Knowledge priors from III.1.2 in a sufficiently abstract and combinable program form, to serve as basis functions for a kind of “human-like reasoning DSL”. We believe that solving this specific subprob- lem is critical to general AI progress. • Given a task, use the DSL to generate a set of candidate programs that turn the in- puts grids into the corresponding output grids. This step would reuse and recombine subprograms that previously proved useful in other ARC tasks. • Select top candidates among these programs based on a criterion such as program simplicity or program likelihood (such a criterion may be trained on solution pro- grams previously generated using the ARC training set). Note that we do not expect that merely selecting the simplest possible program that works on training pairs will generalize well to test pairs (cf. our definition of generalization difficulty from II.2). • Use the top three candidates to generate output grids for the test examples. We posit that the existence of a human-level ARC solver would represent the ability to program an AI from demonstrations alone (only requiring a handful of demonstrations to specify a complex task) to do a wide range of human-relatable tasks of a kind that would normally require human-level, human-like fluid intelligence. As supporting evidence, we note that human performance on psychometric intelligence tests (which are similar to ARC) is predictive of success across all human cognitive tasks. Further, we posit that, since an ARC solver and human intelligence would both be founded on the same knowledge priors, the scope of application of an ARC solver would be close to that of human cognition, making such a solver both practically valuable (i.e. it could solve useful, human-relevant problems) and easy to interact with (i.e. it would readily understand human demonstrations and would produce behavior that is in line with human expectations). Our claims are highly speculative and may well prove fully incorrect, much like Newell’s 1973 hopes that progress on chess playing would translate into meaningful progress on achieving a range of broad cognitive abilities – especially if ARC turns out to feature un- foreseen vulnerabilities to unintelligent shortcuts. We expect our claims to be validated or invalidated in the near future once we make sufficient progress on solving ARC. 
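As a deliberately naive illustration of the solver outline above (generate candidate programs, keep those consistent with the demonstrations, rank by simplicity), the sketch below enumerates short compositions of a few hand-written grid primitives. These primitives are toy placeholders, not the Core Knowledge DSL the outline calls for, and the task file path simply assumes the train/test JSON layout of the public ARC repository; a real solver would need a far richer DSL and search strategy.

```python
import itertools
import json

Grid = list[list[int]]

# Toy primitives standing in for a proper Core Knowledge DSL.
PRIMITIVES = {
    "identity":  lambda g: g,
    "transpose": lambda g: [list(r) for r in zip(*g)],
    "flip_h":    lambda g: [list(reversed(r)) for r in g],
    "flip_v":    lambda g: list(reversed(g)),
}

def run(program: list[str], grid: Grid) -> Grid:
    # Apply each primitive in sequence to the input grid.
    for name in program:
        grid = PRIMITIVES[name](grid)
    return grid

def solve(task: dict, max_depth: int = 3) -> list[Grid]:
    train, test = task["train"], task["test"]
    candidates = []
    # Enumerate all compositions of primitives up to max_depth.
    for depth in range(1, max_depth + 1):
        for program in itertools.product(PRIMITIVES, repeat=depth):
            if all(run(list(program), ex["input"]) == ex["output"] for ex in train):
                candidates.append(list(program))
    candidates.sort(key=len)  # simplicity prior: prefer shorter programs
    # Return up to 3 predictions for the first test input (ARC allows 3 trials).
    return [run(p, test[0]["input"]) for p in candidates[:3]]

task = json.load(open("data/training/some_task.json"))  # hypothetical path
print(solve(task))
```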
# III.2 Weaknesses and future refinements It is important to note that ARC is a work in progress, not a definitive solution; it does not fit all of the requirements listed in II.3.2, and it features a number of key weaknesses: • Generalization is not quantified. While ARC is explicitly designed to measure “broad generalization” as opposed to “local generalization” or “extreme generaliza- tion”, we do not offer a quantitative measure of the generalization of the evaluation 53 set given the test set, or the generalization difficulty of each task (considered inde- pendently). We plan on conducting future work to empirically address this issue by using human performance on a task (considered over many human subjects) to esti- mate the generalization difficulty it represents. We would be particularly interested in attempting to correlate human performance on a task with an approximation of the AIT measure of generalization difficulty proposed in II.2 (such an approximation should become available as we make progress on ARC solver programs). Finding high correlation, or a lack of correlation, would provide a degree of validation or invalidation of our formal measure. • Test validity is not established. Validity represents the predictiveness of test perfor- mance with regard to performance on other non-test activities. The validity of ARC should be investigated via large-sample size statistical studies on humans, following the process established by psychometrics. Further, when AI ARC solvers become a reality, we will also be able to study how well ARC performance translates into real-world usefulness across a range of tasks. • Dataset size and diversity may be limited. ARC only features 1,000 tasks in total, and there may be some amount of conceptual overlap across many tasks. This could make ARC potentially vulnerable to shortcut strategies that could solve the tasks without featuring intelligence. We plan on running public AI competitions (using the private evaluation set) as a way to crowd-source attempts to produce such shortcuts (if a shortcut exists, it should arise quickly in a competition setting). Further, to mitigate potential vulnerability against such shortcuts, we intend to keep adding new tasks to ARC in the future, possibly by crowd-sourcing them. • The evaluation format is overly close-ended and binary. The score of a test-taker on an evaluation task is either 0 or 1, which lacks granularity. Further, real-world problem-solving often takes the form of an interactive process where hypotheses are formulated by the test-taker then empirically tested, iteratively. In ARC, this approach is possible to an extent since the test-taker is allowed 3 trials for each test example in a task. However, this format remains overly limiting. A better approach may be let the test taker dynamically interact with an example generator for the task: the test taker would be able to ask for a new test input at will, would propose a solution for the test input, and would receive feedback on their solution, repeatedly, until the test- taker is reliably able to produce the correct answer. The test-taker’s score on the task would then be a measure of the amount of feedback it required until it became able to reliably generate the correct solution for any new input. This represents a more direct measure of intelligence as formally defined in II.2, where the input generator is in control of the curriculum. • Core Knowledge priors may not be well understood and may not be well cap- tured in ARC. 
Central to ARC is the notion that it only relies on innate human prior knowledge and does not feature significant amounts of acquired knowledge. How- ever, the exact nature of innate human prior knowledge is still an open problem, and whether these priors are correctly captured in ARC is unclear. 54 # III.3 Possible alternatives ARC is merely one attempt to create a human-like general intelligence benchmark that embodies as many of the guidelines listed in II.3 as possible. While ARC stays very close to the format of psychometric intelligence tests, many other possible approaches could be explored. In this section, we offer some suggestions for alternatives. # III.3.1 Repurposing skill benchmarks to measure broad generalization We noted in I.3.5 the ongoing fascination of the AI research community in developing systems that surpass human skill at board games and video games. We propose repurposing such tests of skills into tests of intelligence. Consider an AI developer interested in solving game X. While the AI would be trained on instances of X, an evaluation arbiter would create multiple variants of X (X1, X2, Xn). These alternative games would be designed to represent a meaningful amount of generalization difficulty over X (as defined in II.2): the simplest game-playing program that is optimal on instances of X (e.g. game levels of X) would not be optimal on Xi. As such, these alternative games would not be mere “new levels” of X, but would feature related-yet-novel gameplay, so as to measure broad generalization as opposed to local gen- eralization. These alternative games would stay unknown to the AI developers, so as to measure developer-aware generalization. This proposed setup is thus markedly different from e.g. CoinRun [17] or Obstacle Tower [49], where the evaluation environments are not alternative games, but only levels of the same game (local generalization, or generalization to known unknowns), randomly sampled from a level generator which is known in advance to the AI developers (no evaluation of developer-aware generalization). The AI trained on X, once ready, would then be tasked with learning to solve X1, Xn. Its evaluation score would then be a measure of the amount of experience it required on each alternative game in order to reach a specific threshold of skill, modulated by the amount of generalization difficulty represented by each alternative game. A measure of the general intelligence of such an AI would then be an average of these evaluation scores over a large number of different source games X. For instance, consider the game DotA2: an AI trained on DotA2 may be evaluated by measuring the efficiency with which it can learn to play new games from the same genre, such as League of Legends or Heroes of the Storm. As an even simpler (but weaker) alternative, an AI trained on 16 specific DotA2 characters may be evaluated by measuring the efficiency with which it can learn to master a set of brand new characters it would not have played before – for example, a strong human DotA2 player can play at a high level with a new character upon first try. 
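The evaluation protocol sketched above can be expressed compactly; the snippet below is an illustrative sketch rather than a prescribed implementation, and the agent interface (`skill`, `train_one_episode`) as well as the per-variant generalization-difficulty estimates are assumed to be supplied by the evaluation arbiter.

```python
from statistics import mean

def evaluate(agent, variants, skill_threshold, max_episodes=10_000):
    """Score an agent on held-out variants X_1..X_n of a source game X."""
    scores = []
    for env, gen_difficulty in variants:        # gen_difficulty estimated by the arbiter
        episodes = 0
        # Measure the experience needed to reach the fixed skill threshold.
        while agent.skill(env) < skill_threshold and episodes < max_episodes:
            agent.train_one_episode(env)        # assumed agent API
            episodes += 1
        efficiency = 1.0 / max(episodes, 1)     # less experience -> higher efficiency
        scores.append(gen_difficulty * efficiency)
    return mean(scores)                         # average over all alternative games
```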
# III.3.2 Open-ended adversarial or collaborative approaches

We have pointed out in III.2 some of the key limitations of having to craft evaluation tasks manually: it is a labor-intensive process that makes it difficult to formally control for generalization difficulty, that could potentially result in a low-diversity set of tasks, and that is not easily scalable (although crowd-sourcing tasks may partially address this problem). The diversity and scalability points are especially critical given that we need a constant supply of substantially new tasks in order to guarantee that the benchmark is measuring developer-aware generalization.

A solution may be to instead programmatically generate new tasks. We noted in III.1.3 that programmatic generation from a static “master” program is not desirable, as it places a ceiling on the diversity and complexity of the set of tasks that can be generated, and it offers a potential avenue to “cheat” on the benchmark by reverse-engineering the master program.

We propose instead to generate tasks via an ever-learning program called a “teacher” program, interacting in a loop with test-taking systems, called “student” programs (figure 12). The teacher program would optimize task generation for novelty and interestingness for a given student (tasks should be new and challenging, while still being solvable by the student), while students would evolve to learn to solve increasingly difficult tasks. This setup is also favorable to curriculum optimization, as the teacher program may be configured to seek to optimize the learning efficiency of its students. This idea is similar to the “anytime intelligence test” proposed in [38] and to the POET system proposed in [96].

In order to make sure that the space of generated tasks retains sufficient complexity and novelty over time, the teacher program should draw information from an external source (assumed to feature incompressible complexity), such as the real world. This external source of complexity makes the setup truly open-ended. A teacher program that generates novel tasks that partially emulate human-relevant tasks would have the added advantage that it would guide the resulting student programs towards a form of intelligence that could transfer to real-world human-relevant problems.

Figure 12: Teacher-student learning and evaluation system.

# Taking stock

The study of general artificial intelligence is a field still in its infancy, and we do not wish to convey the impression that we have provided a definitive solution to the problem of characterizing and measuring the intelligence held by an AI system. Rather, we have introduced a new perspective on defining and evaluating intelligence, structured around the following ideas:
II.2) to rigorously and quantitatively reason about these ideas, as well as a set of concrete II.3.1 and guidelines to follow for developing a benchmark of general intelligence (cf. II.3.2). Our definition, formal framework, and evaluation guidelines, which do not capture all facets of intelligence, were developed to be actionable, explanatory, and quantifiable, rather than being descriptive, exhaustive, or consensual. They are not meant to invalidate other perspectives on intelligence, rather, they are meant to serve as a useful objective function to guide research on broad AI and general AI, as outlined in II.2.3. Our hope is for some part of the AI community interested in general AI to break out of a longstanding and ongoing trend of seeking to achieve raw skill at challenging tasks, given unlimited experience and unlimited prior knowledge. To ground our ideas and enable others to build upon them, we are also providing an actual benchmark, the Abstraction and Reasoning Corpus, or ARC: • ARC takes the position that intelligence testing should control for scope, priors, and experience: every test task should be novel (measuring the ability to understand a new task, rather than skill) and should assume an explicit set of priors shared by all test-takers. • ARC explicitly assumes the same Core Knowledge priors innately possessed by hu- mans. • ARC can be fully solved by humans, but cannot be meaningfully approached by current machine learning techniques, including Deep Learning. • ARC may offer an interesting playground for AI researchers who are interested in developing algorithms capable of human-like broad generalization. It could also offer a way to compare human intelligence and machine intelligence, as we assume the same priors. Importantly, ARC is still a work in progress, with known weaknesses listed in III.2. We plan on further refining the dataset in the future, both as a playground for research and as a joint benchmark for machine intelligence and human intelligence. The measure of the success of our message will be its ability to divert the attention of some part of the community interested in general AI, away from surpassing humans at 57 tests of skill, towards investigating the development of human-like broad cognitive abilities, through the lens of program synthesis, Core Knowledge priors, curriculum optimization, information efficiency, and achieving extreme generalization through strong abstraction. # References [1] Sam S Adams, Guruduth Banavar, and Murray Campbell. I-athlon: Towards a mul- tidimensional turing test. AI Magazine, (1):78–84, 2016. [2] John R. Anderson and Christian Lebiere. The newell test for a theory of cognition. Behavioral and Brain Sciences, pages 587–601, 2003. [3] Aristotle. De Anima. c. 350 BC. [4] Minoru Asada et al. Cognitive developmental robotics: A survey. IEEE Transactions on Autonomous Mental Development, pages 12–34, 2009. [5] Mayank Bansal, Alex Krizhevsky, and Abhijit Ogale. Chauffeurnet: Learn- arXiv preprint ing to drive by imitating the best and synthesizing the worst. arXiv:1812.03079, 2018. [6] Marc G. Bellemare, Yavar Naddaf, Joel Veness, and Michael Bowling. The arcade learning environment: An evaluation platform for general agents. J. Artif. Int. Res., (1):253–279, May 2013. [7] Benjamin Beyret, Jos Hernndez-Orallo, Lucy Cheke, Marta Halina, Murray Shana- han, and Matthew Crosby. The animal-ai environment: Training and testing animal- like artificial cognition, 2019. [8] Alfred Binet and Thodore Simon. 
Mthodes nouvelles pour le diagnostic du niveau intellectuel des anormaux. L’anne psychologique, pages 191–244, 1904. [9] Selmer Bringsjord and Bettina Schimanski. What is artificial intelligence? psycho- metric ai as an answer. In Proceedings of the 18th International Joint Conference on Artificial Intelligence, IJCAI’03, pages 887–893, San Francisco, CA, USA, 2003. Morgan Kaufmann Publishers Inc. [10] Jacob Buckman, Danijar Hafner, George Tucker, Eugene Brevdo, and Honglak Lee. Sample-efficient reinforcement learning with stochastic ensemble value expansion, 2018. [11] Martin Buehler, Karl Iagnemma, and Sanjiv Singh. The 2005 DARPA Grand Chal- lenge: The Great Robot Race. Springer Publishing Company, Incorporated, 1st edition, 2007. [12] Murray Campbell, A. Joseph Hoane, Jr., and Feng-hsiung Hsu. Deep blue. Artif. Intell., (1-2):57–83, 2002. [13] Raymond B. Cattell. Abilities: Their structure, growth, and action. 1971. [14] G. Chaitin. Algorithmic Information Theory. Cambridge University Press, 1987. 58 [15] Gregory J Chaitin. A theory of program size formally identical to information theory. Journal of the ACM (JACM), (3):329–340, 1975. [16] Francois Chollet. Deep Learning with Python. Manning Publications, 2017. [17] Karl Cobbe, Oleg Klimov, Christopher Hesse, Taehoon Kim, and John Schulman. Quantifying generalization in reinforcement learning. CoRR, 2018. [18] Ebinepre A Cocodia. Cultural perceptions of human intelligence. Journal of Intelli- gence, 2(4):180–196, 2014. [19] L. Cosmides and J. Tooby. Origins of domain specificity: the evolution of functional organization. page 85116, 1994. [20] Linda Crocker and James Algina. Introduction to classical and modern test theory. ERIC, 1986. [21] Charles Darwin. The Origin of Species. 1859. [22] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. ImageNet: A Large- Scale Hierarchical Image Database. In CVPR09, 2009. [23] D. K. Detterman. A challenge to watson. Intelligence, page 7778, 2011. [24] T.G. Evans. A program for the solution of a class of geometric-analogy intelligence- test questions. pages 271–353, 1968. [25] James R Flynn. What is intelligence?: Beyond the Flynn effect. Cambridge Univer- sity Press, 2007. [26] Richard M Friedberg. A learning machine: Part i. IBM Journal of Research and Development, 2(1):2–13, 1958. [27] Manuela Veloso Gary Marcus, Francesca Rossi. Beyond the Turing Test (workshop), 2014. [28] B. Goertzel and C. Pennachin, editors. Artificial general intelligence. Springer, New York, 2007. [29] Bert F Green Jr. Intelligence and computer simulation. Transactions of the New York Academy of Sciences, 1964. [30] Peter D. Gr¨unwald and Paul M. B. Vit´anyi. Algorithmic information theory. 2008. [31] Sumit Gulwani, Jos´e Hern´andez-Orallo, Emanuel Kitzelmann, Stephen H Muggle- ton, Ute Schmid, and Benjamin Zorn. Inductive programming meets the real world. Communications of the ACM, 58(11):90–99, 2015. [32] Sumit Gulwani, Alex Polozov, and Rishabh Singh. Program Synthesis. 2017. [33] William H. Guss, Cayden Codel, Katja Hofmann, Brandon Houghton, Noburu Kuno, Stephanie Milani, Sharada Prasanna Mohanty, Diego Perez Liebana, Rus- lan Salakhutdinov, Nicholay Topin, Manuela Veloso, and Phillip Wang. The minerl competition on sample efficient reinforcement learning using human priors. CoRR, 2019. 59 [34] R. Hambleton, H. Swaminathan, and H. Rogers. Fundamentals of Item Response Theory. Sage Publications, Inc., 1991. [35] Islam R. Bachman P. Pineau J. Precup D. Henderson, P. and D. Meger. 
Deep rein- forcement learning that matters. 2018. [36] Jos´e Hern´andez-Orallo. Evaluation in artificial intelligence: from task-oriented to ability-oriented measurement. Artificial Intelligence Review, pages 397–447, 2017. [37] Jos´e Hern´andez-Orallo. The Measure of All Minds: Evaluating Natural and Artificial Intelligence. Cambridge University Press, 2017. [38] Jos´e Hern´andez-Orallo and David L Dowe. Measuring universal intelligence: To- wards an anytime intelligence test. Artificial Intelligence, 174(18):1508–1539, 2010. [39] Jos´e Hern´andez-Orallo, David L. Dowe, and M.Victoria Hern´andez-Lloreda. Uni- versal psychometrics. Cogn. Syst. Res., (C):50–74, March 2014. [40] Jos´e Hern´andez-Orallo and Neus Minaya-Collado. A formal definition of intelli- gence based on an intensional variant of algorithmic complexity. 1998. [41] G.E. Hinton. How neural networks learn from experience. Mind and brain: Read- ings from the Scientific American magazine, page 113124, 1993. [42] Thomas Hobbes. Human Nature: or The fundamental Elements of Policie. 1650. [43] Marcus Hutter. Universal artificial intelligence: Sequential decisions based on al- gorithmic probability. Springer Science & Business Media, 2004. [44] D.L. Dowe J. Hernndez-Orallo. Iq tests are not for machines, yet. Intelligence, page 7781, 2012. [45] Yiding Jiang, Dilip Krishnan, Hossein Mobahi, and Samy Bengio. Predicting the generalization gap in deep networks with margin distributions. ArXiv, 2018. [46] Jason Jo and Yoshua Bengio. Measuring the tendency of cnns to learn surface sta- tistical regularities. ArXiv, 2017. [47] Raven J. John. Raven Progressive Matrices. Springer, Boston, MA, 2003. [48] Wendy Johnson and Thomas J.Bouchard Jr. The structure of human intelligence: It is verbal, perceptual, and image rotation (vpr), not fluid and crystallized. Intelligence, pages 393–416, 2005. [49] Arthur Juliani, Ahmed Khalifa, Vincent-Pierre Berges, Jonathan Harper, Ervin Teng, Hunter Henry, Adam Crespi, Julian Togelius, and Danny Lange. Obstacle tower: A generalization challenge in vision, control, and planning. Proceedings of the Twenty- Eighth International Joint Conference on Artificial Intelligence, Aug 2019. [50] Niels Justesen, Ruben Rodriguez Torrado, Philip Bontrager, Ahmed Khalifa, Ju- lian Togelius, and Sebastian Risi. Illuminating generalization in deep reinforcement learning through procedural level generation. arXiv preprint arXiv:1806.10729, 2018. 60 [51] Brenden M. Lake, Tomer D. Ullman, Joshua B. Tenenbaum, and Samuel J. Gersh- man. Building machines that learn and think like people. CoRR, 2016. [52] Yann LeCun, Yoshua Bengio, and Geoffrey Hinton. Deep learning. (7553):436, 2015. nature, [53] Shane Legg and Marcus Hutter. A collection of definitions of intelligence. 2007. [54] Shane Legg and Marcus Hutter. Universal intelligence: A definition of machine intelligence. Minds and machines, 17(4):391–444, 2007. [55] Ming Li, Paul Vit´anyi, et al. An introduction to Kolmogorov complexity and its applications, volume 3. Springer. [56] John Locke. An Essay Concerning Human Understanding. 1689. [57] James Macgregor and Yun Chu. Human performance on the traveling salesman and related problems: A review. The Journal of Problem Solving, 3, 02 2011. [58] James Macgregor and Thomas Ormerod. Human performance on the traveling sales- man problem. Perception & psychophysics, 58:527–39, 06 1996. [59] Gary Marcus. Deep learning: A critical appraisal. arXiv preprint arXiv:1801.00631, 2018. [60] John McCarthy. 
Generality in artificial intelligence. Communications of the ACM, 30(12):1030–1035, 1987. [61] Pamela McCorduck. Machines Who Think: A Personal Inquiry into the History and Prospects of Artificial Intelligence. AK Peters Ltd, 2004. [62] Kevin McGrew. The cattell-horn-carroll theory of cognitive abilities: Past, present, and future. Contemporary Intellectual Assessment: Theories, Tests, and Issues, 01 2005. [63] Marvin Minsky. Society of mind. Simon and Schuster, 1988. [64] May-Britt Moser, David C Rowland, and Edvard I Moser. Place cells, grid cells, and memory. Cold Spring Harbor perspectives in biology, 7(2):a021808, 2015. [65] Shane Mueller, Matt Jones, Brandon Minnery, Ph Julia, and M Hiland. The bica cog- nitive decathlon: A test suite for biologically-inspired cognitive agents. Proceedings of the 16th Conference on Behavior Representation in Modeling and Simulation, 2007. [66] A. Newell. You cant play 20 questions with nature and win: Projective comments on the papers of this symposium. 1973. [67] Behnam Neyshabur, Srinadh Bhojanapalli, David McAllester, and Nati Srebro. Ex- ploring generalization in deep learning. In Advances in Neural Information Process- ing Systems, pages 5947–5956, 2017. [68] Ian Osband, Yotam Doron, Matteo Hessel, John Aslanides, Eren Sezener, Andre Saraiva, Katrina McKinney, Tor Lattimore, Csaba Szepezvari, Satinder Singh, et al. Behaviour suite for reinforcement learning. arXiv preprint arXiv:1908.03568, 2019. 61 [69] A. E. Howe P. R. Cohen. How evaluation guides ai research: the message still counts more than the medium. AI Mag, page 35, 1988. [70] Charles Packer, Katelyn Gao, Jernej Kos, Philipp Kr¨ahenb¨uhl, Vladlen Koltun, and Dawn Xiaodong Song. Assessing generalization in deep reinforcement learning. ArXiv, 2018. [71] Diego Perez-Liebana, Katja Hofmann, Sharada Prasanna Mohanty, Noboru Sean Kuno, Andre Kramer, Sam Devlin, Raluca D. Gaina, and Daniel Ionita. The multi- agent reinforcement learning in malm (marl) competition. Technical report, 2019. [72] Diego Perez-Liebana, Jialin Liu, Ahmed Khalifa, Raluca D Gaina, Julian Togelius, and Simon M Lucas. General video game ai: a multi-track framework for evaluating agents, games and content generation algorithms. arXiv preprint arXiv:1802.10363, 2018. [73] Joelle Pineau. Reproducible, Reusable, and Robust Reinforcement Learning, 2018. Neural Information Processing Systems. [74] S. Pinker. The blank slate: The modern denial of human nature. Viking, New York, 2002. [75] David M. W. Powers. The total Turing test and the loebner prize. In New Methods in Language Processing and Computational Natural Language Learning, 1998. [76] Lowrey K. Todorov E. V. Rajeswaran, A. and S. M. Kakade. Towards generalization and simplicity in continuous control. 2017. [77] Fred Reed. Promise of AI not so bright, 2006. [78] Jean-Jacques Rousseau. Emile, or On Education. 1762. [79] & McClelland J.L. Rumelhart, D.E. Distributed memory and the representation of general and specific information. Journal of Experimental Psychology, page 159188, 1985. [80] P. Sanghi and D. L. Dowe. A computer program capable of passing iq tests. page 570575, 2003. [81] David Silver, Thomas Hubert, Julian Schrittwieser, Ioannis Antonoglou, Matthew Lai, Arthur Guez, Marc Lanctot, Laurent Sifre, Dharshan Kumaran, Thore Graepel, et al. Mastering chess and shogi by self-play with a general reinforcement learning algorithm. arXiv preprint arXiv:1712.01815, 2017. 
[82] David Silver, Julian Schrittwieser, Karen Simonyan, Ioannis Antonoglou, Aja Huang, Arthur Guez, Thomas Hubert, Lucas Baker, Matthew Lai, Adrian Bolton, et al. Mastering the game of go without human knowledge. Nature, 550(7676):354, 2017. [83] C. E. Spearman. ‘general intelligence’, objectively determined and measured. Amer- ican Journal of Psychology, page 201293, 1904. [84] C. E. Spearman. The Abilities of Man. Macmillan, London, 1927. 62 [85] Elizabeth S. Spelke and Katherine D. Kinzler. Core knowledge. Developmental science, pages 89–96, 2007. [86] Robert Sternberg. Culture and intelligence. The American psychologist, 59:325–38, 07 2004. [87] Robert Sternberg and Douglas Detterman. What is Intelligence? Contemporary Viewpoints on Its Nature and Definition. 1986. [88] Richard S. Sutton and Andrew G. Barto. Reinforcement Learning: An Introduction (Second Edition). MIT Press, Cambridge, MA, 2018. OpenAI Five, 2019. openai-five/ Accessed: 2019-09-30. [89] OpenAI team. https://openai.com/blog/ [90] OpenAI team. OpenAI Five Arena Results, 2019. https://arena.openai. com/#/results Accessed: 2019-09-30. [91] A. M. Turing. Computing machinery and intelligence. 1950. [92] Vladimir N. Vapnik. The Nature of Statistical Learning Theory. Springer-Verlag, Berlin, Heidelberg, 1995. [93] Oriol Vinyals, Timo Ewalds, Sergey Bartunov, Petko Georgiev, Alexander Sasha Vezhnevets, Michelle Yeo, Alireza Makhzani, Heinrich K¨uttler, John Agapiou, Ju- lian Schrittwieser, John Quan, Stephen Gaffney, Stig Petersen, Karen Simonyan, Tom Schaul, Hado van Hasselt, David Silver, Timothy P. Lillicrap, Kevin Calderone, Paul Keet, Anthony Brunasso, David Lawrence, Anders Ekermo, Jacob Repp, and Rodney Tsing. Starcraft ii: A new challenge for reinforcement learning. ArXiv, abs/1708.04782, 2017. [94] Alex Wang, Yada Pruksachatkun, Nikita Nangia, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R Bowman. Superglue: A stickier benchmark for general-purpose language understanding systems. 2019. [95] Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R Bowman. Glue: A multi-task benchmark and analysis platform for natural language understanding. 2018. [96] Rui Wang, Joel Lehman, Jeff Clune, and Kenneth O. Stanley. Paired open-ended trailblazer (poet): Endlessly generating increasingly complex and diverse learning environments and their solutions. ArXiv, abs/1901.01753, 2019. [97] David H Wolpert. What the no free lunch theorems really mean; how to improve search algorithms. [98] D.H. Wolpert and W.G. Macready. No free lunch theorems for optimization. IEEE Transactions on Evolutionary Computation, pages 67–82, 1997. [99] Stephen G. Wozniak. Three minutes with steve wozniak. PC World, 2007. [100] Shih-Ying Yang and Robert J Sternberg. Taiwanese chinese people’s conceptions of intelligence. Intelligence, 25(1):21–36, 1997. 63 [101] Amy Zhang, Nicolas Ballas, and Joelle Pineau. A dissection of overfitting and gen- eralization in continuous reinforcement learning. arXiv preprint arXiv:1806.07937, 2018. [102] Chiyuan Zhang, Samy Bengio, Moritz Hardt, Benjamin Recht, and Oriol Vinyals. Understanding deep learning requires rethinking generalization. 2017. 64
{ "id": "1806.10729" }
1911.02116
Unsupervised Cross-lingual Representation Learning at Scale
This paper shows that pretraining multilingual language models at scale leads to significant performance gains for a wide range of cross-lingual transfer tasks. We train a Transformer-based masked language model on one hundred languages, using more than two terabytes of filtered CommonCrawl data. Our model, dubbed XLM-R, significantly outperforms multilingual BERT (mBERT) on a variety of cross-lingual benchmarks, including +14.6% average accuracy on XNLI, +13% average F1 score on MLQA, and +2.4% F1 score on NER. XLM-R performs particularly well on low-resource languages, improving 15.7% in XNLI accuracy for Swahili and 11.4% for Urdu over previous XLM models. We also present a detailed empirical analysis of the key factors that are required to achieve these gains, including the trade-offs between (1) positive transfer and capacity dilution and (2) the performance of high and low resource languages at scale. Finally, we show, for the first time, the possibility of multilingual modeling without sacrificing per-language performance; XLM-R is very competitive with strong monolingual models on the GLUE and XNLI benchmarks. We will make our code, data and models publicly available.
http://arxiv.org/pdf/1911.02116
Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer, Veselin Stoyanov
cs.CL
ACL 2020 (+ updated results)
null
cs.CL
20191105
20200408
0 2 0 2 r p A 8 ] L C . s c [ 2 v 6 1 1 2 0 . 1 1 9 1 : v i X r a # Unsupervised Cross-lingual Representation Learning at Scale Alexis Conneau∗ Kartikay Khandelwal∗ # Naman Goyal Vishrav Chaudhary Guillaume Wenzek Francisco Guzm´an Edouard Grave Myle Ott Luke Zettlemoyer Veselin Stoyanov # Facebook AI # Abstract This paper shows that pretraining multilingual language models at scale leads to significant performance gains for a wide range of cross- lingual transfer tasks. We train a Transformer- based masked language model on one hundred languages, using more than two terabytes of fil- tered CommonCrawl data. Our model, dubbed XLM-R, significantly outperforms multilingual BERT (mBERT) on a variety of cross-lingual benchmarks, including +14.6% average accu- racy on XNLI, +13% average F1 score on MLQA, and +2.4% F1 score on NER. XLM-R performs particularly well on low-resource lan- guages, improving 15.7% in XNLI accuracy for Swahili and 11.4% for Urdu over previ- ous XLM models. We also present a detailed empirical analysis of the key factors that are required to achieve these gains, including the trade-offs between (1) positive transfer and ca- pacity dilution and (2) the performance of high and low resource languages at scale. Finally, we show, for the first time, the possibility of multilingual modeling without sacrificing per- language performance; XLM-R is very compet- itive with strong monolingual models on the GLUE and XNLI benchmarks. We will make our code, data and models publicly available.1 # Introduction The goal of this paper is to improve cross-lingual language understanding (XLU), by carefully study- ing the effects of training unsupervised cross- lingual representations at a very large scale. We present XLM-R a transformer-based multilingual masked language model pre-trained on text in 100 languages, which obtains state-of-the-art perfor- mance on cross-lingual classification, sequence la- beling and question answering. 1 ∗Equal contribution. Correspondence to {aconneau,kartikayk}@fb.com https://github.com/facebookresearch/(fairseq-py,pytext,xlm) Multilingual masked language models (MLM) like mBERT (Devlin et al., 2018) and XLM (Lam- ple and Conneau, 2019) have pushed the state- of-the-art on cross-lingual understanding tasks by jointly pretraining large Transformer mod- els (Vaswani et al., 2017) on many languages. These models allow for effective cross-lingual transfer, as seen in a number of benchmarks in- cluding cross-lingual natural language inference (Bowman et al., 2015; Williams et al., 2017; Con- neau et al., 2018), question answering (Rajpurkar et al., 2016; Lewis et al., 2019), and named en- tity recognition (Pires et al., 2019; Wu and Dredze, 2019). However, all of these studies pre-train on Wikipedia, which provides a relatively limited scale especially for lower resource languages. In this paper, we first present a comprehensive analysis of the trade-offs and limitations of multi- lingual language models at scale, inspired by re- cent monolingual scaling efforts (Liu et al., 2019). We measure the trade-off between high-resource and low-resource languages and the impact of lan- guage sampling and vocabulary size. The experi- ments expose a trade-off as we scale the number of languages for a fixed model capacity: more lan- guages leads to better cross-lingual performance on low-resource languages up until a point, after which the overall performance on monolingual and cross-lingual benchmarks degrades. 
We refer to this tradeoff as the curse of multilinguality, and show that it can be alleviated by simply increas- ing model capacity. We argue, however, that this remains an important limitation for future XLU systems which may aim to improve performance with more modest computational budgets. Our best model XLM-RoBERTa (XLM-R) out- performs mBERT on cross-lingual classification by up to 23% accuracy on low-resource languages. It outperforms the previous state of the art by 5.1% av- erage accuracy on XNLI, 2.42% average F1-score on Named Entity Recognition, and 9.1% average F1-score on cross-lingual Question Answering. We also evaluate monolingual fine tuning on the GLUE and XNLI benchmarks, where XLM-R obtains re- sults competitive with state-of-the-art monolingual models, including RoBERTa (Liu et al., 2019). These results demonstrate, for the first time, that it is possible to have a single large model for all languages, without sacrificing per-language perfor- mance. We will make our code, models and data publicly available, with the hope that this will help research in multilingual NLP and low-resource lan- guage understanding. # 2 Related Work From pretrained word embeddings (Mikolov et al., 2013b; Pennington et al., 2014) to pretrained con- textualized representations (Peters et al., 2018; Schuster et al., 2019) and transformer based lan- guage models (Radford et al., 2018; Devlin et al., 2018), unsupervised representation learning has significantly improved the state of the art in nat- ural language understanding. Parallel work on cross-lingual understanding (Mikolov et al., 2013a; Schuster et al., 2019; Lample and Conneau, 2019) extends these systems to more languages and to the cross-lingual setting in which a model is learned in one language and applied in other languages. Most recently, Devlin et al. (2018) and Lam- ple and Conneau (2019) introduced mBERT and XLM - masked language models trained on multi- ple languages, without any cross-lingual supervi- sion. Lample and Conneau (2019) propose transla- tion language modeling (TLM) as a way to leverage parallel data and obtain a new state of the art on the cross-lingual natural language inference (XNLI) benchmark (Conneau et al., 2018). They further show strong improvements on unsupervised ma- chine translation and pretraining for sequence gen- eration. Wu et al. (2019) shows that monolingual BERT representations are similar across languages, explaining in part the natural emergence of multi- linguality in bottleneck architectures. Separately, Pires et al. (2019) demonstrated the effectiveness of multilingual models like mBERT on sequence la- beling tasks. Huang et al. (2019) showed gains over XLM using cross-lingual multi-task learning, and Singh et al. (2019) demonstrated the efficiency of cross-lingual data augmentation for cross-lingual NLI. However, all of this work was at a relatively modest scale, in terms of the amount of training data, as compared to our approach. The benefits of scaling language model pretrain- ing by increasing the size of the model as well as the training data has been extensively studied in the literature. For the monolingual case, Jozefowicz et al. (2016) show how large-scale LSTM models can obtain much stronger performance on language modeling benchmarks when trained on billions of tokens. GPT (Radford et al., 2018) also highlights the importance of scaling the amount of data and RoBERTa (Liu et al., 2019) shows that training BERT longer on more data leads to significant boost in performance. 
Inspired by RoBERTa, we show that mBERT and XLM are undertuned, and that simple improvements in the learning procedure of unsupervised MLM leads to much better perfor- mance. We train on cleaned CommonCrawls (Wen- zek et al., 2019), which increase the amount of data for low-resource languages by two orders of magni- tude on average. Similar data has also been shown to be effective for learning high quality word em- beddings in multiple languages (Grave et al., 2018). Several efforts have trained massively multilin- gual machine translation models from large par- allel corpora. They uncover the high and low re- source trade-off and the problem of capacity dilu- tion (Johnson et al., 2017; Tan et al., 2019). The work most similar to ours is Arivazhagan et al. (2019), which trains a single model in 103 lan- guages on over 25 billion parallel sentences. Sid- dhant et al. (2019) further analyze the representa- tions obtained by the encoder of a massively multi- lingual machine translation system and show that it obtains similar results to mBERT on cross-lingual NLI. Our work, in contrast, focuses on the unsuper- vised learning of cross-lingual representations and their transfer to discriminative tasks. # 3 Model and Data In this section, we present the training objective, languages, and data we use. We follow the XLM approach (Lample and Conneau, 2019) as closely as possible, only introducing changes that improve performance at scale. Masked Language Models. We use a Trans- former model (Vaswani et al., 2017) trained with the multilingual MLM objective (Devlin et al., 2018; Lample and Conneau, 2019) using only monolingual data. We sample streams of text from each language and train the model to predict the masked tokens in the input. We apply subword tok- (in & © CommonCrawl © Wikipedia GB) # Dataset size Figure 1: Amount of data in GiB (log-scale) for the 88 languages that appear in both the Wiki-100 corpus used for mBERT and XLM-100, and the CC-100 used for XLM-R. CC-100 increases the amount of data by several orders of magnitude, in particular for low-resource languages. enization directly on raw text data using Sentence Piece (Kudo and Richardson, 2018) with a unigram language model (Kudo, 2018). We sample batches from different languages using the same sampling distribution as Lample and Conneau (2019), but with α = 0.3. Unlike Lample and Conneau (2019), we do not use language embeddings, which allows our model to better deal with code-switching. We use a large vocabulary size of 250K with a full soft- max and train two different models: XLM-R Base (L = 12, H = 768, A = 12, 270M params) and XLM-R (L = 24, H = 1024, A = 16, 550M params). For all of our ablation studies, we use a BERTBase architec- ture with a vocabulary of 150K tokens. Appendix B goes into more details about the architecture of the different models referenced in this paper. Scaling to a hundred languages. XLM-R is trained on 100 languages; we provide a full list of languages and associated statistics in Appendix A. Figure 1 specifies the iso codes of 88 languages that are shared across XLM-R and XLM-100, the model from Lample and Conneau (2019) trained on Wikipedia text in 100 languages. Compared to previous work, we replace some languages with more commonly used ones such as romanized Hindi and traditional Chinese. In our ablation studies, we always include the 7 lan- guages for which we have classification and se- quence labeling evaluation benchmarks: English, French, German, Russian, Chinese, Swahili and Urdu. 
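As a concrete illustration of the batch sampling scheme above, the following is a minimal sketch of the exponentially smoothed language sampling distribution with α = 0.3. The per-language sentence counts below are placeholder values for illustration, not the actual corpus statistics.

```python
import numpy as np

def language_sampling_probs(sentence_counts, alpha=0.3):
    """Exponentially smoothed sampling rates over languages.

    Each language's base rate is proportional to its number of sentences;
    raising the rates to the power alpha < 1 up-samples low-resource
    languages relative to their natural frequency.
    """
    counts = np.array(list(sentence_counts.values()), dtype=np.float64)
    p = counts / counts.sum()   # natural frequency of each language
    q = p ** alpha
    q = q / q.sum()             # smoothed sampling distribution
    return dict(zip(sentence_counts.keys(), q))

# Placeholder sentence counts (illustrative only).
counts = {"en": 3_000_000_000, "fr": 400_000_000, "sw": 10_000_000, "ur": 8_000_000}
probs = language_sampling_probs(counts, alpha=0.3)
# The language of each training batch is then drawn as:
# lang = np.random.choice(list(probs.keys()), p=list(probs.values()))
```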
We chose this set as it covers a suitable range of language families and includes low-resource lan- guages such as Swahili and Urdu. We also consider larger sets of 15, 30, 60 and all 100 languages. When reporting results on high-resource and low- resource, we refer to the average of English and French results, and the average of Swahili and Urdu results respectively. Scaling the Amount of Training Data. Follow- ing Wenzek et al. (2019) 2, we build a clean Com- monCrawl Corpus in 100 languages. We use an internal language identification model in combina- tion with the one from fastText (Joulin et al., 2017). We train language models in each language and use it to filter documents as described in Wenzek et al. (2019). We consider one CommonCrawl dump for English and twelve dumps for all other languages, which significantly increases dataset sizes, espe- cially for low-resource languages like Burmese and Swahili. Figure 1 shows the difference in size between the Wikipedia Corpus used by mBERT and XLM- 100, and the CommonCrawl Corpus we use. As we show in Section 5.3, monolingual Wikipedia corpora are too small to enable unsupervised rep- resentation learning. Based on our experiments, we found that a few hundred MiB of text data is usually a minimal size for learning a BERT model. # 4 Evaluation We consider four evaluation benchmarks. For cross- lingual understanding, we use cross-lingual natural language inference, named entity recognition, and question answering. We use the GLUE benchmark to evaluate the English performance of XLM-R and compare it to other state-of-the-art models. Cross-lingual Natural Language Inference (XNLI). The XNLI dataset comes with ground- truth dev and test sets in 15 languages, and a ground-truth English training set. The training set has been machine-translated to the remaining 14 languages, providing synthetic training data for these languages as well. We evaluate our model on cross-lingual transfer from English to other lan- 2 # https://github.com/facebookresearch/cc_net https://github.com/facebookresearch/cc net guages. We also consider three machine translation baselines: (i) translate-test: dev and test sets are machine-translated to English and a single English model is used (ii) translate-train (per-language): is machine-translated the English training set to each language and we fine-tune a multiligual model on each training set (iii) translate-train-all (multi-language): we fine-tune a multilingual model on the concatenation of all training sets from translate-train. For the translations, we use the official data provided by the XNLI project. Named Entity Recognition. For NER, we con- sider the CoNLL-2002 (Sang, 2002) and CoNLL- 2003 (Tjong Kim Sang and De Meulder, 2003) datasets in English, Dutch, Spanish and German. We fine-tune multilingual models either (1) on the English set to evaluate cross-lingual transfer, (2) on each set to evaluate per-language performance, or (3) on all sets to evaluate multilingual learning. We report the F1 score, and compare to baselines from Lample et al. (2016) and Akbik et al. (2018). Cross-lingual Question Answering. We use the MLQA benchmark from Lewis et al. (2019), which extends the English SQuAD benchmark to Spanish, German, Arabic, Hindi, Vietnamese and Chinese. We report the F1 score as well as the exact match (EM) score for cross-lingual transfer from English. GLUE Benchmark. 
Finally, we evaluate the En- glish performance of our model on the GLUE benchmark (Wang et al., 2018) which gathers mul- tiple classification tasks, such as MNLI (Williams et al., 2017), SST-2 (Socher et al., 2013), or QNLI (Rajpurkar et al., 2018). We use BERTLarge and RoBERTa as baselines. # 5 Analysis and Results In this section, we perform a comprehensive anal- ysis of multilingual masked language models. We conduct most of the analysis on XNLI, which we found to be representative of our findings on other tasks. We then present the results of XLM-R on cross-lingual understanding and GLUE. Finally, we compare multilingual and monolingual models, and present results on low-resource languages. # Improving and Understanding Multilingual Masked Language Models 5.1 Much of the work done on understanding the cross- lingual effectiveness of mBERT or XLM (Pires et al., 2019; Wu and Dredze, 2019; Lewis et al., 2019) has focused on analyzing the performance of fixed pretrained models on downstream tasks. In this section, we present a comprehensive study of different factors that are important to pretraining large scale multilingual models. We highlight the trade-offs and limitations of these models as we scale to one hundred languages. Transfer-dilution Trade-off and Curse of Mul- tilinguality. Model capacity (i.e. the number of parameters in the model) is constrained due to prac- tical considerations such as memory and speed dur- ing training and inference. For a fixed sized model, the per-language capacity decreases as we increase the number of languages. While low-resource lan- guage performance can be improved by adding sim- ilar higher-resource languages during pretraining, the overall downstream performance suffers from this capacity dilution (Arivazhagan et al., 2019). Positive transfer and capacity dilution have to be traded off against each other. We illustrate this trade-off in Figure 2, which shows XNLI performance vs the number of lan- guages the model is pretrained on. Initially, as we go from 7 to 15 languages, the model is able to take advantage of positive transfer which improves performance, especially on low resource languages. Beyond this point the curse of multilinguality kicks in and degrades performance across all languages. Specifically, the overall XNLI accuracy decreases from 71.8% to 67.7% as we go from XLM-7 to XLM-100. The same trend can be observed for models trained on the larger CommonCrawl Cor- pus. The issue is even more prominent when the ca- pacity of the model is small. To show this, we pretrain models on Wikipedia Data in 7, 30 and 100 languages. As we add more languages, we make the Transformer wider by increasing the hid- den size from 768 to 960 to 1152. In Figure 4, we show that the added capacity allows XLM-30 to be on par with XLM-7, thus overcoming the curse of multilinguality. The added capacity for XLM-100, however, is not enough and it still lags behind due to higher vocabulary dilution (recall from Section 3 that we used a fixed vocabulary size of 150K for all models). High-resource vs Low-resource Trade-off. The allocation of the model capacity across languages is controlled by several parameters: the training set size, the size of the shared subword 80 70 60 50 40 7 15 30 60 100 > E 3 < 7 15 30 60 100 Number of languages # ELowres. # MH Highres. # 0 All ey S 70} 60} 50; 40} Low res. Highres. All B Wikipedia & CommonCraw! 
Figure 2: The transfer-interference trade-off: low-resource languages benefit from scaling to more languages, until dilution (interference) kicks in and degrades overall performance. Figure 3: Wikipedia versus CommonCrawl: an XLM-7 obtains significantly better performance when trained on CC, in particular on low-resource languages. Figure 4: Adding more capacity to the model alleviates the curse of multilinguality, but it remains an issue for models of moderate size. Figure 5: On the high-resource versus low-resource trade-off: impact of batch language sampling for XLM-100. Figure 6: On the impact of vocabulary size at fixed capacity and with increasing capacity for XLM-100. Figure 7: On the impact of large-scale training, and of simplifying preprocessing from BPE with tokenization to SPM on raw text data.

vocabulary, and the rate at which we sample training examples from each language. We study the effect of sampling on the performance of high-resource (English and French) and low-resource (Swahili and Urdu) languages for an XLM-100 model trained on Wikipedia (we observe a similar trend for the construction of the subword vocabulary). Specifically, we investigate the impact of varying the α parameter, which controls the exponential smoothing of the language sampling rate. Similar to Lample and Conneau (2019), we use a sampling rate proportional to the number of sentences in each corpus. Models trained with higher values of α see batches of high-resource languages more often. Figure 5 shows that the higher the value of α, the better the performance on high-resource languages, and vice versa. When considering overall performance, we found 0.3 to be an optimal value for α, and use this for XLM-R.

Importance of Capacity and Vocabulary. In previous sections and in Figure 4, we showed the importance of scaling the model size as we increase the number of languages. Similar to the overall model size, we argue that scaling the size of the shared vocabulary (the vocabulary capacity) can improve the performance of multilingual models on downstream tasks. To illustrate this effect, we train XLM-100 models on Wikipedia data with different vocabulary sizes. We keep the overall number of parameters constant by adjusting the width of the transformer. Figure 6 shows that even with a fixed capacity, we observe a 2.8% increase in XNLI average accuracy as we increase the vocabulary size from 32K to 256K. This suggests that multilingual models can benefit from allocating a higher proportion of the total number of parameters to the embedding layer, even though this reduces the size of the Transformer. For simplicity, and given the softmax computational constraints, we use a vocabulary of 250k for XLM-R.

We further illustrate the importance of this parameter by training three models with the same transformer architecture (BERTBase) but with different vocabulary sizes: 128K, 256K and 512K. We observe more than 3% gains in overall accuracy on XNLI by simply increasing the vocabulary size from 128K to 512K.

Larger-scale Datasets and Training.
As shown in Figure 1, the CommonCrawl Corpus that we col- lected has significantly more monolingual data than the previously used Wikipedia corpora. Figure 3 shows that for the same BERTBase architecture, all models trained on CommonCrawl obtain signifi- cantly better performance. Apart from scaling the training data, Liu et al. (2019) also showed the benefits of training MLMs longer. In our experiments, we observed similar effects of large-scale training, such as increasing batch size (see Figure 7) and training time, on model performance. Specifically, we found that using validation perplexity as a stopping criterion for pretraining caused the multilingual MLM in Lample and Conneau (2019) to be under-tuned. In our experience, performance on downstream tasks continues to improve even after validation perplexity has plateaued. Combining this observa- tion with our implementation of the unsupervised XLM-MLM objective, we were able to improve the performance of Lample and Conneau (2019) from 71.3% to more than 75% average accuracy on XNLI, which was on par with their supervised translation language modeling (TLM) objective. Based on these results, and given our focus on unsupervised learning, we decided to not use the supervised TLM objective for training our models. Simplifying Multilingual Tokenization with Sentence Piece. The different language-specific tokenization tools used by mBERT and XLM-100 make these models more difficult to use on raw text. Instead, we train a Sentence Piece model (SPM) and apply it directly on raw text data for all languages. We did not observe any loss in per- formance for models trained with SPM when com- pared to models trained with language-specific pre- processing and byte-pair encoding (see Figure 7) and hence use SPM for XLM-R. # 5.2 Cross-lingual Understanding Results Based on these results, we adapt the setting of Lam- ple and Conneau (2019) and use a large Trans- former model with 24 layers and 1024 hidden states, with a 250k vocabulary. We use the multi- lingual MLM loss and train our XLM-R model for 1.5 Million updates on five-hundred 32GB Nvidia V100 GPUs with a batch size of 8192. We leverage the SPM-preprocessed text data from Common- Crawl in 100 languages and sample languages with α = 0.3. In this section, we show that it out- performs all previous techniques on cross-lingual benchmarks while getting performance on par with RoBERTa on the GLUE benchmark. XNLI. Table 1 shows XNLI results and adds some additional details: (i) the number of models the approach induces (#M), (ii) the data on which the model was trained (D), and (iii) the number of languages the model was pretrained on (#lg). As we show in our results, these parameters signifi- cantly impact performance. Column #M specifies whether model selection was done separately on the dev set of each language (N models), or on the joint dev set of all the languages (single model). We observe a 0.6 decrease in overall accuracy when we go from N models to a single model - going from 71.3 to 70.7. We encourage the community to adopt this setting. For cross-lingual transfer, while this approach is not fully zero-shot transfer, we argue that in real applications, a small amount of supervised data is often available for validation in each language. XLM-R sets a new state of the art on XNLI. On cross-lingual transfer, XLM-R obtains 80.9% accu- racy, outperforming the XLM-100 and mBERT open-source models by 10.2% and 14.6% aver- age accuracy. 
On the Swahili and Urdu low- resource languages, XLM-R outperforms XLM-100 by 15.7% and 11.4%, and mBERT by 23.5% and 15.8%. While XLM-R handles 100 languages, we also show that it outperforms the former state of the art Unicoder (Huang et al., 2019) and XLM (MLM+TLM), which handle only 15 languages, by 5.5% and 5.8% average accuracy respectively. Us- ing the multilingual training of translate-train-all, XLM-R further improves performance and reaches 83.6% accuracy, a new overall state of the art for XNLI, outperforming Unicoder by 5.1%. Multi- lingual training is similar to practical applications where training sets are available in various lan- guages for the same task. In the case of XNLI, datasets have been translated, and translate-train- all can be seen as some form of cross-lingual data augmentation (Singh et al., 2019), similar to back- translation (Xie et al., 2019). Named Entity Recognition. In Table 2, we re- port results of XLM-R and mBERT on CoNLL- 2002 and CoNLL-2003. We consider the LSTM + CRF approach from Lample et al. (2016) and the Flair model from Akbik et al. (2018) as base- lines. We evaluate the performance of the model Model D #M #lg en fr es de el bg ru tr ar vi th zh hi sw ur Fine-tune multilingual model on English training set (Cross-lingual Transfer) Lample and Conneau (2019) Wiki+MT Wiki+MT Huang et al. (2019) Wiki Devlin et al. (2018) Wiki Lample and Conneau (2019) Wiki Lample and Conneau (2019) XLM-RBase CC XLM-R CC N N N N 1 1 1 15 15 102 100 100 100 100 85.0 85.1 82.1 83.7 83.2 85.8 89.1 78.7 79.0 73.8 76.2 76.7 79.7 84.1 78.9 79.4 74.3 76.6 77.7 80.7 85.1 77.8 77.8 71.1 73.7 74.0 78.7 83.9 76.6 77.2 66.4 72.4 72.7 77.5 82.9 77.4 77.2 68.9 73.0 74.1 79.6 84.0 75.3 76.3 69.0 72.1 72.7 78.1 81.2 72.5 72.8 61.6 68.1 68.7 74.2 79.6 73.1 73.5 64.9 68.4 68.6 73.8 79.8 76.1 76.4 69.5 72.0 72.9 76.5 80.8 73.2 73.6 55.8 68.2 68.9 74.6 78.1 76.5 76.2 69.3 71.5 72.5 76.7 80.2 69.6 69.4 60.0 64.5 65.6 72.4 76.9 68.4 69.7 50.4 58.0 58.2 66.5 73.9 67.3 66.7 58.0 62.4 62.4 68.3 73.8 Translate everything to English and use English-only model (TRANSLATE-TEST) BERT-en RoBERTa Wiki Wiki+CC 1 1 1 1 88.8 91.3 81.4 82.9 82.3 84.3 80.1 81.2 80.3 81.7 80.9 83.1 76.2 78.3 76.0 76.8 75.4 76.6 72.0 74.2 71.9 74.1 75.6 77.5 70.0 70.9 65.8 66.7 65.8 66.8 Fine-tune multilingual model on each training set (TRANSLATE-TRAIN) Lample and Conneau (2019) Wiki N 100 82.9 77.6 77.9 77.9 77.1 75.7 75.5 72.6 71.2 75.8 73.1 76.2 70.4 66.5 62.4 Fine-tune multilingual model on all training sets (TRANSLATE-TRAIN-ALL) Lample and Conneau (2019)† Wiki+MT Wiki+MT Huang et al. (2019) Wiki Lample and Conneau (2019) XLM-RBase CC XLM-R CC 1 1 1 1 1 15 15 100 100 100 85.0 85.6 84.5 85.4 89.1 80.8 81.1 80.1 81.4 85.1 81.3 82.3 81.3 82.2 86.6 80.3 80.9 79.3 80.3 85.7 79.1 79.5 78.6 80.4 85.3 80.9 81.4 79.4 81.3 85.9 78.3 79.7 77.5 79.7 83.5 75.6 76.8 75.2 78.6 83.2 77.6 78.2 75.6 77.3 83.1 78.5 77.9 78.3 79.7 83.7 76.0 77.1 75.7 77.9 81.5 79.5 80.5 78.3 80.2 83.7 72.9 73.4 72.1 76.1 81.6 72.8 73.8 69.2 73.1 78.0 68.5 69.6 67.7 73.0 78.1 Avg 75.1 75.4 66.3 71.3 70.7 76.2 80.9 76.2 77.8 74.2 77.8 78.5 76.9 79.1 83.6 Table 1: Results on cross-lingual classification. We report the accuracy on each of the 15 XNLI languages and the average accuracy. We specify the dataset D used for pretraining, the number of models #M the approach requires and the number of languages #lg the model handles. Our XLM-R results are averaged over five different seeds. 
We show that using the translate-train-all approach which leverages training sets from multiple languages, XLM-R obtains a new state of the art on XNLI of 83.6% average accuracy. Results with † are from Huang et al. (2019). Model train #M en nl es de Avg Lample et al. (2016) Akbik et al. (2018) each each N N 90.74 93.18 81.74 90.44 85.75 - 78.76 88.27 84.25 - mBERT† each en N 1 91.97 91.97 90.94 77.57 87.38 74.96 82.82 69.56 88.28 78.52 XLM-RBase each en all N 1 1 92.25 92.25 91.08 90.39 78.08 89.09 87.99 76.53 87.28 84.60 69.60 83.17 88.81 79.11 87.66 XLM-R each en all N 1 1 92.92 92.92 92.00 92.53 80.80 91.60 89.72 78.64 89.52 85.81 71.40 84.60 90.24 80.94 89.43 Table 2: Results on named entity recognition on CoNLL-2002 and CoNLL-2003 (F1 score). Results with † are from Wu and Dredze (2019). Note that mBERT and XLM-R do not use a linear-chain CRF, as opposed to Akbik et al. (2018) and Lample et al. (2016). cross-lingual transfer approach by 8.49%. Question Answering. We also obtain new state of the art results on the MLQA cross-lingual ques- tion answering benchmark, introduced by Lewis et al. (2019). We follow their procedure by training on the English training data and evaluating on the 7 languages of the dataset. We report results in Table 3. XLM-R obtains F1 and accuracy scores of 70.7% and 52.7% while the previous state of the art was 61.6% and 43.5%. XLM-R also outperforms mBERT by 13.0% F1-score and 11.1% accuracy. It even outperforms BERT-Large on English, con- firming its strong monolingual performance. # 5.3 Multilingual versus Monolingual on each of the target languages in three different settings: (i) train on English data only (en) (ii) train on data in target language (each) (iii) train on data in all languages (all). Results of mBERT are re- ported from Wu and Dredze (2019). Note that we do not use a linear-chain CRF on top of XLM-R and mBERT representations, which gives an advan- tage to Akbik et al. (2018). Without the CRF, our XLM-R model still performs on par with the state of the art, outperforming Akbik et al. (2018) on Dutch by 2.09 points. On this task, XLM-R also outperforms mBERT by 2.42 F1 on average for cross-lingual transfer, and 1.86 F1 when trained on each language. Training on all languages leads to an average F1 score of 89.43%, outperforming In this section, we present results of multilingual XLM models against monolingual BERT models. GLUE: XLM-R versus RoBERTa. Our goal is to obtain a multilingual model with strong perfor- mance on both, cross-lingual understanding tasks as well as natural language understanding tasks for each language. To that end, we evaluate XLM- R on the GLUE benchmark. We show in Table 4, that XLM-R obtains better average dev performance than BERTLarge by 1.6% and reaches performance on par with XLNetLarge. The RoBERTa model out- performs XLM-R by only 1.0% on average. 
We believe future work can reduce this gap even fur- ther by alleviating the curse of multilinguality and Model train #lgs en es de ar hi vi zh Avg BERT-Large† mBERT† XLM-15† XLM-RBase XLM-R en en en en en 1 102 15 100 100 80.2 / 67.4 77.7 / 65.2 74.9 / 62.4 77.1 / 64.6 80.6 / 67.8 - 64.3 / 46.6 68.0 / 49.8 67.4 / 49.6 74.1 / 56.0 - 57.9 / 44.3 62.2 / 47.6 60.9 / 46.7 68.5 / 53.6 - 45.7 / 29.8 54.8 / 36.3 54.9 / 36.6 63.1 / 43.5 - 43.8 / 29.7 48.8 / 27.3 59.4 / 42.9 69.2 / 51.6 - 57.1 / 38.6 61.4 / 41.8 64.5 / 44.7 71.3 / 50.9 - 57.5 / 37.3 61.1 / 39.6 61.8 / 39.3 68.0 / 45.4 - 57.7 / 41.6 61.6 / 43.5 63.7 / 46.3 70.7 / 52.7 Table 3: Results on MLQA question answering We report the F1 and EM (exact match) scores for zero-shot classification where models are fine-tuned on the English Squad dataset and evaluated on the 7 languages of MLQA. Results with † are taken from the original MLQA paper Lewis et al. (2019). vocabulary dilution. These results demonstrate the possibility of learning one model for many lan- guages while maintaining strong performance on per-language downstream tasks. Model #lgs MNLI-m/mm QNLI QQP SST MRPC STS-B Avg BERTLarge XLNetLarge RoBERTa† XLM-R † † 1 1 1 100 86.6/- 89.8/- 90.2/90.2 88.9/89.0 92.3 93.9 94.7 93.8 91.3 91.8 92.2 92.3 93.2 95.6 96.4 95.0 88.0 89.2 90.9 89.5 90.0 91.8 92.4 91.2 90.2 92.0 92.8 91.8 Table 4: GLUE dev results. Results with † are from Liu et al. (2019). We compare the performance of XLM- R to BERTLarge, XLNet and RoBERTa on the English GLUE benchmark. XNLI: XLM versus BERT. A recurrent criti- cism against multilingual models is that they obtain worse performance than their monolingual coun- terparts. In addition to the comparison of XLM-R and RoBERTa, we provide the first comprehen- sive study to assess this claim on the XNLI bench- mark. We extend our comparison between multilin- gual XLM models and monolingual BERT models on 7 languages and compare performance in Ta- ble 5. We train 14 monolingual BERT models on Wikipedia and CommonCrawl (capped at 60 GiB), and two XLM-7 models. We increase the vocab- ulary size of the multilingual model for a better comparison. We found that multilingual models can outperform their monolingual BERT counter- parts. Specifically, in Table 5, we show that for cross-lingual transfer, monolingual baselines out- perform XLM-7 for both Wikipedia and CC by 1.6% and 1.3% average accuracy. However, by making use of multilingual training (translate-train- all) and leveraging training sets coming from mul- tiple languages, XLM-7 can outperform the BERT models: our XLM-7 trained on CC obtains 80.0% average accuracy on the 7 languages, while the average performance of BERT models trained on CC is 77.5%. This is a surprising result that shows that the capacity of multilingual models to leverage training data coming from multiple languages for a particular task can overcome the capacity dilution problem to obtain better overall performance. Model D #vocab en fr de ru zh sw ur Avg Monolingual baselines BERT Wiki CC 40k 40k 84.5 86.7 78.6 81.2 80.0 81.2 75.5 78.2 77.7 79.5 60.1 70.8 57.3 65.1 73.4 77.5 Multilingual models (cross-lingual transfer) XLM-7 Wiki CC 150k 150k 82.3 85.7 76.8 78.6 74.7 79.5 72.5 76.4 73.1 74.8 60.8 71.2 62.3 66.9 71.8 76.2 Multilingual models (translate-train-all) XLM-7 Wiki CC 150k 150k 84.6 87.2 80.1 82.5 80.2 82.9 75.7 79.7 78 80.4 68.7 75.7 66.7 71.5 76.3 80.0 Table 5: Multilingual versus monolingual models (BERT-BASE). 
We compare the performance of mono- lingual models (BERT) versus multilingual models (XLM) on seven languages, using a BERT-BASE archi- tecture. We choose a vocabulary size of 40k and 150k for monolingual and multilingual models. # 5.4 Representation Learning for Low-resource Languages We observed in Table 5 that pretraining on Wikipedia for Swahili and Urdu performed sim- ilarly to a randomly initialized model; most likely due to the small size of the data for these languages. On the other hand, pretraining on CC improved performance by up to 10 points. This confirms our assumption that mBERT and XLM-100 rely heav- ily on cross-lingual transfer but do not model the low-resource languages as well as XLM-R. Specifi- cally, in the translate-train-all setting, we observe that the biggest gains for XLM models trained on CC, compared to their Wikipedia counterparts, are on low-resource languages; 7% and 4.8% improve- ment on Swahili and Urdu respectively. # 6 Conclusion In this work, we introduced XLM-R, our new state of the art multilingual masked language model trained on 2.5 TB of newly created clean Com- monCrawl data in 100 languages. We show that it provides strong gains over previous multilingual models like mBERT and XLM on classification, sequence labeling and question answering. We ex- posed the limitations of multilingual MLMs, in particular by uncovering the high-resource versus low-resource trade-off, the curse of multilinguality and the importance of key hyperparameters. We also expose the surprising effectiveness of multilin- gual models over monolingual models, and show strong improvements on low-resource languages. # References Alan Akbik, Duncan Blythe, and Roland Vollgraf. 2018. Contextual string embeddings for sequence labeling. In COLING, pages 1638–1649. Naveen Arivazhagan, Ankur Bapna, Orhan Firat, Dmitry Lepikhin, Melvin Johnson, Maxim Krikun, Mia Xu Chen, Yuan Cao, George Foster, Colin Cherry, et al. 2019. Massively multilingual neural machine translation in the wild: Findings and chal- lenges. arXiv preprint arXiv:1907.05019. Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. A large anno- tated corpus for learning natural language inference. In EMNLP. Alexis Conneau, Ruty Rinott, Guillaume Lample, Ad- ina Williams, Samuel R. Bowman, Holger Schwenk, and Veselin Stoyanov. 2018. Xnli: Evaluating cross- lingual sentence representations. In EMNLP. Asso- ciation for Computational Linguistics. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understand- ing. NAACL. Edouard Grave, Piotr Bojanowski, Prakhar Gupta, Ar- mand Joulin, and Tomas Mikolov. 2018. Learning word vectors for 157 languages. In LREC. Haoyang Huang, Yaobo Liang, Nan Duan, Ming Gong, Linjun Shou, Daxin Jiang, and Ming Zhou. 2019. Unicoder: A universal language encoder by pre- training with multiple cross-lingual tasks. ACL. Melvin Johnson, Mike Schuster, Quoc V Le, Maxim Krikun, Yonghui Wu, Zhifeng Chen, Nikhil Thorat, Fernanda Vi´egas, Martin Wattenberg, Greg Corrado, et al. 2017. Googles multilingual neural machine translation system: Enabling zero-shot translation. TACL, 5:339–351. and Piotr Bo- janowski Tomas Mikolov. 2017. Bag of tricks for efficient text classification. EACL 2017, page 427. Rafal Jozefowicz, Oriol Vinyals, Mike Schuster, Noam Exploring arXiv preprint Shazeer, and Yonghui Wu. 2016. the limits of language modeling. arXiv:1602.02410. Taku Kudo. 2018. 
Subword regularization: Improving neural network translation models with multiple sub- word candidates. In ACL, pages 66–75. Taku Kudo and John Richardson. 2018. Sentencepiece: A simple and language independent subword tok- enizer and detokenizer for neural text processing. EMNLP. Guillaume Lample, Miguel Ballesteros, Sandeep Sub- ramanian, Kazuya Kawakami, and Chris Dyer. 2016. Neural architectures for named entity recognition. In NAACL, pages 260–270, San Diego, California. Association for Computational Linguistics. Guillaume Lample and Alexis Conneau. 2019. Cross- lingual language model pretraining. NeurIPS. Patrick Lewis, Barlas O˘guz, Ruty Rinott, Sebastian Riedel, and Holger Schwenk. 2019. Mlqa: Eval- uating cross-lingual extractive question answering. arXiv preprint arXiv:1910.07475. Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Man- dar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized BERT pretraining ap- proach. arXiv preprint arXiv:1907.11692. Tomas Mikolov, Quoc V Le, and Ilya Sutskever. 2013a. Exploiting similarities among languages for ma- chine translation. arXiv preprint arXiv:1309.4168. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Cor- rado, and Jeff Dean. 2013b. Distributed representa- tions of words and phrases and their compositional- ity. In NIPS, pages 3111–3119. Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. Glove: Global vectors for word rep- resentation. In EMNLP, pages 1532–1543. Matthew E Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word repre- sentations. NAACL. Telmo Pires, Eva Schlinger, and Dan Garrette. 2019. How multilingual is multilingual bert? In ACL. Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language understanding by generative pre-training. URL https://s3-us-west-2.amazonaws.com/openai- assets/research-covers/language- unsupervised/language understanding paper.pdf. Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. OpenAI Blog, 1(8). Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2019. Exploring the limits of transfer learning with a unified text-to-text trans- former. arXiv preprint arXiv:1910.10683. Pranav Rajpurkar, Robin Jia, and Percy Liang. 2018. Know what you don’t know: Unanswerable ques- tions for squad. ACL. Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text. In EMNLP, pages 2383–2392, Austin, Texas. Association for Compu- tational Linguistics. Introduction to the conll-2002 shared task: Language-independent named entity recognition. CoNLL. Tal Schuster, Ori Ram, Regina Barzilay, and Amir Globerson. 2019. Cross-lingual alignment of con- textual word embeddings, with applications to zero- shot dependency parsing. NAACL. Aditya Siddhant, Melvin Johnson, Henry Tsai, Naveen Arivazhagan, Jason Riesa, Ankur Bapna, Orhan Fi- rat, and Karthik Raman. 2019. Evaluating the cross- lingual effectiveness of massively multilingual neu- ral machine translation. AAAI. Jasdeep Singh, Bryan McCann, Nitish Shirish Keskar, Caiming Xiong, and Richard Socher. 2019. Xlda: lan- Cross-lingual data augmentation for natural arXiv guage inference and question answering. preprint arXiv:1905.11471. 
Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D Manning, Andrew Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment tree- bank. In EMNLP, pages 1631–1642. Xu Tan, Yi Ren, Di He, Tao Qin, Zhou Zhao, and Tie- Yan Liu. 2019. Multilingual neural machine transla- tion with knowledge distillation. ICLR. Erik F Tjong Kim Sang and Fien De Meulder. 2003. In- troduction to the conll-2003 shared task: language- In CoNLL, independent named entity recognition. pages 142–147. Association for Computational Lin- guistics. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Pro- cessing Systems, pages 6000–6010. Alex Wang, Amapreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R Bowman. 2018. Glue: A multi-task benchmark and analysis platform for natural language understanding. arXiv preprint arXiv:1804.07461. Guillaume Wenzek, Marie-Anne Lachaux, Alexis Con- neau, Vishrav Chaudhary, Francisco Guzman, Ar- mand Joulin, and Edouard Grave. 2019. Ccnet: Ex- tracting high quality monolingual datasets from web crawl data. arXiv preprint arXiv:1911.00359. Adina Williams, Nikita Nangia, and Samuel R Bow- man. 2017. A broad-coverage challenge corpus for sentence understanding through inference. Pro- ceedings of the 2nd Workshop on Evaluating Vector- Space Representations for NLP. Shijie Wu, Alexis Conneau, Haoran Li, Luke Zettle- moyer, and Veselin Stoyanov. 2019. Emerging cross-lingual structure in pretrained language mod- els. ACL. Shijie Wu and Mark Dredze. 2019. Beto, bentz, be- cas: The surprising cross-lingual effectiveness of bert. EMNLP. Qizhe Xie, Zihang Dai, Eduard Hovy, Minh-Thang Lu- ong, and Quoc V Le. 2019. Unsupervised data aug- mentation for consistency training. arXiv preprint arXiv:1904.12848. # Appendix # A Languages and statistics for CC-100 used by XLM-R In this section we present the list of languages in the CC-100 corpus we created for training XLM-R. We also report statistics such as the number of tokens and the size of each monolingual corpus. ISO code Language Tokens (M) Size (GiB) ISO code Language Tokens (M) Size (GiB) Table 6: Languages and statistics of the CC-100 corpus. We report the list of 100 languages and include the number of tokens (Millions) and the size of the data (in GiB) for each language. Note that we also include romanized variants of some non latin languages such as Bengali, Hindi, Tamil, Telugu and Urdu. # B Model Architectures and Sizes As we showed in section 5, capacity is an important parameter for learning strong cross-lingual represen- tations. In the table below, we list multiple monolingual and multilingual models used by the research community and summarize their architectures and total number of parameters. 
Model | #lgs | tokenization | L | Hm | Hff | A | V | #params
BERTBase | 1 | WordPiece | 12 | 768 | 3072 | 12 | 30k | 110M
BERTLarge | 1 | WordPiece | 24 | 1024 | 4096 | 16 | 30k | 335M
mBERT | 104 | WordPiece | 12 | 768 | 3072 | 12 | 110k | 172M
RoBERTaBase | 1 | bBPE | 12 | 768 | 3072 | 8 | 50k | 125M
RoBERTa | 1 | bBPE | 24 | 1024 | 4096 | 16 | 50k | 355M
XLM-15 | 15 | BPE | 12 | 1024 | 4096 | 8 | 95k | 250M
XLM-17 | 17 | BPE | 16 | 1280 | 5120 | 16 | 200k | 570M
XLM-100 | 100 | BPE | 16 | 1280 | 5120 | 16 | 200k | 570M
Unicoder | 15 | BPE | 12 | 1024 | 4096 | 8 | 95k | 250M
XLM-R Base | 100 | SPM | 12 | 768 | 3072 | 12 | 250k | 270M
XLM-R | 100 | SPM | 24 | 1024 | 4096 | 16 | 250k | 550M
GPT2 | 1 | bBPE | 48 | 1600 | 6400 | 32 | 50k | 1.5B
wide-mmNMT | 103 | SPM | 12 | 2048 | 16384 | 32 | 64k | 3B
deep-mmNMT | 103 | SPM | 24 | 1024 | 16384 | 32 | 64k | 3B
T5-3B | 1 | WordPiece | 24 | 1024 | 16384 | 32 | 32k | 3B
T5-11B | 1 | WordPiece | 24 | 1024 | 65536 | 32 | 32k | 11B

Table 7: Details on model sizes. We show the tokenization used by each Transformer model, the number of layers L, the hidden size of the model Hm, the dimension of the feed-forward layer Hff, the number of attention heads A, the size of the vocabulary V, and the total number of parameters #params. For Transformer encoders, the number of parameters can be approximated by 4L·Hm^2 + 2L·Hm·Hff + V·Hm. GPT2 numbers are from Radford et al. (2019), the mm-NMT models are from the work of Arivazhagan et al. (2019) on massively multilingual neural machine translation (mmNMT), and T5 numbers are from Raffel et al. (2019). While XLM-R is among the largest models, partly due to its large embedding layer, it has a similar number of parameters to XLM-100 and remains significantly smaller than recently introduced Transformer models for multilingual MT and transfer learning. While this table gives more insight into the difference in capacity of each model, note that it does not highlight other critical differences between the models.
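The parameter approximation given in the caption can be checked directly against the hyperparameters above; the short sketch below (illustrative, not from the paper) recovers the reported sizes of the two XLM-R models. The formula ignores biases, layer norms, and positional embeddings, which accounts for the small gap to the reported figures.

```python
def approx_encoder_params(L, Hm, Hff, V):
    # Approximate Transformer-encoder parameter count:
    # 4*L*Hm^2 (attention) + 2*L*Hm*Hff (feed-forward) + V*Hm (embeddings).
    return 4 * L * Hm**2 + 2 * L * Hm * Hff + V * Hm

# XLM-R Base: L=12, Hm=768, Hff=3072, V=250k -> ~277M (reported as 270M)
print(approx_encoder_params(12, 768, 3072, 250_000))
# XLM-R: L=24, Hm=1024, Hff=4096, V=250k -> ~558M (reported as 550M)
print(approx_encoder_params(24, 1024, 4096, 250_000))
```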
{ "id": "1904.12848" }
1911.00650
Automatic Detection of Generated Text is Easiest when Humans are Fooled
Recent advancements in neural language modelling make it possible to rapidly generate vast amounts of human-sounding text. The capabilities of humans and automatic discriminators to detect machine-generated text have been a large source of research interest, but humans and machines rely on different cues to make their decisions. Here, we perform careful benchmarking and analysis of three popular sampling-based decoding strategies---top-$k$, nucleus sampling, and untruncated random sampling---and show that improvements in decoding methods have primarily optimized for fooling humans. This comes at the expense of introducing statistical abnormalities that make detection easy for automatic systems. We also show that though both human and automatic detector performance improve with longer excerpt length, even multi-sentence excerpts can fool expert human raters over 30% of the time. Our findings reveal the importance of using both human and automatic detectors to assess the humanness of text generation systems.
http://arxiv.org/pdf/1911.00650
Daphne Ippolito, Daniel Duckworth, Chris Callison-Burch, Douglas Eck
cs.CL
ACL 2020 Camera Ready
null
cs.CL
20191102
20200507
0 2 0 2 y a M 7 ] L C . s c [ 2 v 0 5 6 0 0 . 1 1 9 1 : v i X r a # Automatic Detection of Generated Text is Easiest when Humans are Fooled # Daphne Ippolito†‡∗ [email protected] # Daniel Duckworth‡* [email protected] # Chris Callison-Burch†‡ [email protected] # Douglas Eck‡ [email protected] # Abstract Recent advancements in neural language mod- elling make it possible to rapidly generate vast amounts of human-sounding text. The ca- pabilities of humans and automatic discrimi- nators to detect machine-generated text have been a large source of research interest, but hu- mans and machines rely on different cues to make their decisions. Here, we perform care- ful benchmarking and analysis of three popu- lar sampling-based decoding strategies—top- k, nucleus sampling, and untruncated random sampling—and show that improvements in de- coding methods have primarily optimized for fooling humans. This comes at the expense of introducing statistical abnormalities that make detection easy for automatic systems. We also show that though both human and automatic detector performance improve with longer ex- cerpt length, even multi-sentence excerpts can fool expert human raters over 30% of the time. Our findings reveal the importance of using both human and automatic detectors to assess the humanness of text generation systems. 1 # 1 Introduction State-of-the-art generative language models are now capable of producing multi-paragraph ex- cerpts that at a surface level are virtually indis- tinguishable from human-written content (Zellers et al., 2019; Radford et al., 2019; Adelani et al., 2020). Often, only subtle logical fallacies or id- iosyncrasies of language give away the text as machine-generated, errors that require a close reading and/or domain knowledge for humans to detect. Deceptive text, whether human- or machine- generated, has entered the sphere of public con- cern (Cooke, 2018). It propogates quickly (Vosoughi et al., 2018), sets political agendas (Vargo et al., 2018), influences elections (Allcott and Gentzkow, 2017), and undermines user trust (Wang et al., 2012; Song et al., 2015). Recently, Adelani et al. (2020) have shown that automati- cally generated reviews are perceived to be as flu- ent as human-written ones. As generative tech- nology matures, authors, well-meaning or other- wise, will increasingly employ it to augment and accelerate their own writing. It is more impera- tive now than ever for both humans and automated systems to be able to detect and identify machine- generated texts in the wild. However, there has thus been little inquiry into the textual proper- ties that cause humans to give generated text high human-like ratings compared to those that cause automatic systems to rate it highly. To speak of texts produced by language mod- els, we must first consider how these texts are generated. A neural language model encodes a probability distribution over the next word in a sequence given the previous words.1 A decod- ing strategy is an algorithm that generates se- quences from a language model by determining how words should get selected from this distribu- tion. The field has largely moved toward prob- abilistic decoding strategies that randomly sam- ple from the output distribution token-by-token. However, when many low-likelihood words cu- mulatively contain quite a bit of probability mass, choosing one of these words can lead to odd or contradictory phrases and semantic errors. Hu- mans are quick to notice these types of errors. 
For this reason, it has become common to mod- ify the language model’s output probability dis- tribution to increase the chance of sampling to- kens with high likelihood according to the lan- guage model. Top-k random sampling, where low-likelihood words are restricted from being ∗Equal contribution, ‡Google, †University of Pennsylva- nia 1Often these ‘words” are actually subword character se- quences such as BPE tokens (Sennrich et al., 2016). generated, is one such method. A language model that is only permitted to produce high-likelihood words is less likely to make a poor choice and cre- ate the type of mistakes that are easy for humans to detect. Since humans are not proficient at identi- fying when a model subtly favors some utterances more often than a human author would, they don’t notice the over-representation of high-likelihood words in the generated text. In contrast, automatic systems excel at identifying statistical anomalies and struggle to build deeper semantic understand- ing. Top-k in particular creates text that is easy for machines to detect but very hard for humans. Thus, we observe the general trend: as the num- ber of unlikely words available to be chosen is in- creased, humans get better at detecting fakes while automatic systems get worse. In this work, we study three popular random decoding strategies—top-k, nucleus, and temper- ature sampling—applied to GPT-2 (Radford et al., 2019). We draw a large number of excerpts gener- ated by each strategy and train a family of BERT- based (Devlin et al., 2019) binary classifiers to label text excerpts as human-written or machine- generated. We find large differences in human rater and classifier accuracy depending on the de- coding strategy employed and length of the gen- erated sequences. Regardless of strategy, we find human raters achieve significantly lower accuracy than the automatic discriminators. We also show that when a decoding strategy severely modifies the unigram token distribution, as top-k does, hu- mans have trouble detecting the resultant gener- ated text, but automatic classifiers find it the eas- iest to discriminate. Worryingly, we further find that classifiers are brittle; they generalize poorly when trained to discriminate samples from one strategy and then evaluated on samples from an- other. In summary, our contributions are: • A comprehensive study of generated text de- tection systems’ sensitivity to model struc- ture, decoding strategy, and excerpt length. • An analysis of human raters’ ability to iden- tify machine-generated content, and how hu- man raters differ from automatic detectors. # 2 Related Work Generative Language Models With a suffi- ciently large training set and number of trainable parameters, neural language models based on the Transformer architecture (Vaswani et al., 2017) are capable of generating convincing, human-like excerpts up to several paragraphs in length. GPT- 2 (Radford et al., 2019), GROVER (Zellers et al., 2019), and Transformer-DMCA (Liu et al., 2018) are a few examples of large, publicly available models with this ability. GROVER, in particular, has been shown to generate fake news that is more trustworthy than human-written fake news accord- ing to human raters. Human Detection The task of trying to guess whether text is coming from a robot or a fellow human was made famous by the Turing Test (Tur- ing, 1950). It continues to be used is chatbot eval- uation (Lowe et al., 2017). 
The related (but not identical) task of asking human raters to judge the quality of machine-generated excerpts remains the gold-standard for evaluating open-domain genera- tion systems (van der Lee et al., 2019). Kreps et al. (2020), Gehrmann et al. (2019), and others have stressed the importance of humans being able to identify fake content on the web. Automatic Detection The rise of machine- generated content has led to the development of automated systems to identify it. GROVER was designed to not only generate convincing news ex- cerpts but to also identify them using a fine-tuned version of the generative model itself (Zellers et al., 2019). GLTR, expecting attackers to use sampling methods that favor high-likelihood to- kens, aims to make machine-generated text de- tectable by computing histograms over per-token log likelihoods (Gehrmann et al., 2019). Bakhtin et al. (2019) frame human-text detection as a rank- ing task and evaluate their models’ cross-domain and cross-model generalization, finding signifi- cant loss in quality when training on one do- main and evaluating on another. Schuster et al. (2019) argue that the language distributional fea- tures implicitly or explicitly employed by these detectors are insufficient; instead, one should look to explicit fact-verification models. Finally, dis- criminators for whether text is machine-generated are a promising research direction in adversarial training (Lin et al., 2017; Li et al., 2017) and in automatic evaluation of generative model quality (Novikova et al., 2017; Kannan and Vinyals, 2017; Lowe et al., 2017). Natural Language Understanding Automatic detection of machine-generated text benefits from a semantic understanding of the text. Contradic- tions, falsehoods, and topic drift can all indicate that an excerpt was machine-generated. Encoder- only Transformer models such as BERT (Devlin et al., 2019) have been shown to do very well at tasks requiring this understanding. While we fine- tune BERT for the task of classifying whether text was machine-generated, others have used the con- textual word embeddings from a pre-trained BERT model without fine-tuning to compute a quality score for generated text (Zhang et al., 2020). It is worth noting that recent work has raised ques- tions as to whether BERT truly builds a semantic understanding to make its predictions, or whether it merely takes advantage of spurious statistical differences between the text of different classes (Niven and Kao, 2019). # 3 Task Definition We frame the detection problem as a binary clas- sification task: given an excerpt of text, label it as either human-written or machine-generated. In particular, we are interested in how variables such as excerpt length and decoding strategy impact performance on this classification task. We thus create several datasets. Each is approximately balanced between positive examples of machine- generated text and negative examples of human- written text. While they all share the same human- written examples, each dataset contains a different set of machine-generated examples sampled using one particular decoding strategy. We also build ad- ditional datasets by truncating all of the examples to a particular sequence length, By training a separate classifier on each dataset, we are able to answer questions about which de- coding strategy results in text that is the easiest to automatically disambiguate from human-written text. 
We are also able to answer questions about how the length of the examples in the training set impacts our ability to automatically classify ex- cerpts of that same length as either human-written or machine-generated. # 4 Dataset Methodology All of our generated text samples are drawn from GPT-2, a state-of-the-art Transformer-based gen- erative language model that was trained on text from popular web pages (Radford et al., 2019). While we use the GPT-2 LARGE model with 774M parameters, we found that similar trends to those reported here hold in experiments with smaller language models. Given an autoregressive language model that defines a probability distribution over the next to- ken given the previous tokens in a sequence, a decoding strategy generates text by deciding how to output a token at each step based on the pre- dicted distributions. Perhaps the most straightfor- ward decoding strategy is to randomly choose a to- ken with probability proportional to its likelihood. A challenge with the random sampling approach is that these probability distributions often contain a long tail of vocabulary items that are individu- ally low-probability but cumulatively comprise a substantial amount of probability mass. Holtzman et al. (2020) observe that choosing tokens from this tail often leads to incoherent generations. Top-k sampling, nucleus sampling, and (in the extreme) beam search have all been proposed to heuristically promote samples with higher per- token likelihoods. Top-k and nucleus sampling both do so by setting the likelihood of tokens in the tail of the distribution to zero. Top-k restricts the distribution to all but the k most likely tokens, where k is a constant (Fan et al., 2018). Nucleus sampling, also called top-p, truncates the distribu- tion at each decoding step t to the kt-most-likely next tokens such that the cumulative likelihood of these tokens is no greater than a constant p (Holtz- man et al., 2020). We thus consider three different decoding strat- egy settings: Sample from the untruncated distribution • Top-k, choosing k=40 (Radford et al., 2019). • Nucleus sampling (aka top-p), choosing p=0.96 (Zellers et al., 2019). In addition, we form “negative” examples of human-written text by taking excerpts of web text that come from the same distribution as GPT-2’s training data.2 By picking text that resembles GPT-2’s train set, we ensure that our classifiers can’t simply take advantage of stylistic differences between the human-written text corpus and the kind of text GPT-2 was trained to generate. For each decoding method, we construct a train- ing dataset by pairing 250,000 generated samples with 250,000 excerpts of web text. 5,000 addi- tional paired samples are kept aside for validation and test datasets. Lastly, we filter out excerpts with fewer than 192 WordPiece tokens (Wu et al., 2https://github.com/openai/ gpt-2-output-dataset 2016) (excerpts might be quite short if the model produces an end-of-text token early on). See Ap- pendix 1 for final dataset sizes. A crucial question when generating text with a language model is whether or not to provide a priming sequence which the language model should continue. Unconditioned samples, where no priming text is provided, in conjunction with top-k sampling, lead to pathological behavior for discriminators as the first token of the generated text will always be one of k possible options. 
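For reference, the two truncation rules described above can be sketched as follows, using the paper's default values k = 40 and p = 0.96. This is an illustrative implementation operating on a next-token probability vector, not the code used to build the datasets.

```python
import numpy as np

def top_k_filter(probs, k=40):
    """Keep only the k most likely tokens, then renormalize."""
    keep = np.argsort(probs)[-k:]
    filtered = np.zeros_like(probs)
    filtered[keep] = probs[keep]
    return filtered / filtered.sum()

def nucleus_filter(probs, p=0.96):
    """Keep the most likely tokens whose cumulative probability just reaches p."""
    order = np.argsort(probs)[::-1]        # most likely first
    cum = np.cumsum(probs[order])
    cutoff = np.searchsorted(cum, p) + 1   # number of tokens kept
    keep = order[:cutoff]
    filtered = np.zeros_like(probs)
    filtered[keep] = probs[keep]
    return filtered / filtered.sum()

# Decoding then samples each token from the renormalized distribution, e.g.:
# next_token = np.random.choice(len(probs), p=nucleus_filter(probs, p=0.96))
```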
In addition, we form "negative" examples of human-written text by taking excerpts of web text that come from the same distribution as GPT-2's training data.2 By picking text that resembles GPT-2's train set, we ensure that our classifiers can't simply take advantage of stylistic differences between the human-written text corpus and the kind of text GPT-2 was trained to generate.

2 https://github.com/openai/gpt-2-output-dataset

For each decoding method, we construct a training dataset by pairing 250,000 generated samples with 250,000 excerpts of web text. 5,000 additional paired samples are kept aside for validation and test datasets. Lastly, we filter out excerpts with fewer than 192 WordPiece tokens (Wu et al., 2016) (excerpts might be quite short if the model produces an end-of-text token early on). See Appendix A.1 for final dataset sizes.

A crucial question when generating text with a language model is whether or not to provide a priming sequence which the language model should continue. Unconditioned samples, where no priming text is provided, in conjunction with top-k sampling, lead to pathological behavior for discriminators, as the first token of the generated text will always be one of k possible options. On the other hand, if long sequences of human text are used as priming, the space of possible generated sequences is larger, but the detection problem shifts from one of "how human-like is the generated text?" to "how well does the generated text follow the priming sequence?". Since in this study we are interested in the former, simpler question, we create two datasets, one with no priming, and one with the minimum amount of priming possible: a single token of web text. This means that for every excerpt of web text in the training set, there is an excerpt of machine-generated text that starts with the same token. We find that even with limited priming, the ability of automatic detectors can be strongly impacted.

To study the effect of excerpt length, we construct variations of the above datasets by truncating all excerpts to ten possible lengths ranging from 2 to 192 WordPiece tokens (Wu et al., 2016). In total, we obtain sixty dataset variations: one per sampling method, truncation length, and choice of priming or no priming.

# 5 Automatic Detection Method

The primary discriminator we employ is a fine-tuned BERT classifier (Devlin et al., 2019). We fine-tune one instance of BERT per dataset variation described above. For the longest sequence length, n=192, we compare BERT's performance with several simple baselines that have been proposed in other work.

Fine-tuned BERT We fine-tune BERT-LARGE (cased) on the task of labeling a sentence as human- or machine-generated. The models are trained for 15 epochs, with checkpoints saved every 1000 steps, and a batch size of 256. All results are reported on the test set using the checkpoint for which validation accuracy was highest.

Bag-of-Words For each sequence, we compute a bag-of-words embedding where each dimension corresponds to a token in GPT-2's 50,000-token BPE vocabulary (Sennrich et al., 2016), and we count how many times that token appears in the text sequence. We then train a logistic regression binary classifier to predict human- or machine-written given this 50,000-dimensional embedding. We experimented with truncating the embedding size by removing entries for infrequent vocabulary words, but this did not improve performance.

Histogram-of-Likelihood Ranks Following GLTR (Gehrmann et al., 2019), we compute the probability distribution of the next word given the previous words in a text sequence according to a trained language model (in our case the same GPT-2 model that was used for generation). At each sequence position, we rerank the vocabulary words by likelihood, and record the rank of the ground-truth next word within this list. These ranks are then binned. GLTR uses four bins, counting (1) the number of times the top 1 word is seen, (2) the number of times words ranked 2 through 5 are seen, (3) words ranked 6-100, and (4) words ranked >100. However, we observe higher accuracy when 50 bins are spread uniformly over the possible rankings. This means that since there are 50,000 vocabulary words, the first bin counts the number of times the actual next word was within the 1,000 most likely next words, the second bin counts the 1,001-2,000th, and so on. We then train logistic regression binary classifiers to predict human- or machine-written given either the 4-dimensional histograms or 50-dimensional histograms as input.

Total Probability Solaiman et al. (2019) propose a very simple baseline consisting of a threshold on the total probability of the text sequence. An excerpt is predicted as machine-generated if its likelihood according to GPT-2 is closer to the mean likelihood over all machine-generated sequences than to the mean of human-written ones.
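As a concrete reference for the two likelihood-based baselines above, here is a minimal sketch. It assumes the per-token ranks and total log-likelihoods have already been extracted by running GPT-2 over each excerpt (not shown), and the helper names are illustrative rather than the paper's code.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

VOCAB_SIZE = 50_000
NUM_BINS = 50        # 50 uniform bins over the possible ranks -> 1,000 ranks per bin

def rank_histogram(token_ranks):
    """token_ranks[i] = rank (1 = most likely) of the observed i-th token under the
    language model. Returns a length-normalized 50-bin histogram feature vector."""
    hist, _ = np.histogram(token_ranks, bins=NUM_BINS, range=(1, VOCAB_SIZE + 1))
    return hist / max(len(token_ranks), 1)

def fit_histogram_detector(rank_sequences, labels):
    """labels: 1 for machine-generated, 0 for human-written."""
    features = np.stack([rank_histogram(r) for r in rank_sequences])
    return LogisticRegression(max_iter=1000).fit(features, labels)

def total_prob_predict(log_likelihood, machine_mean, human_mean):
    """The TotalProb rule: call an excerpt machine-generated when its total
    log-likelihood is closer to the machine-generated mean than to the human one."""
    closer_to_machine = abs(log_likelihood - machine_mean) < abs(log_likelihood - human_mean)
    return "machine" if closer_to_machine else "human"
```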
# 6 Human Detection Method

The human evaluation task is framed similarly to the automatic one. We ask the raters to decide whether a passage of text was written by a human or by a computer algorithm. (Full instructions are in the Appendix.) Raters are allowed to choose between four options: "definitely" or "possibly" machine-generated and "definitely" or "possibly" human-written. They are first shown an excerpt of length 16 WordPiece tokens. After they make a guess, the length of the excerpt is doubled, and they are asked the same question again. This continues until the entire passage of length 192 tokens is shown. Passages are equally likely to be human-written or machine-generated, with the machine-generated excerpts being evenly split between the three sampling strategies considered in this paper.

Initially, Amazon Mechanical Turk (AMT) raters were employed for this task, but rater accuracy was poor, with over 70% of the "definitely" votes cast for "human" despite the classes being balanced. Accuracy, even for the longest sequences, hovered around 50%. The same study was then performed with university students who were first walked through ten examples (see Appendix Table 9) as a group. Afterward, they were asked to complete the same tasks that had been sent to the AMT workers. No additional guidance or direction was given to them after the initial walk-through. We will refer to this group as the "expert raters." Among them, 52.1% of "definitely" votes were cast for human, and accuracy on the longest excerpt length was over 70%.

The human evaluation dataset consisted of 150 excerpts of web text and 50 excerpts each from the three decoding strategies. Each question was shown to at most three raters, leading to 900 total annotations from the untrained workers and 475 from the expert raters. A more detailed breakdown can be found in the Appendix.
# 7 Automatic Detection Results

Simple Baselines Table 1 shows the performance of the baseline discriminators on length-192 sequences, as compared with fine-tuned BERT. Reassuringly, BERT far surpasses all simple baselines, indicating that it is not fully possible to solve the detection problem without complex sequence-based understanding. The simplest baseline, TotalProb, which makes a decision based on the likelihood of the sequence, performs surprisingly well (over 60% accuracy for all sampling methods) relative to the methods which involve training logistic regression models.

Method            BERT        BagOfWords  HistGLTRBuckets  Hist50Buckets  TotalProb  Human
                  acc   AUC   acc   AUC   acc   AUC        acc   AUC      acc        acc
k40-1wordcond     0.88  0.99  0.79  0.87  0.52  0.52       0.69  0.76     0.61       0.64
p0.96-1wordcond   0.81  0.89  0.60  0.65  0.53  0.56       0.54  0.56     0.63       0.77
p1.0-1wordcond    0.79  0.92  0.59  0.62  0.53  0.55       0.54  0.55     0.65       0.71

Table 1: Performance (accuracy and AUC) of the fine-tuned BERT classifier and several simple baselines on detecting length-192 sequences generated with one word of priming (1wordcond). Note that p1.0 refers to untruncated random sampling, where we sample from 100% of the probability mass. The last column shows human performance on the same task, where accuracy with a 50% baseline is computed by randomly pairing samples from each decoding strategy with a human-written sample.

Logistic regression on bag-of-words is the best of the baselines, beating out the histogram-based methods. While Gehrmann et al. (2019) report an AUC of 0.87 on classifying text as real or generated using logistic regression on the four buckets of the GLTR system, we report AUC between 0.52 and 0.56 for this task. The discrepancy is likely due to the fact that the human-written text in our discriminator training set comes from the same distribution as the text used to train the language model, while in GLTR the human text comes from children's books, scientific abstracts, and newspaper articles. The selection of training data for learned detection systems is crucial. In real-world applications, the choice ought to reflect the genres that builders of text-generation systems are trying to impersonate.

Fine-tuned BERT In Figure 1a, we begin by observing discriminator accuracy as a function of excerpt length and sampling method. As can be intuitively expected, as sequence length increases, so too does accuracy. For unconditioned text decoded with nucleus (p0.96) and untruncated (p1.0) random sampling, we find discriminator accuracy increases from 55%, near random, to about 81% for the longest sequences tested. In contrast, discriminators trained and evaluated on top-k achieve over 80% accuracy even on 16-token excerpts.

(Figure 1, panel (a): "Accuracy of BERT Fine-tuned Discriminator"; panel (b): "Fraction of BERT Discriminator Errors that are Machine-generated Labeled as Human-written".)
Figure 1: In (a), accuracy increases as the length of the sequences used to train the discriminator is increased. In (b), we see that the BERT fine-tuned discriminator predicts about the same number of false-positives as false-negatives when trained with samples generated using top-p sampling. However, for top-k, it more often mistakes machine-generated text to be human-written, while for untruncated random sampling the opposite is the case.

Why are top-k's samples so easy to detect? In Figure 2b, we see the percentage of probability mass concentrated in the k most common token types for each sampling method. While random sampling and nucleus sampling are very similar to human-written texts, we see top-k concentrating up to 80% of its mass in the first 500 most common tokens. The other sampling methods, as well as human-written texts, require at least 1,100 token types for the same. It is clear that top-k's distribution over unigrams strongly diverges from human-written texts, an easy feature for discriminators to exploit. In fact, See et al. (2019) note that it takes setting k to 1000 to achieve about the same amount of rare word usage and fraction of non-stopword text as human writing.3 This makes it very easy for the model to pick out machine-generated text based on these distributional differences.

3 When decoding from the GPT-2 small model with 117M parameters.

One way to help resolve this problem is to add priming text. Doing so causes more rare words to be incorporated into the top-k of the unigram distribution. Adding even a single human word of priming significantly reduces the performance of detectors trained with top-k random sampling. Without priming, a discriminator trained on sequences of length 2 can classify with ∼90% accuracy the provenance of the text (Figure 1a). By adding one priming token, accuracy drops to ∼65%. Even on the longest 192-length sequences, top-k discriminator accuracy is 6% lower on the primed dataset than the unprimed one.

When generating with nucleus or untruncated random sampling, adding a priming token is not as impactful, as these methods are already sampling from a large fraction (or all) of the probability distribution. This is seen in Figure 2a, where at the very first step of unprimed generation, nucleus sampling selects from 3075 possible vocabulary words, and at later positions selects from on average more than 500. Untruncated random sampling always selects from the entire 50,000-word vocabulary, whereas top-k only selects from k.

Figure 2: In (a), the average (over sequences in the test set) k chosen at each step during generating with nucleus sampling is plotted. Adding a single word of priming strongly impacts the ks chosen for the first few positions, but this difference quickly dissipates. In (b), we consider the first token generated in each sequence by top-k, and plot what fraction of these are captured by the k most common unique tokens from the vocabulary. Overall, at its first step, top-k concentrates 80% of its probability mass in the 500 most common tokens from the vocabulary.
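The Figure 2b-style concentration statistic is simple to compute from token lists. The sketch below is an illustration over plain Python lists of token strings; the inputs (a reference corpus and the first token of each generated sample) are assumptions and not artifacts released with the paper.

```python
from collections import Counter

def coverage_by_common_types(first_tokens, reference_corpus_tokens, top_n=500):
    """Fraction of generated first tokens that fall among the top_n most frequent
    token types of a reference corpus (the quantity plotted in Figure 2b-style
    analyses)."""
    frequency = Counter(reference_corpus_tokens)
    common = {tok for tok, _ in frequency.most_common(top_n)}
    if not first_tokens:
        return 0.0
    return sum(tok in common for tok in first_tokens) / len(first_tokens)
```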
Transferability In Table 2, we show how discriminators trained with samples from one decoding strategy can transfer at test time to detecting samples generated using a different decoding strategy. Unsurprisingly, a discriminator trained on top-k generalizes poorly to other sampling methods: accuracy drops to as low as 42.5%, worse than chance. Conversely, training the discriminator with sequences sampled from the untruncated distribution leads to little transferability to detecting top-k samples. Only the discriminator trained with nucleus sampling (a compromise between unmodified sampling and top-k) was able to detect sequences from the other sampling strategies without too much of a hit to accuracy. As expected, a discriminator trained on an equal portion of data from each decoding method does reasonably at detecting all three.

                 Eval
Train            top-k   nucleus   random
top-k            90.1    57.1      43.8
nucleus          79.1    81.3      78.4
random           47.8    63.7      81.7
mixed            88.7    74.2      72.2

Table 2: Accuracy of the BERT fine-tuned discriminator when trained on samples from one strategy (rows) and evaluated on another (columns). Trained on samples with 192 tokens. The 'mixed' dataset is one containing an equal portion of samples from each strategy.

Perhaps this lack of transferability is related to each discriminator's calibration. Indeed, the degree to which a discriminator's average prediction deviates from 50% is a direct indicator of its accuracy. In Table 3, we observe that of the three BERT discriminators, only that trained on top-p samples predicts 'machine-generated' on approximately 50% of in-domain examples as expected. This same discriminator's behavior holds on datasets generated by other sampling strategies as well. In contrast, we observe that discriminators trained on top-k and untruncated random samples severely underestimate the percentage of machine-generated excerpts in out-of-domain datasets. Even within domain (Figure 1b), we find both discriminators heavily favor a single class, increasingly so as the number of tokens increases.

                 Eval
Train            top-k   nucleus   random
top-k            60.9    27.9      14.5
nucleus          49.2    51.7      48.9
random            7.3    22.6      38.3

Table 3: Average probability of 'machine-generated' according to each length-192 discriminator. The expected in-domain probability is 0.5. One token of conditioning.

(Figure 3, panel (a): "Accuracy of Human Raters" by decoding strategy (k40-1wordcond, p1.0-1wordcond, p0.96-1wordcond); panel (b): "Overall Accuracy of Human Raters"; panel (c): "Fraction of Rater Errors that are Machine-generated Labeled as Human-written"; x-axis: sequence length in tokens.)
Figure 3: (a) and (b) show human rater accuracy of correctly identifying an excerpt as human-written or machine-written, shown with 80% confidence intervals, in (a), broken up by decoding strategy and in (b), overall. Accuracy increases as raters observe more tokens. (c) shows that for short excerpts, most rater mistakes are raters incorrectly thinking machine-generated text is human-written. The two error types become more balanced at longer lengths.
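For reference, the cross-strategy accuracies and calibration averages reported in Tables 2 and 3 above can be tabulated from any trained detector's predicted probabilities. The sketch below assumes hypothetical classifier and dataset interfaces rather than the paper's actual training code.

```python
import numpy as np

def cross_strategy_report(classifiers, datasets, threshold=0.5):
    """Build Table 2/3-style matrices. `classifiers` maps a training strategy to a
    callable returning P(machine-generated) for a list of texts; `datasets` maps an
    evaluation strategy to (texts, labels), with label 1 = machine-generated."""
    accuracy, mean_prob = {}, {}
    for train_name, predict in classifiers.items():
        for eval_name, (texts, labels) in datasets.items():
            probs = np.asarray(predict(texts))
            preds = (probs >= threshold).astype(int)
            accuracy[(train_name, eval_name)] = float((preds == np.asarray(labels)).mean())
            # Calibration check: on a balanced eval set this should be close to 0.5.
            mean_prob[(train_name, eval_name)] = float(probs.mean())
    return accuracy, mean_prob
```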
Human Evaluation Overall human performance across all sampling methods is shown in Figure 3b. Even with the multi-paragraph 192-length excerpts, human performance is only at 71.4%, indicating that even trained humans struggle to correctly identify machine-generated text over a quar-

Truth Raters p1.0 H EDIT:OKAY!, I guess that'll work for now. > http://www.teamfortress.com/ and then go buy the game and experience some of the best online gaming I have ever played. ˆ ˆBoth girls had a really fun time and I had a GREAT time making both of these costumes. Everything was altered even a little bit(dying the pants a darker grey and painting the boots and shirts) But my piece de resistance would have to be my eldest's Medi-Gun.If you have any questions about the costumes, I would be happy to assist you!Oh and here's a video of my daughter before the costume was completed.Thanks! k40 H p0.96 M M H

Truth Raters p1.0 M First off, this thread has done a pretty good job of describing in detail yet another broken touchscreen. That's the difference between a smartphone and a PC with no prying eyes having to snap shots for the police to find. ¶What I would like to address is the mindset that generally surrounds Chrome OS users. To me this is analogous to saying that Apple does"hate their Windows", or that HP does"hate their Macs" as if http://twitter.com/) (and that quote is from two years ago), that anyone who covers smartphones and tablets from a "PC" perspective is just jealous. ¶Chrome OS is for browsing the web, PC processors can do stronger things in that regard, Windows is a juggernaut on those fronts. This is how I see it. Yes, it can be slow. And yes, you need a fast CPU

Truth Raters p1.0 M Exidentia at Eurnari, is an upcoming Cryptopia event which is currently still in development. Be a part of the first live stream of this year's event on 15-16 January 2016! ¶Since the release of v1.22, Exidentia has received a fair amount of user feedback. This event takes place in the underwater Cryptopia they have built. During this event, you will learn about the ocean and areas around it, and be reached by a treasure hunter that helps you explore the different areas. ¶There will be six different levels in this event that you will become acquainted with: thought Polar Lava, Ocean Seared Cones and Celestine Floors, Sea Damaged Aerie Bricks, coast Puddle (congipit stopping at red water), Shaikh Swamp and Bugmite.
At rotating points, you will learn how to access various types of creatures k40 - p0.96 - M H k40 - p0.96 M H - Truth Raters p1.0 H Image copyright Getty Images Image caption Women mourn over the coffin of one of the vic- tim’s of Sunday’s bombing in Ankara ¶Who’d be in Turkey’s shoes right now? ¶Since July last year, hundreds of soldiers and civilians have been killed in terrorist attacks. Suicide bombs have torn into crowds of demonstrators and tourists. Military convoys have been targeted in the heart of the capital. ¶A long-running Kurdish insurgency, once thought to be close to resolution after years of painstaking efforts to build bridges, has erupted once more. ¶The country is awash with Syrian and other refugees. The government has been under pressure to stop them moving on into Europe and prevent would-be jihadis travelling the other way. ¶How dangerous is Turkey’s unrest? ¶Tears and destruction amid PKK crackdown ¶Turkey v Islamic State v the Kurds Truth Raters p1.0 M FOR ALABAMA, GOOD WEEKS ¶AND A TOUR OF CAIRO ¶THE ALABAMA COM- MITTEE ON THE STUDY OF THE AMERICAN SECURITY AGENDA, ¶America’s fu- ture has been mapped out in carved stone. Metro Atlanta’s last US congressman, Bill Posey, was a inextricable integral element of the Citadel project as it became another metaphor for Atlanta’s transformation from an industry backwater into the finance and information hub of the nation’s capital. Meanwhile, Cobb County – Atlanta’s geode of change – is home to some of the largest industrial parks in the South, a regional cultural center, a 100-year-old manufac- turing town and a potent symbol of the former city’s cherished Georgian past. The gentry still live there, the defunct industrial landscapes carry the names of p0.96 Truth Raters p1.0 - M Ever since the opening of the North American College of Art Education in 1990, the demand for art education in America has grown steadily, and in recent years we have seen the rise of students that pursue art education not in the classroom but at art academies. This year saw another 50 percent increase in the number of art academies in the United States offering courses – with an additional 10 percent of students in 2017 taking art. ¶Some major changes have occurred in recent years with regard to the art curriculum and the way students learn, and we will explore each of these in coming months as we look at the various forms of art education. There is no one-size-fits-all approach for this or any other field of study, and students who begin a course in art education may change their plans based on what they see that course, including what lessons they have completed and the resources available, to create meaningful experiences of artistic creation. ¶One important area k40 M p0.96 M H M k40 - p0.96 H M - k40 M H - Table 4: Some 192-token examples where at least two expert raters agreed with each other, but were not in agree- ment with the automatic discriminators. The first row shows examples where the ground-truth was human-written, the second shows machine-generated examples where the corresponding discriminator guessed incorrectly, and the third shows machine-generated examples where the discriminator was correct, but raters got it wrong. ter a time. However, it is worth noting that our best raters achieved accuracy of 85% or higher, sug- gesting that it is possible for humans to do very well at this task. 
Further investigation is needed into how educational background, comfort with English, participation in more extensive training, and other factors can impact rater performance.

To break up the accuracies by sampling method in a way that is comparable to the results shown for the automatic discriminators, we pair each machine-generated example with a randomly selected webtext excerpt to create a balanced dataset for each sampling strategy. Performance is shown in Figure 3a. Top-k produces the text that is hardest for raters to correctly distinguish, but as shown in Section 7, it is the easiest for our automatic detection systems. Samples from untruncated random sampling and nucleus sampling with p=0.96 are equivalently difficult for raters to classify as machine-generated. Our human evaluation results suggest that much lower p-values than the 0.92 to 0.98 range proposed in Zellers et al. (2019) might be necessary in order to generate text that is considered significantly more human-like to human raters than the text produced by using the untruncated distribution.

Table 4 gives several examples where human raters and our BERT-based discriminators disagreed. When raters incorrectly labeled human-written text as machine-generated, often the excerpts contained formatting failures introduced when the HTML was stripped out. In the middle two examples, topic drift and falsehoods such as Atlanta being the "information hub of the nation's capital" allowed humans to correctly detect the generated content. However, in the bottom two examples, the high level of fluency left human raters fooled.

Overall we find that human raters, even "expert" trained ones, have consistently worse accuracy than automatic discriminators for all decoding methods and excerpt lengths. In our experiments, randomly-selected pairs of raters agree with each other on a mere 59% of excerpts on average. (In comparison, raters and discriminators agree on 61% to 70% of excerpts depending on the discriminator considered.) We surmise that the gap between human and machine performance will only grow as researchers inevitably train bigger, better detection models on larger amounts of training data. While improved detection models are inevitable, it is unclear how to go about improving human performance. GLTR proposes providing visual aids to humans to improve their performance at detecting generated text, but it is unlikely that their histogram-based color-coding will continue to be effective as generative methods get better at producing high-quality text that lacks statistical anomalies.

# 8 Conclusion

In this work, we study the behavior of automated discriminators and their ability to identify machine-generated and human-written texts. We train these discriminators on balanced binary classification datasets where all machine-generated excerpts are drawn from the same generative model but with different decoding strategies. We find that, in general, discriminators transfer poorly between decoding strategies, but that training on a mix of data from different methods can help. We also show the rate at which discriminator accuracy increases as excerpts are lengthened.

We further study the ability of expert human raters to perform the same task. We find that rater accuracy varies wildly, but has a median of 74%, which is less than the accuracy of our best-performing discriminator.
Most interestingly, we find that human raters and discriminators make de- cisions based on different qualities, with humans more easily noticing semantic errors and discrimi- nators picking up on statistical artifacts. In our ex- periments, these artifacts are most prominent with top-k sampling. However, any strategy that over- samples high-likelihood words is susceptible. As the p in nucleus sampling is set increasingly lower to achieve more fluent text (some systems are al- ready using p as low as 0.5 (Miculicich et al., 2019)), the distributional deviations that plague top-k text will surface in nucleus sampling as well. Holtzman et al. (2020) explain how a unique at- tribute of human language is that it dips in and out of low probability zones. This variance in likeli- hood is what makes human-written text interest- ing and exciting to read. Today’s generation sys- tems have not yet solved the problem of mimick- ing the human cadence without introducing poor word choices that are easy for humans to detect. Generation systems often optimize for fooling hu- mans without acknowledging the trade-off that ex- ists between human perception of quality and ease of automatic detection. We therefore suggest three prongs for future research: 1. Identifying ways to improve the language models and decoding strategies we use in or- der to generate text that is both exciting (ie. unlikely) and semantically plausible. 2. Building better world understanding into au- tomatic discriminators so that they are more capable of detecting the types of errors that humans notice. 3. Developing tools and educational materi- als to improve humans’ ability to detect machine-generated text. These may include automatic detectors with components that ex- plain their predictions. Finally, we would like to note that all of our ex- periments were performed with English language models, and it remains an open question how the trade-off between ease of human detection and ease of automatic detection might differ for lan- guages that are very different from English. # Acknowledgements This research is based upon work supported in part by U.S. DARPA KAIROS Program No. FA8750- 19-2-1004. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of DARPA or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for governmental purposes notwithstanding any copy- right annotation therein. We also thank Noah Fiedel, Peter Liu, Sharan Narang, Joao Sedoc, Yun William Yu, and Hugh Zhang for their valuable feedback. # References David Ifeoluwa Adelani, Haotian Mai, Fuming Fang, Huy H Nguyen, Junichi Yamagishi, and Isao Echizen. 2020. Generating sentiment-preserving fake online reviews using neural language models and their human-and machine-based detection. In International Conference on Advanced Information Networking and Applications, pages 1341–1354. Springer. Hunt Allcott and Matthew Gentzkow. 2017. Social me- dia and fake news in the 2016 election. Journal of economic perspectives, 31(2):211–36. Anton Bakhtin, Sam Gross, Myle Ott, Yuntian Deng, Marc’Aurelio Ranzato, and Arthur Szlam. 2019. Real or learning to discriminate ma- chine from human generated text. arXiv preprint arXiv:1906.03351. Fake news and alterna- tive facts: Information literacy in a post-truth era. American Library Association. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. 
BERT: Pre-training of deep bidirectional transformers for language under- In Proceedings of the 2019 Conference standing. of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Associ- ation for Computational Linguistics. Angela Fan, Mike Lewis, and Yann Dauphin. 2018. In Proceed- Hierarchical neural story generation. ings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Pa- pers), pages 889–898. Sebastian Gehrmann, Hendrik Strobelt, and Alexan- der M Rush. 2019. Gltr: Statistical detection and vi- sualization of generated text. In Proceedings of the 57th Annual Meeting of the Association for Compu- tational Linguistics: System Demonstrations, pages 111–116. Ari Holtzman, Jan Buys, Li Du, Maxwell Forbes, and Yejin Choi. 2020. The curious case of neural text de- generation. In International Conference on Learn- ing Representations. Anjuli Kannan and Oriol Vinyals. 2017. Adversar- ial evaluation of dialogue models. arXiv preprint arXiv:1701.08198. Sarah E Kreps, Miles McCain, and Miles Brundage. 2020. All the news thats fit to fabricate: Ai- generated text as a tool of media misinformation. Social Science Research Network. Chris van der Lee, Albert Gatt, Emiel van Miltenburg, Sander Wubben, and Emiel Krahmer. 2019. Best practices for the human evaluation of automatically generated text. In Proceedings of the 12th Interna- tional Conference on Natural Language Generation, pages 355–368. Jiwei Li, Will Monroe, Tianlin Shi, S´ebastien Jean, Alan Ritter, and Dan Jurafsky. 2017. Adversar- ial learning for neural dialogue generation. arXiv preprint arXiv:1701.06547. Kevin Lin, Dianqi Li, Xiaodong He, Zhengyou Zhang, and Ming-Ting Sun. 2017. Adversarial ranking for language generation. In Advances in Neural Infor- mation Processing Systems, pages 3155–3165. Peter J. Liu, Mohammad Saleh, Etienne Pot, Ben Goodrich, Ryan Sepassi, Lukasz Kaiser, and Noam Shazeer. 2018. Generating wikipedia by summariz- ing long sequences. In International Conference on Learning Representations. Ryan Lowe, Michael Noseworthy, Iulian Vlad Ser- ban, Nicolas Angelard-Gontier, Yoshua Bengio, and Joelle Pineau. 2017. Towards an automatic turing test: Learning to evaluate dialogue responses. In Proceedings of the 55th Annual Meeting of the As- sociation for Computational Linguistics (Volume 1: Long Papers), pages 1116–1126. Lesly Miculicich, Marc Marone, and Hany Hassan. 2019. Selecting, planning, and rewriting: A mod- ular approach for data-to-document generation and translation. EMNLP-IJCNLP 2019, page 289. Timothy Niven and Hung-Yu Kao. 2019. Probing neu- ral network comprehension of natural language ar- guments. In Proceedings of the 57th Annual Meet- ing of the Association for Computational Linguis- tics, pages 4658–4664, Florence, Italy. Association for Computational Linguistics. Jekaterina Novikova, Ondˇrej Duˇsek, Amanda Cercas Curry, and Verena Rieser. 2017. Why we need arXiv preprint new evaluation metrics for nlg. arXiv:1707.06875. Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. OpenAI Blog, 1(8). Tal Schuster, Roei Schuster, Darsh J Shah, and Regina Barzilay. 2019. Are we safe yet? the limitations of distributional features for fake news detection. arXiv preprint arXiv:1908.09805. 
Abigail See, Aneesh Pappu, Rohun Saxena, Akhila Yerukola, and Christopher D Manning. 2019. Do massively pretrained language models make better In Proceedings of the 23rd Confer- storytellers? ence on Computational Natural Language Learning (CoNLL), pages 843–861. Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proceedings of the 54th An- nual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1715– 1725. Irene Solaiman, Miles Brundage, Jack Clark, Amanda Askell, Ariel Herbert-Voss, Jeff Wu, Alec Radford, and Jasmine Wang. 2019. Release strategies and the social impacts of language models. arXiv preprint arXiv:1908.09203. Jonghyuk Song, Sangho Lee, and Jong Kim. 2015. Crowdtarget: Target-based detection of crowdturf- ing in online social networks. In Proceedings of the 22nd ACM SIGSAC Conference on Computer and Communications Security, pages 793–804. ACM. Alan Turing. 1950. Computing machinery and intelligence-am turing. Mind, 59(236):433. Chris J Vargo, Lei Guo, and Michelle A Amazeen. 2018. The agenda-setting power of fake news: A big data analysis of the online media landscape from 2014 to 2016. New media & society, 20(5):2028– 2049. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in neural information pro- cessing systems, pages 5998–6008. Soroush Vosoughi, Deb Roy, and Sinan Aral. 2018. The spread of true and false news online. Science, 359(6380):1146–1151. Gang Wang, Christo Wilson, Xiaohan Zhao, Yibo Zhu, Manish Mohanlal, Haitao Zheng, and Ben Y Zhao. 2012. Serf and turf: crowdturfing for fun and profit. In Proceedings of the 21st international conference on World Wide Web, pages 679–688. ACM. Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. 2016. Google’s neural ma- chine translation system: Bridging the gap between arXiv preprint human and machine translation. arXiv:1609.08144. Rowan Zellers, Ari Holtzman, Hannah Rashkin, Yonatan Bisk, Ali Farhadi, Franziska Roesner, and Yejin Choi. 2019. Defending against neural fake news. CoRR, abs/1905.12616. Tianyi Zhang, Varsha Kishore, Felix Wu*, Kilian Q. Weinberger, and Yoav Artzi. 2020. Bertscore: Eval- In International uating text generation with bert. Conference on Learning Representations. # A Appendix # A.1 Dataset Sizes Table 5 shows the number of sequences used for training and evaluating each of the automatic dis- criminators. Recall that each discriminator is trained for binary classification on an a dataset of machine-generated (positive) and human-written (negative) examples. Each dataset was constructed by pairing the human-written excerpts (last row of Table 5) with the machine-generated excerpts drawn via a particular decoding algorithm (‘k40’, ‘p0.96’, or ‘p1.0’) and priming strategy (‘no- cond’ or ‘1wordcond’). Originally the human- written set and each machine-generated set con- tained 250,000 training examples, 5,000 validation examples, and 5,000 test examples. Table 5 shows the resulting counts after after all excerpts with sequence length shorter than 192 tokens were fil- tered out. Thus, the final training, validation, and test sets were almost, but not quite, balanced. # A.2 Further Details on Human Evaluation The user interface for the human evaluation task is shown in Figure 6. 
At each step, the rater is shown additional text and asked to guess whether the excerpt is human-written or machine-generated. They are able to revise their guess at each subse- quent step. The newly appended text at each step is bolded in the UI. At the end, workers are told whether or not they got the question correct. To gauge worker attention levels, 10% of ques- tions shown to workers explicitly stated what an- swer ought to be specified. An example of one of these “honeypot” questions is shown in Figure 7. Amazon Mechanical Turk workers got 83% accu- racy on these questions. Expert raters got 91.8% accuracy. Table 8 shows the accuracy of each ex- pert rater along with the number of annotations they provided. Table 9 shows the example exerpts that were used to “train” the expert raters. For both the Amazon Mechanical Turk raters and the expert raters initial predictions were biased towards ‘possibly human,’ and only by observing more tokens did their predictions become more confident. Figure 4 shows that ‘possibly human’ is by far the most frequent answer upon observing 16 tokens, and as more tokens are observed raters gravitate towards ‘definitely human’ or ‘definitely machine.’ Even at 192 tokens, many raters are still uncertain. Figure 4 also shows how raters for the most part default to guessing short excerpts are Method large-744M-k40-1wordcond large-744M-k40-nocond large-744M-p0.96-1wordcond large-744M-p0.96-nocond large-744M-p1.0-1wordcond large-744M-p1.0-nocond human-written # train 211148 218825 210587 209390 209334 208219 201344 # valid 4226 4362 4248 4174 4169 4187 4031 # test 4191 4360 4208 4185 4173 4168 4030 Table 5: The number of excerpts used for training, val- idation, and testing. # # Annotations webtext k0-1wordcond k40-1wordcond p0.96-1wordcond total machine # Expert Raters AMT Workers 450 150 150 150 450 Annotations Expert Raters AMT Workers 239 87 75 74 236 Table 6: The number of human annotations collected. In total, there were 50 examples from each sampling strategy and 150 examples of web text. Each example was shown to at most three raters. human-written, and as the excerpts are extended, raters use the extra evidence available to revise their guess. By the longest sequence length, votes for “human-written” and “machine-generated” are about balanced. In Figure 5, we plot the frequency for each se- quence length that raters converged on a single guess (either human or machine) at that point. The figure shows how it takes raters longer to converge on a decision of “machine” than to converge on a decision of “human.” # A.3 Automatic Detection Method Reliability In order to quantify the variance of automatic discriminator accuracy, we finetuned five in- dependent BERT discriminators on a ‘mixed’ dataset comprising of 50% human-written exam- ples and 50% machine-generated examples, where machine-generated examples are equally split be- tween top-k=40, top-p=0.96, and untruncated ran- dom sampling. All sequences were exactly 192 tokens. The best performing model checkpoint, according to an in-domain validation set, was then used to evaluate out-of-domain binary classifica- tion datasets as in Table 2 of the main paper. The results are shown in Table 7. We find out- of-domain accuracy to be extremely reliable with a standard deviation of approximately 1% or less. 500 - 7 LJ wo 16 2 18 132 6a number of tokens observed Figure 4: Number of votes expert raters made for each label as a function of number of tokens observed. 
As raters observe more tokens, their predictions become more confident. # Point of Convergence for Annotations of Human-Written Text , 045-1 0.40 -} 1 0.35 -) L 0.30 -} L 0.25 -} L 0.20 -} 1 16 32 64 128 192 Length at which rater made up their mind 0.15 -) Fraction of all annotations 0.10 -) 0.05 -) 0.00 - Point of Convergence for Annotations of Machine-Generated Text 0.30 ~ 0.20 - L 0.15 - L 0.10 -| L 0.05 - L 0.00 . y T 16 32 64 128 192 Length at which rater made up their mind 0.25 -) Fraction of all annotations Figure 5: On average, it takes much less text for raters to decide an excerpt is human-written than to decide an excerpt is machine-generated. Dataset random sampling top-k = 40 top-p = 0.96 µ 72.47 88.06 74.4 σ 1.02 0.59 0.76 Table 7: Average (µ) and standard deviation (σ) of ac- curacy on out-of-domain datasets across five runs of au- tomatic discriminator finetuning. Accuracy Count 83 51 51 51 48 40 39 36 34 26 18 14 11 9 8 5 5 2 2 1 1 1 Table 8: Our expert rater pool consisted of 22 raters. The average accuracy of each rater on the longest ex- cerpt length (192 tokens) is shown here along with the total number of excerpts they annotated. I recently got the chance to try the new Oil Essentials line. With six potent blends to choose from–at $13 each–these cute little bottles offer a great, affordable way to partake in the skin and hair care oil craze. I tested each product in the line, massaging them onto my face every night before bed and running any leftover oil through my hair to tame frizziness. You could also add a few drops to your bath, favorite moisturizer, or even your shampoo and conditioner. Here’s a quick rundown of each oil. Revitalize: Omega 3, 6, 9 & Evening Primrose This was the first one I tried (I went in ROYGBIV order to keep things straight) and my first impression was that it smells lovely but a little strong. The fragrance smells genuinely like flowers. Red Lanterns, the lead exposure to a movie starring the Batman solo movie alum Margot Robbie taken under Wonder Woman’s wing have reignited that rivalry with their whispery premiere. They played it as much as they possibly could, even though people who didn’t ever watch Justice League or might have missed it waiting in line for the theater were still talking about as I spilled coffee. The gist? An overextended (OK, a sore) Adam West films set up a Legion of Super-Heroes situation. How aggro? Super laws and paramilitary groups watch over the world’s superheroes, which is a mix of that schtick ending, Planet Of The Apes II bit, and the Batman/Venom bit of last appeared in The Seventh Seal when Chris O’Donnell infiltrated one of the teams at some point, also wearing Staff. He is considered to be the most terrifying man on the planet and people stay away from him. A guy asks him to do something and he says, ”My girlfriend’s so important to me... I don’t need to fight her any more.” And then, boom, there’s some in a corner crying inappropriately. Men: It’s gone in five minutes. Why do I have to be so sad? It’s cute,” says female member, who asks to remain anonymous. ”It’s what grew up to drive me crazy when I was a kid, seeing these women become the nurturing, wealthy things they are in this professional world I truly love.” And it’s nothing to do with her success. These men still actively fear being around the idea of a woman who might win Oscars, make movies or be audacious drivers. Dropbox and Google Drive are very different services that appeal to different users. 
While Drive is connected to the entire Google Apps (now known as G Suite) ecosystem, Dropbox is a lightweight, simple alternative for file storage. While both are useful, users need to look beyond features, and make sure the service they choose can adequately protect their data. Here’s how Dropbox encryption and Google Drive encryption stack up. Dropbox and Google Drive Encryption To their credit, both Dropbox and Google Drive protect user files with encryption. Both also allow users to enable two-step verification, which requires an extra code texted to the user’s phone to access the account, making it harder for hackers to access a user’s data. EVE Isk Per Hour(Eveiph) is hands down the best tool I’ve ever used to make isk in New Eden. It is a market helper program that is able to do a great deal of the work that is typically done by a traders spreadsheet. I’ve used it to go from a 200m/month trading income to 3b/month on my main trading character. Above you can see the blueprint manufacturing page which is located on the first tab of Eveiph. Here you can see the components required to make an item, the settings for the blueprint, and a brief market analysis of what you can expect to make manufacturing the item and selling it at the market you’ve selected. You can enter the amount of runs you want to make, the ME and PE of your blueprint and click add to shopping list, and it will be added to a list of items to purchase when you are next at a trade hub. So, not only was the speech a thoroughly mediocre diatribe about what he now thinks we should do for the next 45 minutes, but also how much credit we should give to Mumford and Sons for bringing Obama to the campaign trail. Behold: At the DNC, we drew strength from something even more powerful than the power of words. We drew strength from the power of families in this country. We drew strength from the power of family values. We drew strength from the power of a common purpose–We drew strength from our shared commitment to fighting against everything that undermines our potential in this country and our freedom. It is with that same conviction that we launch this campaign today and we urge every American in America to join us tonight. To allow the same attempt to succeed in this election. The year is twenty-eight, and the boy is Harry, the sixth year at Hogwarts School of Witchcraft and Wizardry. He can’t walk without spells covering his feet (or in his case, his feet are so badly burned that he, for practical purposes, can’t even walk for that long without them) and he’s just starting to feel more secure about things. This is a pretty dull aspect of the book, I’d say. They probably spent way too much time on the fact that he can’t use the stick of silver from his wand, despite his friends bewitching all the knives they had. Harry had been having some difficulty getting to sleep until Hermione pulled him out of his state of near-death-conversation. Thanks to Hermione’s meddling, he’s gotten some sleep for the past two days. They also learnt a fair amount about getting used to his new surroundings. Coincidentally, just a few days after the first tweet came out, a fellow named Kevin McReynolds sent out an interview with GQ to promote their upcoming issue. McReynolds describes himself as ”a conservative Catholic” who ”cannot fathom this guy being a real person and should be ashamed that he was able to be elected president.” It’s true. 
If you believe Hillary Clinton gave away 20 percent of the American Uranium to Russia, then you should be ashamed that you voted for Trump. No one should be able to give or receive anything that’s not supposed to, so long as they have a warrant. If you’ve been in a relationship for more than six months with a person who’s also convicted of being a felon (or convicted of stealing), that’s just stupid, especially as a married man. If you’re married to someone convicted of a crime, and they go on their honeymoon with you, that’s a felony, not a honeymoon. CHIP DESIGNER Texas Instruments unveiled a family of system on chip (SoC) processors aimed at automakers today, which are designed for use in self-driving cars. Named the TDA2x, the SoC family integrates safety features, such as aiding auto designers to create advanced driver assistance systems (ADAS), which in turn help ”reduce the number of collisions on the road and enable autonomous driving experiences”. ”TDA2x device family combines an optimal mix of high performance, vision analytics, video, graphics and general purpose processing cores in a low power envelope, enabling a broad range of ADAS applications including front camera, surround view and sensor fusion,” Texas Instruments said in its release. Description This classic blend of coffee, cream, and sugar is the perfect drink! It is a smooth and creamy coffee with hints of cream and sweet sugar that can be enjoyed even after a full day of work or playing! The sugar provides a wonderful texture to the coffee beans, so that it can be scooped out into a cup. Available in four flavours: vanilla cream, caramel cream, coffee creme, and chocolate cream. Note: Coffee can be prepared in less than 120 minutes. Note: Serves one. Table 9: The 10 examples that “expert” raters were guided through before they were asked to perform the detection task. These are hand-selected to showcase the spectrum of generated text and human-written text. A seston A sectors ee : In ahah eh tl alt eae text that was extracted from a In this task you will be shown some text that was extracted from a The majority of experts agree that the asteroid = "Ps" The majority of experts agree that the asteroid 2012 eeste. 2012 DA14 should be found as... ones pried Lied irene tiller otal DA14 should be found as soon as it passes by the Guess whether the text on the left was written by a human or by a ymputer algorithm. 7 Next a * gorithm. After first guess, "Next" see more text and guess again, Youll do ths for 5 tmes. atthe end, | Earth, and so should be a ‘course correction ‘in the Score iwrand guose gosin Youk do this fore tines At tne ond, pe tte team tt best of times: It should disintegrate into multiple we wil tell you whether the text was weltien by a machine or a Sue bndicalers jou car look for Duel ast migh be rockon: pieces, and from each slowly drift over a set path ee a hee generated: towards the sun. generated: . 
(Screenshot panels of the annotation interface omitted.)
Figure 6: The interface of the task used for human evaluation. Each time the user presses next, the passage's length is doubled. On the left, we show the first step of evaluation, on the right, the second to last.
(Screenshot panel of an attention-check question omitted.)
Figure 7: For some of the questions, the text "Dear AMT Worker: to show you're reading, please select definitely [X] for this one." was inserted into the last text segment, and "Did you read carefully?" was appended to the end.
{ "id": "1707.06875" }
1911.00359
CCNet: Extracting High Quality Monolingual Datasets from Web Crawl Data
Pre-training text representations have led to significant improvements in many areas of natural language processing. The quality of these models benefits greatly from the size of the pretraining corpora as long as its quality is preserved. In this paper, we describe an automatic pipeline to extract massive high-quality monolingual datasets from Common Crawl for a variety of languages. Our pipeline follows the data processing introduced in fastText (Mikolov et al., 2017; Grave et al., 2018), that deduplicates documents and identifies their language. We augment this pipeline with a filtering step to select documents that are close to high quality corpora like Wikipedia.
http://arxiv.org/pdf/1911.00359
Guillaume Wenzek, Marie-Anne Lachaux, Alexis Conneau, Vishrav Chaudhary, Francisco Guzmán, Armand Joulin, Edouard Grave
cs.CL, cs.IR, cs.LG, stat.ML
null
null
cs.CL
20191101
20191115
9 1 0 2 v o N 5 1 ] L C . s c [ 2 v 9 5 3 0 0 . 1 1 9 1 : v i X r a # CCNet: Extracting High Quality Monolingual Datasets from Web Crawl Data Guillaume Wenzek∗, Marie-Anne Lachaux∗, Alexis Conneau, Vishrav Chaudhary, Francisco Guzm´an, Armand Joulin, Edouard Grave Facebook AI {guw, malachaux, aconneau, vishrav, fguzman, ajoulin, egrave}@fb.com Abstract Pre-training text representations have led to significant improvements in many areas of natural language processing. The quality of these models benefits greatly from the size of the pretraining corpora as long as its quality is preserved. In this paper, we describe an automatic pipeline to extract massive high-quality monolingual datasets from Common Crawl for a variety of languages. Our pipeline follows the data processing introduced in fastText (Mikolov et al., 2017; Grave et al., 2018), that deduplicates documents and identifies their language. We augment this pipeline with a filtering step to select documents that are close to high quality corpora like Wikipedia. Keywords: Common Crawl, web data # 1. Introduction Pre-trained text representations have brought significant performance gains on many natural language processing tasks (Peters et al., 2018). Since the introduction of Trans- formers (Vaswani et al., 2017) and BERT (Devlin et al., 2018), we have a seen a steady improvement in the quality of these pre-trained models, mainly driven by increasing the size of the pre-training corpora (Radford et al., 2019; Yang et al., 2019; Lan et al., 2019). Nonetheless, the size only does not guarantee better models and the quality of the data has to be preserved, which has lead to the use of ad-hoc datasets created by concatenating existing high- quality data sources like Wikipedia. Unfortunately, such datasets cannot be replicated as easily for low-resources languages, as many have much smaller curated datasets such as Wikipedia. In this paper, we present a data collection pipeline that al- lows to gather massive monolingual corpora of high qual- ity in a variety of languages, including many low-resource ones. The principles of our pipeline are general and we show the results of its application to data collected by the Common Crawl project.1 Common Crawl is a massive non-curated dataset of webpages in many languages, mixed together in temporal snapshots of the web. Our pipeline performs standard document deduplication and language identification similar to Grave et al. (2018), but differs in two ways: first, we preserve the document-level struc- ture to allow for the training of paragraph-level represen- tations like BERT (Devlin et al., 2018) ; second, we add an optional monolingual filtering step that selects docu- ments that are close to high quality sources, like Wikipedia. This is achieved by training a language model on the tar- geted sources and use the perplexity as a scoring function for documents. Our pipeline can be applied to any num- ber of Common Crawl snapshots and takes 8.5 hours to process per snapshot on 5000 CPU cores. For example, the dataset obtained by pre-processing the February 2019 snapshot is composed of 1.5 billions documents in 174 lan- guages. There are 700 millions filtered documents in En- glish alone, corresponding to 532 billions tokens. That is 120 times bigger than the data used in Devlin et al. (2018). This paper is organized as follows: we first present the Common Crawl corpora, followed by our overall pipeline to filter high quality documents from it. 
We then describe additional tools that can be used to tailor the filtering to a targeted corpus. Finally, we give in-depth statistics about the dataset obtained from pre-processing a single Common Crawl snapshot. The pipeline and the tools are publicly available2.

# 2. Related work

Preprocessing of massive datasets for training text representations has been developed in the context of word embeddings, such as word2vec (Mikolov et al., 2013), GloVe (Pennington et al., 2014) or fastText (Mikolov et al., 2017). In particular, our pipeline follows the fastText pipeline of Grave et al. (2018), where Common Crawl is split into monolingual datasets using a language identifier based on fastText (Joulin et al., 2016a). Common Crawl has been used in the context of language modeling to evaluate n-gram statistics (Buck et al., 2014). More recently, Baevski et al. (2019) pre-trained a BERT-like model on Common Crawl as preprocessed in Grave et al. (2018). In general, progress in sentence representations has been observed by increasing the size of the pre-training corpora (Yang et al., 2019; Liu et al., 2019; Raffel et al., 2019). In particular, and concurrently to our work, Raffel et al. (2019) used a large-scale dataset based on Common Crawl to train text representations. Existing work using web-based datasets has relied on English-specific preprocessing, such as keeping URLs shared on Reddit or using hand-crafted filtering rules. As opposed to these approaches, our pipeline can easily be applied to many languages other than English. Closer to this work, Ortiz Suárez et al. (2019) improved the pipeline of Grave et al. (2018), showing that large monolingual corpora can be extracted from Common Crawl rapidly even with limited resources. Our work follows a similar pipeline with an additional step to select high-quality documents.

# 3. Methodology

Every month, Common Crawl releases a snapshot of the web obtained by randomly exploring and sampling URLs.1 Each webpage is made available in different formats: raw (WARC), UTF-8 text (WET), and meta-data (WAT). There is little content overlap between monthly snapshots. The complete archive consists of petabytes of data collected over 8 years of web crawling. The webpages are crawled from the whole web without restriction; they come in many different languages and the quality of the text varies greatly. Common Crawl therefore represents a rich resource for monolingual data that comprises a large variety of domains, yet poses challenges due to the large quantity of noisy text. Here we describe the methodology used to fetch, deduplicate and filter the Common Crawl data. We focus on preprocessing the text (WET) format of the Common Crawl snapshots.

Figure 1: We show the whole pipeline for downloading and processing one snapshot of Common Crawl. First we download all the WET files and compute the paragraph hashes, which we group and save into binary files. Then we process every document of the WET files independently: we deduplicate the paragraphs using the binary files, perform language identification and compute a language model perplexity score. Finally, we regroup the documents into JSON files by language and perplexity score. The steps of the pipeline indicated with dashed arrows are parallelisable.

1https://commoncrawl.org/about/
2github.com/facebookresearch/cc_net
Our pre-processing pipeline consists of several steps that we describe in this section. An overview of the pipeline is illustrated in Figure 1.

# 3.1. Preprocessing

Each snapshot contains between 20 and 30TB of uncompressed plain text, corresponding to approximately 3 billion web pages (for instance the Feb. 2019 snapshot contains 24TB of data). We download and process each snapshot independently. For each snapshot, we regroup WET files into shards of 5GB each. This makes up 1600 shards for the Feb. 2019 crawl. These shards are saved into a JSON file where one entry corresponds to one web page.

# 3.2. Deduplication

The first step of our pipeline consists in removing duplicated paragraphs across the different web pages in a snapshot, as they represent 70% of the text. We first normalize each paragraph by lower-casing all characters, replacing numbers by a placeholder (i.e. 0) and removing all Unicode punctuation and accent marks. Then, the deduplication is done in two independent steps. First, for every shard, we compute a hash code for each paragraph and save them into a binary file. We use the first 64 bits of the SHA-1 digest of the normalized paragraph as the key. Then, we deduplicate every shard by comparing it with either one, a subset, or all of the binary files. The impact of this choice is discussed in Section 4. These steps are independent for each shard and can thus be distributed. In addition to removing web copies, this step gets rid of a lot of boilerplate such as navigation menus, cookie warnings and contact information. In particular, it removes a significant amount of English content from webpages in other languages. This makes the language identification, which is the next step of our pipeline, more robust.

# 3.3. Language identification

The second step of our pipeline consists in splitting data per language. Following Grave et al. (2018), we use the language classifier from fastText (Joulin et al., 2016b; Grave et al., 2018). The fastText language identifier was trained on Wikipedia, Tatoeba and SETimes. It uses character n-grams as features, and the hierarchical softmax. It supports 176 languages and outputs a score for each of them in the [0, 1] range. It processes 1k documents per second on a single CPU core. For every web page we compute the most probable language, and the corresponding classifier score. If this score is higher than 0.5, we classify the document in the corresponding language. Otherwise, the language is not clearly identified, and we discard the corresponding page.

# 3.4. LM filtering

At this step of the pipeline, there are still documents with low quality content. A way to filter out these samples is to compute a score of similarity between a web page and a targeted domain such as Wikipedia. In this paper, we propose to use the perplexity of a language model trained on the targeted domain as the quality score. More precisely, for each language, we train a SentencePiece tokenizer (Kudo, 2018) and a language model on data from the targeted domain. We use a 5-gram Kneser-Ney model as implemented in the KenLM library (Heafield, 2011) because of its efficiency in processing large quantities of data.

Figure 2: Number of tokens per language for the Feb. 2019 snapshot after deduplication. We display the histogram with logarithmic scale.
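To make the normalization, hashing and language-identification steps of Sections 3.2 and 3.3 concrete, here is a minimal Python sketch of how a paragraph could be normalized and hashed, and a page routed by language. It assumes the publicly distributed fastText lid.176.bin model and standard-library hashing; the function names are illustrative and are not taken from the released cc_net code.

```python
import hashlib
import re
import unicodedata

import fasttext  # the lid.176.bin language identifier is distributed on fasttext.cc

DIGIT = re.compile(r"\d")

def normalize_paragraph(text: str) -> str:
    """Lower-case, map every digit to '0', and strip accents and Unicode
    punctuation, following the normalization described in Section 3.2."""
    text = DIGIT.sub("0", text.lower())
    decomposed = unicodedata.normalize("NFD", text)
    return "".join(
        c for c in decomposed
        if unicodedata.category(c) != "Mn"                # combining accent marks
        and not unicodedata.category(c).startswith("P")   # punctuation
    )

def paragraph_hash(paragraph: str) -> bytes:
    """First 64 bits (8 bytes) of the SHA-1 digest of the normalized paragraph."""
    return hashlib.sha1(normalize_paragraph(paragraph).encode("utf-8")).digest()[:8]

# Language identification with the fastText classifier, using the 0.5 threshold.
lid_model = fasttext.load_model("lid.176.bin")

def identify_language(document_text: str, threshold: float = 0.5):
    """Return (language, score) when the classifier is confident, else None."""
    labels, scores = lid_model.predict(document_text.replace("\n", " "))
    lang, score = labels[0].replace("__label__", ""), float(scores[0])
    return (lang, score) if score >= threshold else None
```

Truncating the SHA-1 digest to 64 bits keeps each key at 8 bytes, which is what makes it feasible to hold the hashes of many shards in RAM during deduplication (Section 4.2).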
Then, we tokenize each page in our dataset, with our sen- tence piece tokenizer and compute the perplexity of each paragraph using our language model. The lower the per- plexity, the closer the data is to the targeted domain. At the end of this step, each language is split into three even parts head, middle and tail, corresponding to the perplex- ity score. In section 5. we show perplexity distributions for one snapshot of Common Crawl. We have trained sentence piece and Kneser-Ney language models on Wikipedia for 48 languages. We make these models publicly available in the repository. We also pro- vide code to train sentence piece and Kneser-Ney language models and compute the terciles thresholds if the user wants to use other data to filter Common Crawl. # 3.5. Reproducing results without the pipeline Reconstructing the dataset by running our pipeline requires a lot of resources and time. Together with the release of the pipeline, we provide a tool to efficiently reproduce the results of this work. This tool builds on a file containing URLs of webpages and reconstructs the final output of our pipeline from this file. # 4. Ablation study In this section, we discuss the impact of several design choices in our pipeline on the resulting datasets. # 4.1. Order of LID and deduplication steps Contrarily to (Grave et al., 2018), we have chosen to dedu- plicate the data before language identification, because a lot of English boilerplate, such as cookie warnings, is present in pages of other languages. A significant amount of this noisy data is removed by deduplication which allows for better language identification. This is particularly impor- tant for some low resource languages. In Figure 3 we re- port the relative increase in number of documents when do- ing ”deduplication then LID” instead of ”LID then dedu- plication”. We observe that a lot of low resource language documents were mis-classified before deduplication (gen- erally to English), or discarded because no language could be identified. # Impact of the amount of deduplication For deduplication, we can compare paragraphs hashes shard by shard, across N shards or across the whole snap- shot (1600 shards). The higher N, the higher the number of documents removed and the more RAM the algorithm will use. We show in 4 the amount of data remaining (percent- age of number of characters) for one shard of the snapshot Feb. 2019 after deduplication across 1, 2, 5, 10, 20, 50 and 100 shards. After deduplication across 1 shard, there is 42% of characters remaining and 28% across 100 shards. Loading hashes from 50 represents 1.5B unique hashes, making up 13.5GB on disk. Using a memory efficient hash- set3 we can fit those into 40GB of RAM. In 5 we show how the RAM increase when we try to load more hashes in memory. We found 50 shards to be a reasonable trade- off and are therefore running the deduplication on blocks corresponding to 3% of the corpus. # 4.3. Benchmarking The pipeline is massively parallelizable but still has to run in two steps because of the deduplication which requires to compare billions of documents paragraphs. In our case we chose shards of 5GB as the smallest unit of parallelisa- tion. One dump is divided in 1600 shards, each containing around 1.6M documents. Computing the hashes of para- graphs is done at about 600 doc/s on one CPU core, while downloading the files at the same time. This means that one shard of about 1.6M documents is done in 45 min. We compute all the hashes in 45 minutes on 1600 CPUs. 
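As a rough illustration of the LM filtering step of Section 3.4, the sketch below scores a document with a Wikipedia-trained 5-gram KenLM model over SentencePiece tokens and assigns it to a head/middle/tail bucket. The model file names, helper names and threshold arguments are placeholders rather than the files shipped with the pipeline.

```python
import kenlm                      # Python bindings for the KenLM library
import sentencepiece as spm

# Placeholder file names: one tokenizer and one Wikipedia LM per language.
sp = spm.SentencePieceProcessor(model_file="en.sp.model")
lm = kenlm.Model("en.wikipedia.5gram.bin")

def document_perplexity(paragraphs) -> float:
    """Perplexity of a document under the Wikipedia-trained 5-gram model,
    computed over SentencePiece tokens; lower means closer to Wikipedia."""
    total_log10, n_tokens = 0.0, 0
    for paragraph in paragraphs:
        pieces = sp.encode(paragraph, out_type=str)
        total_log10 += lm.score(" ".join(pieces))  # total log10 probability, with BOS/EOS
        n_tokens += len(pieces) + 1                # +1 for the end-of-sentence token
    return 10.0 ** (-total_log10 / max(n_tokens, 1))

def perplexity_bucket(ppl: float, head_cut: float, tail_cut: float) -> str:
    """Split into head/middle/tail using per-language tercile thresholds."""
    if ppl < head_cut:
        return "head"
    if ppl < tail_cut:
        return "middle"
    return "tail"
```

The tercile thresholds are chosen per language so that each of the three buckets holds roughly a third of the corpus, as described above.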
In one pass, the next step removes duplicates and performs language identification, SentencePiece tokenization, language modeling and splitting based on language. Each shard creates 3 files for the top 48 languages for which we have a LM, and one file for each other language for which we don't have a LM. Each of these processing steps requires a significant amount of RAM, but the memory can be shared across processes since it is read only. This step is significantly longer than the previous one. We allocate 17 processes to one shard. The master process is responsible for downloading the data and distributing the raw documents to the 16 workers, as well as writing the results to disk. The worker processes handle around 40 doc/s, processing the whole shard in about 40 minutes. Removing the duplicated paragraphs takes 40% of the time. This step is computationally less expensive than the following ones but is done on all the data, as opposed to the next steps which are only applied to the deduplicated data. The language identifier takes 12.5% of CPU time, SentencePiece 33% and the LM 13%. Finally we regroup the files produced at the previous steps into chunks of 5GB. This can be run in parallel for each output file, and since gzip archives can be concatenated without being decompressed first, it is very fast and runs in a matter of minutes. The total processing time is about 9 hours using 5000 CPU cores for one snapshot.

3github.com/greg7mdp/parallel-hashmap

Figure 3: Impact of doing "Deduplication then LID" rather than "LID then Deduplication". The Y-axis shows the per-language ratio of number of documents between the two methods. The X-axis is the number of documents found for each language using LID scores obtained after deduplication. Low-resource languages benefit the most from doing "Deduplication then LID". Stats estimated on 1% of the Feb. 2019 snapshot.

Figure 4: Amount of data remaining after deduplication with different fractions of the dataset. These statistics are computed on one shard.

Figure 5: RAM usage when loading hashes from different fractions of the dataset. Computed on one shard.

# 5. Metrics about the resulting dataset

In this section, we report statistics corresponding to the corpus obtained after applying our pipeline to the Feb. 2019 snapshot of Common Crawl.

# 5.1. Statistics per language

After preprocessing, we get 3.2TB of compressed documents in 174 languages. In Table 3, we give the sizes of each monolingual corpus for the 130 languages for which we have more than 1000 documents. We also compute the number of tokens and sentences for each language, and report them in Figure 2. The tokens were obtained by using the SentencePiece tokenizer that was used in our preprocessing pipeline. The sentences were split using Moses. The three largest languages are English (en) with 532B tokens, Russian (ru) with 101B tokens and Chinese (zh) with 92B tokens. We obtained 11 languages with more than 10B tokens, and 27 languages with more than 1B tokens. In terms of documents, the three largest languages are English (en) with 706M documents, Russian (ru) with 167M and German (de) with 105M.
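The deduplication pass described above (Sections 3.2 and 4.2) boils down to a membership test against the hashes loaded from the selected binary files. A minimal sketch follows, reusing the hypothetical paragraph_hash helper from the earlier snippet; the released pipeline relies on a more memory-efficient hash set (parallel-hashmap) than the plain Python set shown here.

```python
def load_hashes(binary_files):
    """Read the 8-byte paragraph hashes written during the first pass into a set.
    A plain Python set is used for clarity only; fitting ~1.5B hashes into about
    40GB of RAM requires a more compact hash-set implementation."""
    seen = set()
    for path in binary_files:
        with open(path, "rb") as f:
            while chunk := f.read(8):
                seen.add(chunk)
    return seen

def deduplicate_document(paragraphs, seen_hashes):
    """Keep only paragraphs whose normalized hash has not been seen before."""
    kept = []
    for paragraph in paragraphs:
        h = paragraph_hash(paragraph)   # hypothetical helper from the earlier sketch
        if h not in seen_hashes:
            seen_hashes.add(h)
            kept.append(paragraph)
    return kept
```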
There are 12 languages with more than 10M documents and 29 languages containing more than 1M documents. Common Crawl is also a good source for lower resource languages. For example Afrikaans (af), Gujarati (gu), Khmer (km) and Burmese (my) contains re- spectively 160MB, 190MB, 154MB and 440MB of data. In comparison Wikipedia contains 103MB, 88MB, 71MB and 153MB of data for these languages. And more resources are available through the 60 dumps of Common Crawl. These numbers could probably be improved by increasing the recall of the LID model for low-resource languages. 5.2. Statistics from the language model We found that perplexity was a relative good proxy for quality. Journalistic and well written content ends up in the head of our dataset. Some documents which contained a lot of keywords list passes through deduplication and LID but receive a high perplexity. Some documents despite be- ing valid text ends up in the tail because they have a vo- cabulary very different from Wikipedia. This includes blog comments with spoken-like text, or very specialized forums with specific jargon. We decided to not remove content based on the LM score because we think that some of it could be useful for specific applications. Some languages have very spiked distribution of perplexity while others are more spread out. We postulate that this is rather due to the variance in the Wikipedia sizes used for training the LM than to some language having less high- quality content. Therefore we decided to use different per- plexity thresholds for each language. The thresholds have been picked to split the corpus in 3 parts of equal size. In Figure 7 we show the perplexity distribution for two lan- guages English and Gujarati using their respective LM. En- glish LM was trained on 534M of text while Gujarati was trained on only 12M. 5.3. Training models on this dataset We assess the quality of the resulting dataset by learning unsupervised word and sentence representations through fastText and BERT models. For fastText, we train 300- dimensional word embeddings on the head, middle and tail subsets of the English and Polish CommonCrawl cor- pora, sorted by document perplexity. We evaluate these on standard semantic and syntactic analogy datasets (Mikolov et al., 2013). We observe in Table 1 a steady increase in performance as we go from the tail to the head of the dataset, confirming the positive impact of our filtering method based on document perplexity. English Polish Total Sem Syn Total Sem Syn head mid. tail 77.9 74.2 62.0 81.2 79.0 68.1 75.3 70.4 57.3 65.3 62.8 59.9 66.5 62.7 59.8 64.1 63.0 60.1 Table 1: Impact of corpus quality on the quality of fastText word embeddings. We evaluate on semantic and syntactic similarity datasets. We also train BERT models on the English (en), Russian (ru), Chinese (zh) and Urdu (ur) languages, using either the Wikipedia corpora or our new CommonCrawl datasets. For these languages, we use respectively 16G, 5G, 1.1G and 106M of raw Wikipedia data (full datasets), and we cap the head CommonCrawl data to 21G, 21G, 17G, 2.2G for English, Russian, Chinese and Urdu. That is, we consider roughly the same amount of data for English, but increase the amount of data for Russian, Chinese and Urdu. We train a BERT-BASE architecture (Devlin et al., 2018) on each of these corpora, without next sentence prediction (NSP) as in (Lample and Conneau, 2019). 
For better comparison, we early-stop all our models after two days of training on 16 Volta32 GPUs, and use the exact same number of steps for each model. We evaluate each model on the XNLI (Conneau et al., 2018) corpus by using the training data in each language. Results presented in Table 2 indicate that BERT-BASE models trained on CommonCrawl outperform identical models trained on Wikipedia by 3.3% on average. With the same amount of data for English, the BERT-BASE model trained on our corpus outperforms the one trained on Wikipedia. For low-resource languages like Urdu (ur), the Wikipedia dataset being too small, the model pretrained on Wikipedia obtains performance similar to a randomly initialized model. Using our corpus instead, we obtain a 7 point improvement in accuracy, which demonstrates how our filtered corpus can enable language model pretraining for low-resource languages.

        en     ru     zh     ur     ∆
Wiki    82.8   73.3   77.0   57.3   72.6
CC      85.0   76.4   77.9   64.3   75.9

Table 2: XNLI dev accuracy for English, Russian, Chinese and Urdu (∆ for average) for BERT-BASE models trained either on Wikipedia or CommonCrawl. The additional data provided by our pipeline alleviates the lack of resources in most languages and enables representation learning for low-resource languages such as Urdu.

Figure 6: Number of documents per language for the Feb. 2019 snapshot after deduplication. We display the histogram with logarithmic scale. We display statistics for 25 languages only. All statistics are available in Table 3.

Figure 7: Histogram of language model perplexities for the Feb. 2019 Common Crawl snapshot. The two histograms correspond to English, which is the largest dataset, and Gujarati, which is a low-resource language. Vertical lines correspond to the perplexity thresholds applied to split the corpus in head/middle/tail.

# 6. Conclusion

In this paper, we present a pipeline to create curated monolingual corpora in more than 100 languages. We preprocess Common Crawl by following the pipeline of Grave et al. (2018), with the differences that we preserve the structure of documents and filter the data based on their distance to Wikipedia. This improves the quality of the resulting dataset and allows for the training of multilingual text-level representations like XLM (Lample and Conneau, 2019).

References

Baevski, A., Edunov, S., Liu, Y., Zettlemoyer, L., and Auli, M. (2019). Cloze-driven pretraining of self-attention networks. arXiv preprint arXiv:1903.07785.
Buck, C., Heafield, K., and Van Ooyen, B. (2014). N-gram counts and language models from the Common Crawl. In LREC.
Conneau, A., Rinott, R., Lample, G., Williams, A., Bowman, S., Schwenk, H., and Stoyanov, V. (2018). XNLI: Evaluating cross-lingual sentence representations. In Proc. EMNLP.
Devlin, J., Chang, M.-W., Lee, K., and Toutanova, K. (2018). BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.
Grave, E., Bojanowski, P., Gupta, P., Joulin, A., and Mikolov, T. (2018). Learning word vectors for 157 languages. arXiv preprint arXiv:1802.06893.
Heafield, K. (2011). KenLM: faster and smaller language model queries. In Proceedings of the EMNLP 2011 Sixth Workshop on Statistical Machine Translation.
Joulin, A., Grave, E., Bojanowski, P., Douze, M., Jégou, H., and Mikolov, T. (2016a). FastText.zip: Compressing text classification models. arXiv preprint arXiv:1612.03651.
Joulin, A., Grave, E., Bojanowski, P., and Mikolov, T. (2016b). Bag of tricks for efficient text classification. arXiv preprint arXiv:1607.01759.
Kudo, T. (2018). Subword regularization: Improving neural network translation models with multiple subword candidates. arXiv preprint arXiv:1804.10959.
Lample, G. and Conneau, A. (2019). Cross-lingual language model pretraining. arXiv preprint arXiv:1901.07291.
Lan, Z., Chen, M., Goodman, S., Gimpel, K., Sharma, P., and Soricut, R. (2019). ALBERT: A lite BERT for self-supervised learning of language representations. arXiv preprint arXiv:1909.11942.
Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., and Stoyanov, V. (2019). RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692.
Mikolov, T., Sutskever, I., Chen, K., Corrado, G. S., and Dean, J. (2013). Distributed representations of words and phrases and their compositionality. In Adv. NIPS.
Mikolov, T., Grave, E., Bojanowski, P., Puhrsch, C., and Joulin, A. (2017). Advances in pre-training distributed word representations. arXiv preprint arXiv:1712.09405.
Ortiz Suárez, P. J., Sagot, B., and Romary, L. (2019). Asynchronous pipeline for processing huge corpora on medium to low resource infrastructures. CMLC.
Pennington, J., Socher, R., and Manning, C. (2014). GloVe: Global vectors for word representation. In Proc. EMNLP.
Peters, M. E., Neumann, M., Iyyer, M., Gardner, M., Clark, C., Lee, K., and Zettlemoyer, L. (2018). Deep contextualized word representations. arXiv preprint arXiv:1802.05365.
Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., and Sutskever, I. (2019). Language models are unsupervised multitask learners. OpenAI Blog, 1(8).
Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W., and Liu, P. J. (2019). Exploring the limits of transfer learning with a unified text-to-text transformer. arXiv preprint arXiv:1910.10683.
Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, Ł., and Polosukhin, I. (2017). Attention is all you need. In Adv. NIPS.
Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R., and Le, Q. V. (2019). XLNet: Generalized autoregressive pretraining for language understanding. arXiv preprint arXiv:1906.08237.
Table 3: Number of documents, sentences and tokens after deduplication, for the 130 languages with more than 1000 documents (columns: Language, Documents, Sentences, Tokens, Size in bytes). Per-language figures omitted.
{ "id": "1712.09405" }
1911.00172
Generalization through Memorization: Nearest Neighbor Language Models
We introduce $k$NN-LMs, which extend a pre-trained neural language model (LM) by linearly interpolating it with a $k$-nearest neighbors ($k$NN) model. The nearest neighbors are computed according to distance in the pre-trained LM embedding space, and can be drawn from any text collection, including the original LM training data. Applying this augmentation to a strong Wikitext-103 LM, with neighbors drawn from the original training set, our $k$NN-LM achieves a new state-of-the-art perplexity of 15.79 - a 2.9 point improvement with no additional training. We also show that this approach has implications for efficiently scaling up to larger training sets and allows for effective domain adaptation, by simply varying the nearest neighbor datastore, again without further training. Qualitatively, the model is particularly helpful in predicting rare patterns, such as factual knowledge. Together, these results strongly suggest that learning similarity between sequences of text is easier than predicting the next word, and that nearest neighbor search is an effective approach for language modeling in the long tail.
http://arxiv.org/pdf/1911.00172
Urvashi Khandelwal, Omer Levy, Dan Jurafsky, Luke Zettlemoyer, Mike Lewis
cs.CL
ICLR 2020
null
cs.CL
20191101
20200215
0 2 0 2 b e F 5 1 ] L C . s c [ 2 v 2 7 1 0 0 . 1 1 9 1 : v i X r a Published as a conference paper at ICLR 2020 # GENERALIZATION THROUGH MEMORIZATION: NEAREST NEIGHBOR LANGUAGE MODELS Urvashi Khandelwal†∗, Omer Levy‡, Dan Jurafsky†, Luke Zettlemoyer‡ & Mike Lewis‡ †Stanford University ‡Facebook AI Research {urvashik,jurafsky}@stanford.edu {omerlevy,lsz,mikelewis}@fb.com # ABSTRACT We introduce kNN-LMs, which extend a pre-trained neural language model (LM) by linearly interpolating it with a k-nearest neighbors (kNN) model. The near- est neighbors are computed according to distance in the pre-trained LM embed- ding space, and can be drawn from any text collection, including the original LM training data. Applying this augmentation to a strong WIKITEXT-103 LM, with neighbors drawn from the original training set, our kNN-LM achieves a new state- of-the-art perplexity of 15.79 – a 2.9 point improvement with no additional train- ing. We also show that this approach has implications for efficiently scaling up to larger training sets and allows for effective domain adaptation, by simply varying the nearest neighbor datastore, again without further training. Qualitatively, the model is particularly helpful in predicting rare patterns, such as factual knowl- edge. Together, these results strongly suggest that learning similarity between se- quences of text is easier than predicting the next word, and that nearest neighbor search is an effective approach for language modeling in the long tail. # INTRODUCTION Neural language models (LMs) typically solve two subproblems: (1) mapping sentence prefixes to fixed-sized representations, and (2) using these representations to predict the next word in the text (Bengio et al., 2003; Mikolov et al., 2010). We present a new language modeling approach that is based on the hypothesis that the representation learning problem may be easier than the prediction problem. For example, any English speaker knows that Dickens is the author of and Dickens wrote will have essentially the same distribution over the next word, even if they do not know what that distribution is. We provide strong evidence that existing language models, similarly, are much better at the first problem, by using their prefix embeddings in a simple nearest neighbor scheme that significantly improves overall performance. We introduce kNN-LM, an approach that extends a pre-trained LM by linearly interpolating its next word distribution with a k-nearest neighbors (kNN) model. The nearest neighbors are computed according to distance in the pre-trained embedding space and can be drawn from any text collec- tion, including the original LM training data. This approach allows rare patterns to be memorized explicitly, rather than implicitly in model parameters. It also improves performance when the same training data is used for learning the prefix representations and the kNN model, strongly suggesting that the prediction problem is more challenging than previously appreciated. To better measure these effects, we conduct an extensive empirical evaluation. Applying our kNN augmentation to a strong WIKITEXT-103 LM using only the original dataset achieves a new state- of-the-art perplexity of 15.79 – a 2.86 point improvement over the base model (Baevski & Auli, 2019) – with no additional training. We also show that the approach has implications for efficiently scaling up to larger training sets and allows for effective domain adaptation, by simply varying the nearest neighbor datastore. 
Training a model on 100-million tokens and using kNN search over a 3-billion token dataset can outperform training the same model on all 3-billion tokens, opening a new path for efficiently using large datasets in language models. Similarly, adding out-of-domain data to the datastore makes a single LM useful across multiple domains, again without further training. Qualitatively, we find the model is particularly helpful for long-tail patterns, such as factual knowledge, which might be easier to access via explicit memory.

∗Work done while the first author was interning at Facebook AI Research.

Figure 1: An illustration of kNN-LM. A datastore is constructed with an entry for each training set token, and an encoding of its leftward context. For inference, a test context is encoded, and the k most similar training contexts are retrieved from the datastore, along with the corresponding targets. A distribution over targets is computed based on the distance of the corresponding context from the test context. This distribution is then interpolated with the original model's output distribution.

# 2 NEAREST NEIGHBOR LANGUAGE MODELING

Language models (LMs) assign probabilities to sequences. Given a context sequence of tokens ct = (w1, . . . , wt−1), autoregressive LMs estimate p(wt|ct), the distribution over the target token wt. The kNN-LM involves augmenting such a pre-trained LM with a nearest neighbors retrieval mechanism, without any additional training (the representations learned by the LM remain unchanged). This can be done with a single forward pass over a text collection (potentially including the original LM training set), where the resulting context-target pairs are stored in a key-value datastore that is queried during inference, as illustrated in Figure 1.

Datastore Let f (·) be the function that maps a context c to a fixed-length vector representation computed by the pre-trained LM. For instance, in a Transformer LM, f (c) could map c to an intermediate representation that is output by an arbitrary self-attention layer. Then, given the i-th training example (ci, wi) ∈ D, we define the key-value pair (ki, vi), where the key ki is the vector representation of the context f (ci) and the value vi is the target word wi. The datastore (K, V) is thus the set of all key-value pairs constructed from all the training examples in D:

(K, V) = {(f (ci), wi) | (ci, wi) ∈ D}   (1)

Inference At test time, given the input context x, the model generates the output distribution over next words pLM(y|x) and the context representation f (x).
The model queries the datastore with f (x) to retrieve its k-nearest neighbors N according to a distance function d(·, ·) (squared L2 distance in our experiments, making the similarity function an RBF kernel). Then, it computes a distribution over neighbors based on a softmax of their negative distances, while aggregating probability mass for each vocabulary item across all its occurrences in the retrieved targets (items that do not appear in the retrieved targets have zero probability):

pkNN(y|x) ∝ Σ_{(ki,vi)∈N} 1_{y=vi} exp(−d(ki, f (x)))   (2)

Finally, we follow Grave et al. (2017a) and interpolate the nearest neighbor distribution pkNN with the model distribution pLM using a tuned parameter λ to produce the final kNN-LM distribution:

p(y|x) = λ pkNN(y|x) + (1 − λ) pLM(y|x)   (3)

Implementation The datastore contains an entry for each target in the training set, which for LMs can be up to billions of examples. To search over this large datastore, we use FAISS (Johnson et al., 2017), an open source library for fast nearest neighbor retrieval in high dimensional spaces. FAISS speeds up search by clustering the keys and looking up neighbors based on the cluster centroids, while reducing memory usage by storing compressed versions of the vectors. We found in preliminary experiments that using L2 distance for FAISS retrieval results in better performance for kNN-LM, compared to inner product distance.

Related Cache Models Prior work (Grave et al., 2017c; Merity et al., 2017) used a similar approach to compute similarity to the previous hidden states of test documents, making it easier to copy rare vocabulary items from the recent past. Such techniques have been less popular since the development of Transformers (Vaswani et al., 2017), which can learn to copy recent words using self-attention; in Section 4.1, we observe relatively small gains from caching recent items in the same test document à la Grave et al. (2017c). Most relatedly, Grave et al. (2017a) describe an online language model using nearest neighbor search over all previous hidden states, to improve domain adaptation. In our work, we only save training data, with the goal of explicitly memorizing training examples to better generalize to similar cases at test time.

# 3 EXPERIMENTAL SETUP

Data Experiments in this paper use the following English corpora:

WIKITEXT-103 is a standard benchmark by Merity et al. (2017) for autoregressive language modeling with a 250K word-level vocabulary. It consists of 103M tokens of Wikipedia in the training set and 250K tokens in each of the development and test sets.

BOOKS is the Toronto Books Corpus (Zhu et al., 2015), containing 0.7B tokens. Complete books are held out for validation/test.

WIKI-3B is English Wikipedia, containing about 2.87B tokens. Whole articles are held out for validation/test.

WIKI-100M is a random 100M token subset of WIKI-3B, consisting of complete articles.

Except for WIKITEXT-103, text is tokenized using the byte-pair encoding (Sennrich et al., 2015) with the 29K subword vocabulary from BERT (Devlin et al., 2019).

Model Architecture kNN-LM is compatible with any model that produces fixed-size context representations. We use decoder-only Transformers (Vaswani et al., 2017) for language modeling, which are the current state of the art. Since the kNN-LM makes no changes to the underlying LM, we take the exact architecture and optimization described by Baevski & Auli (2019) and use it to create a kNN-LM for inference.
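To make Equations 2 and 3 concrete, the following is a small NumPy sketch of the computation performed for a single test context, assuming the k nearest neighbors (their target token ids and distances to f(x)) have already been retrieved. It is an illustration under those assumptions, not the authors' released implementation.

```python
import numpy as np

def knn_lm_distribution(p_lm, neighbor_targets, neighbor_distances, lam):
    """Combine the LM distribution with the kNN distribution (Equations 2-3).

    p_lm:               [vocab_size] next-word probabilities from the base LM.
    neighbor_targets:   [k] vocabulary ids of the retrieved values v_i.
    neighbor_distances: [k] (squared) L2 distances d(k_i, f(x)) to the query.
    lam:                interpolation weight lambda.
    """
    # Softmax over negative distances gives a weight per retrieved neighbor.
    weights = np.exp(-neighbor_distances - np.max(-neighbor_distances))
    weights /= weights.sum()

    # Aggregate probability mass per vocabulary item across its occurrences;
    # items that never appear among the retrieved targets keep zero mass.
    p_knn = np.zeros_like(p_lm)
    np.add.at(p_knn, neighbor_targets, weights)

    return lam * p_knn + (1.0 - lam) * p_lm
```

The interpolation weight λ is tuned on validation data; Section 5 reports λ = 0.25 as optimal on WIKITEXT-103 and λ = 0.65 for domain adaptation.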
This model consists of 16 layers, each with 16 self-attention heads, 1024 dimensional hidden states, and 4096 dimensional feedforward layers, amounting to 247M trainable parameters. It processes 3072 tokens of context per example for WIKITEXT-103 and 1024 tokens for the rest of the corpora. Following Baevski & Auli (2019), we use adaptive inputs and an adaptive softmax (Grave et al., 2017b) with tied weights (Press & Wolf, 2017) for the WIKITEXT-103 experiments. On other datasets we do not use adaptive inputs or an adaptive softmax.

Evaluation LMs are trained to minimize the negative log-likelihood of the training corpus, and evaluated by perplexity (exponentiated negative log-likelihood) on held out data. Following Baevski & Auli (2019), 512 tokens are scored per test example, but up to 2560 tokens of extra prior context is provided for WIKITEXT-103 and up to 512 tokens of extra prior context is provided for the rest of the corpora.

kNN-LM The keys used for kNN-LM are the 1024-dimensional representations fed to the feed-forward network in the final layer of the Transformer LM (after self-attention and layernorm; see Section 5 for further explanation). We perform a single forward pass over the training set with the trained model, in order to save the keys and values. During this forward pass, each target token is provided a minimum of 1536 tokens of prior context for WIKITEXT-103 and a minimum of 512 tokens for the rest of the corpora. A FAISS index is then created using 1M randomly sampled keys to learn 4096 cluster centroids. For efficiency, keys are quantized to 64-bytes. During inference, we retrieve k = 1024 neighbors, and the index looks up 32 cluster centroids while searching for the nearest neighbors. For WIKITEXT-103 experiments, we compute squared L2 distances with full precision keys, but for the other datasets we use the FAISS L2 distances (not squared) between quantized keys directly, for faster evaluation. We tune the interpolation parameter λ on the validation set.1

Model                                      Dev (↓)   Test (↓)   # Trainable Params
Baevski & Auli (2019)                      17.96     18.65      247M
+Transformer-XL (Dai et al., 2019)         -         18.30      257M
+Phrase Induction (Luo et al., 2019)       -         17.40      257M
Base LM (Baevski & Auli, 2019)             17.96     18.65      247M
+kNN-LM                                    16.06     16.12      247M
+Continuous Cache (Grave et al., 2017c)    17.67     18.27      247M
+kNN-LM + Continuous Cache                 15.81     15.79      247M

Table 1: Performance (perplexity) on WIKITEXT-103. The kNN-LM substantially outperforms existing work. Gains are additive with the related but orthogonal continuous cache, allowing us to improve the base model by almost 3 perplexity points with no additional training. We report the median of three random seeds.

Model                            Dev (↓)   Test (↓)   # Trainable Params
Base LM (Baevski & Auli, 2019)   14.75     11.89      247M
+kNN-LM                          14.20     10.89      247M

Table 2: Performance (perplexity) on BOOKS, showing that kNN-LM works well in multiple domains.

Computational Cost Although the kNN-LM requires no training given an existing LM, it does add some other computational overheads. Storing the keys and values requires a single forward pass over the training set, which amounts to a fraction of the cost of training for one epoch on the same examples. Once the keys are saved, for WIKITEXT-103 building the cache with 103M entries takes roughly two hours on a single CPU. Finally, running on the validation set took approximately 25 minutes when retrieving 1024 keys.
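As an illustration of how such a datastore index could be built with the FAISS Python bindings, the snippet below trains an inverted-file index with 4096 centroids and 64-byte product-quantized codes on a sample of keys, adds all keys, and probes 32 clusters per query. The index type, array names and file names are assumptions made for this sketch; the released code may organize the datastore differently.

```python
import faiss
import numpy as np

d = 1024          # key dimension (final-layer FFN input size)
nlist = 4096      # number of coarse cluster centroids
m = 64            # 64 sub-quantizers of 8 bits each -> 64 bytes per key

# Keys saved during the single forward pass over the training set (placeholder file).
keys = np.load("datastore_keys.npy").astype(np.float32)   # shape [N, d], N > 1M assumed

quantizer = faiss.IndexFlatL2(d)
index = faiss.IndexIVFPQ(quantizer, d, nlist, m, 8)

# Train the coarse centroids and the product quantizer on 1M randomly sampled keys.
sample = keys[np.random.choice(len(keys), size=1_000_000, replace=False)]
index.train(sample)
index.add(keys)

index.nprobe = 32                      # clusters visited per query
faiss.write_index(index, "datastore.faiss")

# At test time: retrieve the k = 1024 nearest keys for a batch of query vectors f(x).
queries = np.random.randn(2, d).astype(np.float32)   # stand-in for real context representations
distances, neighbor_ids = index.search(queries, 1024)
```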
While the cost of building a large cache grows linearly in the number of entries, it is trivial to parallelize and requires no GPU-based training. # 4 EXPERIMENTS 4.1 USING THE TRAINING DATA AS THE DATASTORE We first experiment with creating a datastore from the same data used to train the LM. Table 1 shows that kNN-LM improves perplexity on WIKITEXT-103 from 18.65 (Baevski & Auli, 2019) to a new state-of-the-art of 16.12. We also provide reported perplexities from two other recent models that also build upon Baevski and Auli’s, suggesting that further improvements may be possible by aug- menting the kNN-LM with these techniques. We compare with models trained only on the standard training set, but recent work has shown performance can be improved by training on additional data, from either the test set (Krause et al., 2019) or large amounts of web text (Shoeybi et al., 2019). We also experiment with a continuous cache model, a related but orthogonal technique from Grave et al. (2017c), in which the model saves and retrieves neighbors from earlier in the test document, 1Code is available at: https://github.com/urvashik/knnlm 4 Published as a conference paper at ICLR 2020 Training Data Datastore Perplexity (↓) Dev Test WIKI-3B WIKI-100M - - 16.11 20.99 15.17 19.59 WIKI-100M WIKI-3B 14.61 13.73 Table 3: Experimental results on WIKI-3B. The model trained on 100M tokens is augmented with a datastore that contains about 3B training examples, outperforming the vanilla LM trained on the entire WIKI-3B training set. (a) Effect of datastore size on perplexities. (b) Tuned values of λ for different datastore sizes. Figure 2: Varying the size of the datastore. (a) Increasing the datastore size monotonically improves performance, and has not saturated even at about 3B tokens. A kNN-LM trained on 100M tokens with a datastore of 1.6B tokens already outperforms the LM trained on all 3B tokens. (b) The optimal value of λ increases with the size of the datastore. rather than the training set. Gains from interpolating with the continuous cache are smaller than reported in the original setting that used LSTMs, perhaps because self-attentive language models can learn to perform such queries. Improvements from the continous cache are additive with the kNN-LM, pushing our state-of-the-art result to 15.79, a gain of 2.86 over the base model. Finally, we repeat the experiment using text from a different domain, BOOKS, to control for the possibility that encyclopedic Wikipedia text is somehow uniquely good for caching. Table 2 shows an improvement in test set perplexity from 11.89 to 10.89, suggesting that this is not the case. 4.2 MORE DATA WITHOUT TRAINING Section 4.1 has shown that retrieving neighbors from the training data can significantly improve language modeling performance. This raises the question: can retrieving nearest neighbors from data be a substitute for training on it? To test this, we train a LM on WIKI-100M and use it to build a datastore from WIKI-3B, a corpus 30 times larger than the training set. We then compare this kNN-LM to a vanilla LM trained on the entire WIKI-3B corpus.2 Table 3 shows that, as expected, the model trained on 3B tokens dramatically outperforms the model trained on 100M tokens, improving perplexity from 19.59 to 15.17. However, adding nearest neigh- bors retrieval over those 3B examples to the model trained on 100M tokens improves perplexity from 19.59 to 13.73; i.e. retrieving nearest neighbors from the corpus outperforms training on it. 
This result suggests that rather than training language models on ever larger datasets, we can use smaller datasets to learn representations and augment them with kNN-LM over a large corpus. 2The original LM (Baevski & Auli, 2019) was trained for 286K steps on a corpus of similar size to WIKI- 100M. When scaling up to WIKI-3B, we tuned only the number of updates on the validation set and found that training for 572K steps (double) produces a slightly stronger baseline. 5 Published as a conference paper at ICLR 2020 Training Data Datastore Perplexity (↓) Dev Test WIKI-3B BOOKS - - 37.13 14.75 34.84 11.89 WIKI-3B BOOKS 24.85 20.47 Table 4: Domain adaptation experiments, with results on BOOKS. Adding an in-domain datastore to a Wikipedia-trained model improves results by 23 points, approaching in-domain training. d (ead Forma Naw) Multi Headed Self Attention # Figure 3: Transformer LM layer. Key Type Dev ppl. (↓) No datastore Model output Model output layer normalized FFN input after layer norm FFN input before layer norm MHSA input after layer norm MHSA input before layer norm 17.96 17.07 17.01 16.06 17.06 16.76 17.14 Table 5: WIKITEXT-103 validation results using dif- ferent states from the final layer of the LM as the rep- resentation function f (·) for keys and queries. We re- trieve k=1024 neighbors and λ is tuned for each. To understand how the amount of data used for kNN retrieval affects performance, we use the WIKI- 100M model to create datastores using different amounts of randomly sampled data from WIKI-3B. Figure 2a shows that using only 1.6B examples for the datastore already surpasses the performance of the model trained on all of WIKI-3B. In addition, performance does not saturate at 3B examples in the datastore, suggesting that growing the datastore more could lead to further gains. Figure 2b shows the model relies more on the kNN component as the size of the datastore increases. 4.3 DOMAIN ADAPTATION We also experiment with domain adaptation by creating a datastore on the target domain training set. Table 4 shows that an in-domain LM on BOOKS has a relatively low perplexity (11.89), while a model trained on WIKI-3B performs poorly on the BOOKS domain (34.84 perplexity). Adding kNN search over BOOKS to the WIKI-3B model reduces perplexity by 14 points (to 20.47), demonstrating that kNN-LM allows a single model to be useful in multiple domains, by simply adding a datastore per domain. # 5 TUNING NEAREST NEIGHBOR SEARCH While the kNN-LM is conceptually straightforward, and requires no additional training, a number of hyperparameters are introduced for nearest neighbor search. We experiment with different choices here. Key Function For similarity search, we extract a representation of context c using an intermediate state of the LM f (c). Transformers compute a number of different intermediate states, and we com- pare several choices depicted in Figure 3, with results shown in Table 5. While all the instantiations of f we tried are helpful, we achieved the largest improvement by using the input to the final layer’s feedforward network. We also observe that normalized representations (i.e. taken immediately af- ter the layer norm) perform better. Repeating the experiment on the second-last transformer layer showed similar trends with slightly worse results (not shown), suggesting that the feedforward layer might be focusing more on the prediction problem, while the onus of representing the input falls more on the self-attention layer. 
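In a PyTorch implementation, one practical way to obtain the keys discussed above is to register a forward pre-hook on the chosen sub-module and record its input during the single forward pass over the training data. The sketch below leaves the model and the target feed-forward module as arguments, since the actual attribute names depend on the specific code base; it is an illustration, not the authors' implementation.

```python
import torch

def extract_keys(model, final_ffn_module, batches):
    """Run one forward pass over `batches` and capture the input of the final
    layer's feed-forward network, which kNN-LM uses as the key representation."""
    captured = []

    def save_ffn_input(module, inputs):
        # inputs[0] is the tensor fed to the FFN, i.e. the representation taken
        # after self-attention and the following layer norm.
        captured.append(inputs[0].detach().float().cpu())

    handle = final_ffn_module.register_forward_pre_hook(save_ffn_input)
    with torch.no_grad():
        for batch in batches:          # assumed iterable of token-id tensors
            model(batch)
    handle.remove()

    # Flatten to one key vector per target token position.
    return torch.cat([k.reshape(-1, k.shape[-1]) for k in captured], dim=0)
```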
Figure 4: Effect of the number of nearest neighbors returned per word on WIKITEXT-103 (validation set). Returning more entries from the datastore monotonically improves performance.

Figure 5: Effect of the interpolation parameter λ on in-domain (left y-axis) and out-of-domain (right y-axis) validation set performance. More weight on pkNN improves domain adaptation.

Number of Neighbors per Query Each query returns the top-k neighbors. Figure 4 shows that performance monotonically improves as more neighbors are returned, and suggests that even larger improvements may be possible with a higher value of k. Nonetheless, even a small number of neighbors (k = 8) is enough to achieve a new state of the art.

Interpolation Parameter We use a parameter λ to interpolate between the base model distribution and the distribution from kNN search over the dataset. Figure 5 shows that λ = 0.25 is optimal on WIKITEXT-103. However, λ = 0.65 works best for domain adaptation results (Figure 5).

Precision of Similarity Function In FAISS, the nearest neighbor search computes L2 distances against quantized keys. We found results were improved from 16.5 perplexity on WIKITEXT-103 to 16.06 by computing squared L2 distances with full precision keys for Equation 2.

# 6 ANALYSIS

Qualitative Analysis To understand why kNN-LM improves performance, we manually examine cases in which pkNN was significantly better than pLM. Figure 6 shows one such example, along with several others in Appendix A. The example shows an interesting case where the model matches the trigram "impact on the" in several retrieved neighbors, but puts almost all weight on the most relevant neighbor, thus adding more value than an n-gram LM. In general, we find that examples where kNN-LM is most helpful typically contain rare patterns. Examples include factual knowledge, names, and near-duplicate sentences from the training set. In these cases, assigning train and test instances similar representations (via f (·)) appears to be an easier problem than implicitly memorizing the next word in model parameters.

Simple vs Neural Representation We observe that many long-tail phenomena manifest as rare n-grams (e.g. names). Is it therefore possible to interpolate an n-gram model with a Transformer LM, as an alternative to our kNN approach? Figure 7 shows little improvement from using n-gram LMs – 0.2 perplexity points (similarly to Bakhtin et al. (2018)). This result highlights the need to use the learned representation function f (·) to measure similarity between more varied contexts.

Implicit vs Explicit Memory If a neural representation function is crucial for kNN-LM, could implicitly memorizing the training dataset in the neural network parameters replace the explicit memory in the datastore? To test this, we train a Transformer LM with no dropout. Figure 8 shows that this model eventually reaches zero training loss, indicating that it can make perfect predictions for all examples in the training set; the model has memorized the dataset. Naturally, the memorizing LM overfits, i.e. the training loss drops to 0 while the best validation perplexity is much higher at 28.59.
# 6 ANALYSIS

Qualitative Analysis: To understand why kNN-LM improves performance, we manually examine cases in which pkNN was significantly better than pLM. Figure 6 shows one such example, along with several others in Appendix A. The example shows an interesting case where the model matches the trigram "impact on the" in several retrieved neighbors, but puts almost all weight on the most relevant neighbor, thus adding more value than an n-gram LM.

In general, we find that examples where kNN-LM is most helpful typically contain rare patterns. Examples include factual knowledge, names, and near-duplicate sentences from the training set. In these cases, assigning train and test instances similar representations (via f(·)) appears to be an easier problem than implicitly memorizing the next word in model parameters.

Simple vs Neural Representation: We observe that many long-tail phenomena manifest as rare n-grams (e.g. names). Is it therefore possible to interpolate an n-gram model with a Transformer LM, as an alternative to our kNN approach? Figure 7 shows little improvement from using n-gram LMs – 0.2 perplexity points (similarly to Bakhtin et al. (2018)). This result highlights the need to use the learned representation function f(·) to measure similarity between more varied contexts.

Implicit vs Explicit Memory: If a neural representation function is crucial for kNN-LM, could implicitly memorizing the training dataset in the neural network parameters replace the explicit memory in the datastore? To test this, we train a Transformer LM with no dropout. Figure 8 shows that this model eventually reaches zero training loss, indicating that it can make perfect predictions for all examples in the training set; the model has memorized the dataset. Naturally, the memorizing LM overfits, i.e. the training loss drops to 0 while the best validation perplexity is much higher at 28.59. For comparison, the vanilla Transformer LM (with dropout) has a much higher training loss (shown in Figure 8), but also generalizes better with a validation perplexity of 17.96. This result shows that the Transformer has sufficient capacity to memorize the training set.

Test Context (pkNN = 0.998, pLM = 0.124): it was organised by New Zealand international player Joseph Warbrick, promoted by civil servant Thomas Eyton, and managed by James Scott, a publican. The Natives were the first New Zealand team to perform a haka, and also the first to wear all black. They played 107 rugby matches during the tour, as well as a small number of Victorian Rules football and association football matches in Australia. Having made a significant impact on the...
Test Target: development

Retrieved training set contexts (target, context probability):
- As the captain and instigator of the 1888-89 Natives – the first New Zealand team to tour the British Isles – Warbrick had a lasting impact on the... (development, 0.998)
- promoted to a new first grade competition which started in 1900. Glebe immediately made a big impact on the... (district, 0.00012)
- centuries, few were as large as other players managed. However, others contend that his impact on the... (game, 0.000034)
- Nearly every game in the main series has either an anime or manga adaptation, or both. The series has had a significant impact on the... (development, 0.00000092)

Figure 6: Example where the kNN model has much higher confidence in the correct target than the LM. Although there are other training set examples with similar local n-gram matches, the nearest neighbour search is highly confident of a specific and very relevant context.

Figure 7: Interpolating the Transformer LM with n-gram LMs on WIKITEXT-103 (validation set). Using kNN-LM gives a much lower perplexity, suggesting that the representations are learning more than just matching local context.

Figure 8: Training curves for the Transformer LM with and without dropout. Turning off dropout allows the training loss to go to 0, indicating that the model has sufficient capacity to memorize the training data.

We consider whether the memorizing LM can be an effective substitute for nearest neighbor search. Interpolating the memorizing LM with the original LM improves validation perplexity by just 0.1 – compared to 1.9 from kNN-LM. This result suggests that although the Transformer is expressive enough to memorize all training examples, learning to do so does not result in context representations that generalize. In contrast, kNN-LM memorizes training data while improving generalization.

From these experiments, we conjecture that kNN-LM improves performance because (1) the Transformer LM is very good at learning a representation function for contexts with an implicit notion of similarity, and (2) while the Transformer has capacity to memorize all training examples, doing so causes its representation to generalize less effectively, but (3) the kNN-LM allows the model to memorize the training data while retaining an effective similarity function.

# 7 RELATED WORK

We discuss related uses of caches for language modeling in Section 2.
Similar kNN models to ours have been proposed for computer vision tasks (Papernot & McDaniel, 2018; Orhan, 2018; Zhao & Cho, 2018), primarily motivated by improving interpretability and robustness to adversarial attacks. We hypothesize that our method may be particularly effective for language modeling, because plentiful unlabeled data allows datastores of billions of tokens, and language modeling often requires world knowledge to be learnt from few examples.

Nearest neighbor models have been applied to a number of NLP problems in the past, such as part-of-speech tagging (Daelemans et al., 1996) and morphological analysis (Bosch et al., 2007), but the use of learned representations makes the similarity function much more effective in the case of neural models. More recently, Kaiser et al. (2017) have used a similarly differentiable memory that is learned and updated during training, and is applied to one-shot learning tasks.

Several models have also improved language generation by using training examples directly at test time. Guu et al. (2018) propose a model that samples training sentences at random and edits them with a sequence-to-sequence model, but does not use a retrieval mechanism such as kNN. Gu et al. (2018) introduce a translation model that attends over retrieved training set examples. Weston et al. (2018) improve a dialogue response generation model by refining similar instances from the training set. kNN-LM differs from these approaches by working at the level of individual tokens instead of whole training sentences, as well as not incorporating the retrieval mechanism into the training pipeline.

A general trend in machine learning, and in language modeling in particular, is that adding more data consistently improves performance (Devlin et al., 2019; Radford et al., 2019; Yang et al., 2019; Liu et al., 2019; Zellers et al., 2019; Shoeybi et al., 2019). Our work offers an alternative method for scaling language models, in which relatively small models learn context representations, and a nearest neighbour search acts as a highly expressive classifier.

# 8 CONCLUSION AND FUTURE WORK

We have introduced kNN-LMs, which can significantly outperform standard language models by directly querying training examples at test time. The approach can be applied to any neural language model. The success of this method suggests that learning similarity functions between contexts may be an easier problem than predicting the next word from some given context. Future work should explore explicitly training similarity functions, and reducing the size of the datastore.

# ACKNOWLEDGMENTS

The authors thank the anonymous reviewers as well as Sida Wang, Kartikay Khandelwal, Kevin Clark and members of the FAIR Seattle team for helpful discussions and comments.

# REFERENCES

Alexei Baevski and Michael Auli. Adaptive input representations for neural language modeling. In ICLR, 2019.

Anton Bakhtin, Arthur Szlam, Marc'Aurelio Ranzato, and Edouard Grave. Lightweight adaptive mixture of neural and n-gram language models. arXiv preprint arXiv:1804.07705, 2018.

Yoshua Bengio, Réjean Ducharme, Pascal Vincent, and Christian Jauvin. A neural probabilistic language model. Journal of Machine Learning Research, 3(Feb):1137–1155, 2003.

Antal van den Bosch, Bertjan Busser, Sander Canisius, and Walter Daelemans. An efficient memory-based morphosyntactic tagger and parser for Dutch. LOT Occasional Series, 7:191–206, 2007.

Walter Daelemans, Jakub Zavrel, Peter Berck, and Steven Gillis. Mbt: A memory-based part of speech tagger-generator. In WVLC, 1996.
Zihang Dai, Zhilin Yang, Yiming Yang, William W. Cohen, Jaime Carbonell, Quoc V. Le, and Ruslan Salakhutdinov. Transformer-XL: Attentive language models beyond a fixed-length context. In ACL, 2019.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. In NAACL, 2019.

Edouard Grave, Moustapha M. Cisse, and Armand Joulin. Unbounded cache model for online language modeling with open vocabulary. In NIPS, pp. 6042–6052, 2017a.

Edouard Grave, Armand Joulin, Moustapha Cissé, Hervé Jégou, et al. Efficient softmax approximation for GPUs. In Proceedings of the 34th International Conference on Machine Learning, Volume 70, pp. 1302–1310. JMLR.org, 2017b.

Edouard Grave, Armand Joulin, and Nicolas Usunier. Improving neural language models with a continuous cache. In ICLR, 2017c.

Jiatao Gu, Yong Wang, Kyunghyun Cho, and Victor O.K. Li. Search engine guided neural machine translation. In Thirty-Second AAAI Conference on Artificial Intelligence, 2018.

Kelvin Guu, Tatsunori B. Hashimoto, Yonatan Oren, and Percy Liang. Generating sentences by editing prototypes. Transactions of the Association for Computational Linguistics, 6:437–450, 2018.

Jeff Johnson, Matthijs Douze, and Hervé Jégou. Billion-scale similarity search with GPUs. arXiv preprint arXiv:1702.08734, 2017.

Łukasz Kaiser, Ofir Nachum, Aurko Roy, and Samy Bengio. Learning to remember rare events. In ICLR, 2017.

Ben Krause, Emmanuel Kahembwe, Iain Murray, and Steve Renals. Dynamic evaluation of transformer language models. arXiv preprint arXiv:1904.08378, 2019.

Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692, 2019.

Hongyin Luo, Lan Jiang, Yonatan Belinkov, and James Glass. Improving neural language models by segmenting, attending, and predicting the future. In ACL, 2019.

Stephen Merity, Caiming Xiong, James Bradbury, and Richard Socher. Pointer sentinel mixture models. In ICLR, 2017.

Tomáš Mikolov, Martin Karafiát, Lukáš Burget, Jan Černocký, and Sanjeev Khudanpur. Recurrent neural network based language model. In Eleventh Annual Conference of the International Speech Communication Association, 2010.

A. Emin Orhan. A simple cache model for image recognition. In NeurIPS, 2018.

Nicolas Papernot and Patrick McDaniel. Deep k-nearest neighbors: Towards confident, interpretable and robust deep learning. arXiv preprint arXiv:1803.04765, 2018.

Ofir Press and Lior Wolf. Using the output embedding to improve language models. In ICLR, 2017.

Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. Language models are unsupervised multitask learners. URL https://d4mucfpksywv.cloudfront.net/better-language-models/language-models.pdf, 2019.

Rico Sennrich, Barry Haddow, and Alexandra Birch. Neural machine translation of rare words with subword units. arXiv preprint arXiv:1508.07909, 2015.

Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper, and Bryan Catanzaro. Megatron-LM: Training multi-billion parameter language models using GPU model parallelism. arXiv preprint arXiv:1909.08053, 2019.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. In Advances in Neural Information Processing Systems, pp. 5998–6008, 2017.

Jason Weston, Emily Dinan, and Alexander H. Miller. Retrieve and refine: Improved sequence generation models for dialogue. arXiv preprint arXiv:1808.04776, 2018.

Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, and Quoc V. Le. XLNet: Generalized autoregressive pretraining for language understanding. arXiv preprint arXiv:1906.08237, 2019.

Rowan Zellers, Ari Holtzman, Hannah Rashkin, Ali Farhadi, Franziska Roesner, and Yejin Choi. Defending against neural fake news. In NeurIPS, 2019.

Jake Zhao and Kyunghyun Cho. Retrieval-augmented convolutional neural networks for improved robustness against adversarial examples. arXiv preprint arXiv:1802.09502, 2018.

Yukun Zhu, Ryan Kiros, Rich Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. Aligning books and movies: Towards story-like visual explanations by watching movies and reading books. In Proceedings of the IEEE International Conference on Computer Vision, pp. 19–27, 2015.

# A APPENDIX

This section provides several examples where pkNN places higher probability mass on the true target, compared to pLM.

Test Context (pkNN = 0.995, pLM = 0.025): For Australians and New Zealanders the Gallipoli campaign came to symbolise an important milestone in the emergence of both nations as independent actors on the world stage and the development of a sense of national identity. Today, the date of the initial landings, 25 April, is known as Anzac Day in Australia and New Zealand and every year thousands of people gather at memorials in both nations, as well as Turkey, to...
Test Target: honour

Retrieved training set contexts (target, context probability):
- Despite this, for Australians and New Zealanders the Gallipoli campaign has come to symbolise an important milestone in the emergence of both nations as independent actors on the world stage and the development of a sense of national identity. Today, the date of the initial landings, 25 April, is a public holiday known as Anzac Day in Australia and New Zealand and every year thousands of people gather at memorials in both nations, and indeed in Turkey, to... (honour, 0.995)
- On the anniversary date of his death, every year since 1997, thousands of people gather at his home in Memphis to... (celebrate, 0.0086)
- Twenty-five years after Marseille's death, fighter pilot veterans of World War II gathered to... (honour, 0.0000041)

Table 6: Another example where the kNN model places much higher probability mass on the correct target, compared to the LM. The nearest neighbors search has retrieved a training set context that is extremely similar to the test context, while very rare and in the long-tail of patterns.

Test Context (pkNN = 0.959, pLM = 0.503): U2 do what they're best at, slipping into epic rock mode, playing music made for the arena". In two other local newspaper reviews, critics praised the song's inclusion in a sequence of greatest hits. For the PopMart Tour of 1997–...
Test Target: 1998

Retrieved training set contexts (target, context probability):
- Following their original intent, "Sunday Bloody Sunday" was not played during any of the forty-seven shows on the Lovetown Tour in 1989. The song reappeared for a brief period during the Zoo TV Tour, and late during the second half of PopMart Tour (1997–... (1998, 0.936)
- They are 6 times Champions and they won the Challenge Cup in 1938, and have experienced two previous stretches in the Super League, 1997–... (2002, 0.0071)
- About $40 million ($61.4 million in 2018 dollars) was spent on the property acquisition. After weather-related construction delays due to the El Nino season of the winter of 1997–... (1998, 0.0015)
- This made it the highest-rated season of The X-Files to air as well as the highest rated Fox program for the 1997–... (98, 0.00000048)

Table 7: In this example, the desired date pattern appears in many examples. Yet, the nearest neighbors search is able to identify the only training set context which is relevant to the test context and assigns it the highest probability mass.

Test Context (pkNN = 0.624, pLM = 0.167): Lord Strathcona awarded Gauthier a scholarship in 1906 that allowed her to return to Europe and continue her vocal studies. She returned there and continued both to study and give performances. Her first operatic performance came in 1909 in Pavia, Italy as Micaela in Bizet's...
Test Target: Carmen

Retrieved training set contexts (target, context probability):
- Despite poor relations with the orchestra, Mahler brought five new operas to the theatre, including Bizet's... (Carmen, 0.356)
- The fourth movement of An die Jugend (1909), for instance, uses two of Niccolo Paganini's Caprices for solo violin (numbers 11 and 15), while the 1920 piece Piano Sonatina No. 6 (Fantasia da camera super Carmen) is based on themes from Georges Bizet's... (opera, 0.0937)
- It also hosted the Ballet of her Majesty's Theatre in the mid-19th century, before returning to hosting the London premieres of such operas as Bizet's... (Carmen, 0.0686)

Table 8: In this case, the model is able to memorize the fact that Georges Bizet wrote Carmen.

Test Context (pkNN = 0.031, pLM = 0.007): Mycena maculata bears some resemblance to M. <unk>, but is only associated with decaying hardwood logs and stumps, and is found in eastern North America, and sometimes on oak on the West Coast. In age, it...
Test Target: develops

Retrieved training set contexts (target, context probability):
- Morchella tridentina (=Morchella frustrata) is also rufescent and very similar to M. rufobrunnea. It is found in mountainous forests and maquis and forms a marked sinus at the attachment of the cap with the stem, which is pure white. At maturity, it... (develops, 0.031)
- The winter bonnet (M. tintinnabulum) is a northern European species that is much smaller (cap diameter up to 2.6 cm (1.0 in) across) and has a brown cap, and has ragged hairs at the base. It... (generally, 0.029)
- The "bleeding" will distinguish Mycena atkinsoniana from most other Mycena species commonly encountered. The common and widely distributed M. sanguinolenta is another "bleeder", but it is smaller than M. atkinsonia, with a cap diameter ranging from 3 to 15 mm (0.1 to 0.6 in). Additionally, it... (has, 0.028)
- Mycena flavoalba bears resemblance to some members of the genus Hemimycena, such as H. lactea and H. <unk>. It... (can, 0.018)

Table 9: This is an example where the pkNN distribution is relatively flat, as several words are plausible continuations. However, the nearest neighbors search assigns the highest probability to the correct target and a corresponding context that is particularly relevant. In contrast, the LM probability on the correct target is lower.